flutter_ai_agent_sdk 1.1.1

Flutter AI Agent SDK #

A high-performance, extensible AI Agent SDK for Flutter with voice interaction, LLM integration, and tool execution capabilities.

✨ Features #

  • 🎙️ Voice Interaction: Built-in STT/TTS with native platform support
  • 🤖 Multiple LLM Providers: OpenAI, Anthropic, and custom providers
  • 🛠️ Tool/Function Calling: Execute custom functions from AI responses
  • 💾 Conversation Memory: Short-term and long-term memory management
  • 🔄 Streaming Support: Real-time response streaming
  • 🎯 Turn Detection: VAD, push-to-talk, and hybrid modes
  • 📦 Pure Dart: No platform-specific code required
  • ⚡ High Performance: Optimized for mobile devices

🚀 Installation #

Add to your pubspec.yaml:

dependencies:
  flutter_ai_agent_sdk: ^1.1.1

Then run flutter pub get.

📖 Quick Start #

1. Create an LLM Provider #

import 'package:flutter_ai_agent_sdk/flutter_ai_agent_sdk.dart';

final llmProvider = OpenAIProvider(
  apiKey: 'your-api-key',
  model: 'gpt-4-turbo-preview',
);

2. Configure Your Agent #

final config = AgentConfig(
  name: 'My Assistant',
  instructions: 'You are a helpful AI assistant.',
  llmProvider: llmProvider,
  sttService: NativeSTTService(),
  ttsService: NativeTTSService(),
  turnDetection: TurnDetectionConfig(
    mode: TurnDetectionMode.vad,
    silenceThreshold: Duration(milliseconds: 700),
  ),
  tools: [
    // Add your custom tools here (see Creating Custom Tools below)
  ],
);

3. Create and Use Agent #

final agent = VoiceAgent(config: config);
final session = await agent.createSession();

// Send text message
await session.sendMessage('Hello!');

// Start voice interaction
await session.startListening();

// Listen to events
session.events.listen((event) {
  if (event is MessageReceivedEvent) {
    print('Assistant: ${event.message.content}');
  }
});

// Listen to state changes
session.state.listen((status) {
  print('State: ${status.state}');
});

🛠️ Creating Custom Tools #

final weatherTool = FunctionTool(
  name: 'get_weather',
  description: 'Get current weather for a location',
  parameters: {
    'type': 'object',
    'properties': {
      'location': {'type': 'string', 'description': 'City name'},
      'unit': {'type': 'string', 'enum': ['celsius', 'fahrenheit']},
    },
    'required': ['location'],
  },
  function: (args) async {
    final location = args['location'];
    final unit = args['unit'] ?? 'celsius';
    
    // Your weather API call here
    return {
      'temperature': 22,
      'condition': 'sunny',
      'location': location,
      'unit': unit,
    };
  },
);
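
To make the tool callable, register it in the tools list of your AgentConfig (continuing the example from step 2 of the Quick Start):

final config = AgentConfig(
  name: 'My Assistant',
  instructions: 'You are a helpful AI assistant.',
  llmProvider: llmProvider,
  tools: [weatherTool], // the model can now call get_weather
);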

🏗️ Architecture #

flutter_ai_agent_sdk/
├── lib/
│   ├── src/
│   │   ├── core/           # Core agent logic
│   │   │   ├── agents/     # VoiceAgent, config
│   │   │   ├── sessions/   # Session management
│   │   │   ├── events/     # Event system
│   │   │   └── models/     # Data models
│   │   ├── voice/          # Voice processing
│   │   │   ├── stt/        # Speech-to-text
│   │   │   ├── tts/        # Text-to-speech
│   │   │   ├── vad/        # Voice activity detection
│   │   │   └── audio/      # Audio utilities
│   │   ├── llm/            # LLM integration
│   │   │   ├── providers/  # Provider implementations
│   │   │   ├── chat/       # Chat context
│   │   │   └── streaming/  # Stream processing
│   │   ├── tools/          # Tool execution
│   │   ├── memory/         # Memory management
│   │   └── utils/          # Utilities
│   └── flutter_ai_agent_sdk.dart
├── example/                # Example app
└── test/                   # Tests
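
All of these modules are exported through the top-level flutter_ai_agent_sdk.dart library, so application code needs only the single import used throughout this README:

import 'package:flutter_ai_agent_sdk/flutter_ai_agent_sdk.dart';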

🔌 Supported LLM Providers #

OpenAI #

OpenAIProvider(
  apiKey: 'sk-...',
  model: 'gpt-4-turbo-preview',
)

Anthropic #

AnthropicProvider(
  apiKey: 'sk-ant-...',
  model: 'claude-3-sonnet-20240229',
)

Custom Provider #

import 'package:flutter_ai_agent_sdk/flutter_ai_agent_sdk.dart';

class MyCustomProvider extends LLMProvider {
  @override
  String get name => 'MyProvider';

  @override
  Future<LLMResponse> generate({
    required List<Message> messages,
    List<Tool>? tools,
    Map<String, dynamic>? parameters,
  }) async {
    // Call your backend here and map its reply to an LLMResponse.
    throw UnimplementedError();
  }

  @override
  Stream<LLMResponse> generateStream({
    required List<Message> messages,
    List<Tool>? tools,
    Map<String, dynamic>? parameters,
  }) async* {
    // Yield partial LLMResponse chunks as your backend streams them.
    throw UnimplementedError();
  }
}
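
An instance then plugs into the agent exactly like the built-in providers:

final config = AgentConfig(
  name: 'My Assistant',
  instructions: 'You are a helpful AI assistant.',
  llmProvider: MyCustomProvider(),
);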

🎯 Turn Detection Modes #

  • VAD (Voice Activity Detection): Automatic speech detection
  • Push-to-Talk: Manual button control
  • Server VAD: Server-side detection (e.g., OpenAI Realtime)
  • Hybrid: Combined VAD + silence detection
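
Only TurnDetectionMode.vad appears in the Quick Start above. As a sketch, assuming the remaining modes are exposed as similarly named enum members (check TurnDetectionMode in the API reference for the actual names):

// Hypothetical member names for the non-VAD modes.
final pushToTalk = TurnDetectionConfig(
  mode: TurnDetectionMode.pushToTalk,
);

final hybrid = TurnDetectionConfig(
  mode: TurnDetectionMode.hybrid,
  silenceThreshold: Duration(milliseconds: 700),
);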

📱 Platform Support #

  • ✅ iOS
  • ✅ Android
  • ✅ Web (limited voice features)
  • ✅ macOS
  • ✅ Windows
  • ✅ Linux
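
Because voice capture on the web is limited, you may want to gate the voice path at runtime. A minimal sketch using Flutter's kIsWeb (the session parameter is typed dynamic here only because this README never names the session class):

import 'package:flutter/foundation.dart' show kIsWeb;

Future<void> startVoiceIfSupported(dynamic session) async {
  if (kIsWeb) {
    // Fall back to text input where microphone access is limited.
    return;
  }
  await session.startListening();
}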

🧪 Testing #

flutter test
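
The command above runs the package's own test suite. Your custom tools can be unit-tested in isolation; a minimal sketch, assuming FunctionTool exposes the function callback it was constructed with:

import 'package:flutter_test/flutter_test.dart';
import 'package:flutter_ai_agent_sdk/flutter_ai_agent_sdk.dart';

void main() {
  test('get_weather tool echoes the requested location', () async {
    final tool = FunctionTool(
      name: 'get_weather',
      description: 'Get current weather for a location',
      parameters: {
        'type': 'object',
        'properties': {
          'location': {'type': 'string'},
        },
        'required': ['location'],
      },
      function: (args) async => {'location': args['location']},
    );

    // Assumes the callback is reachable as tool.function.
    final result = await tool.function({'location': 'Paris'});
    expect(result['location'], 'Paris');
  });
}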

📄 License #

MIT License

🀝 Contributing #

Contributions welcome! Please read CONTRIBUTING.md first.

📞 Support #

For questions, bug reports, or feature requests, please open an issue on the GitHub repository.
