# Flutter AI Agent SDK
A high-performance, extensible AI Agent SDK for Flutter with voice interaction, LLM integration, and tool execution capabilities.
## Features

- **Voice Interaction:** Built-in STT/TTS with native platform support
- **Multiple LLM Providers:** OpenAI, Anthropic, and custom providers
- **Tool/Function Calling:** Execute custom functions from AI responses
- **Conversation Memory:** Short-term and long-term memory management
- **Streaming Support:** Real-time response streaming
- **Turn Detection:** VAD, push-to-talk, and hybrid modes
- **Pure Dart:** No platform-specific code required
- **High Performance:** Optimized for mobile devices
## Installation

Add the package to your `pubspec.yaml`:

```yaml
dependencies:
  flutter_ai_agent_sdk: ^1.1.1
```
## Quick Start

### 1. Create an LLM Provider

```dart
import 'package:flutter_ai_agent_sdk/flutter_ai_agent_sdk.dart';

final llmProvider = OpenAIProvider(
  apiKey: 'your-api-key',
  model: 'gpt-4-turbo-preview',
);
```
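In production, avoid hardcoding the key in source. A minimal sketch using Dart's built-in `String.fromEnvironment` (the `OPENAI_API_KEY` define name here is just an example):

```dart
// Supply the key at build time:
//   flutter run --dart-define=OPENAI_API_KEY=your-api-key
const apiKey = String.fromEnvironment('OPENAI_API_KEY');

final llmProvider = OpenAIProvider(
  apiKey: apiKey,
  model: 'gpt-4-turbo-preview',
);
```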
### 2. Configure Your Agent

```dart
final config = AgentConfig(
  name: 'My Assistant',
  instructions: 'You are a helpful AI assistant.',
  llmProvider: llmProvider,
  sttService: NativeSTTService(),
  ttsService: NativeTTSService(),
  turnDetection: TurnDetectionConfig(
    mode: TurnDetectionMode.vad,
    silenceThreshold: Duration(milliseconds: 700),
  ),
  tools: [
    // Add your custom tools here
  ],
);
```
### 3. Create and Use the Agent

```dart
final agent = VoiceAgent(config: config);
final session = await agent.createSession();

// Send a text message
await session.sendMessage('Hello!');

// Start voice interaction
await session.startListening();

// Listen to events
session.events.listen((event) {
  if (event is MessageReceivedEvent) {
    print('Assistant: ${event.message.content}');
  }
});

// Listen to state changes
session.state.listen((status) {
  print('State: ${status.state}');
});
```
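Both `events` and `state` are Dart streams, so in a real app keep the `StreamSubscription`s and cancel them once the session is no longer needed (standard Dart stream hygiene, not an SDK-specific API):

```dart
final eventsSub = session.events.listen((event) {
  if (event is MessageReceivedEvent) {
    print('Assistant: ${event.message.content}');
  }
});

// Later, e.g. when disposing the owning widget:
await eventsSub.cancel();
```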
## Creating Custom Tools

```dart
final weatherTool = FunctionTool(
  name: 'get_weather',
  description: 'Get current weather for a location',
  parameters: {
    'type': 'object',
    'properties': {
      'location': {'type': 'string', 'description': 'City name'},
      'unit': {'type': 'string', 'enum': ['celsius', 'fahrenheit']},
    },
    'required': ['location'],
  },
  function: (args) async {
    final location = args['location'];
    final unit = args['unit'] ?? 'celsius';
    // Your weather API call here
    return {
      'temperature': 22,
      'condition': 'sunny',
      'location': location,
      'unit': unit,
    };
  },
);
```
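To make the tool available to the model, add it to the `tools` list in your `AgentConfig` (a minimal sketch reusing the provider from the Quick Start):

```dart
final config = AgentConfig(
  name: 'Weather Assistant',
  instructions: 'You are a helpful assistant that can look up the weather.',
  llmProvider: llmProvider,
  tools: [weatherTool],
);
```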
## Architecture

```
flutter_ai_agent_sdk/
├── lib/
│   ├── src/
│   │   ├── core/            # Core agent logic
│   │   │   ├── agents/      # VoiceAgent, config
│   │   │   ├── sessions/    # Session management
│   │   │   ├── events/      # Event system
│   │   │   └── models/      # Data models
│   │   ├── voice/           # Voice processing
│   │   │   ├── stt/         # Speech-to-text
│   │   │   ├── tts/         # Text-to-speech
│   │   │   ├── vad/         # Voice activity detection
│   │   │   └── audio/       # Audio utilities
│   │   ├── llm/             # LLM integration
│   │   │   ├── providers/   # Provider implementations
│   │   │   ├── chat/        # Chat context
│   │   │   └── streaming/   # Stream processing
│   │   ├── tools/           # Tool execution
│   │   ├── memory/          # Memory management
│   │   └── utils/           # Utilities
│   └── flutter_ai_agent_sdk.dart
├── example/                 # Example app
└── test/                    # Tests
```
## Supported LLM Providers

### OpenAI

```dart
OpenAIProvider(
  apiKey: 'sk-...',
  model: 'gpt-4-turbo-preview',
)
```

### Anthropic

```dart
AnthropicProvider(
  apiKey: 'sk-ant-...',
  model: 'claude-3-sonnet-20240229',
)
```
### Custom Provider

```dart
class MyCustomProvider extends LLMProvider {
  @override
  String get name => 'MyProvider';

  @override
  Future<LLMResponse> generate({
    required List<Message> messages,
    List<Tool>? tools,
    Map<String, dynamic>? parameters,
  }) async {
    // Your implementation: call your backend and map the result.
    throw UnimplementedError();
  }

  @override
  Stream<LLMResponse> generateStream({
    required List<Message> messages,
    List<Tool>? tools,
    Map<String, dynamic>? parameters,
  }) async* {
    // Your streaming implementation: yield partial LLMResponse chunks.
    throw UnimplementedError();
  }
}
```
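Once defined, a custom provider plugs into `AgentConfig` like any built-in one:

```dart
final config = AgentConfig(
  name: 'My Assistant',
  instructions: 'You are a helpful AI assistant.',
  llmProvider: MyCustomProvider(),
);
```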
## Turn Detection Modes

- **VAD (Voice Activity Detection):** Automatic speech detection
- **Push-to-Talk:** Manual button control (see the sketch after this list)
- **Server VAD:** Server-side detection (e.g., OpenAI Realtime)
- **Hybrid:** Combined VAD + silence detection
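Modes are selected through `TurnDetectionConfig`, as in the Quick Start. A minimal sketch switching to push-to-talk; note that only `TurnDetectionMode.vad` appears above, so the `pushToTalk` enum name is an assumption to verify against your SDK version:

```dart
final turnDetection = TurnDetectionConfig(
  // Assumed enum value; the Quick Start only shows TurnDetectionMode.vad.
  mode: TurnDetectionMode.pushToTalk,
);
```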
## Platform Support

- ✅ iOS
- ✅ Android
- ✅ Web (limited voice features)
- ✅ macOS
- ✅ Windows
- ✅ Linux
## Testing

Run the test suite:

```sh
flutter test
```
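A tool's callback is a plain Dart function, so it can be unit-tested directly. A sketch testing the weather callback from the custom-tools example (assumes the callback is factored out so the test can call it without constructing a `FunctionTool`):

```dart
import 'package:flutter_test/flutter_test.dart';

// The same callback passed to FunctionTool in the custom-tools example.
Future<Map<String, dynamic>> getWeather(Map<String, dynamic> args) async {
  final location = args['location'];
  final unit = args['unit'] ?? 'celsius';
  return {
    'temperature': 22,
    'condition': 'sunny',
    'location': location,
    'unit': unit,
  };
}

void main() {
  test('weather callback returns location and default unit', () async {
    final result = await getWeather({'location': 'Paris'});
    expect(result['location'], 'Paris');
    expect(result['unit'], 'celsius');
  });
}
```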
## License

MIT License
## Contributing

Contributions are welcome! Please read CONTRIBUTING.md first.
## Important Links

- Email: chief.stategist.j@gmail.com
- Phone: +91 9664920749
- Medium: scaibu
- LinkedIn: Chief J
- Twitter/X: ChiefErj
- Instagram: chief._.jaydeep
## Support

- Documentation: Project Wiki
- Issues: GitHub Issues
- Discussions: GitHub Discussions