# isar_agent_memory

Universal, local-first cognitive memory package for LLMs and AI agents. Graph-based, explainable, LLM-agnostic. Inspired by Cognee/Graphiti.
**BETA**: This package is in active development. The API may change. Feedback and PRs are welcome!
## Quickstart
### 1. Add dependency (pubspec.yaml)
dependencies:
  isar_agent_memory: ^0.2.3
  isar: ^3.1.0+1
  # ObjectBox is the default vector backend.
  # onnxruntime is used for on-device embeddings.
### 2. Basic Usage
import 'package:isar/isar.dart';
import 'package:isar_agent_memory/isar_agent_memory.dart';
import 'package:isar_agent_memory/src/gemini_embeddings_adapter.dart';
// 1. Initialize the embeddings adapter (e.g., Gemini)
final adapter = GeminiEmbeddingsAdapter(apiKey: '<YOUR_GEMINI_API_KEY>');
// 2. Open Isar database with schemas
final isar = await Isar.open([
MemoryNodeSchema, MemoryEdgeSchema
], directory: './exampledb');
// 3. Initialize MemoryGraph
final graph = MemoryGraph(isar, embeddingsAdapter: adapter);
// 4. Store a node with embedding (automatically indexed)
final nodeId = await graph.storeNodeWithEmbedding(content: 'The quick brown fox jumps over the lazy dog.');
// 5. Semantic search (ANN)
final queryEmbedding = await adapter.embed('A fox jumps over a dog');
final results = await graph.semanticSearch(queryEmbedding, topK: 3);
for (final result in results) {
print('Node: ${result.node.content}');
print('Distance: ${result.distance.toStringAsFixed(3)}');
print('Provider: ${result.provider}');
}
// 6. Explain recall
if (results.isNotEmpty) {
final explanation = await graph.explainRecall(results.first.node.id, queryEmbedding: queryEmbedding);
print('Explain: $explanation');
}
## On-Device Embeddings (Local Privacy)
You can run embeddings entirely on-device using ONNX Runtime (e.g., with all-MiniLM-L6-v2).
### 1. Download Model and Vocab
- Download the ONNX model (e.g., `model.onnx` or `model_quantized.onnx`) from Hugging Face or similar.
- Download the `vocab.txt` used by the model (WordPiece vocabulary).
### 2. Usage
import 'package:isar_agent_memory/isar_agent_memory.dart';
final adapter = OnDeviceEmbeddingsAdapter(
modelPath: 'assets/model.onnx',
vocabPath: 'assets/vocab.txt',
dimension: 384, // Default for MiniLM-L6-v2
);
// Initialize (loads model and vocab)
await adapter.initialize();
final graph = MemoryGraph(isar, embeddingsAdapter: adapter);
Note: For mobile apps (Flutter), ensure you add the `.onnx` and `.txt` files to your `pubspec.yaml` assets.
## Testing
### Running Unit Tests
dart test
### Running On-Device Adapter Tests
To run tests that require the ONNX model files, you must first download the test resources:
1. Download the test resources:

   dart run tool/setup_on_device_test.dart

   This will download `model.onnx` and `vocab.txt` to the `test_resources/` directory.

2. Run the tests:

   dart test test/on_device_embeddings_adapter_test.dart
## Features
- Universal Graph API: Store, recall, relate, search, and explain memories.
- Fast ANN Search: Uses ObjectBox (HNSW) as the default vector backend.
- Pluggable Vector Index: Swap ObjectBox for a custom backend if needed.
- Pluggable Embeddings: Adapters for Gemini, OpenAI, or On-Device (ONNX).
- Explainability: Semantic distance, activation (recency/frequency), and path tracing.
- Hybrid Search: Combine vector similarity with full-text search (BM25-like) for better recall.
- Robust Testing: Comprehensive test suite and real-world examples.
- Extensible: Add metadata, new adapters, or future sync/export capabilities.
## Integrations
- Isar: Local, fast NoSQL DB for Dart/Flutter.
- ObjectBox: On-device vector search using a floatVector property with an HNSW index (default).
- LangChain: LLM/agent workflows (see the agent sketch after this list).
- Gemini: Embeddings provider.
- ONNX Runtime: On-device inference.
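As a rough sketch of how the package slots into a LangChain-style agent loop: only the `MemoryGraph` and adapter calls below are real package APIs; `callLlm` is a hypothetical stand-in for whatever LLM client you use.

```dart
import 'package:isar_agent_memory/isar_agent_memory.dart';

/// One agent turn: recall relevant memories, pass them to the LLM as context,
/// then store both sides of the exchange back into the graph.
Future<String> agentTurn(
    MemoryGraph graph, EmbeddingsAdapter adapter, String userMessage) async {
  final queryEmbedding = await adapter.embed(userMessage);
  final memories = await graph.semanticSearch(queryEmbedding, topK: 5);
  final context = memories.map((m) => m.node.content).join('\n');

  // Hypothetical LLM call (LangChain, Gemini, OpenAI, ...).
  final reply = await callLlm(prompt: '$context\n\nUser: $userMessage');

  // Persist the exchange as new memories.
  await graph.storeNodeWithEmbedding(content: 'User: $userMessage');
  await graph.storeNodeWithEmbedding(content: 'Assistant: $reply');
  return reply;
}

/// Stand-in for an actual LLM client; replace with your own integration.
Future<String> callLlm({required String prompt}) async => '...';
```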
## Troubleshooting
### Isar Native Library (isar.dll) Loading Failure in Tests
Problem:
When running flutter test within the isar_agent_memory_tests subproject on Windows, tests may fail with Invalid argument(s): Failed to load dynamic library '...\isar.dll'.
Solution:
The test suite (test/memory_graph_test.dart) includes a workaround that automatically locates isar_flutter_libs and copies the correct isar.dll to the project root if it's missing. This ensures tests run reliably on Windows.
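For reference, the workaround boils down to something like the following sketch (the helper name and source path layout are illustrative, not the exact code in `memory_graph_test.dart`):

```dart
import 'dart:io';

/// Illustrative only: copy isar.dll next to the test runner if it is missing.
/// The real helper resolves the source path from the isar_flutter_libs package.
void ensureIsarDllOnWindows(String isarFlutterLibsPath) {
  if (!Platform.isWindows) return;
  final target = File('isar.dll');
  if (target.existsSync()) return;
  final source = File('$isarFlutterLibsPath/windows/isar.dll'); // assumed layout
  if (source.existsSync()) {
    source.copySync(target.path);
  }
}
```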
## Known Issues
- Gemini Tests: Require an API key.

  export GEMINI_API_KEY=<YOUR_KEY>
  dart test

- Windows DLLs: Handled automatically by the test runner as described above.
## Publishing
- This package is BETA.
- To publish:
dart pub publish --dry-run
## Contributing
PRs, issues, and feedback are welcome! See CONTRIBUTING.md.
## License
MIT
isar_agent_memory is not affiliated with Isar, LangChain, Gemini, or OpenAI. Names/logos are for reference only.
## Tags
isar langchain embeddings memory agents llm flutter dart
## Overview
isar_agent_memory provides a robust, explainable, and extensible memory system for agents and LLMs. It combines a universal graph (nodes, edges, metadata) with efficient vector search, pluggable embeddings, and advanced explainability.
- Universal Graph: Store facts, messages, concepts, and relations.
- Efficient Semantic Search: ANN (HNSW) for context retrieval.
- Pluggable Embeddings: Gemini, OpenAI, or custom.
- Explainability: Trace why a memory was recalled.
- LLM-Agnostic: Use with any agent, chatbot, or LLM workflow.
graph TD
A[Agent / LLM] --> B[MemoryGraph API]
B --> C[Isar Graph DB]
B --> D[ObjectBox ANN Vector DB]
C --> E[Nodes, Edges, Embeddings, Index]
D --> E
E --> F[Metadata HNSW, fast search]
- MemoryGraph: Main API.
- Isar: Stores nodes, edges, metadata.
- ObjectBox: Provides fast semantic search (HNSW).
- EmbeddingsAdapter: Interface for embedding providers.
## Embeddings: Pluggable Providers
- Use `GeminiEmbeddingsAdapter` or implement `EmbeddingsAdapter`.
- Example (Gemini):
final adapter = GeminiEmbeddingsAdapter(apiKey: '<YOUR_GEMINI_API_KEY>');
- Custom Provider (e.g., OpenAI):
class MyEmbeddingsAdapter implements EmbeddingsAdapter {
@override
String get providerName => 'my_provider';
@override
Future<List<double>> embed(String text) async {
  // Call your embedding API here and return the vector it produces.
  final vector = await callMyEmbeddingApi(text); // placeholder for your HTTP/SDK call
  return vector;
}
}
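The custom adapter then plugs into `MemoryGraph` exactly like the built-in ones; the adapter's `providerName` is presumably what surfaces later as `result.provider` in search results:

```dart
final graph = MemoryGraph(isar, embeddingsAdapter: MyEmbeddingsAdapter());
final nodeId = await graph.storeNodeWithEmbedding(content: 'Hello from a custom provider.');
```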
### Fallback to Gemini (Cloud)
Compose adapters with FallbackEmbeddingsAdapter to prefer on-device/local models and fall back to cloud (Gemini) on failure.
import 'dart:io';
import 'package:isar_agent_memory/isar_agent_memory.dart';
final local = OnDeviceEmbeddingsAdapter(modelPath: '...', vocabPath: '...');
final gemini = GeminiEmbeddingsAdapter(
apiKey: Platform.environment['GEMINI_API_KEY'] ?? '',
);
final adapter = FallbackEmbeddingsAdapter(
primary: local,
fallback: gemini,
fallbackOnEmpty: true,
);
final graph = MemoryGraph(isar, embeddingsAdapter: adapter);
### Environment Variables
- Use a `.env` file (and `flutter_dotenv`) or system environment variables for API keys.
export GEMINI_API_KEY=xxxx
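A minimal Dart sketch of wiring that up, assuming you read the key from the process environment (the `adapterFromEnv` helper is illustrative, not part of the package):

```dart
import 'dart:io';
import 'package:isar_agent_memory/isar_agent_memory.dart';

/// Builds a Gemini adapter from GEMINI_API_KEY and fails fast if it is missing.
GeminiEmbeddingsAdapter adapterFromEnv() {
  final apiKey = Platform.environment['GEMINI_API_KEY'];
  if (apiKey == null || apiKey.isEmpty) {
    throw StateError('GEMINI_API_KEY is not set');
  }
  return GeminiEmbeddingsAdapter(apiKey: apiKey);
}
```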
## Semantic Search (ANN)
- Uses ObjectBox (HNSW) by default.
final queryEmbedding = await adapter.embed('search phrase');
final results = await graph.semanticSearch(queryEmbedding, topK: 5);
## Hybrid Search
Combine vector search with full-text search (Isar filter) for better recall.
final results = await graph.hybridSearch('search phrase', topK: 5, alpha: 0.5);
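A short usage sketch that explains the top hybrid hit, assuming `hybridSearch` results expose the same `.node` shape as `semanticSearch` results (an assumption, not confirmed API):

```dart
final queryEmbedding = await adapter.embed('fox jumping over a dog');
final hits = await graph.hybridSearch('fox jumping over a dog', topK: 5, alpha: 0.5);
if (hits.isNotEmpty) {
  // Explain why the top hybrid hit was recalled.
  final explanation = await graph.explainRecall(hits.first.node.id, queryEmbedding: queryEmbedding);
  print(explanation);
}
```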
## Pluggable Vector Index Backends
- ObjectBox (Default): On-device HNSW.
Usage with default ObjectBox:
final graph = MemoryGraph(isar, embeddingsAdapter: adapter);
Usage with Custom/External ObjectBox:
final index = ObjectBoxVectorIndex.open(
directory: './obxdb',
namespace: 'default',
);
final graph = MemoryGraph(isar, embeddingsAdapter: adapter, index: index);
ObjectBox Notes:
- The `ObxVectorDoc` entity uses `@HnswIndex(dimensions: 768, ...)`.
- If you use embeddings with different dimensions (e.g., OpenAI's 1536), you must modify the entity and regenerate code (see the sketch below).
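A hedged sketch of such a modified entity using standard ObjectBox annotations (the field names are illustrative; the package's actual `ObxVectorDoc` may differ). After changing it, regenerate the binding code, e.g. with `dart run build_runner build`:

```dart
import 'package:objectbox/objectbox.dart';

@Entity()
class ObxVectorDoc {
  @Id()
  int id = 0;

  // Match the dimensionality of your embeddings provider (e.g. OpenAI's 1536).
  @HnswIndex(dimensions: 1536)
  @Property(type: PropertyType.floatVector)
  List<double>? embedding;
}
```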
## Explainability
- Every recall result can be explained via:
- Semantic Distance: How close to the query?
- Provider: Which model generated the embedding?
- Activation: Recency, frequency, importance.
- Path Tracing: Why did this memory surface in the graph?
final explanation = await graph.explainRecall(nodeId, queryEmbedding: queryEmbedding);
print(explanation);
## Extensibility
- Add new embedding providers.
- Store arbitrary metadata.
- Sync/export planned.
## Roadmap
- ✅ Pluggable `VectorIndex` + `ObjectBoxVectorIndex` default.
- ✅ `GeminiEmbeddingsAdapter` + `FallbackEmbeddingsAdapter`.
- ✅ `InMemoryVectorIndex` for tests.
- ✅ `OnDeviceEmbeddingsAdapter` (ONNX) for Android/iOS/Desktop.
- Benchmarks via GitHub Actions.
- ✅ Hybrid Retrieval (Dense + Isar Filter).
- Sync & Privacy (Encryption).
## Dependency Management & Testing
This repository uses a split-project architecture to avoid dependency conflicts between isar_generator and flutter_test.
- `isar_agent_memory`: Main project (logic + code generation).
- `isar_agent_memory_tests`: Dedicated test project (runs `flutter test`).
### Running Tests
cd ../isar_agent_memory_tests
flutter test
## Continuous Dependency Updates
Uses Dependabot for automated dependency update PRs and CodeRabbit for AI-assisted reviews. Merges to main require passing CI checks.