# ai_sensitive_content_classifier

A Dart/Flutter package that classifies text and images for sensitive content using the Google Generative AI (Gemini) API. Supports input as plain text, `Uint8List`, `ui.Image`, or a Flutter `ImageProvider`.
## Features

- Classifies content (text, images, or raw bytes) into categories such as `gore`, `violence`, `nudity`, `racism`, `hateSpeech`, `offensive`, or `notSensitive`
- Powered by Google Gemini (`gemini-2.0-flash-lite` by default)
- Analyzes Flutter images directly: supports `ui.Image`, `Uint8List`, or `ImageProvider` (e.g., `AssetImage`, `NetworkImage`)
- Easily customizable model config (temperature, topP, maxOutputTokens, etc.)
- JSON schema validation for safer, structured AI response handling
- No content filtering: all Gemini safety filters are disabled to ensure full sensitivity analysis (see the sketch after this list)
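Disabling Gemini's safety filters is what allows the model to see and classify sensitive input instead of having the API block it. For context, a minimal sketch of what that configuration looks like with the `google_generative_ai` package (an illustration of the underlying setup, not this package's internal code):

```dart
import 'package:google_generative_ai/google_generative_ai.dart';

// Illustration only: turning off Gemini's safety filters so that
// sensitive input is classified rather than blocked by the API.
GenerativeModel buildUnfilteredModel(String apiKey) {
  return GenerativeModel(
    model: 'gemini-2.0-flash-lite',
    apiKey: apiKey,
    safetySettings: [
      // HarmBlockThreshold.none disables blocking per harm category.
      SafetySetting(HarmCategory.harassment, HarmBlockThreshold.none),
      SafetySetting(HarmCategory.hateSpeech, HarmBlockThreshold.none),
      SafetySetting(HarmCategory.sexuallyExplicit, HarmBlockThreshold.none),
      SafetySetting(HarmCategory.dangerousContent, HarmBlockThreshold.none),
    ],
  );
}
```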
## Installation

Add this to your `pubspec.yaml`:

```yaml
dependencies:
  ai_sensitive_content_classifier: ^0.1.3
```

Then run:

```sh
flutter pub get
```
## Usage

```dart
import 'package:ai_sensitive_content_classifier/ai_sensitive_content_classifier.dart';

final classifier = AiSensitiveContentDetector(
  apiKey: 'your-gemini-api-key',
);

final result = await classifier.analyseIsSensitiveContent(
  text: 'This is a violent message',
);

print(result?.isSensitive); // true/false
print(result?.textClassification); // e.g., "violence"
```
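Since the package also accepts image input, classifying raw bytes might look like the sketch below. The `imageBytes` parameter name is an assumption for illustration; check the API reference for the exact signature:

```dart
import 'dart:typed_data';

import 'package:flutter/services.dart' show rootBundle;

Future<void> classifyImage(AiSensitiveContentDetector classifier) async {
  // Load an asset as raw bytes; any Uint8List source works.
  final ByteData data = await rootBundle.load('assets/photo.jpg');
  final Uint8List bytes = data.buffer.asUint8List();

  // `imageBytes` is a hypothetical parameter name used for illustration.
  final result = await classifier.analyseIsSensitiveContent(
    imageBytes: bytes,
  );
  print(result?.isSensitive);
}
```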
## Configuration

```dart
AiSensitiveContentDetector({
  required String apiKey,
  String model = 'gemini-2.0-flash-lite',
  double temperature = 0.1,
  double topP = 0.95,
  int topK = 64,
  int maxOutputTokens = 8192,
})
```
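All parameters except `apiKey` have defaults, so you only override what you need. For example, to allow slightly more varied responses and cap the output size:

```dart
final classifier = AiSensitiveContentDetector(
  apiKey: 'your-gemini-api-key',
  temperature: 0.4, // higher = less deterministic classifications
  maxOutputTokens: 2048, // a short structured response is all that's needed
);
```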
## Libraries

- `ai_sensitive_content_classifier`: A classifier for detecting sensitive content in text or images using Google's Gemini API.
- `interfaces/generative_expense_ai_datasource`
- `models/ai_classification_response`
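The `models/ai_classification_response` library presumably defines the result type returned by `analyseIsSensitiveContent`. Based on the fields used in the usage example above, a sketch of its likely shape (field and constructor names here are assumptions):

```dart
// Sketch only: inferred from the usage example, not the package source.
class AiClassificationResponse {
  final bool isSensitive;
  final String textClassification; // e.g., "violence", "notSensitive"

  const AiClassificationResponse({
    required this.isSensitive,
    required this.textClassification,
  });

  // Assumed JSON shape, matching the schema-validated Gemini output.
  factory AiClassificationResponse.fromJson(Map<String, dynamic> json) {
    return AiClassificationResponse(
      isSensitive: json['isSensitive'] as bool,
      textClassification: json['textClassification'] as String,
    );
  }
}
```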