face_detection_tflite library

Face detection and landmark inference utilities backed by MediaPipe-style TFLite models for Flutter apps.

Classes

AlignedFace
Holds an aligned face crop and metadata used for downstream landmark models.
AlignedFaceFromMat
Aligned face crop data holder for OpenCV-based processing.
AlignedRoi
Rotation-aware region of interest for cropped eye landmarks.
BoundingBox
Face bounding box with corner points in pixel coordinates.
DecodedBox
Decoded detection box and keypoints straight from the TFLite model.
DecodedRgb
RGB image payload decoded off the UI thread.
Detection
Raw detection output from the face detector containing the bounding box and keypoints.
Eye
Comprehensive eye tracking data including iris center, iris contour, and eye mesh.
EyePair
Eye tracking data for both eyes including iris and eye mesh landmarks.
Face
Outputs for a single detected face.
FaceDetection
Runs face box detection and predicts a small set of facial keypoints (eyes, nose, mouth, tragions) on the detected face(s).
FaceDetectionTfliteDart
Flutter plugin registration stub for Dart-only initialization.
FaceDetector
A complete face detection and analysis system using TensorFlow Lite models.
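A rough usage sketch follows; since this index does not list FaceDetector's members, the constructor and the detectFaces call are hypothetical placeholders, not the documented API.

```dart
// Hypothetical sketch only: FaceDetector() and detectFaces are placeholder
// names standing in for the real members, which are not shown in this index.
import 'dart:typed_data';
import 'package:face_detection_tflite/face_detection_tflite.dart'; // assumed import path

Future<void> analyze(Uint8List imageBytes) async {
  final detector = FaceDetector();                    // construction details assumed
  final List<Face> faces =
      await detector.detectFaces(imageBytes);         // hypothetical method name
  for (final face in faces) {
    // Each Face carries the outputs for one detected face
    // (bounding box, keypoints, and, depending on FaceDetectionMode,
    // mesh and eye data).
  }
}
```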
FaceLandmark
Predicts the full 468-point face mesh (x, y, z per point) for an aligned face crop. Coordinates are normalized before later mapping back to image space.
FaceLandmarks
Facial landmark points with convenient named access.
FaceMesh
A 468-point face mesh with optional depth information.
ImageTensor
Image tensor plus padding metadata used to undo letterboxing.
ImageUtils
Utility functions for image preprocessing and transformations using OpenCV.
IrisLandmark
Estimates dense iris keypoints within cropped eye regions and lets callers derive a robust iris center (with fallback if inference fails).
IsolateWorker
A long-lived background isolate for image processing operations.
Mat
OpenCV image matrix used by the Mat-based image processing helpers.
OutputTensorInfo
Holds metadata for an output tensor (shape plus its writable buffer).
PerformanceConfig
Configuration for TensorFlow Lite interpreter performance.
Point
A point with x, y, and optional z coordinates.
RectF
Axis-aligned rectangle with normalized coordinates.

Enums

FaceDetectionMode
Controls which detection features to compute.
FaceDetectionModel
Specifies which face detection model variant to use.
FaceLandmarkType
Identifies specific facial landmarks returned by face detection.
PerformanceMode
Performance modes for TensorFlow Lite delegate selection.

Constants

eyeLandmarkConnections → const List<List<int>>
Connections between eye contour landmarks for rendering the visible eyeball outline.
IMREAD_COLOR → const int
Flag for imdecode that decodes the buffer as a 3-channel BGR color image.
kMaxEyeLandmark → const int
Number of eye contour points that form the visible eyeball outline.
kMeshPoints → const int
The expected number of landmark points in a complete face mesh.
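As a small illustration, the sketch below uses these constants to pair eye contour points into drawable segments and to sanity-check a mesh; the eyePoints and meshPoints inputs are placeholders for landmark results produced elsewhere.

```dart
import 'package:face_detection_tflite/face_detection_tflite.dart'; // assumed import path

/// Pairs eye contour landmarks into line segments for drawing the visible
/// eyeball outline. `eyePoints` stands in for eye landmarks from the models.
List<List<Point>> eyeOutlineSegments(List<Point> eyePoints) => [
      for (final pair in eyeLandmarkConnections)
        [eyePoints[pair[0]], eyePoints[pair[1]]],
    ];

/// Quick shape check on a full face mesh result.
bool isCompleteMesh(List<Point> meshPoints) => meshPoints.length == kMeshPoints;
```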

Functions

allocTensorShape(List<int> shape) → Object
Allocates a nested list structure matching the given tensor shape.
collectOutputTensorInfo(Interpreter itp) → Map<int, OutputTensorInfo>
Collects output tensor shapes (and their backing buffers) for an interpreter.
convertImageToTensor(Image src, {required int outW, required int outH}) → ImageTensor
Converts an RGB image to a normalized tensor with letterboxing.
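A minimal preprocessing sketch, assuming Image here is package:image's Image type and that the import path matches the library name; the 128×128 input size is illustrative.

```dart
import 'dart:io';
import 'package:image/image.dart' as img;
import 'package:face_detection_tflite/face_detection_tflite.dart'; // assumed import path

ImageTensor preprocess(String path) {
  final src = img.decodeImage(File(path).readAsBytesSync())!;
  // Letterbox-resize to the model input size and normalize pixel values;
  // the returned ImageTensor also carries the padding needed to undo the
  // letterboxing when mapping results back to image coordinates.
  return convertImageToTensor(src, outW: 128, outH: 128);
}
```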
convertImageToTensorFromMat(Mat src, {required int outW, required int outH, Float32List? buffer}) → ImageTensor
Converts a cv.Mat image to a normalized tensor with letterboxing.
createNHWCTensor4D(int height, int width) → List<List<List<List<double>>>>
Creates a 4D tensor in NHWC format (batch=1, height, width, channels=3).
cropFromRoi(Image src, RectF roi) → Future<Image>
Crops a region of interest from an image using normalized coordinates.
cropFromRoiMat(Mat src, RectF roi) → Mat
Crops a rectangular region from a cv.Mat using normalized coordinates.
cropFromRoiWithWorker(Image src, RectF roi, IsolateWorker? worker) → Future<Image>
Crops a region from an image using a worker if provided.
decodeImageWithWorker(Uint8List bytes, IsolateWorker? worker) → Future<DecodedRgb>
Decodes an image using a worker if provided, otherwise spawns a new isolate.
extractAlignedSquare(Image src, double cx, double cy, double size, double theta) → Future<Image>
Extracts a rotated square region from an image with bilinear sampling.
extractAlignedSquareFromMat(Mat src, double cx, double cy, double size, double theta) → Mat?
Extracts a rotated square region from a cv.Mat using OpenCV's warpAffine.
extractAlignedSquareWithWorker(Image src, double cx, double cy, double size, double theta, IsolateWorker? worker) → Future<Image>
Extracts an aligned square from an image using a worker if provided.
faceDetectionToRoi(RectF boundingBox, {double expandFraction = 0.6}) → RectF
Converts a face detection bounding box to a square region of interest (ROI).
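For instance, a sketch of expanding a detected face box into a crop for the landmark model, assuming Image is package:image's type and that faceBox is a normalized RectF obtained from a Detection (its exact accessor is not shown in this index).

```dart
import 'package:image/image.dart' as img;
import 'package:face_detection_tflite/face_detection_tflite.dart'; // assumed import path

Future<img.Image> cropFaceForLandmarks(img.Image src, RectF faceBox) {
  // Expand the tight detection box into a square ROI with extra margin,
  // then crop that region out of the original image.
  final roi = faceDetectionToRoi(faceBox, expandFraction: 0.6);
  return cropFromRoi(src, roi);
}
```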
fillNHWC4D(Float32List flat, List<List<List<List<double>>>> input4dCache, int inH, int inW) → void
Fills a 4D NHWC tensor cache from a flat Float32List.
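Together with createNHWCTensor4D, this can build a reusable model input; a minimal sketch, where the 192×192 size is illustrative and the flat pixel buffer would normally come from the image-to-tensor helpers above.

```dart
import 'dart:typed_data';
import 'package:face_detection_tflite/face_detection_tflite.dart'; // assumed import path

List<List<List<List<double>>>> buildInput(Float32List flatPixels,
    {int inH = 192, int inW = 192}) {
  // Allocate a [1][H][W][3] nested list once and reuse it across frames.
  final input4d = createNHWCTensor4D(inH, inW);
  // Copy the flat, normalized pixel data into the nested NHWC structure.
  fillNHWC4D(flatPixels, input4d, inH, inW);
  return input4d;
}
```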
flattenDynamicTensor(Object? out) → Float32List
Flattens a nested numeric tensor (dynamic output) into a Float32List.
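Combined with allocTensorShape, this is handy around an interpreter call; a sketch assuming tflite_flutter's Interpreter and an illustrative [1, 468, 3] output shape.

```dart
import 'dart:typed_data';
import 'package:tflite_flutter/tflite_flutter.dart' as tfl;
import 'package:face_detection_tflite/face_detection_tflite.dart'; // assumed import path

Float32List runAndFlatten(tfl.Interpreter interpreter, Object input4d) {
  // Allocate a nested output buffer matching the (illustrative) output shape.
  final out = allocTensorShape([1, 468, 3]);
  interpreter.run(input4d, out);
  // Flatten the nested result into a Float32List for easy indexing.
  return flattenDynamicTensor(out);
}
```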
imageToTensorWithWorker(Image src, {required int outW, required int outH, IsolateWorker? worker}) → Future<ImageTensor>
Converts an image to a tensor using a worker if provided.
imdecode(Uint8List buf, int flags, {Mat? dst}) → Mat
Reads an image from a buffer in memory and returns it as a Mat. If the buffer is too short or contains invalid data, an empty matrix is returned. buf is the input array of bytes; flags takes the same values as cv::imread (see cv::ImreadModes).
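For example, decoding raw bytes into a Mat for the Mat-based helpers (the file path is illustrative).

```dart
import 'dart:io';
import 'package:face_detection_tflite/face_detection_tflite.dart'; // assumed import path

Mat decodeToMat(String path) {
  final bytes = File(path).readAsBytesSync();
  // Decode as a 3-channel color image; an empty Mat signals invalid input.
  return imdecode(bytes, IMREAD_COLOR);
}
```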
testClip(double v, double lo, double hi) → double
Test-only access to _clip for verifying value clamping behavior.
testCollectOutputTensorInfo(Interpreter itp) → Map<int, OutputTensorInfo>
Test-only access to collectOutputTensorInfo for verifying output tensor collection.
testDecodeImageOffUi(Uint8List bytes) → Future<DecodedRgb>
Test-only access to off-UI-thread image decoding for verifying decode behavior.
testDetectionLetterboxRemoval(List<Detection> dets, List<double> padding) → List<Detection>
Test-only access to letterbox padding removal for verifying detection coordinate mapping.
testImageFromDecodedRgb(DecodedRgb d) → Image
Test-only access to building an Image from a DecodedRgb payload.
testNms(List<Detection> dets, double iouThresh, double scoreThresh, {bool weighted = true}) → List<Detection>
Test-only access to (weighted) non-maximum suppression for verifying detection filtering.
testSigmoidClipped(double x, {double limit = _rawScoreLimit}) → double
Test-only access to the clipped sigmoid used to convert raw detection scores.
testUnpackLandmarks(Float32List flat, int inW, int inH, List<double> padding, {bool clamp = true}) → List<List<double>>
Test-only access to unpacking landmark coordinates from a flat model output.