This Android application demonstrates object detection using TensorFlow Lite. It allows users to perform real-time object detection using the device's camera or analyze static sample images. Detected objects are highlighted with bounding boxes and labeled with their class and confidence score.
- Real-time Object Detection: Utilizes the device camera to detect objects in the live feed.
- Static Image Analysis: Allows users to select sample images for object detection.
- Visual Feedback: Displays bounding boxes around detected objects and shows their labels and confidence scores.
- Built with Modern Android Technologies:
- Jetpack Compose for the user interface.
- CameraX for camera operations.
- TensorFlow Lite for on-device machine learning.
The application has two main modes:
- Real-time Detection:
- When the "Start Real-time Detection" button is pressed, the app requests camera permission if not already granted.
- It then displays a live camera preview.
- Frames from the camera are processed by an `ObjectDetector` using a TensorFlow Lite model (`1.tflite`, in this case EfficientDet-D0).
- Detected objects, their bounding boxes, and labels are overlaid on the camera preview.
- Static Image Detection:
- Users can select from predefined sample images.
- The selected image is processed by the `ObjectDetector`.
- The image is then displayed with the detection results (bounding boxes and labels) drawn on it.
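As a sketch of how the real-time path could be wired up, an analyzer like the one below converts each CameraX frame to a `TensorImage` and hands it to a shared detector. The callback parameter and the use of `ImageProxy.toBitmap()` (available in CameraX 1.3+) are assumptions, not details taken from the source:

```kotlin
import android.graphics.Bitmap
import androidx.camera.core.ImageAnalysis
import androidx.camera.core.ImageProxy
import org.tensorflow.lite.support.image.TensorImage
import org.tensorflow.lite.task.vision.detector.Detection
import org.tensorflow.lite.task.vision.detector.ObjectDetector

// Hypothetical sketch: runs each camera frame through the shared
// ObjectDetector and reports detections via a callback.
class ObjectDetectionAnalyzer(
    private val detector: ObjectDetector,
    private val onResults: (List<Detection>) -> Unit
) : ImageAnalysis.Analyzer {

    override fun analyze(imageProxy: ImageProxy) {
        // toBitmap() exists on ImageProxy from CameraX 1.3 onward;
        // older versions need a manual YUV-to-RGB conversion.
        val bitmap: Bitmap = imageProxy.toBitmap()
        val tensorImage = TensorImage.fromBitmap(bitmap)
        onResults(detector.detect(tensorImage))
        imageProxy.close() // must be closed, or the camera pipeline stalls
    }
}
```

Frame rotation handling is omitted here for brevity; a real analyzer would account for `imageProxy.imageInfo.rotationDegrees` before detection.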
The core object detection logic is handled in the `runObjectDetection` method within `MainActivity.kt`.
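A minimal sketch of what such a method could look like with the TensorFlow Lite Task Library; the option values (`setMaxResults`, `setScoreThreshold`) are illustrative assumptions, not taken from the source:

```kotlin
import android.content.Context
import android.graphics.Bitmap
import org.tensorflow.lite.support.image.TensorImage
import org.tensorflow.lite.task.vision.detector.Detection
import org.tensorflow.lite.task.vision.detector.ObjectDetector

// Hypothetical sketch: loads the model from assets, runs detection on a
// bitmap, and returns the raw results for the caller to draw.
fun runObjectDetection(context: Context, bitmap: Bitmap): List<Detection> {
    val options = ObjectDetector.ObjectDetectorOptions.builder()
        .setMaxResults(5)         // assumed cap on detections per image
        .setScoreThreshold(0.3f)  // assumed confidence cutoff
        .build()
    val detector = ObjectDetector.createFromFileAndOptions(
        context, "1.tflite", options
    )
    return detector.detect(TensorImage.fromBitmap(bitmap))
}
```

In practice the detector would be created once and reused rather than rebuilt on every call, since model loading is comparatively expensive.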
The application requests the following permissions:
- `android.permission.CAMERA`: To access the device camera for real-time detection.
- `android.permission.WRITE_EXTERNAL_STORAGE` / `android.permission.READ_EXTERNAL_STORAGE`: Potentially for accessing images or models, though primary model loading is from assets. (As seen in `AndroidManifest.xml`.)
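For reference, the corresponding manifest declarations would look roughly like this (the `maxSdkVersion` attribute is an assumption about how a modern project might scope the legacy storage permission, not taken from the source):

```xml
<uses-permission android:name="android.permission.CAMERA" />
<uses-feature android:name="android.hardware.camera" android:required="true" />
<!-- Storage permissions mostly matter on older API levels; on Android 10+
     scoped storage typically makes them unnecessary for this app. -->
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE"
    android:maxSdkVersion="28" />
```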
- `MainActivity.kt`: The main entry point of the application, handling UI, camera setup, permission requests, and object detection logic.
- `CameraScreen`: Composable function for the real-time camera view and detection overlay.
- `MainScreen`: Composable function for the static image selection and display.
- `ObjectDetectionAnalyzer`: An `ImageAnalysis.Analyzer` implementation that processes camera frames.
- `1.tflite`: The TensorFlow Lite model file (EfficientDet-D0), expected to be in the `assets` folder, used for object detection.
- Jetpack Compose: Used for building the entire user interface.
- CameraX: Used for camera interactions and image analysis.
- TensorFlow Lite Task Library (Vision): Simplifies the integration of the TFLite model for object detection.
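Assuming a Kotlin DSL build script, the key dependencies for these three technologies might be declared like this (artifact versions are illustrative assumptions, not taken from the source):

```kotlin
// app/build.gradle.kts — version numbers are illustrative only.
dependencies {
    // CameraX: preview, lifecycle binding, and image analysis
    implementation("androidx.camera:camera-camera2:1.3.4")
    implementation("androidx.camera:camera-lifecycle:1.3.4")
    implementation("androidx.camera:camera-view:1.3.4")

    // Jetpack Compose
    implementation("androidx.activity:activity-compose:1.9.0")
    implementation("androidx.compose.material3:material3:1.2.1")

    // TensorFlow Lite Task Library (Vision) for the ObjectDetector API
    implementation("org.tensorflow:tensorflow-lite-task-vision:0.4.4")
}
```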
- Clone the repository.
- Ensure you have a TensorFlow Lite model file named `1.tflite` in the `app/src/main/assets/` directory.
- Open the project in Android Studio.
- Build and run the application on an Android device or emulator.