DLOD (Deep Learning Object Detection a.k.a. Don't Let Others Decide)

This Android application demonstrates object detection using TensorFlow Lite. It allows users to perform real-time object detection using the device's camera or analyze static sample images. Detected objects are highlighted with bounding boxes and labeled with their class and confidence score.

Features

  • Real-time Object Detection: Utilizes the device camera to detect objects in the live feed.
  • Static Image Analysis: Allows users to select sample images for object detection.
  • Visual Feedback: Displays bounding boxes around detected objects and shows their labels and confidence scores.
  • Built with Modern Android Technologies:
    • Jetpack Compose for the user interface.
    • CameraX for camera operations.
    • TensorFlow Lite for on-device machine learning.

How it Works

The application has two main modes:

  1. Real-time Detection:

    • When the "Start Real-time Detection" button is pressed, the app requests camera permission if not already granted.
    • It then displays a live camera preview.
    • Frames from the camera are processed by an ObjectDetector using a TensorFlow Lite model (1.tflite, in this case EfficientDet-D0).
    • Detected objects, their bounding boxes, and labels are overlaid on the camera preview.
  2. Static Image Detection:

    • Users can select from predefined sample images.
    • The selected image is processed by the ObjectDetector.
    • The image is then displayed with the detection results (bounding boxes and labels) drawn on it.

The core object detection logic is handled in the runObjectDetection method within MainActivity.kt.
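Below is a minimal sketch of what such a detection step can look like with the TensorFlow Lite Task Library (Vision). It is not the repository's exact code: the function name, score threshold, max-results value, and drawing style are illustrative assumptions; only the model file name 1.tflite comes from the project.

```kotlin
import android.content.Context
import android.graphics.Bitmap
import android.graphics.Canvas
import android.graphics.Color
import android.graphics.Paint
import org.tensorflow.lite.support.image.TensorImage
import org.tensorflow.lite.task.vision.detector.ObjectDetector

// Illustrative sketch, not the project's runObjectDetection implementation.
fun runObjectDetectionSketch(context: Context, bitmap: Bitmap): Bitmap {
    // Load the bundled model from assets and configure the detector.
    val options = ObjectDetector.ObjectDetectorOptions.builder()
        .setMaxResults(5)          // keep only the top detections (assumed value)
        .setScoreThreshold(0.5f)   // drop low-confidence results (assumed value)
        .build()
    val detector = ObjectDetector.createFromFileAndOptions(context, "1.tflite", options)

    // Run inference on the bitmap.
    val results = detector.detect(TensorImage.fromBitmap(bitmap))

    // Draw each bounding box and label onto a mutable copy of the image.
    val output = bitmap.copy(Bitmap.Config.ARGB_8888, true)
    val canvas = Canvas(output)
    val boxPaint = Paint().apply {
        style = Paint.Style.STROKE
        strokeWidth = 4f
        color = Color.RED
    }
    val textPaint = Paint().apply {
        textSize = 36f
        color = Color.RED
    }
    for (detection in results) {
        val category = detection.categories.firstOrNull() ?: continue
        canvas.drawRect(detection.boundingBox, boxPaint)
        canvas.drawText(
            "${category.label} ${"%.2f".format(category.score)}",
            detection.boundingBox.left,
            detection.boundingBox.top - 8f,
            textPaint
        )
    }
    return output
}
```

In practice the same ObjectDetector instance can be reused across frames or images, which avoids reloading the model on every call.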

Permissions Required

The application requests the following permissions:

  • android.permission.CAMERA: To access the device camera for real-time detection.
  • android.permission.WRITE_EXTERNAL_STORAGE / android.permission.READ_EXTERNAL_STORAGE: Declared in AndroidManifest.xml; potentially used for accessing images or models, although the model itself is loaded from the assets folder.
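
For reference, a camera permission request in Compose typically looks like the following sketch, which uses the standard Activity Result API. The composable name, state handling, and Material 3 widgets are assumptions, not code taken from MainActivity.kt.

```kotlin
import android.Manifest
import android.content.pm.PackageManager
import androidx.activity.compose.rememberLauncherForActivityResult
import androidx.activity.result.contract.ActivityResultContracts
import androidx.compose.material3.Button
import androidx.compose.material3.Text
import androidx.compose.runtime.*
import androidx.compose.ui.platform.LocalContext
import androidx.core.content.ContextCompat

@Composable
fun CameraPermissionGate(onGranted: @Composable () -> Unit) {
    val context = LocalContext.current
    // Track whether the CAMERA permission has already been granted.
    var granted by remember {
        mutableStateOf(
            ContextCompat.checkSelfPermission(context, Manifest.permission.CAMERA) ==
                PackageManager.PERMISSION_GRANTED
        )
    }
    // Launcher that shows the system permission dialog and reports the result.
    val launcher = rememberLauncherForActivityResult(
        ActivityResultContracts.RequestPermission()
    ) { result -> granted = result }

    if (granted) {
        onGranted()   // e.g. show the camera preview
    } else {
        Button(onClick = { launcher.launch(Manifest.permission.CAMERA) }) {
            Text("Start Real-time Detection")
        }
    }
}
```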

Key Components

  • MainActivity.kt: The main entry point of the application, handling UI, camera setup, permission requests, and object detection logic.
    • CameraScreen: Composable function for the real-time camera view and detection overlay.
    • MainScreen: Composable function for the static image selection and display.
    • ObjectDetectionAnalyzer: An ImageAnalysis.Analyzer implementation that processes camera frames (see the sketch after this list).
  • 1.tflite: The TensorFlow Lite model file (EfficientDet-D0), expected to be in the assets folder, used for object detection.
  • Jetpack Compose: Used for building the entire user interface.
  • CameraX: Used for camera interactions and image analysis.
  • TensorFlow Lite Task Library (Vision): Simplifies the integration of the TFLite model for object detection.
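
A minimal sketch of such a frame analyzer is shown below. It assumes ImageProxy.toBitmap() (available in recent CameraX releases) and a caller-supplied callback; the class and parameter names are illustrative rather than copied from the project.

```kotlin
import android.graphics.Bitmap
import androidx.camera.core.ImageAnalysis
import androidx.camera.core.ImageProxy

// Illustrative analyzer, not the project's ObjectDetectionAnalyzer.
class ObjectDetectionAnalyzerSketch(
    private val onFrame: (Bitmap, Int) -> Unit   // bitmap + rotation in degrees
) : ImageAnalysis.Analyzer {

    override fun analyze(imageProxy: ImageProxy) {
        // Convert the camera frame to a Bitmap and hand it to the detector,
        // together with the rotation needed to display the image upright.
        val bitmap = imageProxy.toBitmap()
        onFrame(bitmap, imageProxy.imageInfo.rotationDegrees)

        // Always close the frame so CameraX can deliver the next one.
        imageProxy.close()
    }
}
```

The analyzer would typically be attached to an ImageAnalysis use case built with a keep-only-latest backpressure strategy, so that slow inference drops frames instead of queuing them.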

Setup

  1. Clone the repository.
  2. Ensure you have a TensorFlow Lite model file named 1.tflite in the app/src/main/assets/ directory.
  3. Open the project in Android Studio.
  4. Build and run the application on an Android device or emulator.
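
If you are wiring the project up yourself, the Gradle coordinates for the libraries mentioned above look roughly like this (build.gradle.kts); the version numbers are assumptions and may differ from those used in the repository.

```kotlin
dependencies {
    // CameraX for the preview and frame analysis
    implementation("androidx.camera:camera-camera2:1.3.4")
    implementation("androidx.camera:camera-lifecycle:1.3.4")
    implementation("androidx.camera:camera-view:1.3.4")

    // TensorFlow Lite Task Library (Vision) for the ObjectDetector API
    implementation("org.tensorflow:tensorflow-lite-task-vision:0.4.4")
}
```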
