A computer vision system that detects left and right hands to assist in recognizing American Sign Language (ASL) gestures. This project uses OpenCV and Mediapipe to identify and crop hand regions from webcam video in real time, saving them for dataset creation and further training.
✅ Real-time detection of left and right hands

✅ Automatic cropping and saving of detected hands

✅ Adjustable offset and image size for better framing

✅ Label assignment to different ASL gestures

✅ Built with Python, OpenCV, and Mediapipe
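As a rough illustration of the detection step, the sketch below uses Mediapipe's Hands solution (listed under Technologies) to find hands in a webcam frame, derive a bounding box from the landmarks, and label each hand as left or right. Window names, key bindings, and confidence thresholds here are illustrative choices, not taken from this repository.

```python
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands
cap = cv2.VideoCapture(0)  # default webcam

with mp_hands.Hands(max_num_hands=2, min_detection_confidence=0.5) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # Mediapipe expects RGB input; OpenCV captures BGR
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            for landmarks, handedness in zip(results.multi_hand_landmarks,
                                             results.multi_handedness):
                label = handedness.classification[0].label  # "Left" or "Right"
                # Bounding box derived from the 21 landmark points
                h, w, _ = frame.shape
                xs = [int(p.x * w) for p in landmarks.landmark]
                ys = [int(p.y * h) for p in landmarks.landmark]
                x1, y1, x2, y2 = min(xs), min(ys), max(xs), max(ys)
                cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
                cv2.putText(frame, label, (x1, y1 - 10),
                            cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
        cv2.imshow("Hands", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break

cap.release()
cv2.destroyAllWindows()
```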
## 📖 Project Overview

The primary goal of this system is to lay the foundational groundwork for interpreting ASL gestures from real-time video feeds.

Currently, the system is capable of:

- **Real-time Hand Detection:** Identifies hands in video streams, distinguishing between left and right hands.
- **Groundwork for Gesture Recognition:** Establishes the initial detection capabilities necessary for future sign recognition.
- **Pre-trained Model Utilization:** Uses the TensorFlow Object Detection API with transfer learning for efficient model training (a generic inference sketch follows this list).
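For context, inference against a model trained and exported with the TensorFlow Object Detection API generally follows the pattern sketched below. The model path and the class-index-to-hand mapping are assumptions for illustration; they depend on your training run and label map, and this is not code from the repository.

```python
import numpy as np
import tensorflow as tf

# Illustrative path -- the actual location depends on where the model is exported
detect_fn = tf.saved_model.load("exported_model/saved_model")

frame_rgb = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in for a webcam frame
input_tensor = tf.convert_to_tensor(frame_rgb)[tf.newaxis, ...]
detections = detect_fn(input_tensor)

boxes = detections["detection_boxes"][0].numpy()      # normalized [ymin, xmin, ymax, xmax]
scores = detections["detection_scores"][0].numpy()
classes = detections["detection_classes"][0].numpy()  # e.g. 1 = left hand, 2 = right hand (label-map dependent)
```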
## ✨ Features

- **Real-time Processing:** Designed for live video input from a webcam.
- **Hand Bounding Box Detection:** Locates left and right hands within the frame and labels each one.
- **Foundational Hand Identification:** Provides the base for integrating specific ASL sign recognition in future iterations.
- **Python & OpenCV:** Core logic in Python, with OpenCV for video capture and processing.
- **TensorFlow Integration:** Uses TensorFlow for the underlying machine learning model.
## 🛠️ Technologies Used

- Python 3.x
- OpenCV
- TensorFlow / Keras
- TensorFlow Object Detection API
- NumPy
- Mediapipe (hand tracking/landmarks)
- labelImg (annotation)
## 📦 Installation & Setup

To get this project up and running on your local machine, follow these steps:

1. **Clone the repository:**

   ```bash
   git clone https://github.com/your-username/HandSignDetectionn.git
   cd HandSignDetectionn
   ```

   (Replace `your-username` with your GitHub username.)

2. **Create a virtual environment (recommended):**

   ```bash
   python -m venv venv
   ```

3. **Activate the virtual environment:**

   - Windows:

     ```bash
     .\venv\Scripts\activate
     ```

   - macOS / Linux:

     ```bash
     source venv/bin/activate
     ```

4. **Install dependencies:** Once your virtual environment is active, install the required Python packages:

   ```bash
   pip install -r requirements.txt
   ```
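Step 4 assumes a `requirements.txt` at the repository root. If one is missing, a minimal file covering the stack listed above would look roughly like this (unpinned versions are an assumption; pin versions as needed):

```text
opencv-python
mediapipe
tensorflow
numpy
```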
5. **Download the pre-trained model.**

6. **Prepare the dataset:** Run the data collection script:

   ```bash
   python dataCollection.py
   ```

   (This script detects a hand in the webcam feed, crops the hand region, and saves the crops as labeled images for dataset creation; a sketch of that loop follows below.)
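The exact contents of `dataCollection.py` are not reproduced here, but a cropping-and-saving loop with the adjustable offset and image size described above typically looks something like the following minimal sketch. The `OFFSET`, `IMG_SIZE`, output folder, and key bindings are illustrative assumptions:

```python
import os
import time

import cv2
import mediapipe as mp
import numpy as np

# Assumed values -- tune the offset and output size for your setup
OFFSET = 20          # extra margin (px) around the detected hand
IMG_SIZE = 300       # side length of the square output image
SAVE_DIR = "Data/A"  # illustrative folder for one gesture label

os.makedirs(SAVE_DIR, exist_ok=True)
mp_hands = mp.solutions.hands
cap = cv2.VideoCapture(0)

with mp_hands.Hands(max_num_hands=1) as hands:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        canvas = None
        if results.multi_hand_landmarks:
            h, w, _ = frame.shape
            lm = results.multi_hand_landmarks[0].landmark
            xs = [int(p.x * w) for p in lm]
            ys = [int(p.y * h) for p in lm]
            # Expand the landmark bounding box by OFFSET, clamped to the frame
            x1, y1 = max(min(xs) - OFFSET, 0), max(min(ys) - OFFSET, 0)
            x2, y2 = min(max(xs) + OFFSET, w), min(max(ys) + OFFSET, h)
            crop = frame[y1:y2, x1:x2]
            if crop.size:
                # Paste the crop onto a white square canvas, keeping aspect ratio
                canvas = np.full((IMG_SIZE, IMG_SIZE, 3), 255, np.uint8)
                scale = IMG_SIZE / max(crop.shape[:2])
                resized = cv2.resize(crop, (int(crop.shape[1] * scale),
                                            int(crop.shape[0] * scale)))
                rh, rw = resized.shape[:2]
                top, left = (IMG_SIZE - rh) // 2, (IMG_SIZE - rw) // 2
                canvas[top:top + rh, left:left + rw] = resized
                cv2.imshow("Crop", canvas)
        cv2.imshow("Frame", frame)
        key = cv2.waitKey(1) & 0xFF
        if key == ord("s") and canvas is not None:
            # Timestamped filename avoids collisions between saves
            cv2.imwrite(f"{SAVE_DIR}/img_{time.time()}.jpg", canvas)
        if key == ord("q"):
            break

cap.release()
cv2.destroyAllWindows()
```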
7. **Run the detection script:**

   ```bash
   python test.py
   ```

   (This script opens your webcam feed and performs real-time hand detection, identifying left and right hands.)
## 🚀 Future Enhancements

- Expand the ASL vocabulary to recognize specific signs beyond left/right hand detection.
- Improve model robustness to variations in lighting, background, and signing style.
- Explore advanced deep learning architectures for higher accuracy in sign recognition.
- Develop a more user-friendly graphical interface (GUI).
- Integrate text-to-speech for audible interpretation of signs.
## 🤝 Contributing

Contributions are welcome! If you have suggestions or improvements, please:

1. Fork the repository.
2. Create a new branch (`git checkout -b feature/YourFeature`).
3. Make your changes.
4. Commit your changes (`git commit -m 'feat: Add new feature'`).
5. Push to the branch (`git push origin feature/YourFeature`).
6. Open a Pull Request.
## 📧 Contact

For any questions or inquiries, please reach out: [Your Name/Email]