The Sign Language to English Translator is a Python-based application designed to facilitate communication with individuals who are unable to speak or hear. This tool uses computer vision to recognize hand gestures (sign language) and translate them into on-screen text and spoken English.
- Detects and interprets common hand gestures using a webcam.
- Converts recognized gestures into text and speech.
- Uses MediaPipe for hand landmark detection.
- Implements real-time translation to bridge communication gaps.
- Programming Language: Python
- Libraries: MediaPipe, OpenCV (opencv-python), pyttsx3
- Python 3.7 or higher.
- A functional webcam for gesture detection.
- Install the required libraries:
  ```bash
  pip install mediapipe opencv-python pyttsx3
  ```
- Clone the repository or download the project files.
- Install the required Python packages:
  ```bash
  pip install -r requirements.txt
  ```
- Connect a webcam to your computer.
- Run the script:
  ```bash
  python sign_language.py
  ```
- Hand Detection: The application uses MediaPipe to detect hand landmarks in the video feed.
- Gesture Recognition: Custom conditions are applied to identify gestures like:
- Victory: Hand forming a "V" shape.
- Thumbs Up/Down: Thumb extended upward or downward.
- Other gestures like "OK", "Call Me", and "Smile".
- Translation: Recognized gestures are mapped to their respective English phrases.
- Speech Output: The text is converted to speech using pyttsx3 for auditory feedback.
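
The detection step above can be sketched as follows, assuming MediaPipe's Hands solution and OpenCV are installed (`pip install mediapipe opencv-python`); the landmark ids follow MediaPipe's standard 21-point hand model, while the loop structure is an illustrative assumption, not the project's exact code:

```python
# Sketch of the hand-detection step using MediaPipe Hands and OpenCV.
# Landmark ids follow MediaPipe's standard 21-point hand model.

WRIST, THUMB_TIP, INDEX_TIP = 0, 4, 8  # three of the 21 landmark ids

def extract_landmarks(hand):
    """Flatten one detected hand into a list of (x, y) pairs, normalized 0-1."""
    return [(lm.x, lm.y) for lm in hand.landmark]

def run_detection():
    # Heavy imports are kept local so extract_landmarks stays reusable.
    import cv2
    import mediapipe as mp

    hands = mp.solutions.hands.Hands(max_num_hands=1,
                                     min_detection_confidence=0.5)
    cap = cv2.VideoCapture(0)  # default webcam
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB; OpenCV captures frames in BGR.
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            points = extract_landmarks(results.multi_hand_landmarks[0])
            print("wrist at", points[WRIST])
        if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
            break
    cap.release()

# Call run_detection() to start; it needs a webcam and the packages above.
```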
| Gesture | Output Text | Description |
| --- | --- | --- |
| Victory | "Victory" | Hand forming a "V". |
| Thumbs Up | "Thumbs Up" | Thumb pointing upwards. |
| Thumbs Down | "Thumbs Down" | Thumb pointing downwards. |
| Smile | "Smile" | Smile gesture with hand. |
| Call Me | "Call Me" | Hand mimicking a phone shape. |
| Pain | "Pain" | Gesture indicating discomfort. |
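
The rule-based checks behind a few of these gestures might look like the sketch below. It assumes the hand arrives as 21 `(x, y)` pairs in MediaPipe's landmark order with `y` growing downward; the specific conditions are illustrative assumptions, not the project's exact rules:

```python
# Rule-based gesture recognition over MediaPipe's 21 hand landmarks,
# given as (x, y) pairs normalized to 0-1 with y growing downward.
# Tip/PIP indices follow MediaPipe's hand model; the rules are a sketch.

TIPS = {"thumb": 4, "index": 8, "middle": 12, "ring": 16, "pinky": 20}
PIPS = {"thumb": 3, "index": 6, "middle": 10, "ring": 14, "pinky": 18}

def finger_extended(landmarks, finger):
    """A finger counts as extended when its tip is above its middle joint."""
    return landmarks[TIPS[finger]][1] < landmarks[PIPS[finger]][1]

def classify(landmarks):
    """Map a set of extended fingers to a gesture name, or None."""
    up = {f for f in TIPS if finger_extended(landmarks, f)}
    if up == {"index", "middle"}:
        return "Victory"    # "V" shape: index and middle only
    if up == {"thumb"}:
        return "Thumbs Up"  # thumb alone, pointing upward
    if up == {"thumb", "pinky"}:
        return "Call Me"    # phone shape: thumb and pinky
    return None             # unrecognized gesture
```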
- Launch the application.
- Position your hand in front of the camera.
- Perform one of the supported gestures.
- The application will:
- Display the corresponding text on the screen.
- Announce the phrase using text-to-speech.
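
The translate-and-announce steps can be sketched as below. The phrase table mirrors the supported-gesture list above; the fallback in `speak()` is an assumption so the sketch degrades to console output when pyttsx3 or an audio backend is unavailable:

```python
# Map a recognized gesture to its English phrase, then voice it.
# PHRASES mirrors the supported-gesture table; speak() falls back to
# printing when pyttsx3 (or an audio backend) is not available.

PHRASES = {
    "Victory": "Victory",
    "Thumbs Up": "Thumbs Up",
    "Thumbs Down": "Thumbs Down",
    "Smile": "Smile",
    "Call Me": "Call Me",
    "Pain": "Pain",
}

def translate(gesture):
    """Return the English phrase for a gesture, or None if unsupported."""
    return PHRASES.get(gesture)

def speak(text):
    """Voice the phrase with pyttsx3, falling back to console output."""
    try:
        import pyttsx3
        engine = pyttsx3.init()
        engine.say(text)
        engine.runAndWait()
    except Exception:
        print(text)
```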
- Add support for a broader range of gestures.
- Improve gesture recognition accuracy.
- Integrate NLP to allow users to customize responses.
- Deploy the application as a web or mobile app for accessibility.
Contributions are welcome! If you'd like to contribute:
- Fork the repository.
- Create a feature branch (`git checkout -b feature-name`).
- Commit your changes (`git commit -m 'Add feature'`).
- Push to the branch (`git push origin feature-name`).
- Open a pull request.
This project is licensed under the MIT License.
We hope this project helps foster better communication and inclusivity. Feel free to reach out with any feedback or suggestions!