This project aims to recognize the American Sign Language alphabet from an image. The ability to read hand gestures expands access for the differently abled. In addition, gesture recognition finds applications in a wide range of verticals, from medicine to home automation.
The initial set of training and validation images was obtained from Kaggle. There was no significant class imbalance. The letters J and Z involve movement and are therefore not present in this data set.
The CNN model was built using Keras. Image augmentation techniques are used to make the most of our few training examples so the model generalizes better. Batch Normalization and Dropout layers are used to avoid overfitting.
Notebook for data prep and model
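For concreteness, a minimal sketch of this kind of Keras CNN is below. The layer sizes, input shape (28x28 grayscale, matching the common Kaggle sign-language format), and hyperparameters are illustrative assumptions, not the project's exact architecture:

```python
from tensorflow.keras import layers, models

NUM_CLASSES = 24  # 26 letters minus J and Z, which require movement

# Assumed architecture for illustration; the real notebook may differ.
model = models.Sequential([
    layers.Input(shape=(28, 28, 1)),  # assumed input size (grayscale)
    layers.Conv2D(32, (3, 3), activation="relu", padding="same"),
    layers.BatchNormalization(),      # normalizes activations, stabilizes training
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu", padding="same"),
    layers.BatchNormalization(),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.4),              # randomly drops units to curb overfitting
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```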
Our final model achieves 97.29% accuracy on the test data.
The Streamlit script serves a page that accepts an image and uses the model above to identify the letter.
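A hedged sketch of such a Streamlit page is below; the model file name (`asl_model.h5`), input size, and preprocessing steps are assumptions for illustration, not the project's actual script:

```python
import numpy as np
import streamlit as st
from PIL import Image
from tensorflow.keras.models import load_model

LETTERS = list("ABCDEFGHIKLMNOPQRSTUVWXY")  # 24 static letters, no J or Z

@st.cache_resource
def get_model():
    return load_model("asl_model.h5")  # assumed model file name

st.title("ASL Alphabet Recognition")
uploaded = st.file_uploader("Upload a hand-sign image", type=["png", "jpg", "jpeg"])

if uploaded is not None:
    # Assumed preprocessing: grayscale, resize to the model's input, scale to [0, 1]
    img = Image.open(uploaded).convert("L").resize((28, 28))
    st.image(img, caption="Input image")
    x = np.asarray(img, dtype="float32")[None, ..., None] / 255.0
    probs = get_model().predict(x)[0]
    st.write(f"Predicted letter: **{LETTERS[int(np.argmax(probs))]}**")
```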
Although test accuracy is high, the model does not perform as well on images taken in other conditions.
- Data augmentation: the current augmentation applies zoom, rotation, and height and width shifts. Vary brightness levels in the training data as well (see the sketch after this list).
- Use more images taken in different conditions.
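A sketch of adding brightness variation to the augmentation pipeline, assuming Keras's `ImageDataGenerator` is what drives the current zoom/rotation/shift augmentation; the parameter values are illustrative:

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator(
    rescale=1.0 / 255,
    zoom_range=0.15,              # existing: zoom
    rotation_range=15,            # existing: rotation
    width_shift_range=0.1,        # existing: width shift
    height_shift_range=0.1,       # existing: height shift
    brightness_range=(0.6, 1.4),  # new: vary brightness to mimic lighting changes
)
```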