Emulates mouse functions using a camera and hand gestures. Built with EmguCV and Accord.NET.
- Visual Studio 2015

These dependencies can be acquired via the NuGet Package Manager:
- Accord.3.8.0
- Accord.MachineLearning.3.8.0
- Accord.Math.3.8.0
- Accord.Statistics.3.8.0
- EMGU.CV.3.3.0.2824
- ZedGraph.5.1.5
This is done through the following steps:

1. Load the necessary data:
   - A Look-Up Table (LUT) for skin color detection in the HSV color space [1].
   - A Support Vector Machine (SVM) [2] model for distinguishing hand, arm, and head contours, trained on their Hu moments.
   - A K-Nearest Neighbors (KNN) [3] model for classifying specific hand gestures (Open, Up, Down, Left, and Right), trained on their hand convexity defects.
2. Start image acquisition from the camera.
3. Apply the LUT-based skin detection to each pixel of the acquired frame. This produces a binary image containing skin-colored blobs.
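The per-pixel LUT lookup of step 3 can be sketched as follows. This is a minimal Python/NumPy illustration rather than the project's C#/EmguCV code, and the H/S "skin" range below is an assumption for demonstration, not the trained table:

```python
import numpy as np

# Boolean LUT over the H-S plane (OpenCV-style H in 0..179, S in 0..255).
# The skin region marked True here is an illustrative guess; the real
# table is learned from skin-color data.
H_BINS, S_BINS = 180, 256
lut = np.zeros((H_BINS, S_BINS), dtype=bool)
lut[0:25, 40:200] = True  # assumed skin-tone band

def skin_mask(hsv_image):
    """Map every pixel through the LUT, yielding a binary skin mask."""
    h = hsv_image[..., 0].astype(int)
    s = hsv_image[..., 1].astype(int)
    return lut[h, s]

# Two pixels: one inside the assumed skin band, one outside.
hsv = np.array([[[10, 120, 200], [90, 50, 40]]], dtype=np.uint8)
mask = skin_mask(hsv)
```

The point of the table is that classifying a pixel costs a single array lookup, which keeps the per-frame skin segmentation cheap.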
4. Find the ROI of each blob's contour. For simplicity, the hand is assumed to lie within the largest blob.
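As a rough sketch of the largest-blob step (in plain Python, whereas the project uses EmguCV's contour routines), a BFS connected-component pass can find the biggest skin blob and its bounding box:

```python
from collections import deque
import numpy as np

def largest_blob_roi(mask):
    """Label 4-connected blobs in a binary mask by BFS and return the
    bounding box (top, left, bottom, right) of the largest, or None."""
    h, w = mask.shape
    seen = np.zeros_like(mask, dtype=bool)
    best_box, best_area = None, 0
    for y in range(h):
        for x in range(w):
            if mask[y, x] and not seen[y, x]:
                q = deque([(y, x)])
                seen[y, x] = True
                ys, xs, area = [y], [x], 0
                while q:
                    cy, cx = q.popleft()
                    area += 1
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            ys.append(ny)
                            xs.append(nx)
                            q.append((ny, nx))
                if area > best_area:
                    best_box, best_area = (min(ys), min(xs), max(ys), max(xs)), area
    return best_box
```

Picking the largest blob sidesteps tracking: whatever skin region dominates the frame is treated as the hand candidate and passed to the classifier.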
5. Compute the Hu moments of the largest blob and classify it with the SVM:
   - If it is classified as a head, return to step 3.
   - If it is classified as an arm, localize the hand within it, then proceed to step 6.
   - If it is classified as a hand, proceed to step 6.
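The Hu-moment features the SVM consumes can be sketched as follows. This Python/NumPy version computes only the first two of the seven Hu moments; the project presumably uses the full set via EmguCV:

```python
import numpy as np

def hu_moments(mask):
    """First two Hu moments of a binary shape mask.
    These are translation- and (approximately) scale-invariant,
    which is what makes them usable as contour-shape features."""
    ys, xs = np.nonzero(mask)
    m00 = float(len(xs))            # raw area
    cx, cy = xs.mean(), ys.mean()   # centroid

    def mu(p, q):                   # central moment mu_pq
        return (((xs - cx) ** p) * ((ys - cy) ** q)).sum()

    def eta(p, q):                  # scale-normalized moment
        return mu(p, q) / m00 ** (1 + (p + q) / 2)

    hu1 = eta(2, 0) + eta(0, 2)
    hu2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return hu1, hu2
```

Because the moments are invariant to where the blob sits in the frame and (roughly) to its size, the SVM can separate hand, arm, and head shapes without caring about camera distance.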
6. After localizing the hand, compute its convexity defects [4] and classify them with the KNN model to distinguish the gestures shown below:
   - Fig. 2 - Open gesture classification
   - Fig. 3 - Left gesture classification
   - Fig. 4 - Right gesture classification
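The KNN gesture classification can be sketched like this. The `(defect_count, mean_defect_depth)` features and the tiny training set are illustrative assumptions, not the project's actual convexity-defect descriptors:

```python
import math

# Hypothetical training samples: one (defect count, mean defect depth)
# pair per gesture label. A real model would hold many samples per class.
TRAIN = [
    ((4, 30.0), "Open"),
    ((1, 12.0), "Left"),
    ((1, 28.0), "Right"),
    ((0, 5.0),  "Up"),
    ((2, 8.0),  "Down"),
]

def knn_classify(feature, k=1):
    """Return the majority label among the k nearest training samples."""
    nearest = sorted(TRAIN, key=lambda t: math.dist(feature, t[0]))[:k]
    votes = [label for _, label in nearest]
    return max(set(votes), key=votes.count)
```

An open hand produces several deep convexity defects (the valleys between fingers), while closed or directional poses produce fewer, which is why defect-derived features can drive a simple KNN.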
7. Trigger the corresponding mouse function programmatically:
   - If classified as Open, move the mouse cursor.
   - If classified as Left, perform a left click.
   - If classified as Right, perform a right click.
   - If classified as Up, scroll up.
   - If classified as Down, scroll down.
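The gesture-to-mouse-function step amounts to a small dispatch table. The sketch below substitutes logging stubs for the real input-injection calls, which the project would issue through the Windows API from C#:

```python
# Record of triggered actions; stands in for actual OS input events.
log = []

# One callable per gesture label produced by the KNN classifier.
ACTIONS = {
    "Open":  lambda pos: log.append(("move", pos)),
    "Left":  lambda pos: log.append(("left_click", pos)),
    "Right": lambda pos: log.append(("right_click", pos)),
    "Up":    lambda pos: log.append(("scroll_up", pos)),
    "Down":  lambda pos: log.append(("scroll_down", pos)),
}

def handle_gesture(gesture, hand_pos):
    """Dispatch a classified gesture to its mouse action."""
    ACTIONS[gesture](hand_pos)

handle_gesture("Open", (120, 80))
handle_gesture("Left", (120, 80))
```

Keeping the mapping in one table makes it easy to rebind gestures to different mouse functions without touching the classification pipeline.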
8. Repeat from step 3 until image acquisition is stopped.