Control and move the pointer, and type, using head movements and facial gestures.
- This software isn't intended for human life-critical decisions, nor for medical use.
- This software works by recognising face landmarks only (a minimal sketch of the underlying landmark API follows this list). Face landmarks don't provide facial recognition or identification.
- This software doesn't store any unique face representation.
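For context, the sketch below shows roughly what the MediaPipe Face Landmark Detection API (referenced at the end of this document) returns: landmark coordinates and blendshape scores for each frame, not an identity. This is a minimal, illustrative sketch rather than FaceCommander's own pipeline code, and the model file name and image path are placeholders.

```python
# Minimal sketch of the MediaPipe Face Landmark Detection (Tasks) API.
# Not FaceCommander's code; "face_landmarker.task" and "frame.png" are placeholders.
import mediapipe as mp
from mediapipe.tasks import python as mp_tasks
from mediapipe.tasks.python import vision

options = vision.FaceLandmarkerOptions(
    base_options=mp_tasks.BaseOptions(model_asset_path="face_landmarker.task"),
    output_face_blendshapes=True,  # blendshape scores are what facial gestures are read from
    num_faces=1,
)
landmarker = vision.FaceLandmarker.create_from_options(options)

image = mp.Image.create_from_file("frame.png")  # a single camera frame
result = landmarker.detect(image)

if result.face_landmarks:
    # Normalised landmark coordinates (typically 478 points per face).
    print(len(result.face_landmarks[0]), "landmarks detected")
    # Blendshape scores (0.0-1.0) for expressions such as smiling or blinking.
    for blendshape in result.face_blendshapes[0]:
        print(blendshape.category_name, round(blendshape.score, 3))

landmarker.close()
```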
- Download `FaceCommander-Installer.exe` from the Releases section.
- Install it.
- Run it from your Windows shortcuts or desktop.
- Install Python 3.10 (or higher) for Windows from the official Python website.
- Install Poetry if you don't have it already. You can install Poetry by running:
  `curl -sSL https://install.python-poetry.org | python3 -`
  Alternatively, on Windows (PowerShell), you can use:
  `(Invoke-WebRequest -Uri https://install.python-poetry.org -UseBasicParsing).Content | python -`
  Ensure that Poetry is available in your PATH. You can verify this by running:
  `poetry --version`
- Clone the repository if you haven't already:
  `git clone https://github.com/AceCentre/FaceCommander.git`
  `cd FaceCommander`
- Create a Virtual Environment and Install Dependencies:
  Poetry automatically handles virtual environments, so you don't need to manually create one. Simply run:
  `poetry install`
  This command will:
  - Create a virtual environment in the `.venv` directory within your project.
  - Install all dependencies listed in `pyproject.toml` and lock them in `poetry.lock`.
- Activate the Virtual Environment (if needed):
  While Poetry typically handles this automatically, you can activate the virtual environment manually if required:
  `poetry shell`
- Run the Application:
  With the virtual environment active, you can run the application directly:
  `poetry run python FaceCommander.py`
  This ensures that the Python interpreter and dependencies used are from the Poetry-managed environment (a quick check is sketched after this list).
- Adding Dependencies: To add new dependencies, use:
  `poetry add <package_name>`
- Updating Dependencies: To update all dependencies to their latest versions (within the constraints defined in `pyproject.toml`):
  `poetry update`
- Exiting the Virtual Environment: To exit the Poetry shell (virtual environment), simply type:
  `exit`
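If you want a quick check that `poetry run` is using the Poetry-managed interpreter and dependencies, a small script along these lines works. This is an illustrative sketch only; the file name `env_check.py` and the assumption that `mediapipe` is among the project's dependencies are not taken from the FaceCommander sources.

```python
# env_check.py - illustrative sketch; run with: poetry run python env_check.py
import sys

# The interpreter path should point inside the project's .venv directory.
print("Interpreter:", sys.executable)

try:
    import mediapipe  # assumed dependency, given the MediaPipe references below
    print("mediapipe version:", mediapipe.__version__)
except ImportError:
    print("Dependencies missing - run 'poetry install' first.")
```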
If you encounter any issues, refer to the Developer Guide for detailed instructions and troubleshooting tips.
| Parameter | Description |
| --- | --- |
| `camera_id` | Default camera index on your machine. |
| `tracking_vert_idxs` | Face-mesh landmark indices used as tracking points for controlling the cursor (see the MediaPipe FaceMesh model card). |
| `spd_up` | Cursor speed in the upward direction. |
| `spd_down` | Cursor speed in the downward direction. |
| `spd_left` | Cursor speed in the left direction. |
| `spd_right` | Cursor speed in the right direction. |
| `pointer_smooth` | Amount of cursor smoothing. |
| `shape_smooth` | Reduces flickering of triggered actions. |
| `tick_interval_ms` | Interval between each tick of the pipeline, in milliseconds. |
| `hold_trigger_ms` | Hold-action trigger delay, in milliseconds. |
| `rapid_fire_interval_ms` | Interval between repeated activations of an action, in milliseconds. |
| `auto_play` | Automatically begin playing when you launch the program. |
| `enable` | Enable cursor control. |
| `mouse_acceleration` | Make the cursor move faster when the head moves quickly. |
| `use_transformation_matrix` | Control the cursor using head direction (`tracking_vert_idxs` is ignored). |
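To make the table above concrete, here is an illustrative set of cursor-control settings written as a Python dict for readability. The values shown, and the exact file format FaceCommander stores its configuration in, are assumptions for illustration, not the project defaults.

```python
# Illustrative cursor-control settings (values are examples, not FaceCommander defaults).
cursor_settings = {
    "camera_id": 0,                    # index of the camera to use
    "tracking_vert_idxs": [6],         # face-mesh landmark index/indices to track (example)
    "spd_up": 15,                      # cursor speed, upward
    "spd_down": 15,                    # cursor speed, downward
    "spd_left": 15,                    # cursor speed, left
    "spd_right": 15,                   # cursor speed, right
    "pointer_smooth": 10,              # amount of cursor smoothing
    "shape_smooth": 10,                # reduces flickering of triggered actions
    "tick_interval_ms": 16,            # pipeline tick interval, in milliseconds
    "hold_trigger_ms": 500,            # hold-action trigger delay, in milliseconds
    "rapid_fire_interval_ms": 100,     # repeat interval for "rapid" triggers, in milliseconds
    "auto_play": 1,                    # begin playing automatically at launch
    "enable": 1,                       # enable cursor control
    "mouse_acceleration": 1,           # faster cursor when the head moves quickly
    "use_transformation_matrix": 0,    # head-direction control (ignores tracking_vert_idxs)
}
```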
The keybinding configuration parameters have the following structure:

`gesture_name: [device_name, action_name, threshold, trigger_type]`
| Parameter | Description |
| --- | --- |
| `gesture_name` | Face expression name (see the list of supported gestures). |
| `device_name` | `"meta"`, `"mouse"`, or `"keyboard"`. |
| `action_name` | Name of the action, e.g. `"left"` for mouse, `"ctrl"` for keyboard, `"pause"` for meta. |
| `threshold` | Action trigger threshold, ranging from 0.0 to 1.0. |
| `trigger_type` | `"single"` for a single trigger; `"hold"` for an ongoing action; `"dynamic"` for a mixture of single and hold: it first acts like single, then, after `hold_trigger_ms` milliseconds have passed, acts like hold (this is the default behaviour for mouse buttons); `"toggle"` to switch an action on and off; `"rapid"` to trigger the action every `rapid_fire_interval_ms` milliseconds. |
Blink graphics in the user interface are based on Eye icons created by Kiranshastry - Flaticon.
MediaPipe Face Landmark Detection API Task Guide
MediaPipe BlazeFace Model Card
MediaPipe FaceMesh Model Card
MediaPipe Blendshape V2 Model Card