This Home Assistant custom integration lets you use any OpenAI-compatible API (OpenAI, GroqCloud, others coming ...) to compute speech-to-text in the cloud, reducing the workload on your Home Assistant server.
- OpenAI
- GroqCloud
- others coming ...
**OpenAI models**

- `whisper-1`: at the moment this is the only model available; despite the name, it is the `whisper-large-v2` model.

**GroqCloud models**

Currently all GroqCloud Whisper models are free up to 28800 audio seconds per day!

- `whisper-large-v3`
- `distil-whisper-large-v3-en`: an optimized version of `whisper-large-v3` for the English language only.
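As an illustrative sketch, the providers and models above can be laid out as a small mapping. The endpoint URLs follow each provider's documented OpenAI-compatible API layout; the mapping itself is our own illustration, not taken from the integration's source:

```python
# Illustrative mapping of the providers and models listed above.
# The endpoint URLs follow each provider's OpenAI-compatible API layout.
PROVIDERS = {
    "OpenAI": {
        "endpoint": "https://api.openai.com/v1/audio/transcriptions",
        "models": ["whisper-1"],  # despite the name, whisper-large-v2
    },
    "GroqCloud": {
        "endpoint": "https://api.groq.com/openai/v1/audio/transcriptions",
        "models": ["whisper-large-v3", "distil-whisper-large-v3-en"],
    },
}

# GroqCloud's free tier: 28800 audio seconds per day, i.e. 8 hours of audio.
GROQ_FREE_AUDIO_SECONDS_PER_DAY = 28_800
print(GROQ_FREE_AUDIO_SECONDS_PER_DAY / 3600)  # → 8.0
```

In other words, the free GroqCloud tier covers up to 8 hours of transcribed audio per day.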
Before configuring the integration you must first install the custom component, either through HACS or manually.
**HACS**

- Add ➕ this repository to your HACS repositories
- Install 💻 the **OpenAI Whisper Cloud** integration
- Restart 🔁 Home Assistant
**Manual**

- Download this repository
- Copy everything inside the `custom_components` folder into your Home Assistant's `custom_components` folder
- Restart Home Assistant
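The manual copy step can be sketched as follows. Every path here is a placeholder, and the integration folder name (`openai_whisper_cloud`) is hypothetical; in practice, substitute the directory where you downloaded this repository and your real Home Assistant configuration directory:

```shell
set -eu

# Placeholder paths for illustration: temporary directories stand in for
# the downloaded repository and the Home Assistant config directory.
REPO_DIR="$(mktemp -d)"
HA_CONFIG="$(mktemp -d)"

# Simulate the downloaded repository layout (folder name is hypothetical).
mkdir -p "$REPO_DIR/custom_components/openai_whisper_cloud"

# The actual manual-install step: copy everything inside the repository's
# custom_components folder into Home Assistant's custom_components folder.
mkdir -p "$HA_CONFIG/custom_components"
cp -r "$REPO_DIR/custom_components/." "$HA_CONFIG/custom_components/"

ls "$HA_CONFIG/custom_components"
```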
These are the parameters that you can configure:
- `api_key`: (Required) your OpenAI or GroqCloud API key
- `model`: (Required) check the models available from your chosen API
- `temperature`: (Optional) sampling temperature between 0 and 1. Default `0`
- `prompt`: (Optional) can be used to improve speech recognition of specific words or even names. Default `""`. Provide a list of words or names separated by commas, e.g. `"open, close, Chat GPT-3, DALL·E"`.
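To see how these parameters map onto the underlying API, here is a sketch of the form fields a client would send to an OpenAI-compatible `/audio/transcriptions` endpoint. The helper name and the placeholder key are our own illustration, not part of the integration; the request is only built here, never sent, and the audio itself would travel as a separate multipart `file` field:

```python
def build_transcription_request(api_key, model, temperature=0, prompt=""):
    """Build (headers, form data) for an OpenAI-compatible transcription call.

    Illustrative helper, not part of the integration itself. The audio file
    would be attached separately as the multipart 'file' field.
    """
    headers = {"Authorization": f"Bearer {api_key}"}
    data = {
        "model": model,               # required, e.g. "whisper-1"
        "temperature": temperature,   # optional sampling temperature, 0..1
    }
    if prompt:
        data["prompt"] = prompt       # optional comma-separated hint words
    return headers, data

headers, data = build_transcription_request(
    api_key="sk-...",                 # placeholder key
    model="whisper-1",
    prompt="open, close, Chat GPT-3, DALL·E",
)
print(data["model"])  # → whisper-1
```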
Now you can set the integration up through your Home Assistant Dashboard (YAML configuration is not supported).