Create LLM-enabled FastAPI applications without build configuration.
FastAPI Gen works on macOS and Linux.
If something doesn’t work, please file an issue.
```bash
pip3 install fastapi-gen
fastapi-gen my_app
cd my_app
make start-dev
```

or

```bash
pipx run fastapi-gen my_app
cd my_app
make start-dev
```
If you've previously installed `fastapi-gen` globally via `pip3 install fastapi-gen`, we recommend you reinstall the package using `pip3 install --upgrade --force-reinstall fastapi-gen` or `pipx upgrade fastapi-gen` to ensure that you use the latest version.
Then open http://localhost:8000/docs to see your app's OpenAPI documentation.
Available templates:
- Default - a basic template with GET/POST examples.
- NLP - a natural language processing template with examples of how to use local Hugging Face models for summarization, named-entity recognition, and text generation with an LLM.
- Langchain - a template with examples of how to use LangChain with local Hugging Face models (LLMs) for text generation and question answering.
- Llama - a template with examples of how to use llama.cpp and llama-cpp-python with a local Llama 2 model for question answering.
Important notes:
- The Langchain template requires capable hardware to run and will automatically download the required models, so be patient.
- The Llama template will download a model of around 4GB from Hugging Face and requires more than 4GB of RAM.
Each template includes not only code, but also tests.
You don't need to install or configure dependencies like FastAPI or pytest.
They are preconfigured and hidden so that you can focus on the code.
Create a project, and you’re good to go.
You'll need Python 3.7 or a later version on your local development machine. We recommend using the latest stable release. You can use pyenv (macOS/Linux) to switch Python versions between different projects.
```bash
pip3 install fastapi-gen
fastapi-gen my_app
```

or
```bash
pip3 install fastapi-gen
fastapi-gen my_app --template hello_world
```
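To give a sense of what a generated project contains, here is a minimal sketch of the kind of GET/POST endpoints the default template demonstrates. The route and model names below are illustrative, not the template's actual code.

```python
# Illustrative sketch only; the generated template's routes and models will differ.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()


class Item(BaseModel):
    name: str
    price: float


@app.get("/items/{item_id}")
async def read_item(item_id: int):
    # Path parameters are validated and converted by FastAPI
    return {"item_id": item_id}


@app.post("/items")
async def create_item(item: Item):
    # The request body is parsed and validated against the Item model
    return {"name": item.name, "price": item.price}
```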
```bash
pip3 install fastapi-gen
fastapi-gen my_app --template nlp
```
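The NLP template builds endpoints on top of local Hugging Face pipelines. Below is a hypothetical sketch of a summarization endpoint; the model name (`sshleifer/distilbart-cnn-12-6`) and the `/summarize` route are assumptions for illustration, not necessarily what the template ships with.

```python
# Hypothetical summarization endpoint using a local Hugging Face pipeline.
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import pipeline

app = FastAPI()

# The model is downloaded on first run and cached locally.
summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")


class SummarizeRequest(BaseModel):
    text: str


@app.post("/summarize")
def summarize(req: SummarizeRequest):
    result = summarizer(req.text, max_length=128, min_length=16, do_sample=False)
    return {"summary": result[0]["summary_text"]}
```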
```bash
pip3 install fastapi-gen
fastapi-gen my_app --template Langchain
```
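The Langchain template wires a local Hugging Face model into LangChain for generation. Here is a rough sketch assuming the `langchain-community` package is installed; exact imports vary between LangChain versions, and the model (`gpt2`) and `/generate` route are placeholders.

```python
# Hypothetical text-generation endpoint backed by LangChain's HuggingFacePipeline wrapper.
from fastapi import FastAPI
from pydantic import BaseModel
from langchain_community.llms import HuggingFacePipeline

app = FastAPI()

# Loads a small local model through a transformers pipeline (downloaded on first run).
llm = HuggingFacePipeline.from_model_id(
    model_id="gpt2",
    task="text-generation",
    pipeline_kwargs={"max_new_tokens": 64},
)


class GenerateRequest(BaseModel):
    prompt: str


@app.post("/generate")
def generate(req: GenerateRequest):
    # invoke() returns the generated text as a plain string
    return {"text": llm.invoke(req.prompt)}
```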
```bash
pip3 install fastapi-gen
fastapi-gen my_app --template llama
```
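The Llama template serves a local Llama 2 model through llama-cpp-python. Below is a hypothetical sketch of a question-answering endpoint; the GGUF model path and the `/ask` route are assumptions, so point `model_path` at whatever model file the template actually downloads.

```python
# Hypothetical question-answering endpoint backed by llama-cpp-python.
from fastapi import FastAPI
from pydantic import BaseModel
from llama_cpp import Llama

app = FastAPI()

# Path to a locally downloaded Llama 2 model in GGUF format (placeholder).
llm = Llama(model_path="./models/llama-2-7b-chat.Q4_K_M.gguf", n_ctx=2048)


class Question(BaseModel):
    question: str


@app.post("/ask")
def ask(q: Question):
    output = llm(f"Q: {q.question} A:", max_tokens=256, stop=["Q:"])
    return {"answer": output["choices"][0]["text"].strip()}
```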
Inside the newly created project, you can run some built-in commands:
`make start-dev`

Runs the app in development mode.
Open http://localhost:8000/docs to view the OpenAPI documentation in the browser.
The page will automatically reload if you make changes to the code.
`make test`

Runs tests.
By default, it runs tests related to files changed since the last commit.
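Because each template ships with tests, you can also write your own against the generated app with FastAPI's TestClient. Here is a minimal sketch, assuming the app object lives in `main.py` and exposes an `/items/{item_id}` route like the example above.

```python
# Minimal pytest-style test sketch; module and route names are assumptions.
from fastapi.testclient import TestClient

from main import app  # assumption: the generated app is defined in main.py

client = TestClient(app)


def test_read_item():
    response = client.get("/items/1")
    assert response.status_code == 200
    assert response.json() == {"item_id": 1}
```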
`fastapi-gen` is distributed under the terms of the MIT license.