The backend supports running LangGraph-GUI workflow JSON with a local LLM such as Ollama.
For more information, see the official site: LangGraph-GUI.github.io
To install the required dependencies for LangGraph and the server, run:
pip install -r requirements.txt
To run a local language model, first start Ollama in a separate terminal:
ollama serve
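Before starting the server, you can check that Ollama is up and that a model is available. This is an optional sketch, assuming Ollama's default port 11434 and using llama3 as an example model name; substitute whichever model your workflow expects.

```shell
# Confirm the Ollama API is reachable (default port 11434); lists installed models
curl http://localhost:11434/api/tags

# Pull a model for the workflows to use (llama3 is an example; pick your own)
ollama pull llama3
```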
In another terminal, start the server:
mkdir src/workspace
cd src/workspace
python ../server.py