totaUI is a beautiful and interactive web interface for local large language models (LLMs). It lets users chat with their chosen model and see text generated in real time.
- Real-time Streaming: Receive responses word by word as they are generated.
- Model Selector: Choose from multiple available models.
Before you begin, ensure you have the following installed:
- Python 3.6+
- Pip (Python package manager)
Follow these steps to set up the project locally:
- Clone the Repository

  ```bash
  git clone https://github.com/yourusername/totaUI.git
  cd totaUI
  ```
- Set Up the Backend

  Navigate to the backend directory and install the required packages:

  ```bash
  cd backend
  pip install flask flask-cors requests
  ```
- Set Up the Frontend

  Navigate to the frontend directory. For basic usage, you might not need any additional setup.
- Run the Application

  You can use the provided `run.sh` script to start both the backend and frontend:

  ```bash
  chmod +x run.sh
  ./run.sh
  ```

  Alternatively, you can start them manually:
  - In one terminal, run the backend:

    ```bash
    cd backend
    python app.py
    ```
  - In another terminal, run the frontend:

    ```bash
    cd frontend
    python -m http.server 8000
    ```
- Access the Web Interface

  Open your web browser and go to `http://localhost:8000` to interact with the LLM!
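To make the moving parts concrete, here is a minimal sketch of what a streaming Flask backend along the lines of `app.py` could look like. The `/chat` route, the port, and the word-by-word echo generator are illustrative assumptions for this sketch, not totaUI's actual code; a real backend would stream tokens from the local model instead.

```python
# Minimal streaming-backend sketch. All names here (the /chat route,
# the port, the echo generator) are hypothetical placeholders.
from flask import Flask, Response, request, stream_with_context

app = Flask(__name__)

def generate_tokens(prompt):
    # Placeholder generator: a real backend would yield tokens
    # from the local model as they are produced.
    for word in f"Echo: {prompt}".split():
        yield word + " "

@app.route("/chat", methods=["POST"])
def chat():
    prompt = request.get_json().get("message", "")
    # Stream the reply word by word so the frontend can render
    # tokens as they arrive, matching the real-time streaming feature.
    return Response(stream_with_context(generate_tokens(prompt)),
                    mimetype="text/plain")

if __name__ == "__main__":
    app.run(port=5000)
```

The frontend can then read the response body incrementally instead of waiting for the full reply.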
By default, totaUI is set up to work with a general local model. To use your own model, edit the configuration file in the backend so it points to your preferred model files or API; any locally installed model or external API can be integrated this way.
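As an illustration, such a backend configuration might be a small Python module along these lines. Every name below (the file path, the URL, the model identifier, the option keys) is a placeholder assumption, not the actual config shipped with totaUI.

```python
# Hypothetical backend config (e.g. backend/config.py); the real file
# and its setting names may differ.

# Base URL of the local model server or external API to query.
MODEL_API_URL = "http://localhost:11434/api/generate"

# Identifier of the model to load or request.
MODEL_NAME = "llama3"

# Generation settings passed through to the model.
GENERATION_OPTIONS = {
    "temperature": 0.7,
    "max_tokens": 512,
}
```

Pointing `MODEL_API_URL` at a different server, or swapping `MODEL_NAME`, is then enough to switch models without touching the rest of the backend.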
Once the application is running, you can:
- Select your preferred model from the dropdown in the top right.
- Type your messages in the input box and hit "Send".
Contributions are welcome! If you would like to contribute, please fork the repository and submit a pull request.