This project is currently a work in progress. It's actively being developed, and features may change frequently.
A learning application for trying out AI-related features and building a simple REST API.
- Docker and Docker Compose 24+
⚠️ Warning Running AI models locally is a resource-intensive task, and your machine could run out of resources. Make sure you have at least 8 GB of free RAM and 15 GB of free storage space (for LLMs), and that your CPU isn't busy with other heavy tasks.
🛑 Important This is not intended for production. Run this on your machine only.
- Create the environment variables file and set all the required values
$ cp .env.example .env
- Start the application
$ docker-compose up
| Service | Port |
|---|---|
| localai | 8080 |
| app | 4321 |
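Once the stack is up, the `localai` service speaks an OpenAI-compatible REST API, so it can be smoke-tested with curl. This is a sketch: the model name below is a placeholder for whichever model you have configured.

```shell
# Hypothetical chat request against the local LocalAI instance.
# "MODEL_NAME" is a placeholder; substitute a model you have configured.
PAYLOAD='{"model": "MODEL_NAME", "messages": [{"role": "user", "content": "Hello!"}]}'

# Validate the payload locally before sending it.
echo "$PAYLOAD" | python3 -m json.tool > /dev/null && echo "payload ok"

# With the containers running, send the request:
# curl http://localhost:8080/v1/chat/completions \
#   -H "Content-Type: application/json" \
#   -d "$PAYLOAD"
```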
- Initial setup for full-stack application
- REST API for interacting with AI models running locally
- Serverless database integration
- Passwordless authentication using GitHub (social login)
- Add UI framework(s) for client-side interaction
- Add UI library
- Logic for public and private pages
- Develop UIs and page structure
- Private AI chat page
- CRUD operations for a resource in the app and in the REST API
- Conversation history - the AI takes previous questions into consideration
- Stream responses - display each chunk of the assistant's answer while it is being generated
- Generate image with prompt
- Passwordless authentication using passkeys
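For the streaming roadmap item, one possible starting point is that OpenAI-compatible chat endpoints (including LocalAI's) accept a `stream` flag, so chunks can be consumed as they are generated. Again a sketch with a placeholder model name:

```shell
# Hypothetical streaming request; "MODEL_NAME" is a placeholder.
STREAM_PAYLOAD='{"model": "MODEL_NAME", "stream": true, "messages": [{"role": "user", "content": "Tell me a joke"}]}'

# Validate the payload locally before sending it.
echo "$STREAM_PAYLOAD" | python3 -m json.tool > /dev/null && echo "stream payload ok"

# With the stack running, -N disables curl buffering so chunks print as they arrive:
# curl -N http://localhost:8080/v1/chat/completions \
#   -H "Content-Type: application/json" \
#   -d "$STREAM_PAYLOAD"
```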