

OllaLab-Lean

Accelerating Local LLM-based Research and Development


OllaLab-Lean is a lean stack designed to help both novice and experienced developers rapidly set up and begin working on LLM-based projects. This is achievable via simplified environment configuration and a cohesive set of tools for Research and Development (R&D). The project includes several key components.

  • Pre-made prompt templates and applications to accelerate research and development.
  • Ollama for managing local open-weight Large Language Models (LLMs).
  • LangChain for orchestrating LLM pipelines, allowing users to seamlessly connect, manage, and optimize their workflows (see the sketch after this list).
  • Streamlit server to locally host dynamic LLM-based web applications.
  • Jupyter Lab server as the integrated development environment (IDE), providing users with an interactive space to write, test, and iterate on code efficiently.
  • Neo4j vector database supporting retrieval-augmented generation (RAG) tasks.
  • Data analysis, AI, and ML tools such as DuckDB, AutoGluon, AutoViz, and Gensim.
  • Monitoring and logging tools such as Elasticsearch, Kibana, Grafana, and Prometheus.
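
The sketch below shows how the Ollama and LangChain pieces fit together. It is a minimal illustration assuming the langchain-ollama package and an Ollama instance at its default endpoint; it is not code shipped with this repository.

# Minimal sketch: pairing LangChain with a local Ollama model.
# Assumes the langchain-ollama package is installed and Ollama is
# serving at http://localhost:11434 with llama3.1:8b already pulled.
from langchain_core.prompts import ChatPromptTemplate
from langchain_ollama import ChatOllama

llm = ChatOllama(model="llama3.1:8b", base_url="http://localhost:11434")
prompt = ChatPromptTemplate.from_template("Summarize in one sentence: {text}")
chain = prompt | llm  # LCEL pipeline: prompt -> model
print(chain.invoke({"text": "OllaLab-Lean bundles local LLM tooling."}).content)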


Main Components of OllaLab Lean

Latest News

  • [2024/09/23] 🚀 Project Initialized


Usage

OllaLab-Lean supports most LLM and Data Science R&D activities. A few sample use cases are:

  • Use the pre-made prompt templates and the provided Simple Chat Streamlit application to generate initial code for R&D projects in any language.
  • Use the provided "Chat with Local Folder" app to interact with multiple documents stored in a local folder for research and learning purposes.
  • Use Jupyter Lab and the provided Jupyter Notebooks to learn and experiment with cutting-edge topics such as Graph-based Retrieval-Augmented Generation (RAG), other advanced RAG techniques, knowledge graph algorithms, and so on.
  • Use Jupyter Lab and the installed AutoML and AutoViz packages to efficiently execute Data Science/AI/ML tasks, as sketched below.
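
As a rough illustration of the last item, the sketch below fits AutoGluon on a tabular dataset inside Jupyter Lab. The file name and label column are placeholder assumptions, not project files.

# Hypothetical AutoGluon run inside the Jupyter Lab service.
# "train.csv" and its "label" column are placeholder assumptions.
from autogluon.tabular import TabularDataset, TabularPredictor

train_data = TabularDataset("train.csv")
predictor = TabularPredictor(label="label").fit(train_data)
print(predictor.leaderboard())  # compare the trained models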

Important

Please refer to the project's Security Note for the basic threat model and recommended proper use.

Installation

You should be familiar with the command line interface and have Docker or Podman, Git, and other supporting CLI tools installed. If you plan to use NVIDIA GPUs, you should have all NVIDIA supporting software installed. We will provide detailed pre-installation instructions focusing on the NVIDIA supporting stack at a later time.

For installing Docker, please check out Installing Docker Desktop on Windows or Installing Docker Desktop on Mac.

For installing Podman, please check out Podman Desktop Download and follow Podman's installation instructions to properly set up both Podman and Podman Compose.

Important

You need to copy the env.sample file to .env and set the default passwords for the services listed in that file. On Mac, you may have to open a terminal and run "cp env.sample .env", then "nano .env" to edit the file.
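
For example:

cp env.sample .env
nano .env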

Note

If you just want to install the Streamlit Apps without installing the whole OllaLab stack, you can go to OllaLab - Streamlit Apps for more guidance.

The installation steps below were tested on the AMD64 architecture with a 12 GB NVIDIA GPU, using Docker Compose for Windows on WSL2.

  1. Test for installed container management tools

Test for Docker and Docker Compose with the following commands

docker --version
docker info
docker-compose --version

Test for Podman and Podman Compose with the following commands

podman version
podman compose version
  2. Clean up the container management system (optional but recommended)

To clean up Docker

docker system prune -f
docker rmi -f $(docker images -a -q)

To clean up Podman

podman container prune
podman image prune
podman volume prune
podman network prune

There is also podman system prune, which cleans up unused containers, images, and networks in one command (add --volumes to also prune volumes).
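
podman system prune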

  3. Clone this repository to your working folder of choice
git clone https://github.com/GSA/FedRAMP-OllaLab-Lean.git
  4. Rename the "env.sample" file to ".env". Change the default password/token/secret/salt values in the .env file.
  5. Build the project

If you are using Podman, you can skip this step.

If you are using Docker Compose:

  • Build with cache
docker-compose build
  • Build without cache (recommended)
docker-compose build --no-cache

NOTE: If you use podman-compose in place of docker-compose, you will need to explicitly configure podman-compose to interpret the Dockerfiles in the Docker format, not the default OCI format, so that CMD-SHELL, HEALTHCHECK, and SHELL directives are processed properly. Run commands like the ones below.

podman compose --podman-build-args='--format docker' build
podman compose --podman-build-args='--format docker' build --no-cache
  6. Run the Compose project

The commands below are for Docker Compose. If you use Podman, substitute "docker-compose" with "podman compose".

Run the stack with Default Services only (recommended for the leanest stack)

docker-compose up

Run the stack with Default Services and Monitoring Services

docker-compose --profile monitoring up

Run the stack with Default Services and Logging Services

docker-compose --profile logging up

Run the stack with Default Services, Monitoring Services, and Logging Services

docker-compose --profile monitoring --profile logging up
  7. Verify the setup

Your running stack should look similar to this

In Docker Desktop

OllaLab-Lean Default Stack In Docker Desktop

In Podman Desktop

OllaLab-Lean Default Stack In Podman Desktop

  8. Download llama3.1:8b

If you are using Docker Desktop, you can click on the Ollama instance and open the "Exec" tab to reach the instance CLI. If you are using Podman Desktop, choose the Containers tab, click the "Ollama" container, and then choose the "Terminal" tab.
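
Alternatively, you can reach the same CLI from a host terminal with docker exec (or podman exec). The container name "ollama" below is an assumption; check "docker ps" for the actual name.

docker exec -it ollama bash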

In the CLI, run:

ollama pull llama3.1:8b

A successful model pull looks similar to this in Podman

Successful model pull

After it is done, run the following command and verify that llama3.1:8b was pulled successfully.

ollama list

You may pull other models and interact with them via the CLI. However, llama3.1:8b must be pulled for the provided Streamlit apps to work. In the next release, the Streamlit apps will ask you which LLMs you want to work with.
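
For example, to chat interactively with a pulled model from the same CLI:

ollama run llama3.1:8b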

  9. Run the Simple Chat web app

Go to localhost:8501/Simple_Chat to chat with the LLM; a minimal sketch of how such a page calls Ollama follows the notes below. Please note:

  • You may need to go to host.docker.internal:8501/Simple_Chat instead.
  • If you have no GPU, getting chatbot responses may take a while, depending on your computer hardware.
  • If you have a GPU, you may need to complete the pre-installation steps to make sure Docker Desktop or Podman Desktop can leverage the GPU. Once that is done, chatbot response speed should be significantly faster.
  • If your pulled model is larger than the available GPU memory, Docker Desktop or Podman Desktop may not use the GPU for LLM inference.
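
For orientation, here is a minimal sketch of how a Streamlit chat page can call the Ollama HTTP API. It is not the project's Simple_Chat.py; the service hostname and model name are assumptions matching the defaults used elsewhere in this README.

# Minimal sketch of a Streamlit chat page calling the Ollama HTTP API.
# NOT the project's Simple_Chat.py; hostname and model are assumptions.
import requests
import streamlit as st

st.title("Simple Chat (sketch)")
user_msg = st.chat_input("Say something")
if user_msg:
    st.chat_message("user").write(user_msg)
    resp = requests.post(
        "http://ollama:11434/api/generate",  # "ollama" = assumed Compose service name
        json={"model": "llama3.1:8b", "prompt": user_msg, "stream": False},
        timeout=300,
    )
    st.chat_message("assistant").write(resp.json().get("response", ""))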

Submodules

Git submodules allow you to keep a Git repository as a subdirectory of another Git repository. A submodule is simply a reference to another repository at a particular snapshot in time. Currently, OllaLab leverages submodules for sample datasets in streamlit_app/app/datasets.

After cloning this repository, you can initialize and update the submodules with

  git submodule update --init --recursive

If the submodules get updates, you can pull changes in the submodules and then commit those changes in your main repository.

  cd submodules/some-repo
  git pull origin main
  cd ../..
  git add .
  git commit -m "Updated some-repo"
  git push origin main

To add a submodule, use the following commands:

git submodule add https://github.com/other-user/other-repo.git local_path/to/submodule
git submodule update --init --recursive

Planned Items

  • Add tutorials for advanced use cases of the OllaLab-Lean stack
  • Add more prompt templates for R&D
  • Adjust the Jupyter Notebooks to be compatible with the OllaLab-Lean stack
  • Add LLM-based JSON extractor app
  • Add "Chat with Git Repo" app
  • Add "Chat with API" app
  • Add tutorials for R&D use cases with OllaLab-Lean

File Structure

OllaLab_Lean/
├── docker-compose.yml          # Main Docker Compose file
├── env.sample                  # Sample .env file; copy to .env and set proper values
├── images/                     # Relevant charts and images
├── jupyter_lab/
│   ├── Dockerfile
│   ├── notebooks/
│   │   └── *.ipynb             # Curated notebooks for LLM R&D
│   └── requirements.txt
├── prompt-templates/           # Prompt templates for LLM-driven R&D
├── streamlit_app/
│   ├── Dockerfile
│   ├── app/
│   │   ├── main.py             # Streamlit app home
│   │   ├── data_unificator     # Data Unificator app folder
│   │   └── pages/
│   │       ├── Data_Unificator # App to merge data source files
│   │       ├── folder_chat/    # Stores folders created by the Folder Chat app
│   │       ├── Folder_Chat.py  # App to chat with a folder's content
│   │       ├── API_Chat.py     # App to chat with requested API data (under development)
│   │       ├── Simple_Chat.py  # App to chat
│   │       └── Git_Chat.py     # Chat with a git repository
│   └── requirements.txt
├── ollama/                     # LLM management and inference API
├── monitoring/
│   ├── prometheus/
│   └── grafana/
├── logging/
│   ├── elasticsearch/
│   ├── logstash/
│   └── kibana/
├── tests/
├── scripts/
│   └── firewall_rules.sh       # Host-level firewall configurations
└── .gitignore

Contributing

We welcome contributions to OllaLab-Lean, especially on the Planned Items!

Please see our Contributing Guide for more information on how to get started.

License

CC0 1.0 Universal

Code of Conduct

Please note that this project is released with a Contributor Code of Conduct. By participating in this project and/or cloning the project, you agree to abide by its terms.