Private-AI is an innovative AI project designed for asking questions about your documents using powerful Large Language Models (LLMs). The unique feature? It works offline, ensuring 100% privacy with no data leaving your environment.
- High-level API: Abstracts the complexity of a Retrieval Augmented Generation (RAG) pipeline. Handles document ingestion, chat, and completions.
- Low-level API: For advanced users to implement custom pipelines. Includes features like embeddings generation and contextual chunks retrieval.
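To make the two layers concrete, here is a minimal curl sketch. The port (8001) and the /v1/ingest, /v1/completions, and /v1/chunks routes are assumptions carried over from upstream PrivateGPT; check http://localhost:8001/docs in your build for the exact paths.

# High-level API (assumed routes): ingest a document, then query it with RAG context
curl -s -F 'file=@mydoc.pdf' http://localhost:8001/v1/ingest
curl -s http://localhost:8001/v1/completions -H 'Content-Type: application/json' \
  -d '{"prompt": "Summarize mydoc.pdf", "use_context": true}'
# Low-level API (assumed route): retrieve the contextual chunks directly
curl -s http://localhost:8001/v1/chunks -H 'Content-Type: application/json' \
  -d '{"text": "What does the contract say about termination?"}'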
Privacy is the key motivator! Private-AI addresses concerns in data-sensitive domains like healthcare and legal, ensuring your data stays under your control.
Private-AI Installation Guide
- Install Python 3.11 (or 3.12)
- Using apt (Debian-based Linux such as Kali, Ubuntu, etc.):
sudo apt-get install python3.11
sudo apt-get install python3.11-venv
- Using pyenv:
pyenv install 3.11
pyenv local 3.11
- Install Poetry for dependency management (pytest is used for the test suite):
sudo apt install python3-poetry
sudo apt install python3-pytest
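Before moving on, it's worth confirming the toolchain is actually on your PATH:

python3.11 --version   # expect Python 3.11.x
poetry --version       # any recent Poetry release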
- Clone the Private-AI repository, then set it up and run it:
git clone https://github.com/AryanVBW/Private-Ai
cd Private-Ai && \
python3.11 -m venv .venv && source .venv/bin/activate && \
pip install --upgrade pip poetry && poetry install --with ui,local && ./scripts/setup
python3.11 -m private_gpt
- To run it again, just go to the Private-AI directory and run the following command:
make run
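From another terminal you can sanity-check that the server is up. The /health route exists in upstream PrivateGPT and is assumed to survive in this fork:

curl -s http://localhost:8001/health   # expect {"status":"ok"}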
- For Private-AI to run fully locally, GPU acceleration is required (CPU execution is possible, but very slow).
- Clone the Private-AI repository:
git clone https://github.com/AryanVBW/Private-Ai
cd Private-Ai
- Install make (OSX: brew install make, Windows: choco install make).
- Install dependencies:
poetry install --with ui
- Install extra dependencies for local execution:
poetry install --with local
- Use the setup script to download embedding and LLM models:
poetry run python scripts/setup
- Install Private-AI:
make
- Run make run or poetry run python -m private_gpt.
- Open http://localhost:8001 to see the Gradio UI with a mock LLM echoing input.
- Customize low-level parameters in private_gpt/components/llm/llm_component.py.
- Configure LLM options in settings.yaml.
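Upstream PrivateGPT reads a PGPT_PROFILES environment variable to pick which settings-<profile>.yaml overrides to apply on top of settings.yaml. Assuming this fork keeps that mechanism, switching to the local profile is a one-liner:

# Assumption: this fork retains upstream PrivateGPT's settings profiles
PGPT_PROFILES=local make run
# equivalently, without make:
PGPT_PROFILES=local poetry run python -m private_gpt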
- OSX: Build llama.cpp with Metal support:
CMAKE_ARGS="-DLLAMA_METAL=on" pip install --force-reinstall --no-cache-dir llama-cpp-python
- Windows NVIDIA GPU: Install VS2022 and the CUDA toolkit, then run:
$env:CMAKE_ARGS='-DLLAMA_CUBLAS=on'; poetry run pip install --force-reinstall --no-cache-dir llama-cpp-python
- Linux NVIDIA GPU and Windows WSL: Install the CUDA toolkit and run:
CMAKE_ARGS='-DLLAMA_CUBLAS=on' poetry run pip install --force-reinstall --no-cache-dir llama-cpp-python
- Check GPU support and dependencies for your platform (a verification sketch follows these notes).
- For C++ compiler issues, follow the troubleshooting steps below.
Note: If you hit any issues, retry the installation in verbose mode with -vvv.
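To confirm GPU acceleration actually kicked in after the reinstall, a rough check (the nvidia-smi step applies to the NVIDIA paths; "BLAS = 1" is what llama.cpp builds of this era printed when compiled with GPU support):

nvidia-smi                         # the GPU should be listed and visible to the driver
poetry run python -m private_gpt   # look for "BLAS = 1" in the llama.cpp startup banner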
Troubleshooting C++ Compiler:
- Windows 10/11: Install Visual Studio 2022 and MinGW.
- OSX: Ensure Xcode is installed or install clang/gcc with Homebrew.
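A quick sanity check that a C++ toolchain is actually available before retrying the llama-cpp-python build:

# OSX: prints the developer directory if the command-line tools are installed
xcode-select -p || xcode-select --install
# Debian-based Linux: a missing g++ is the usual cause of build failures
g++ --version || sudo apt install build-essential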
- FastAPI-Based API: Follows the OpenAI API standard, making it easy to integrate.
- LlamaIndex Integration: Leverages LlamaIndex for the RAG pipeline, providing flexibility and extensibility.
- Present and Future: Evolving into a gateway for generative AI models and primitives. Stay tuned for exciting new features!
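Since the API follows the OpenAI standard, most OpenAI-compatible clients can be pointed at it directly. A minimal curl sketch; the /v1/chat/completions route and the use_context flag mirror upstream PrivateGPT and are assumptions for this fork:

curl -s http://localhost:8001/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -d '{"messages": [{"role": "user", "content": "What do my documents say about pricing?"}], "use_context": true}'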
Contributions are welcome! Check the project board for ideas. Ensure code quality with format and typing checks (run make check).
Supported by Qdrant, Fern, and LlamaIndex. Influenced by projects like LangChain, GPT4All, LlamaCpp, Chroma, and SentenceTransformers.
Thank you for contributing to the future of private and powerful AI with Private-AI!
License: Apache-2.0
This is a modified version of PrivateGPT. All rights and licenses belong to the PrivateGPT team.
© 2023 PrivateGPT Developers. All rights reserved.