Interactive LLM-based chat application in TypeScript, wrapped with Electron and utilizing Vite + React.
Self-hosted chat UI for running Alpaca models locally, built with the MERN stack and based on llama.cpp.
A desktop tool to install the Stable Diffusion WebUI and chat with it.
llama.cpp Desktop Client Demo
Obsidian Local LLM is a plugin for Obsidian that provides access to a local LLM, allowing users to generate text in a wide range of styles and formats.
Fills in the gaps in LangChain and provides OO wrappers to simplify some workloads.
An AI app that allows you to upload a PDF and ask questions about it. It uses StableVicuna 13B and runs locally.
.NET wrapper for LLaMA.cpp for LLaMA language model inference on CPU. 🦙
A frontend for large language models like 🐨 Koala or 🦙 Vicuna running on CPU with llama.cpp, using the API server library provided by llama-cpp-python (see the client sketch after this list). NOTE: I had to discontinue this project because maintaining it takes more time than I can or want to invest. Feel free to fork :)
A collection of LLaMA model investigations, including a recreation of generative agents (from the paper Generative Agents: Interactive Simulacra of Human Behavior).
A Discord Bot for chatting with LLaMA, Vicuna, Alpaca, MPT, or any other Large Language Model (LLM) supported by text-generation-webui or llama.cpp.
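Several of the projects above talk to a local model through the llama-cpp-python API server, which exposes an OpenAI-compatible HTTP interface. Below is a minimal TypeScript sketch of such a client, assuming a server started with `python -m llama_cpp.server --model <path>` and listening on its default port 8000; the base URL and request parameters are illustrative placeholders.

```typescript
// Minimal client sketch for a local llama-cpp-python server.
// Assumes Node 18+ (global fetch) and a server on the default port.
const BASE_URL = "http://localhost:8000"; // assumed default; adjust to your setup

async function complete(prompt: string): Promise<string> {
  // llama-cpp-python mirrors the OpenAI completions endpoint.
  const res = await fetch(`${BASE_URL}/v1/completions`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt, max_tokens: 128, temperature: 0.7 }),
  });
  if (!res.ok) throw new Error(`Server responded with ${res.status}`);
  const data = await res.json();
  return data.choices[0].text; // OpenAI-style response shape
}

complete("What is llama.cpp?").then(console.log).catch(console.error);
```

Because the interface follows the OpenAI format, the same client code works unchanged against any of the frontends or bots listed here that wrap llama-cpp-python's server.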