Examples of RAG using Llamaindex with local LLMs in Linux - Gemma, Mixtral 8x7B, Llama 2, Mistral 7B, Orca 2, Phi-2, Neural 7B
Updated Feb 25, 2024 - Jupyter Notebook
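As a rough sketch of the pattern these notebooks follow (illustrative, not code from the repo), a minimal LlamaIndex pipeline over a local model served by Ollama might look like this; the `data/` folder, the `mistral` model tag, and the embedding model are placeholder choices:

```python
# Minimal local RAG sketch with LlamaIndex + Ollama (illustrative, not from the repo).
from llama_index.core import Settings, SimpleDirectoryReader, VectorStoreIndex
from llama_index.embeddings.huggingface import HuggingFaceEmbedding
from llama_index.llms.ollama import Ollama

# A local LLM served by Ollama and a local embedding model - no API keys needed.
Settings.llm = Ollama(model="mistral", request_timeout=120.0)
Settings.embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")

# Index the documents in ./data and query them.
documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()
print(query_engine.query("What are the key points of these documents?"))
```

Trying the other models listed above mostly comes down to swapping the Ollama model tag (e.g. `gemma`, `mixtral`, `phi`) once the model has been pulled locally.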
Build LLM-enabled FastAPI applications without build configuration.
An Android App recreating the Simon Says game. Uses MediaPipe to run an LLM on device
This repo shows how to fine-tune Google's new Gemma LLM on your own custom instruction dataset. I fine-tuned the Gemma 2B Instruct model on 20k Medium articles for 5 hours on a Kaggle P100 GPU.
An iOS App recreating the Simon Says game. Uses MediaPipe to run an LLM on device
The Artificial Intelligence Trainer event of the 8th National Workers' Vocational Skills Competition
Video Summarization Experiments with Open LLMs
Explore practical fine-tuning of LLMs with Hands-on Lora. Dive into examples that showcase efficient model adaptation across diverse tasks.
A Personalized Assistant with Gemma 2b
A LangChain application that uses the open-source gemma2:2b LLM.
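A minimal sketch of wiring LangChain to that model, assuming gemma2:2b has been pulled locally with Ollama (not the application's actual code):

```python
# Call a local gemma2:2b model through LangChain's Ollama chat integration (sketch).
from langchain_ollama import ChatOllama

llm = ChatOllama(model="gemma2:2b", temperature=0.2)
reply = llm.invoke("Explain retrieval-augmented generation in two sentences.")
print(reply.content)
```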
Fine-tune the Gemma 2B language model on a climate-related question-answer dataset using LoRA (Low-Rank Adaptation) to improve its domain-specific knowledge.
AskPdf is a Streamlit-based application for question-answering over PDF documents using Retrieval-Augmented Generation (RAG) with conversational history. Upload your PDFs and interactively ask questions to get concise, contextually relevant answers, leveraging LangChain, Chroma for vector storage, and HuggingFace embeddings.
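The ingestion-and-retrieval half of such a pipeline can be sketched as below; the file name, chunk sizes, and embedding model are placeholders, and the real app additionally wires the retriever into an LLM chain with conversational history:

```python
# Sketch: load a PDF, chunk it, embed the chunks into Chroma, and retrieve (illustrative).
from langchain_community.document_loaders import PyPDFLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_huggingface import HuggingFaceEmbeddings
from langchain_chroma import Chroma

pages = PyPDFLoader("example.pdf").load()
chunks = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100).split_documents(pages)

embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
vectorstore = Chroma.from_documents(chunks, embeddings, persist_directory="chroma_db")

# Retrieve the chunks most relevant to a question; an LLM would then answer from them.
retriever = vectorstore.as_retriever(search_kwargs={"k": 4})
for doc in retriever.invoke("What is the main conclusion of the document?"):
    print(doc.page_content[:200])
```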
Evaluation of Google's Instruction Tuned Gemma-2B, an open-source Large Language Model (LLM). Aimed at understanding the breadth of the model's knowledge, its reasoning capabilities, and adherence to ethical guardrails, this project presents a systematic assessment across a diverse array of domains.
ML Bot is a RAG application built using google/gemma-2b-it as a local LLM.
This project demonstrates the steps required to fine-tune the Gemma model for tasks like code generation. We use QLoRA quantization to reduce memory usage and the SFTTrainer from the trl library for supervised fine-tuning.
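In broad strokes, the recipe combines 4-bit quantization of the frozen base weights with small LoRA adapters trained via SFTTrainer. The sketch below is illustrative only: the model id, dataset file, and hyperparameters are placeholders, and trl's SFTTrainer arguments have changed across releases.

```python
# Sketch of QLoRA-style fine-tuning of Gemma with trl's SFTTrainer (illustrative).
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, TrainingArguments
from trl import SFTTrainer

model_id = "google/gemma-2b"

# Load the base model in 4-bit to keep memory usage low (the "Q" in QLoRA).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config, device_map="auto")

# Train only small LoRA adapter matrices on top of the frozen quantized weights.
peft_config = LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

# Placeholder dataset: a JSONL file with a "text" column of code-generation examples.
dataset = load_dataset("json", data_files="train.jsonl", split="train")

# Note: newer trl versions take dataset_text_field / max_seq_length via SFTConfig instead.
trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    peft_config=peft_config,
    dataset_text_field="text",
    max_seq_length=512,
    args=TrainingArguments(
        output_dir="gemma-2b-code-ft",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        fp16=True,
        logging_steps=10,
    ),
)
trainer.train()
```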
ECE-5424 Advanced Machine Learning Final Project - LLM Prompt Recovery task