
Instruction Fine-tuning

Supervised instruction fine-tuning for Mistral 7B using the Dolly-15K dataset
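
For reference, Dolly-15K is available on the Hugging Face Hub as `databricks/databricks-dolly-15k`. Below is a minimal loading and formatting sketch; the prompt template is an illustrative assumption, not necessarily the exact format used in this repo:

```python
from datasets import load_dataset

# Dolly-15K: ~15k instruction/response pairs; some examples include a context field
dataset = load_dataset("databricks/databricks-dolly-15k", split="train")

# Assumed prompt template; the repo's exact formatting may differ
def format_example(example):
    context = f"\n\nContext:\n{example['context']}" if example["context"] else ""
    return {
        "text": f"Instruction:\n{example['instruction']}{context}"
                f"\n\nResponse:\n{example['response']}"
    }

dataset = dataset.map(format_example)
print(dataset[0]["text"])
```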

Techniques

  • LoRA (Low-Rank Adaptation):

    • Rank: 8
    • Alpha: 16
    • Dropout: 0.1
  • Quantization:

    • 4-bit quantization (a configuration sketch combining both techniques follows this list)
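
A minimal sketch of how these settings could be wired together with the Hugging Face `transformers` and `peft` libraries; the checkpoint ID and LoRA target modules are assumptions, since the README does not pin them:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# 4-bit quantization (NF4 with fp16 compute is a common, assumed choice)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

# Assumed checkpoint; the README only says "Mistral 7B"
model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",
    quantization_config=bnb_config,
    device_map="auto",
)

# LoRA settings from the list above; target modules are an assumption
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.1,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```

With 4-bit base weights and rank-8 adapters, only the small LoRA matrices are trained, which is what lets a 7B-parameter model fit on the 16 GB P100 described below.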

Hardware

  • A single NVIDIA P100 GPU with 16 GB of memory
