## Instruction Fine-tuning

Supervised instruction fine-tuning of Mistral 7B on the Dolly-15K dataset.

### Techniques

- **LoRA (Low-Rank Adaptation):**
  - Rank: 8
  - Alpha: 16
  - Dropout: 0.1
- **Quantization:** 4-bit quantization

### Hardware

A single NVIDIA P100 GPU with 16 GB of memory. A sketch of the resulting configuration follows.
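The following is a minimal sketch of how this setup could be expressed with the Hugging Face `transformers`, `peft`, and `bitsandbytes` libraries, which is one common stack for QLoRA-style fine-tuning. The checkpoint ID, `target_modules` choice, and compute dtype are assumptions, not taken from the original text; only the LoRA hyperparameters (rank 8, alpha 16, dropout 0.1) and the 4-bit quantization come from this section.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

base_model = "mistralai/Mistral-7B-v0.1"  # assumed base checkpoint

# 4-bit quantization so the 7B model fits within the P100's 16 GB.
# fp16 compute is assumed here because the P100 does not support bf16.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForCausalLM.from_pretrained(
    base_model,
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(base_model)

# LoRA hyperparameters from this section: rank 8, alpha 16, dropout 0.1.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.1,
    target_modules=["q_proj", "v_proj"],  # assumed; a common choice for Mistral
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # confirms only adapter weights are trainable
```

With rank 8 applied to a pair of attention projections, the trainable adapter is a small fraction of the 7B parameters, which is what makes single-GPU training on 16 GB plausible.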