
Fine-tuning GPT-2 for Story Generation

This repository contains code and instructions for fine-tuning the GPT-2 model to generate stories using a dataset from Kaggle.

Getting Started

  1. Download the Dataset: Download and unpack the dataset with the Kaggle CLI (requires API credentials in ~/.kaggle/kaggle.json):

    pip install kaggle
    kaggle datasets download -d emily2008/story-text
    mkdir -p data
    mv story-text.zip data/             # move the downloaded archive into the data folder
    unzip data/story-text.zip -d data/  # extract the story text files
  2. Fine-tuning the Model: Follow the instructions in the notebook to fine-tune the GPT-2 model on the downloaded dataset.
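Before fine-tuning, the raw stories need to be read out of the archive and split into fixed-length blocks for language modeling. A minimal standard-library sketch is below; the file layout inside the zip is an assumption (adjust the glob pattern to the actual archive contents), and the character-based chunking stands in for the token-based chunking a real pipeline would do with a tokenizer:

```python
import zipfile
from pathlib import Path


def extract_dataset(zip_path: str, out_dir: str) -> list:
    """Unpack the Kaggle archive and return the extracted text files.

    The ``**/*.txt`` pattern is an assumption about the archive layout.
    """
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(out)
    return sorted(out.glob("**/*.txt"))


def chunk_text(text: str, block_size: int) -> list:
    """Split a long string into fixed-size blocks for LM training."""
    return [text[i:i + block_size] for i in range(0, len(text), block_size)]


# Demo on an in-memory sample; with the real data you would pass the
# concatenated contents of the extracted files, e.g.:
#   files = extract_dataset("data/story-text.zip", "data/stories")
blocks = chunk_text("Once upon a time there was a dragon.", 10)
print(blocks[0])  # "Once upon "
```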

Architecture


We use the pretrained GPT-2 model from the Hugging Face Transformers library and fine-tune it on the story dataset with the causal language-modeling objective.
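Mechanically, a fine-tuning step is a forward pass with labels equal to the input ids (GPT2LMHeadModel shifts them internally to predict the next token), a backward pass, and an optimizer step. The sketch below uses a tiny randomly initialized config and a fake batch so it runs without downloading weights; in the actual notebook you would load the pretrained model with GPT2LMHeadModel.from_pretrained("gpt2") and feed real tokenized story blocks:

```python
import torch
from transformers import GPT2Config, GPT2LMHeadModel

# Tiny random-weight stand-in for the pretrained model (assumption: the
# real run loads GPT2LMHeadModel.from_pretrained("gpt2") instead).
config = GPT2Config(vocab_size=100, n_positions=64, n_embd=32, n_layer=2, n_head=2)
model = GPT2LMHeadModel(config)
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

# One fine-tuning step: for causal LM, labels are the input ids; the
# model shifts them internally when computing the loss.
input_ids = torch.randint(0, config.vocab_size, (2, 16))  # fake batch of token ids
outputs = model(input_ids=input_ids, labels=input_ids)
loss = outputs.loss
loss.backward()
optimizer.step()
optimizer.zero_grad()
print(float(loss))  # cross-entropy loss for this step
```

Looping this over batches of tokenized story blocks (with learning-rate scheduling and checkpointing as in the notebook) is the whole fine-tuning procedure.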

Results

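Stories are sampled from the fine-tuned model with model.generate. A minimal sketch, again using a tiny random model so it runs offline; the integer prompt ids and sampling settings are illustrative, and with the real checkpoint you would encode a text prompt with GPT2Tokenizer and decode the output back to text:

```python
import torch
from transformers import GPT2Config, GPT2LMHeadModel

config = GPT2Config(vocab_size=100, n_positions=64, n_embd=32, n_layer=2, n_head=2)
model = GPT2LMHeadModel(config)  # stand-in for the fine-tuned checkpoint
model.eval()

prompt = torch.randint(0, config.vocab_size, (1, 5))  # fake prompt token ids
generated = model.generate(
    prompt,
    max_length=20,    # total length: prompt plus continuation
    do_sample=True,   # sample rather than greedy decode
    top_k=50,         # restrict sampling to the 50 most likely tokens
    pad_token_id=0,   # GPT-2 has no pad token; silence the warning
)
print(generated.shape)  # one sequence of 20 sampled token ids
```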

References

  1. Story Text Dataset:
    Emily2008, Kaggle. Story Text dataset. Available at: https://www.kaggle.com/datasets/emily2008/story-text

  2. Transformers Library:
    Hugging Face. Transformers: State-of-the-art Natural Language Processing. Available at: https://huggingface.co/transformers

  3. PyTorch Framework:
    PyTorch. An open-source machine learning framework. Available at: https://pytorch.org
