This repository houses the collaborative project work of Dipika Kumar, Jack Forlines, and Raymond Hung. Our project addresses key challenges in fine-tuning large language models (LLMs) to generate headlines from social media text. Within this repository, you'll find the Jupyter notebooks, datasets, code implementations, and documentation that reflect our work in tackling the research questions, experimenting with different methodologies, and deriving conclusions.
This study uses pre-trained LLMs to generate headlines from social media posts, emphasizing accuracy and engagement. By applying models such as PEGASUS, T5, and BART to Reddit data, we aim to preserve semantic content while improving summarization quality. Evaluation metrics, including ROUGE, BLEU, and semantic similarity scores, alongside human feedback, highlight the T5 model's superior performance. Despite computational constraints and model-specific challenges, our research underscores the importance of innovative approaches to evaluating headline generation from user-generated content.
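For readers unfamiliar with the metrics above, the following is a minimal, self-contained sketch of ROUGE-1 F1 (unigram overlap between a reference headline and a generated one). It is an illustration of the idea only, not our evaluation code; in practice a library implementation with stemming and proper tokenization would be used, and the example strings below are hypothetical.

```python
from collections import Counter

def rouge1_f1(reference: str, candidate: str) -> float:
    """ROUGE-1 F1: harmonic mean of unigram precision and recall."""
    ref_counts = Counter(reference.lower().split())
    cand_counts = Counter(candidate.lower().split())
    # Overlap counts each shared unigram up to its minimum frequency.
    overlap = sum((ref_counts & cand_counts).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand_counts.values())
    recall = overlap / sum(ref_counts.values())
    return 2 * precision * recall / (precision + recall)

# Hypothetical reference headline vs. model-generated headline.
reference = "new study finds coffee improves memory"
generated = "coffee improves memory says new study"
print(round(rouge1_f1(reference, generated), 3))  # → 0.833
```

Because ROUGE rewards lexical overlap but ignores meaning, we complement it with BLEU, semantic similarity scores, and human judgments, as noted above.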
- The notebooks directory contains all the Jupyter notebooks used during the project, including those for data exploration, model training, and evaluation.
- The data directory houses all the raw data files used in our analysis, as well as the processed datasets and any intermediate results generated during the project.
- Our final project write-up provides a comprehensive overview of our research, methodologies, findings, and conclusions. It serves as the detailed documentation of our project.