## Background
This is a team assignment completed for the George Washington University Data Analytics Bootcamp, focused on ETL with Python, Jupyter Notebook, and PostgreSQL.
## Organization
Inside this repository, you will find one PostgreSQL file, one Jupyter Notebook file, this README, and two folders labeled "Resources" and "Screenshots". The Resources folder holds six files: two Excel files, which were the original data source from which we extracted our data, and four CSVs, which represent what we transformed the original data into before loading it into SQL. In the "Screenshots" folder, you will find one screenshot of the ERD that we created in QuickDBD and screenshots of each of the tables that we created in PostgreSQL.
## Description
My team built an ETL pipeline using Python, Pandas, and regular expressions to extract and transform data from two Excel files. After we transformed the data in Jupyter Notebook, we created four CSV files and used the CSV data to design an ERD and a table schema. Finally, we loaded the CSV data into a PostgreSQL database and took screenshots of our results.
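The extract/transform/load pattern described above can be sketched as follows. This is a minimal illustration, not the project's actual notebook: the sample records, column names, regular expression, table name, and connection string are all hypothetical.

```python
import pandas as pd

# Extract: in the real project this came from Excel, e.g.
#   raw = pd.read_excel("Resources/source_data.xlsx")
# Here we use a small hypothetical DataFrame so the sketch is self-contained.
raw = pd.DataFrame({"record": ["ID-001: Alice", "ID-002: Bob", "ID-003: Carol"]})

# Transform: use a regular expression (via pandas' str.extract)
# to split each raw record into structured columns.
clean = raw["record"].str.extract(r"ID-(?P<id>\d+):\s*(?P<name>\w+)")
clean["id"] = clean["id"].astype(int)

# Write the transformed data to CSV, as the project did before loading to SQL.
clean.to_csv("clean_records.csv", index=False)

# Load (sketch): push the cleaned data into PostgreSQL with SQLAlchemy.
# Credentials and table name below are placeholders.
# from sqlalchemy import create_engine
# engine = create_engine("postgresql://user:password@localhost:5432/etl_db")
# clean.to_sql("records", engine, if_exists="append", index=False)
```

In practice the load step is often done instead with PostgreSQL's own `COPY ... FROM` or the pgAdmin import tool after defining the tables from the schema file.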