IronHack Logo

Project: API and Web Data Scraping

Overview

The goal of this project is for you to practice what you have learned in the APIs and Web Scraping chapter of this program. For this project, you will choose both an API to obtain data from and a web page to scrape. For the API portion of the project, you will need to make calls to your chosen API, obtain a successful response, extract the data, convert it into a Pandas data frame, and export it as a CSV file. For the web scraping portion of the project, you will need to scrape the HTML from your chosen page, parse the HTML to extract the necessary information, and save the results either to a text (.txt) file if the content is text or to a CSV file if it is tabular data.
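To make the API half of that workflow concrete, here is a minimal sketch using requests and pandas; the endpoint URL, the query parameter, and the assumption that the API returns a JSON list of records are all placeholders to swap for your chosen API.

```python
import os

import pandas as pd
import requests

# Hypothetical endpoint -- substitute your chosen API.
url = "https://api.example.com/v1/records"
response = requests.get(url, params={"limit": 100})
response.raise_for_status()  # fail loudly on a bad status code

data = response.json()       # assumes the API returns a JSON list of records
df = pd.DataFrame(data)      # convert the records into a data frame

os.makedirs("output", exist_ok=True)
df.to_csv("output/api_results.csv", index=False)
```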

You will be working individually for this project, but we'll be guiding you along the process and helping you as you go. Show us what you've got!


Technical Requirements

The technical requirements for this project are as follows:

  • You must obtain data from an API using Python.
  • You must scrape and clean HTML from a web page using Python (see the scraping sketch after this list).
  • The results should be two files - one containing the tabular results of your API request and the other containing the results of your web page scrape.
  • Your code should be saved in a Jupyter Notebook and your results should be saved in a folder named output.
  • You should include a README.md file that describes the steps you took and your thought process for obtaining data from the API and web page.
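As referenced above, here is a minimal sketch of the scraping-and-cleaning requirement, assuming beautifulsoup4 is installed; the URL and the CSS selector are placeholders to adapt to your chosen page.

```python
import os

import requests
from bs4 import BeautifulSoup

url = "https://example.com/articles"  # placeholder -- use your chosen page
html = requests.get(url).text
soup = BeautifulSoup(html, "html.parser")

# Hypothetical selector -- adjust it to the structure of your page.
paragraphs = [p.get_text(strip=True) for p in soup.select("article p")]

os.makedirs("output", exist_ok=True)  # results go in the required output folder
with open("output/scrape_results.txt", "w", encoding="utf-8") as f:
    f.write("\n".join(paragraphs))
```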

Necessary Deliverables

The following deliverables should be pushed to your GitHub repo for this chapter.

  • A Jupyter Notebook (.ipynb) file that contains the code used to work with your API and scrape your web page.
  • An output folder containing the outputs of your API and scraping efforts.
  • A README.md file containing a detailed explanation of your approach and code for retrieving data from the API and scraping the web page, as well as your results, obstacles encountered, and lessons learned.

Suggested Ways to Get Started

  • Find an API to work with - a great place to start looking would be API List and Public APIs. If your chosen API requires authorization, make sure to give yourself enough time for the service to review and accept your application. Have a couple of back-up APIs chosen just in case!
  • Find a web page to scrape and determine the content you would like to extract from it - blogs and news sites are typically good candidates for scraping text content, and Wikipedia is usually a good source for HTML tables (search for "list of..."; see the read_html sketch after this list).
  • Break the project down into steps - note the steps covered in the API and web scraping lessons, try to follow them, and make adjustments as you run into obstacles, which are inevitable because every API and web page is different.
  • Use the tools in your tool kit - your knowledge of intermediate Python as well as some of the things you've learned in previous chapters. This is a great way to start tying everything you've learned together!
  • Work through the lessons in class & ask questions when you need to! Think about adding relevant code to your project each night, instead of, you know... procrastinating.
  • Commit early, commit often, don’t be afraid of doing something incorrectly because you can always roll back to a previous version.
  • Consult documentation and resources provided to better understand the tools you are using and how to accomplish what you want.
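For the Wikipedia-table route mentioned in the list above, pandas.read_html can often pull every HTML table on a page straight into data frames (it needs an HTML parser such as lxml installed). Here is a sketch with a placeholder article:

```python
import os

import pandas as pd

# Placeholder "list of..." article -- any page with an HTML table works.
url = "https://en.wikipedia.org/wiki/List_of_countries_by_population_(United_Nations)"
tables = pd.read_html(url)  # returns one DataFrame per <table> on the page
df = tables[0]              # inspect the list to find the table you want

os.makedirs("output", exist_ok=True)
df.to_csv("output/wiki_table.csv", index=False)
```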

Useful Resources
