This web application, built with Python and Streamlit, empowers users to access tailored book selections from the Ebooks website by streamlining the retrieval of data on books that match their preferences.
Overview • Prerequisites • Architecture • Demo • Support • License
The primary goal of this project is to retrieve comprehensive book data from the Ebooks website.
The web application is designed for on-demand web scraping: it extracts the essential book information that matches the user's chosen category, subject, and topic.
Once the user selects a category, the application generates a list of associated subjects to choose from. Likewise, after a subject is selected, the application populates a dropdown menu with relevant topics (if available).
With these three choices made, users can obtain the desired information as a downloadable CSV file simply by clicking the "Get Data" button.
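As a rough sketch, the selection flow in 🐍app.py might look like the following. The category options shown and the helper names (get_subjects, get_topics, get_books) imported from 🐍scraper_functions.py are illustrative assumptions, not the repository's actual API:

```python
import streamlit as st

# Hypothetical helpers assumed to live in scraper_functions.py
from scraper_functions import get_subjects, get_topics, get_books

# Cascading dropdowns: each selection drives the next scrape
category = st.selectbox("Category", ["Fiction", "Non-Fiction"])  # placeholder options
subject = st.selectbox("Subject", get_subjects(category))        # subjects scraped for this category
topics = get_topics(category, subject)                           # may be empty if no topics exist
topic = st.selectbox("Topic", topics) if topics else None

if st.button("Get Data"):
    # Scrape the book records for the chosen selections and keep them for download
    st.session_state["books"] = get_books(category, subject, topic)
```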
The project repository has the following structure:
Ebooks-Extractor-App/
├─ 📁.streamlit/
│  └─ ⚙️config.toml
├─ 🐍app.py
├─ 🐍scraper_functions.py
├─ 🗒️readme.md
├─ 🗒️requirements.txt
├─ 📜.gitignore
├─ 🔑LICENSE
└─ 📁images/
   ├─ 🖼️books_image.jpg
   ├─ 🖼️ebooks_logo.png
   ├─ 🖼️process_workflow.png
   ├─ 🖼️webapp_graphic.gif
   ├─ 🖼️webapp_image.png
   └─ 🖼️website_snippet.png
The Streamlit application is driven by two fundamental Python scripts:
- 🐍app.py: The entry point of the Streamlit application; it calls functions from the scraper_functions.py file to perform the web scraping.
- 🐍scraper_functions.py: A collection of functions designed for data extraction via web scraping (an illustrative sketch follows below).
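As an illustration of what such a function might look like, the sketch below scrapes a listing page with requests and BeautifulSoup. The URL pattern, CSS selectors, and field names are placeholders, not the Ebooks site's real markup:

```python
import requests
from bs4 import BeautifulSoup


def get_books(category: str, subject: str, topic: str | None = None) -> list[dict]:
    """Scrape book records for the chosen category, subject, and (optional) topic."""
    url = f"https://example-ebooks-site.com/{category}/{subject}"  # placeholder URL
    if topic:
        url += f"/{topic}"

    response = requests.get(url, timeout=30)
    response.raise_for_status()

    soup = BeautifulSoup(response.text, "html.parser")
    books = []
    for card in soup.select("div.book-card"):  # placeholder selector
        books.append({
            "title": card.select_one("h3").get_text(strip=True),
            "author": card.select_one(".author").get_text(strip=True),
            "link": card.select_one("a")["href"],
        })
    return books
```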
To fully grasp the concepts and processes involved in this project, it is recommended to have the following skills:
- Fundamental knowledge of Python, APIs, and Streamlit
- Familiarity with the Python libraries listed in the 🗒️requirements.txt file
- Basic familiarity with browser developer tools
Having these skills as a foundation will help to ensure a smooth and effective experience while working on this project.
The choice of supporting tools and their installation process may differ depending on personal preferences and computer configuration.
The project's architecture is straightforward and is illustrated in the diagram below.
The workflow consists of the following key steps:
1. The user selects a category from the available options. Based on this choice, the web application scrapes and presents a list of related subjects.
2. After a subject is selected, the web app scrapes the topics associated with it (if available).
3. The user finalizes their selection by clicking "Get Data".
4. The web application then scrapes the book information and structures it into a CSV file.
5. The user is provided with a downloadable CSV file containing the book data (the last two steps are sketched below).
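A minimal sketch of those final two steps, assuming the scraped records are plain dictionaries and that pandas is used to build the CSV in memory (the variable and file names are illustrative):

```python
import pandas as pd
import streamlit as st

# Structure the scraped records (a list of dicts) into a CSV held in memory
books_df = pd.DataFrame(st.session_state.get("books", []))
csv_bytes = books_df.to_csv(index=False).encode("utf-8")

# Offer the CSV to the user as a download
st.download_button(
    label="Download CSV",
    data=csv_bytes,
    file_name="books.csv",
    mime="text/csv",
)
```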
The following illustration demonstrates how data is collected by providing the necessary inputs to the web application:
Access the web application by clicking here: Ebooks Extractor App
If you have any questions, concerns, or suggestions, feel free to reach out to me through any of the following channels:
If you find my work valuable, you can show your appreciation by buying me a coffee
This license allows reusers to distribute, remix, adapt, and build upon the material in any medium or format for noncommercial purposes only, and only so long as attribution is given to the creator. If you remix, adapt, or build upon the material, you must license the modified material under identical terms.