We were asked to do the following:
Develop an efficient search engine with the following features:

- It should have distributed crawlers to crawl private/air-gapped networks (data sources in these networks might include websites, files, and databases) and must work behind sections of networks secured by firewalls.
- It should use AI/ML/NLP/BDA for better search (queries and results).
- It should abide by secure coding practices (OWASP Top 10 and SANS Top 25 web vulnerability mitigation techniques). Feel free to improvise your solution and be creative with your approach.

Goal: Have a search engine which takes a keyword/expression as input and crawls the web (internal network or internet) to get all the relevant information. The application shouldn't have any vulnerabilities; make sure it complies with the OWASP Top 10.

Outcome: Write code which will scrape data, match it with the query, and return relevant/related information.

Note: Make the search as robust as possible (e.g. it can correct misspelt queries, suggest similar search terms, etc.); be creative in your approach. The result obtained from the search engine should display all the relevant matches for the search query/keyword along with the time taken by the search engine to fetch that result. There is no constraint on programming language.

To submit:
- A README having steps to install and run the application
- The entire code repo
- Implement your solution/model in Docker only
- A video of the working search engine
The application has the following features:

- Corrected spelling suggestions (a minimal sketch follows this list)
- Auto-suggested search terms
- 3 different types of crawlers
- Distributed crawlers
- A site submit form
- Blazingly fast, and so on...
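As a rough illustration of how spell correction and auto-suggestion can work (a generic sketch, not necessarily how this repo implements them), here is a minimal example using Python's difflib and the NLTK words corpus that the install steps below download:

```python
# Minimal sketch of spell correction and auto-suggest, assuming a vocabulary
# built from the NLTK "words" corpus (downloaded in the install steps below).
import difflib
from nltk.corpus import words

VOCAB = set(w.lower() for w in words.words())

def correct_spelling(term: str, max_suggestions: int = 3) -> list[str]:
    """Return close matches for a possibly misspelt search term."""
    term = term.lower()
    if term in VOCAB:
        return [term]  # already a known word, nothing to correct
    return difflib.get_close_matches(term, VOCAB, n=max_suggestions, cutoff=0.8)

def suggest_terms(prefix: str, max_suggestions: int = 5) -> list[str]:
    """Naive auto-suggest: vocabulary words that share the typed prefix."""
    prefix = prefix.lower()
    return sorted(w for w in VOCAB if w.startswith(prefix))[:max_suggestions]

if __name__ == "__main__":
    print(correct_spelling("serch"))   # e.g. ['search', ...]
    print(suggest_terms("crawl"))      # e.g. ['crawl', 'crawler', ...]
```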
It addresses the following SANS Top 25 Most Dangerous Software Errors and OWASP Top 10 vulnerabilities (an example mitigation sketch follows the list):
- Injection
- Broken Authentication
- Sensitive Data Exposure
- XML External Entities
- Broken Access Control
- Security Misconfiguration
- Cross-Site Scripting
- Insecure Deserialization
- Using Components with Known Vulnerabilities
- Insufficient Logging and Monitoring
- Improper Restriction of Operations within the Bounds of a Memory Buffer
- Improper Neutralization of Input During Web Page Generation ('Cross-site Scripting')
- Improper Input Validation
- Information Exposure
- Out-of-bounds Read
- Improper Neutralization of Special Elements used in an SQL Command ('SQL Injection')
- Use After Free
- Integer Overflow or Wraparound
- Cross-Site Request Forgery (CSRF)
- Improper Limitation of a Pathname to a Restricted Directory ('Path Traversal')
- Improper Neutralization of Special Elements used in an OS Command ('OS Command Injection')
- Out-of-bounds Write
- Improper Authentication
- NULL Pointer Dereference
- Incorrect Permission Assignment for Critical Resource
- Unrestricted Upload of File with Dangerous Type
- Improper Restriction of XML External Entity Reference
- Improper Control of Generation of Code ('Code Injection')
- Use of Hard-coded Credentials
- Uncontrolled Resource Consumption
- Missing Release of Resource after Effective Lifetime
- Untrusted Search Path
- Deserialization of Untrusted Data
- Improper Privilege Management
- Improper Certificate Validation
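As an illustration of the kind of mitigation these items call for, the sketch below shows parameterized SQL and output escaping using only the Python standard library. It is a generic example, not code from this repo; a Django project would normally get these protections from the ORM and template auto-escaping.

```python
# Generic sketch of two mitigations from the list above: SQL injection and
# cross-site scripting. Illustration only, using the standard library.
import html
import sqlite3

def search_titles(conn: sqlite3.Connection, user_query: str) -> list[tuple]:
    # Parameterized query: user input is bound as data, never concatenated
    # into the SQL string (mitigates SQL injection).
    cur = conn.execute(
        "SELECT url, title FROM pages WHERE title LIKE ?",
        (f"%{user_query}%",),
    )
    return cur.fetchall()

def render_result_snippet(title: str) -> str:
    # Escape before embedding in HTML output (mitigates cross-site scripting).
    return f"<li>{html.escape(title)}</li>"

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE pages (url TEXT, title TEXT)")
    conn.execute("INSERT INTO pages VALUES ('https://example.com', 'Example page')")
    print(search_titles(conn, "Example"))
    print(render_result_snippet("<script>alert(1)</script>"))
```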
Just run:

docker build .

If you wish, you can do the necessary image tagging (docker build -t <tag> .). After building the image, run a container from it.
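For example (the image tag is arbitrary, and the published port assumes the container serves the app on port 8000, as in the uvicorn commands further below):

```
docker build -t konohagakure-search .
docker run -p 8000:8000 konohagakure-search
```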
To run Konohagakure Search you need Python 3.9, the latest version of Golang, PostgreSQL, RabbitMQ, and Redis.
See their respective installation instructions and install them properly.
After installing the above-mentioned software, open a terminal and run the following commands:
Clone the repository using git
git clone https://github.com/Sainya-Ranakshetram-Submission/search-engine.git
pip install --upgrade virtualenv
cd search-engine
virtualenv env
env/scripts/activate
(the above is the Windows activation path; on Linux/macOS use source env/bin/activate instead)
pip install --upgrade -r requirements.min.txt
pip install --upgrade django
python -m spacy download en_core_web_md
python -m nltk.downloader stopwords
python -m nltk.downloader words
go install -v github.com/projectdiscovery/subfinder/v2/cmd/subfinder@latest
Rename example.env to .env and set up the environment variables according to your choice. Now open pgAdmin and create a database named search_engine. After creating the database, set the DATABASE_URL value accordingly in the .env file.
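For example, assuming the usual postgres://user:password@host:port/dbname URL format (the credentials below are placeholders for your own setup):

```
DATABASE_URL=postgres://db_user:db_password@localhost:5432/search_engine
```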
Note: Redis and RabbitMQ must be running before you start the application; read their respective docs regarding how to start them.
python manage.py migrate
To load the default dataset of about 10 lakh (1 million) websites for the crawler to crawl, run:
python manage.py migrate_default_to_be_crawl_data
I have also provided some crawled datasets for reference; see data_backup.
Run the following command to compress the static files (this step is not shown in the YouTube video):
python manage.py collectcompress
python manage.py createsuperuser
It will ask for some necessary information; provide it and it will create a superuser for the site.
Now run this command in the terminal:
python manage.py add_celery_tasks_in_panel
Now open two different terminals and run these commands respectively:
celery -A search_engine worker --loglevel=INFO
celery -A search_engine beat -l INFO --scheduler django_celery_beat.schedulers:DatabaseScheduler
Before running the application, don't forget to start Redis as well :)
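For context, the worker and beat scheduler execute Celery tasks; a task that crawls a batch of URLs generally looks like the sketch below (the task name and the use of requests are illustrative assumptions, not code from this repo):

```python
# Illustrative Celery task, assuming the usual Django + Celery layout; the
# project's real tasks are registered via `add_celery_tasks_in_panel`.
from celery import shared_task
import requests


@shared_task
def crawl_urls(urls: list[str]) -> int:
    """Download each URL and return how many fetches succeeded."""
    ok = 0
    for url in urls:
        try:
            resp = requests.get(url, timeout=10)
            resp.raise_for_status()
            # A real task would parse the HTML here and store the extracted
            # text in the database so it can be indexed and searched.
            ok += 1
        except requests.RequestException:
            continue
    return ok
```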
- For Windows, macOS, and Linux:

Without IP address binding:

uvicorn search_engine.asgi:application --reload --lifespan off

With IP address binding:

uvicorn search_engine.asgi:application --reload --lifespan off --host 0.0.0.0
If you are on Linux, you can also run this command instead of the above one:
gunicorn search_engine.asgi:application -k search_engine.workers.DynamicUvicornWorker --timeout 500
The project provides the following custom Django management commands (a generic sketch of such a command appears after this list):

- add_celery_tasks_in_panel: adds the Celery tasks to the Django panel
- crawl_already_crawled: re-scrapes the already scraped/crawled sites in the database
- crawl_to_be_crawled: scrapes newly entered sites in the database (the sites that still need to be crawled)
- migrate_default_to_be_crawl_data: enters the BASE data of the websites that need to be crawled, about 10 lakh (1 million) sites
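For context, a custom Django management command like the ones above is a BaseCommand subclass placed under an app's management/commands/ directory. The sketch below is a simplified stand-in, not the project's actual implementation:

```python
# Illustrative layout: yourapp/management/commands/crawl_to_be_crawled.py
# (a simplified stand-in for the project's real command of the same name).
from django.core.management.base import BaseCommand


class Command(BaseCommand):
    help = "Crawl the sites that are queued in the database."

    def add_arguments(self, parser):
        parser.add_argument("--limit", type=int, default=100,
                            help="Maximum number of sites to crawl in one run.")

    def handle(self, *args, **options):
        limit = options["limit"]
        # A real implementation would read pending URLs from the database,
        # dispatch them to the crawler (e.g. as Celery tasks), and mark
        # them as crawled once the pages are stored.
        self.stdout.write(self.style.SUCCESS(f"Queued up to {limit} sites for crawling."))
```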
For the distributed web crawlers, refer to the Scrapy documentation. There are 3 different ways to run the crawlers:
This is a custom Django management command; it re-crawls the sites that are already crawled and stored in the database, and updates them:
python manage.py crawl_already_crawled
This is a custom Django management command; it crawls the sites that were entered either via the migrate_default_to_be_crawl_data custom command or via the submit_site/ endpoint:
python manage.py crawl_to_be_crawled
This is a Scrapy project that crawls a site from the command line. In the command below, replace example.com with the site you want to crawl (without http:// or https://):
scrapy crawl konohagakure_to_be_crawled_command_line -a allowed_domains=example.com
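As a rough idea of how a Scrapy spider can take its allowed domain from the -a flag, here is a generic sketch (the spider name and parsing logic are illustrative, not the project's konohagakure_to_be_crawled_command_line spider):

```python
# Generic sketch of a Scrapy spider that takes its allowed domain from the
# command line via `-a allowed_domains=example.com` (illustration only).
import scrapy


class CommandLineSpider(scrapy.Spider):
    name = "command_line_crawler"

    def __init__(self, allowed_domains: str = "", *args, **kwargs):
        super().__init__(*args, **kwargs)
        # Scrapy passes `-a key=value` arguments to the spider constructor.
        self.allowed_domains = [d.strip() for d in allowed_domains.split(",") if d.strip()]
        self.start_urls = [f"https://{d}" for d in self.allowed_domains]

    def parse(self, response):
        # Yield the page URL and title, then follow links on the page
        # (off-domain links are filtered out by the allowed_domains setting).
        yield {
            "url": response.url,
            "title": response.css("title::text").get(),
        }
        for href in response.css("a::attr(href)").getall():
            yield response.follow(href, callback=self.parse)
```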