
Lada Detectron

Using Mask R-CNN on a Custom Dataset with Detectron2 in a Jupyter Notebook

1. Create a conda environment:
conda create --name env-name ipykernel gitpython
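Then activate it (using the environment name from the command above):

conda activate env-name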
2. Clone the GitHub repository:
from git import Repo
# clone the repository into a directory of your choice
Repo.clone_from("https://github.com/ihamdi/Lada-Detectron.git", "/your/directory/")

or download and extract a copy of the files.
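Alternatively, a plain git clone from the command line does the same thing:

git clone https://github.com/ihamdi/Lada-Detectron.git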

3. Install PyTorch according to your machine. For example:
conda install pytorch torchvision torchaudio cudatoolkit=10.2 -c pytorch
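If you are not sure which CUDA toolkit version your machine supports, nvidia-smi reports the highest CUDA version the installed driver supports:

nvidia-smi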
4. Install Detectron2: visit the official website and follow the instructions for your operating system and PyTorch/CUDA toolkit version.

For example, I had to run the following line on my Linux machine with PyTorch 1.10.0 and CUDA toolkit 10.2:

python -m pip install detectron2 -f https://dl.fbaipublicfiles.com/detectron2/wheels/cu102/torch1.10/index.html
5. Install the dependencies from the requirements.txt file:
pip install -r requirements.txt
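Once everything is installed, a quick sanity check (a minimal sketch) confirms that PyTorch and Detectron2 import correctly and that the GPU is visible:

import torch
import detectron2

# print versions and whether CUDA is usable from this environment
print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("detectron2:", detectron2.__version__)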

Dataset:

The custom dataset is included in this repository alongside the code. Images were obtained from Google Images, and the annotations were created and exported to JSON using VGG Image Annotator. The dataset has a total of 41 images: 29 for training and 12 for validation.
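For reference, each annotation region in the exported JSON follows the VIA polygon format, which looks roughly like this (an illustrative sketch written as a Python literal; the coordinate values are made up):

# illustrative VIA polygon region (coordinates are made-up example values)
region = {
    "shape_attributes": {
        "name": "polygon",
        "all_points_x": [120, 310, 305, 118],
        "all_points_y": [80, 85, 240, 238],
    },
    "region_attributes": {},
}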

How to Use:

Press Run All in the Jupyter Notebook to load the data, then train and validate a model on the dataset.

If you'd like to create and use your own custom dataset, all you have to do is follow the same directory structure and place the JSON files produced by VGG Image Annotator in the corresponding folder. The Notebook is specifically designed to handle the JSON files produced by that program.
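Behind the scenes, the Notebook registers each split with Detectron2's dataset catalogs along roughly these lines (a sketch following the Detectron2 tutorial this repo is based on; get_lada_dicts, the folder names, and the class name are assumptions rather than the Notebook's exact identifiers, with get_lada_dicts sketched at the end of the Changes section below):

from detectron2.data import DatasetCatalog, MetadataCatalog

# register train/val splits under hypothetical dataset names and folders
for split in ["train", "val"]:
    DatasetCatalog.register("lada_" + split, lambda split=split: get_lada_dicts("lada/" + split))
    MetadataCatalog.get("lada_" + split).set(thing_classes=["lada"])

Once registered, a split can be referenced by name in the training config, e.g. cfg.DATASETS.TRAIN = ("lada_train",).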

Results:

As seen in the validation section of the Notebook, the model generally does very well at segmenting Ladas, even when the car is partly covered by an object. However, it seems to get confused when there is more than one Lada in the image.

Ideally, the training data would include images containing more than one car.


Changes made to Installation Tutorial:

  1. Inside the get_xxxx_dicts function,
for _, anno in annos.items():
    assert not anno["region_attributes"]
    anno = anno["shape_attributes"]

was replaced with

annos = annos[0]["shape_attributes"]

since the JSON file exported by VGG Image Annotator stores the regions as a list rather than a dictionary.
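For context, the loader with that change applied looks roughly like this. This is a sketch adapted from the Detectron2 balloon tutorial that this repo follows; the function name get_lada_dicts, the via_region_data.json filename, and the one-region-per-image assumption mirror that tutorial and may differ from the Notebook:

import json
import os

import cv2
from detectron2.structures import BoxMode

def get_lada_dicts(img_dir):
    # hypothetical name mirroring the Notebook's get_xxxx_dicts function
    with open(os.path.join(img_dir, "via_region_data.json")) as f:
        imgs_anns = json.load(f)

    dataset_dicts = []
    for idx, v in enumerate(imgs_anns.values()):
        filename = os.path.join(img_dir, v["filename"])
        height, width = cv2.imread(filename).shape[:2]

        record = {
            "file_name": filename,
            "image_id": idx,
            "height": height,
            "width": width,
        }

        # VIA exports "regions" as a list, hence the change described above
        anno = v["regions"][0]["shape_attributes"]
        px = anno["all_points_x"]
        py = anno["all_points_y"]
        # flatten the polygon into [x0, y0, x1, y1, ...] at pixel centers
        poly = [(x + 0.5, y + 0.5) for x, y in zip(px, py)]
        poly = [p for xy in poly for p in xy]

        record["annotations"] = [{
            "bbox": [min(px), min(py), max(px), max(py)],
            "bbox_mode": BoxMode.XYXY_ABS,
            "segmentation": [poly],
            "category_id": 0,
        }]
        dataset_dicts.append(record)
    return dataset_dicts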


Background:

This was done purely for learning purposes (and fun) and to get more familiar with Detectron2. Detectron2 is already quite powerful at detecting everyday objects such as people, cars, and umbrellas, but I was curious to see how to train it on a new object. The Lada was chosen simply because its straight-lined, boxy shape made it the easiest car to annotate.

For future work, I would like to see how well it can differentiate between car models. Another thing I would like to try is detecting and reading number plates, as a rudimentary building block for a photo-radar system.


Contact:

For any questions or feedback, please feel free to post comments or contact me at ibraheem.hamdi@mbzuai.ac.ae


References:

Getting Started with Detectron2 was used as the base for this code.

VGG Image Annotator by the Visual Geometry Group at the University of Oxford.

Detectron2's GitHub page by Facebook Research.