🔥 News: our team won 3rd place in the AICity 2021 Challenge, Track 4!
This is the source code of Team WHU_IIP for Track 4 (Anomaly Detection) in the AICity 2021 Challenge.
Our experiments on the Track 4 test set yielded an F1-score of 0.9302 and a root mean square error (RMSE) of 3.4039, placing 3rd in the challenge.
Fig1 Rank of our team
More implementation details are given in the paper "Dual-Modality Vehicle Anomaly Detection via Bilateral Trajectory Tracing".
The paper link will be added after CVPRW 2021. Here we only show the flow chart for a better understanding of the following procedures.
Fig2 Flow Chart
- Linux (tested on Ubuntu 16.04.5)
- Packages (listed in the requirements.txt)
We annotated 3657 images selected from the training dataset. The training and testing sets are randomly split at a ratio of 4:1. The annotation link is given below.
Annotations link: Google drive
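The 4:1 random split mentioned above can be sketched with a seeded shuffle (a minimal illustration; `split_dataset` is a hypothetical helper, not a script from this repo):

```python
import random

def split_dataset(items, ratio=0.8, seed=0):
    """Randomly split items into train/test parts at the given ratio (4:1 here)."""
    rng = random.Random(seed)  # fixed seed keeps the split reproducible
    shuffled = items[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * ratio)
    return shuffled[:cut], shuffled[cut:]

train, test = split_dataset(list(range(3657)))
print(len(train), len(test))  # 2925 732
```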
```bash
cd bg_code
python ex_bg_mog.py
```
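The script name `ex_bg_mog.py` suggests MOG (Mixture-of-Gaussians) background subtraction. The repo's implementation is not shown here, but the underlying idea can be sketched with a simple running-average background model (`update_background` and `foreground_mask` are illustrative names, not the repo's API):

```python
import numpy as np

def update_background(bg, frame, alpha=0.05):
    """Exponential running average: a simplified stand-in for MOG modeling."""
    return (1 - alpha) * bg + alpha * frame

def foreground_mask(bg, frame, thresh=30):
    """Pixels deviating from the background beyond thresh count as foreground."""
    return (np.abs(frame - bg) > thresh).astype(np.uint8)

# toy example: a static background with a bright 2x2 "vehicle" patch appearing
bg = np.zeros((8, 8))
frame = bg.copy()
frame[2:4, 2:4] = 255
print(foreground_mask(bg, frame).sum())  # 4 foreground pixels
```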
The original videos and their frames are placed under the `../PreData/Origin-Test` and `../PreData/Origin-Frame` folders, respectively, and the background modeling results are placed under the `../PreData/Forward-Bg-Frame` folder. All these files are organized for the later Detect step. The detection results based on background modeling are then saved under `../PreData/Bg-Detect-Result/Forward_full` for each full video, while `../PreData/Bg-Detect-Result/Forward` keeps the per-frame results split from the full videos.
The detailed structure is shown below.
```
├── Bg-Detect-Result
│   ├── Forward
│   │   ├── 1
│   │   │   ├── test_1_00000.jpg.npy
│   │   │   ├── test_1_00001.jpg.npy
│   │   │   ├── test_1_00002.jpg.npy
│   │   │   └── ...
│   │   ├── 2
│   │   ├── 3
│   │   └── ...
│   └── Forward_full
│       ├── 1.npy
│       ├── 2.npy
│       ├── 3.npy
│       └── ...
├── Forward-Bg-Frame
│   ├── 1.mp4
│   ├── 2.mp4
│   ├── 3.mp4
│   └── ...
├── Origin-Frame
│   ├── 1
│   │   ├── 1_00001.jpg
│   │   ├── 1_00002.jpg
│   │   ├── 1_00003.jpg
│   │   └── ...
│   ├── 2
│   ├── 3
│   └── ...
└── Origin-Test
    ├── 1.mp4
    ├── 2.mp4
    ├── 3.mp4
    └── ...
```
For detection model training instructions, please refer to the official YOLOv5 repo.
```bash
cd mask_code
python mask_frame_diff.py start_num end_num
python mask_track.py video_num
python mask_fuse.py video_num
```
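The name `mask_frame_diff.py` suggests building motion masks from inter-frame differences. A minimal numpy sketch of that idea (function name and threshold are illustrative, not taken from the repo):

```python
import numpy as np

def frame_diff_mask(frames, thresh=25):
    """Accumulate inter-frame differences; pixels where motion was ever observed
    form the motion mask."""
    motion = np.zeros(frames[0].shape, dtype=np.float64)
    for prev, cur in zip(frames, frames[1:]):
        motion += np.abs(cur.astype(np.float64) - prev) > thresh
    return motion > 0  # True where any significant change occurred

# toy example: one transient bright pixel in the middle frame
frames = [np.zeros((4, 4), dtype=np.uint8) for _ in range(3)]
frames[1][1, 1] = 200
print(frame_diff_mask(frames).sum())  # 1 moving pixel
```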
```bash
cd pixel_track/coarse_ddet
python pixel-level_tracking.py start_num end_num
```
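Tracking at this stage typically associates detections across consecutive frames. A greedy IoU-matching sketch (the helper names and threshold are illustrative, not the repo's code):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def match_detections(tracks, dets, thresh=0.3):
    """Greedily assign each track the unused detection with the highest IoU."""
    matches, used = {}, set()
    for tid, box in tracks.items():
        best, best_iou = None, thresh
        for j, d in enumerate(dets):
            if j not in used and iou(box, d) > best_iou:
                best, best_iou = j, iou(box, d)
        if best is not None:
            matches[tid] = best
            used.add(best)
    return matches

# track 0 matches the nearby detection (index 1), not the distant one (index 0)
print(match_detections({0: (0, 0, 10, 10)}, [(50, 50, 60, 60), (1, 1, 11, 11)]))
# {0: 1}
```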
```bash
cd pixel_track/post_process
python similar.py start_num end_num
python filter.py
python pixel_fuse.py
python timeback_pixel.py type_num start_num end_num
python sync.py
```
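The name `timeback_pixel.py` suggests tracing back in time from the frame where a vehicle is detected as static, to find when the anomaly actually began. A sketch of such backtracking, using normalized cross-correlation as an appearance similarity (all names and thresholds are illustrative assumptions):

```python
import numpy as np

def backtrack_start(patches, static_patch, sim_thresh=0.9):
    """Scan backward from the static-detection frame: the anomaly start is the
    earliest frame whose patch still matches the stopped vehicle's appearance."""
    def ncc(a, b):  # normalized cross-correlation in [-1, 1]
        a = a - a.mean()
        b = b - b.mean()
        denom = np.sqrt((a * a).sum() * (b * b).sum())
        return 1.0 if denom == 0 else float((a * b).sum() / denom)
    start = len(patches) - 1
    for t in range(len(patches) - 1, -1, -1):
        if ncc(patches[t], static_patch) < sim_thresh:
            break
        start = t
    return start

static = np.array([[1.0, 0.0], [0.0, 1.0]])  # appearance of the stopped vehicle
noise = np.array([[0.0, 1.0], [1.0, 0.0]])   # background before it arrives
patches = [noise, noise, static, static, static]
print(backtrack_start(patches, static))  # 2: the vehicle first appears at frame 2
```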
This module traces the exact time of the crash, since the preceding steps can only locate the time when an anomalous vehicle becomes static.
```bash
cd car_crash
python crash_track.py
```
Statistically, vehicle crashes are often accompanied by sharp turns, the typical first reaction of drivers encountering such anomalies. Some typical scenarios are listed here to illustrate this.
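One way to flag such sharp turns is to measure per-step heading changes along a tracked trajectory of box centers; a minimal sketch (illustrative, not the repo's `crash_track.py`):

```python
import numpy as np

def heading_changes(traj):
    """Per-step heading change (degrees) along an (N, 2) trajectory of centers."""
    v = np.diff(np.asarray(traj, dtype=np.float64), axis=0)  # step vectors
    ang = np.degrees(np.arctan2(v[:, 1], v[:, 0]))           # step headings
    d = np.abs(np.diff(ang))
    return np.minimum(d, 360.0 - d)  # wrap differences into [0, 180]

# straight motion followed by a sharp 90-degree turn, a plausible crash signature
traj = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]
print(heading_changes(traj))  # [ 0. 90.  0.]
```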
```
@InProceedings{Chen_2021_CVPR,
    author    = {Chen, Jingyuan and Ding, Guanchen and Yang, Yuchen and Han, Wenwei and Xu, Kangmin and Gao, Tianyi and Zhang, Zhe and Ouyang, Wanping and Cai, Hao and Chen, Zhenzhong},
    title     = {Dual-Modality Vehicle Anomaly Detection via Bilateral Trajectory Tracing},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2021},
    pages     = {4016-4025}
}
```
If you have any questions, please feel free to contact us (jchen157@u.rochester.edu and yuchen_yang@whu.edu.cn).