HO3V: A Dataset for Arbitrary View Action Recognition via Transfer Dictionary Learning on Synthetic Training Data
We present the HO3V dataset for arbitrary view action recognition, as described in the paper Arbitrary View Action Recognition via Transfer Dictionary Learning on Synthetic Training Data. HO3V includes two sub-datasets, tailored to N-UCLA and IXMAS respectively, each consisting of a set of AVI videos and BVH motion files.
The AVI video files can be viewed with any standard video player. The BVH motion data can be visualised in Autodesk MotionBuilder, for which YouTube tutorials are available.
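BVH is a plain-text format: a HIERARCHY section declares the skeleton (ROOT/JOINT keywords with offsets and channels), followed by a MOTION section of per-frame channel values. As a minimal sketch of how the skeleton can be inspected programmatically, the following extracts joint names from BVH text; the inline sample is illustrative and not taken from the HO3V data:

```python
import re

def bvh_joint_names(bvh_text):
    """Return joint names in the order they appear in the HIERARCHY block."""
    # ROOT and JOINT lines each name one joint in standard BVH syntax.
    return re.findall(r"^\s*(?:ROOT|JOINT)\s+(\S+)", bvh_text, flags=re.M)

# Tiny illustrative BVH snippet (not from the dataset):
sample = """HIERARCHY
ROOT Hips
{
    OFFSET 0.0 0.0 0.0
    CHANNELS 6 Xposition Yposition Zposition Zrotation Xrotation Yrotation
    JOINT Spine
    {
        OFFSET 0.0 5.0 0.0
        CHANNELS 3 Zrotation Xrotation Yrotation
        End Site
        {
            OFFSET 0.0 5.0 0.0
        }
    }
}
MOTION
Frames: 1
Frame Time: 0.033333
0 0 0 0 0 0 0 0 0
"""

print(bvh_joint_names(sample))  # ['Hips', 'Spine']
```

The actual joint hierarchy in the HO3V files is determined by the motion-capture rig used; reading the HIERARCHY section of any one file reveals it.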
By using this dataset, you agree to cite the following research publication in all related project documents/publications:
Jingtian Zhang, Lining Zhang, Hubert P. H. Shum and Ling Shao, "Arbitrary View Action Recognition via Transfer Dictionary Learning on Synthetic Training Data," in Proceedings of the 2016 IEEE International Conference on Robotics and Automation (ICRA), 2016.
@inproceedings{zhang16arbitrary,
  author={Zhang, Jingtian and Zhang, Lining and Shum, Hubert P. H. and Shao, Ling},
  title={Arbitrary View Action Recognition via Transfer Dictionary Learning on Synthetic Training Data},
  booktitle={Proceedings of the 2016 IEEE International Conference on Robotics and Automation},
  series={ICRA '16},
  year={2016},
  month={5},
  pages={1678--1684},
  numpages={8},
  doi={10.1109/ICRA.2016.7487309},
  publisher={IEEE},
  location={Stockholm, Sweden},
}