Welcome to the UA-DETRAC Benchmark Suite!



UA-DETRAC is a challenging real-world multi-object detection and multi-object tracking benchmark. The dataset consists of 10 hours of video captured with a Canon EOS 550D camera at 24 different locations in Beijing and Tianjin, China. The videos are recorded at 25 frames per second (fps) with a resolution of 960×540 pixels. The UA-DETRAC dataset contains more than 140 thousand frames and 8,250 manually annotated vehicles, for a total of 1.21 million labeled bounding boxes. We also benchmark state-of-the-art object detection and multi-object tracking methods, using the evaluation metrics detailed on this website.

* We have released the annotations and evaluation tools for the UA-DETRAC test set! You can evaluate your algorithms offline.

* The UA-DETRAC challenge is now partnered with the AI City Challenge!


Demo

This demo video illustrates annotated frames in the DETRAC dataset. The colors of the bounding boxes reflect the occlusion status: fully visible (red), partially occluded by other vehicles (blue), or partially occluded by background (pink). The vehicle ID, orientation, vehicle type, and truncation ratio are shown with each bounding box. Black opaque regions, which mark general background, are ignored in the benchmark. The weather condition, camera status, and vehicle density are shown in the bottom left corner of each frame.

The demo video may take a few seconds to load.
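
To make the color coding concrete, the following Python sketch draws annotated boxes colored by occlusion status and blacks out the ignored background regions. This is a minimal sketch only: the Target fields, occlusion labels, and ignored-region format are illustrative assumptions rather than the exact schema of the released annotation files.

            # Minimal visualization sketch (assumes OpenCV and NumPy are installed).
            # The Target fields, occlusion labels, and ignored-region format below are
            # illustrative assumptions, not the exact schema of the released annotations.
            from dataclasses import dataclass
            import cv2
            import numpy as np

            # BGR colors matching the demo: fully visible (red), occluded by
            # another vehicle (blue), occluded by background (pink).
            OCCLUSION_COLORS = {"none": (0, 0, 255),
                                "vehicle": (255, 0, 0),
                                "background": (203, 192, 255)}

            @dataclass
            class Target:                  # hypothetical per-vehicle record
                track_id: int
                box: tuple                 # (left, top, width, height) in pixels
                vehicle_type: str
                truncation_ratio: float
                occlusion: str             # "none", "vehicle", or "background"

            def draw_frame(frame: np.ndarray, targets, ignored_regions):
                """Draw annotated boxes and black out ignored background regions."""
                for left, top, w, h in ignored_regions:
                    frame[top:top + h, left:left + w] = 0          # opaque black region
                for t in targets:
                    left, top, w, h = map(int, t.box)
                    color = OCCLUSION_COLORS[t.occlusion]
                    cv2.rectangle(frame, (left, top), (left + w, top + h), color, 2)
                    label = f"{t.track_id} {t.vehicle_type} trunc={t.truncation_ratio:.2f}"
                    cv2.putText(frame, label, (left, max(top - 4, 12)),
                                cv2.FONT_HERSHEY_SIMPLEX, 0.4, color, 1)
                return frame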

The following figures analyze the performance of object detection and tracking algorithms with respect to attributes including vehicle category, weather, scale, occlusion ratio, and truncation ratio.


  • Vehicle category   We classify the vehicles into four categories, i.e., car, bus, van, and others.
  • Weather   We consider four categories of weather conditions, i.e., cloudy, night, sunny, and rainy.
  • Scale   We define the scales of the annotated vehicles as the square root of their area in pixels. We group vehicles into three scales: small (0-50 pixels), medium (50-150 pixels), and large (more than 150 pixels).
  • Occlusion ratio   We use the fraction of the vehicle bounding box that is occluded to define the degree of occlusion, and classify it into three categories: no occlusion, partial occlusion (occlusion ratio between 1% and 50%), and heavy occlusion (occlusion ratio larger than 50%). A minimal sketch of the scale and occlusion groupings follows this list.
  • Truncation ratio   The truncation ratio indicates the fraction of the vehicle that lies outside the frame and is used in training sample selection.
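
The Python sketch below illustrates the scale and occlusion groupings defined above. The thresholds follow the definitions in this list; the function names are illustrative only.

            # Attribute grouping sketch; thresholds follow the definitions above.
            import math

            def scale_group(width: float, height: float) -> str:
                """Scale = square root of the box area in pixels."""
                scale = math.sqrt(width * height)
                if scale <= 50:
                    return "small"       # 0-50 pixels
                if scale <= 150:
                    return "medium"      # 50-150 pixels
                return "large"           # more than 150 pixels

            def occlusion_group(occlusion_ratio: float) -> str:
                """Occlusion ratio = occluded fraction of the vehicle bounding box."""
                if occlusion_ratio < 0.01:
                    return "no occlusion"
                if occlusion_ratio <= 0.5:
                    return "partial occlusion"   # 1%-50%
                return "heavy occlusion"         # more than 50%

            # Example: a 120x90 box that is 30% occluded.
            print(scale_group(120, 90), occlusion_group(0.30))   # medium partial occlusion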

Citation

Please include the following citations if you use this dataset in your research.

            @article{CVIU_UA-DETRAC,
              author    = {Longyin Wen and Dawei Du and Zhaowei Cai and Zhen Lei and Ming{-}Ching Chang and
                           Honggang Qi and Jongwoo Lim and Ming{-}Hsuan Yang and Siwei Lyu},
              title     = {{UA-DETRAC:} {A} New Benchmark and Protocol for Multi-Object Detection and Tracking},
              journal   = {Computer Vision and Image Understanding},
              year      = {2020}
            }

            @inproceedings{lyu2018ua,
              author    = {Lyu, Siwei and Chang, Ming-Ching and Du, Dawei and Li, Wenbo and Wei, Yi and Del Coco, Marco and
                           Carcagn{\`\i}, Pierluigi and Schumann, Arne and Munjal, Bharti and Choi, Doo-Hyun and others},
              title     = {UA-DETRAC 2018: Report of AVSS2018 \& IWT4S challenge on advanced traffic monitoring},
              booktitle = {2018 15th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS)},
              pages     = {1--6},
              year      = {2018},
              organization = {IEEE}
            }

            @inproceedings{lyu2017ua,
              author    = {Lyu, Siwei and Chang, Ming-Ching and Du, Dawei and Wen, Longyin and Qi, Honggang and Li, Yuezun and
                           Wei, Yi and Ke, Lipeng and Hu, Tao and Del Coco, Marco and others},
              title     = {UA-DETRAC 2017: Report of AVSS2017 \& IWT4S Challenge on Advanced Traffic Monitoring},
              booktitle = {2017 14th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS)},
              pages     = {1--7},
              year      = {2017},
              organization = {IEEE}
            }
            


News

  • Jan. 19 2020: Our benchmark paper is accepted to the CVIU journal.
  • Aug. 21 2019: The annotations for the test set are released.
  • Jun. 14 2019: The new annotations for vehicle type and color are released.
  • Aug. 23 2016: The evaluation toolkit and the source code of several state-of-the-art trackers are available in the new version of the website.
  • Nov. 22 2015: The DETRAC Benchmark Suite goes online, starting with the release of the vehicle detection and tracking datasets.
  • Nov. 15 2015: Our arXiv paper describing the DETRAC Benchmark Dataset is available for download.


Privacy

This dataset is made available for academic use only. We have done our best to exclude identifiable information from the data to protect privacy. If you find your vehicle or personal information in this dataset, please contact us and we will remove the corresponding information from our dataset. We are not responsible for any actual or potential harm resulting from the use of this dataset.


Copyright

The UA-DETRAC dataset described on this page is distributed under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 License, which means that you must (1) attribute the work as specified by the original authors, (2) not use this work for commercial purposes (for commercial use, please contact us), and (3) distribute any work that alters, transforms, or builds upon this one only under the same license. The dataset is provided "as is" and we are not responsible for any consequences of using this dataset.


Acknowledgement

This work is partly supported by the National Science Foundation under Grant No. CCF-1319800. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. The website environment is provided and supported by the Research IT Group of the University at Albany, State University of New York (SUNY). We also thank the following individuals for their contributions to data annotations: Zhidan Wang, Fenfen Sheng, Yunteng Zhang, Yuxin Chen, Bin Liu, Lejie Chang, Yunxia Wang, Yuping Zhang, Jialun Chen and Abhineet Kumar Pandey.