1. When submitting tracking results, there is a 'speed' item in the instructions. What is it? Will it be considered in the evaluation?
In the tracking results, the 'speed' item describes the running speed of the tracker (i.e., fps), which is regarded as an additional measure. If two trackers A and B achieve similar performance, the faster one is regarded as the better one.
2. What is the relationship between the .txt files and the .csv files?
The detections for the trackers are provided in txt format. The data in the txt files are organized in CSV (Comma Separated Value) style, i.e., each line contains numbers separated by commas.
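As a minimal sketch, the files can be read with any CSV reader. The snippet below assumes only what the answer above states (comma-separated numbers per line); the file name and column meanings are illustrative assumptions, not the official schema.

```python
import csv

# Read a DETRAC-style detection .txt file organized in CSV style.
# Each non-empty line is parsed into a list of floats; no particular
# column layout is assumed here.
def load_detections(path):
    rows = []
    with open(path, newline="") as f:
        for fields in csv.reader(f):
            if fields:  # skip blank lines
                rows.append([float(v) for v in fields])
    return rows
```

The same approach works with any CSV library (or a simple `line.split(",")`), since the files contain no header row or quoting.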
3. What is the relationship between the XML annotation and the MAT annotation?
The XML annotation contains the full annotations with attribute information (e.g., vehicle category, weather, and scale), which is used for detection training. The MAT annotation contains the position information of target trajectories outside the general background regions ignored in the benchmark, which is used for detection and tracking evaluation. Besides, the MAT annotation employs the foot position instead of the top-left corner position used in its XML counterpart. If you use the DETRAC-MOT toolkit, we suggest you download the MAT annotation to avoid the additional XML2MAT conversion.
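To illustrate the coordinate difference, here is a sketch converting a top-left box to a foot position. It assumes the common convention that the foot position is the bottom-center point of the box; this convention is an assumption for illustration, so check it against the toolkit before relying on it.

```python
def topleft_to_foot(left, top, width, height):
    # Convert a (left, top, width, height) box, as in the XML annotation,
    # to a foot position. Assumption: the foot position is the
    # bottom-center point of the bounding box.
    foot_x = left + width / 2.0
    foot_y = top + height
    return foot_x, foot_y
```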
4. Why does the 'OutOfMemory' error occur in the toolkit when I load the dataset?
This may happen when you have only downloaded the XML annotations, so an extra XML2MAT conversion is performed. To avoid the 'OutOfMemory' error when converting the large XML files, please first set the Java heap memory to 1GB under 'Preferences > General > Java Heap Memory' in MATLAB.
5. Why is the detection performance of the raw results I upload poor?
We think this problem stems from the training strategy. Specifically, for our current baseline detectors, we collect the positive samples from the DETRAC-Train set and the negative samples from both the DETRAC-Train and KITTI-Train sets. Besides, invalid detection responses (e.g., too small or too large a size, or low confidence) are also dropped. In this way, the performance is improved.
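A sketch of the filtering step described above follows. The detection format and all threshold values are illustrative assumptions, not the benchmark's official settings; tune them on the training data.

```python
# Drop invalid detection responses: boxes that are implausibly small or
# large, or that have low confidence. Each detection is assumed to be a
# (x, y, w, h, confidence) tuple; the thresholds are illustrative defaults.
def filter_detections(dets, min_side=10, max_side=500, min_conf=0.3):
    kept = []
    for (x, y, w, h, conf) in dets:
        if conf < min_conf:
            continue  # low-confidence response
        if min(w, h) < min_side or max(w, h) > max_side:
            continue  # implausibly small or large box
        kept.append((x, y, w, h, conf))
    return kept
```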
6. Are the results in the ignored regions automatically filtered in the evaluation?
The bounding boxes and the continuous target trajectories overlapping the ignored regions are filtered out in the detection and tracking evaluation, respectively. The overlap threshold is 0.5. This process is automated by our toolkit.
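The per-box part of this filtering can be sketched as below. It assumes the overlap is measured as the fraction of the box's area covered by an ignored region; the toolkit's exact criterion may differ, so treat this only as an illustration of the threshold-0.5 rule.

```python
def overlap_ratio(box, region):
    # Fraction of the box's area covered by the region.
    # Both box and region are (x, y, w, h) tuples.
    bx, by, bw, bh = box
    rx, ry, rw, rh = region
    ix = max(0.0, min(bx + bw, rx + rw) - max(bx, rx))
    iy = max(0.0, min(by + bh, ry + rh) - max(by, ry))
    return (ix * iy) / (bw * bh) if bw * bh > 0 else 0.0

def in_ignored_region(box, regions, threshold=0.5):
    # A box is filtered out if it overlaps any ignored region by at
    # least the threshold (0.5, as stated in the answer above).
    return any(overlap_ratio(box, r) >= threshold for r in regions)
```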
7. Can I submit multiple results for different compared methods?
If you would like to report results in your paper for multiple versions of your algorithm (e.g., with different parameters or features), this must be done on the training data, and only the best-performing setting of the novel method may be submitted for evaluation to our server. If comparisons to baselines from third parties (which have not been evaluated on the benchmark website) are desired, please contact us for a discussion.
8. I found an error in the provided annotations and the toolkit.
It is possible that there are some deficiencies in the annotations and bugs in the beta toolkit. If you have any further inquiries, questions, or comments, please do not hesitate to contact us. Besides, the QQ Group is available to facilitate discussion. We appreciate all kinds of feedback and comments; thank you in advance.