This workshop assesses current evaluation procedures for object detection, highlights their shortcomings, and opens a discussion of possible improvements.
Through a focus on evaluation via challenges, the object detection community has been able to quickly identify effective methods by examining performance metrics. However, as the field progresses rapidly, it is important to assess whether our evaluation metrics and procedures adequately reflect how object detection will be used in practical applications. Quantitative results should be easy to reconcile with a detector's performance in applied tasks. This workshop provides a forum to discuss these ideas and to evaluate whether current standards meet the needs of the object detection community.
In addition, this workshop hosts the latest iteration of the Probabilistic Object Detection (PrOD) Challenge, which requires competitors to estimate semantic and spatial uncertainty.
Program: 28 August 2020
- 09:00 - 09:10 - Welcome, Introduction
- 09:10 - 09:35 - Invited Talk: Emre Akbas (Middle East Technical University)
- 09:35 - 10:00 - Invited Talk: Larry Zitnick (Facebook AI Research)
- 10:00 - 10:30 - Coffee Break
- 10:30 - 10:55 - Invited Talk: TBA
- 10:55 - 11:15 - PrOD Challenge Overview and Discussion of Results
- 11:15 - 11:30 - Contributed Talk: PrOD Challenge Winner (TBA)
- 11:30 - 11:35 - Closing Remarks
- 11:35 - 12:10 - Poster Session
Call for Papers
We invite authors to contribute papers to the workshop. Topics of interest include, but are not limited to:
- New evaluation measures/metrics for object detection
- New evaluation/visualization tools to analyze object detection systems
- New evaluation procedures for better understanding object detection performance
- Examinations of current evaluation procedures
- New datasets designed to examine specific challenges in object detection
- New detection methods that provide contributions/insights unrewarded by current evaluation procedures (e.g. improved detector calibration, probabilistic object detection, etc.)
- Submissions must follow the ECCV format and be at most 4 pages in length, including references
- Abbreviated versions of longer papers published elsewhere are acceptable, provided the original work is properly referenced
- Submit your paper through CMT
- Accepted papers will be presented at a poster session
Participate in the Competition
To participate in the competition, and for more information about the data and submission format, please go to our Codalab Page.
Our challenge requires participants to detect objects in video data from high-fidelity simulation. As a novelty, our evaluation metric rewards accurate estimates of spatial and semantic uncertainty via probabilistic bounding boxes. We developed a new probability-based detection quality (PDQ) evaluation measure for this challenge; please see the arXiv paper for more details.
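For intuition, the following is a minimal, simplified sketch of how a probabilistic detection might be scored. It is not the official PDQ implementation (please refer to the arXiv paper and the challenge code for that); the function names, the Gaussian-corner parameterisation, and the particular spatial score used here are illustrative assumptions. It only aims to show the general idea that a score can jointly reward spatial accuracy, spatial confidence calibration, and label confidence.

```python
import numpy as np

# Illustrative sketch only -- NOT the official PDQ implementation.
# A "probabilistic bounding box" is modelled here as Gaussian-distributed
# top-left and bottom-right corners, plus a label probability distribution.

def gaussian_corner_box(mean_tl, mean_br, cov_tl, cov_br):
    """A probabilistic box: Gaussian top-left / bottom-right corners."""
    return {"tl": (np.asarray(mean_tl, float), np.asarray(cov_tl, float)),
            "br": (np.asarray(mean_br, float), np.asarray(cov_br, float))}

def corner_quality(mean, cov, gt_corner):
    """Illustrative spatial score for one corner: the Gaussian density at
    the ground-truth corner, normalised by the density at the mean, so a
    perfectly placed, confident estimate scores 1.0 and scores decay as
    the estimate drifts from the ground truth."""
    diff = np.asarray(gt_corner, float) - mean
    return float(np.exp(-0.5 * diff @ np.linalg.inv(cov) @ diff))

def pairwise_quality(det_box, label_probs, gt_box, gt_label):
    """Combine spatial and semantic (label) quality with a geometric
    mean, in the spirit of a pairwise probabilistic detection score."""
    q_spatial = np.sqrt(
        corner_quality(*det_box["tl"], gt_box[0]) *
        corner_quality(*det_box["br"], gt_box[1]))
    q_label = label_probs[gt_label]          # probability of the true class
    return float(np.sqrt(q_spatial * q_label))
```

For example, a detection whose corner means coincide with the ground-truth corners and which assigns probability 0.9 to the correct class scores sqrt(0.9), while the same detection evaluated against an offset ground-truth box scores strictly less; the geometric mean ensures a detection cannot compensate for poor spatial quality with high label confidence alone.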
Submissions must be accompanied by a paper of at most 4 pages (including references) explaining the method and any external data used. Please use the ECCV paper format (submissions need not be double-blind); submission details will be provided closer to the date. Top-performing submissions from the challenge will be invited to present their methods at the workshop.
- 14 July 2020: Final submissions to the evaluation server via Codalab
- 21 July 2020: Paper submission via CMT
- 28 July 2020: Winner announcements and workshop invitations
- 28 August 2020: Workshop at ECCV
The workshop organisers are with the Australian Centre for Robotic Vision.