This workshop assesses current evaluation procedures for object detection, highlights their shortcomings and opens discussion for possible improvements.

Through its focus on challenge-based evaluation, the object detection community has been able to quickly identify effective methods by examining performance metrics. However, as progress accelerates, it is important to assess whether our evaluation metrics and procedures adequately align with how object detection will be used in practical applications. Quantitative results should be easily reconciled with a detector’s performance in applied tasks. This workshop provides a forum to discuss these ideas and to evaluate whether current standards meet the needs of the object detection community.

In addition, this workshop hosts the latest iteration of the Probabilistic Object Detection (PrOD) Challenge, which requires competitors to estimate both semantic and spatial uncertainty.

Online Workshop Details: 28th August 2020

Due to the current COVID-19 crisis, ECCV will be held online with pre-recorded video presentations and two interactive sessions. We plan to have 45-minute presentations by invited speakers and shorter videos from contributed papers and PrOD Challenge competitors (TBC). Further details on the interactive sessions are yet to come, so please check back regularly for updates on these and the presentations. You can also find the latest details on the ECCV COVID updates page here.

Planned Video Presentations:

Call for Papers

We invite authors to contribute papers to the workshop. Topics of interest include, but are not limited to:

Author Instructions:

Participate in the Competition

To participate in the competition, and for more information about the data and submission format, please go to our Codalab Page.

Our challenge requires participants to detect objects in video data (from high-fidelity simulation). As a novelty, our evaluation metric rewards accurate estimates of spatial and semantic uncertainty using probabilistic bounding boxes. We developed a new probability-based detection quality (PDQ) evaluation measure for this challenge; please see the arxiv paper for more details.
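To give a rough intuition for probabilistic bounding boxes, the sketch below models each box corner as a Gaussian-distributed coordinate and computes the probability that a given pixel lies inside the box. This is a simplified illustration, not the official PDQ implementation (which is defined in the arxiv paper and uses full corner covariance matrices); the function names and the isotropic-variance assumption are our own.

```python
import math


def corner_cdf(x, mean, var):
    """Gaussian CDF: probability that a corner coordinate lies below x."""
    return 0.5 * (1.0 + math.erf((x - mean) / math.sqrt(2.0 * var)))


def pixel_probability(px, py, top_left, bottom_right, var=4.0):
    """Probability that pixel (px, py) falls inside a box whose corners are
    Gaussian-distributed with the given mean positions and a shared
    isotropic variance (a simplifying assumption for illustration)."""
    x0, y0 = top_left
    x1, y1 = bottom_right
    # The pixel is inside iff it is right of / below the top-left corner
    # and left of / above the bottom-right corner; coordinates are treated
    # as independent, so the joint probability is a product of marginals.
    return (corner_cdf(px, x0, var) * corner_cdf(py, y0, var)
            * (1.0 - corner_cdf(px, x1, var))
            * (1.0 - corner_cdf(py, y1, var)))
```

A pixel deep inside the mean box gets a probability near 1, a pixel far outside gets a probability near 0, and pixels near the box edges get intermediate values that grow softer as the corner variance increases; a per-pixel foreground probability of this kind is what lets a spatial-quality measure reward well-calibrated uncertainty rather than only hard box overlap.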

Submissions must be accompanied by a paper of at most 4 pages (including references) explaining the method and any external data used. Please use the ECCV paper format (there is no need to keep it double-blind); submission details will be provided closer to the date. Top-performing submissions from the challenge will be invited to present their methods at the workshop.

Important Dates


The workshop organisers are with the Australian Centre for Robotic Vision.

David Hall
Queensland University of Technology
Niko Sünderhauf
Queensland University of Technology
Feras Dayoub
Queensland University of Technology
Gustavo Carneiro
University of Adelaide
Chunhua Shen
University of Adelaide