This workshop assesses current evaluation procedures for object detection, highlights their shortcomings, and opens a discussion of possible improvements.

Through a focus on evaluation using challenges, the object detection community has been able to quickly identify which methods are effective by examining performance metrics. However, as this technological boom progresses, it is important to assess whether our evaluation metrics and procedures adequately align with how object detection will be used in practical applications. Quantitative results should be easily reconciled with a detector’s performance in applied tasks. This workshop provides a forum to discuss these ideas and evaluate whether current standards meet the needs of the object detection community.

In addition, this workshop hosts the latest iteration of the Probabilistic Object Detection (PrOD) Challenge which requires competitors to estimate semantic and spatial uncertainty.

Online Workshop Details: 28th August 2020

Due to the COVID-19 crisis, ECCV was held online with pre-recorded video presentations and two interactive sessions. To access all recordings and papers, log in to the ECCV online platform here.

Video Presentations

We have approximately 45-minute presentations from invited speakers and shorter videos (max. 10 minutes) from contributed papers and PrOD Challenge competitors. Videos are currently available through the ECCV virtual platform and on YouTube. Full links to the papers are currently only available through ECCV.

Please note that the ECCV links given below require you to be logged in to the ECCV conference platform.

A full YouTube playlist of all presentations and interactive sessions can be found here.

Invited Speaker/Organizer Presentations:

Workshop Papers

Probabilistic Object Detection (PrOD) Challenge Papers

Interactive Sessions

We held two separate interactive sessions where ECCV delegates could talk with our invited speakers, accepted paper authors, and some workshop organizers. Each session was an interactive panel with questions submitted by members of the community, either within the session or in advance.

Each interactive session was recorded; YouTube links to the sessions are below.

Session 1 28th August 00:00-02:00 UTC+1

Session 2 28th August 08:00-10:00 UTC+1

Call for Papers

We invited authors to contribute papers to the workshop. Topics of interest include, but are not limited to:

Participate in the Competition

To participate in the competition, and for more information about the data and submission format, please go to our Codalab Page.

Our challenge requires participants to detect objects in video data from a high-fidelity simulation. As a novelty, our evaluation metric rewards accurate estimates of spatial and semantic uncertainty, expressed via probabilistic bounding boxes. We developed a new probability-based detection quality (PDQ) evaluation measure for this challenge; please see the arXiv paper for more details.
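To give a feel for how PDQ differs from IoU-style metrics, here is a deliberately simplified sketch of the pairwise score for a single detection/ground-truth pair. The function name, argument layout, and the way per-pixel probabilities are supplied directly are our own illustrative choices; the full PDQ of the arXiv paper derives those probabilities from probabilistic bounding boxes with Gaussian corners and assigns detections to ground truths optimally across each frame.

```python
import numpy as np

def pairwise_pdq(label_probs, gt_class, fg_probs, bg_probs, eps=1e-14):
    """Simplified pairwise PDQ sketch for one detection/ground-truth pair.

    label_probs : per-class probability vector output by the detector
    gt_class    : index of the ground-truth class
    fg_probs    : detector's per-pixel probabilities on ground-truth
                  (foreground) pixels
    bg_probs    : detector's per-pixel probabilities on background pixels
                  the detection covers
    """
    # Label quality: probability the detector assigned to the correct class.
    q_label = label_probs[gt_class]

    # Spatial quality: exponentiated negative mean log-loss, penalising
    # low probability on foreground pixels and high probability on
    # background pixels (clipped to avoid log(0)).
    l_fg = -np.mean(np.log(np.clip(fg_probs, eps, 1.0)))
    l_bg = -np.mean(np.log(np.clip(1.0 - bg_probs, eps, 1.0)))
    q_spatial = np.exp(-(l_fg + l_bg))

    # The pairwise score is the geometric mean of the two qualities, so a
    # detection must be both spatially and semantically confident to score
    # well; certainty in one cannot fully compensate for the other.
    return np.sqrt(q_label * q_spatial)
```

A perfectly confident, perfectly placed detection scores 1.0; hedged probabilities on either the label or the pixels pull the geometric mean down smoothly rather than cutting off at a hard IoU threshold.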

Submissions must be accompanied by a paper explaining the method and any external data used. Top-performing submissions from the challenge will be invited to present their methods at the workshop.

This version of the competition is now over, but you can test your methods on our continuous evaluation server in preparation for the next iteration of the challenge.

Important Dates


The workshop organisers are with the Australian Centre for Robotic Vision.

David Hall
Queensland University of Technology
Niko Sünderhauf
Queensland University of Technology
Feras Dayoub
Queensland University of Technology
Gustavo Carneiro
University of Adelaide
Chunhua Shen
University of Adelaide