The Probabilistic Object Detection Challenge

Overview

Our first challenge requires participants to detect objects in video data produced from high-fidelity simulations. The novelty of this challenge is that participants are rewarded for providing accurate estimates of both spatial and semantic uncertainty for every detection using probabilistic bounding boxes.

Accurate spatial and semantic uncertainty estimates are rewarded by our newly developed probability-based detection quality (PDQ) measure. Full details about this new measure are available in our arXiv paper.

We invite anyone who is interested in object detection and appreciates a good challenge to participate and compete, so that we may continue to push the state of the art in object detection in directions better suited to robotics applications. We also welcome any and all feedback about the challenge itself and look forward to hearing from you.

Challenge Participation and Presentation of Results

We maintain two evaluation servers on Codalab. Note: following a major crash on Codalab, our challenge servers are unavailable until September 2019.

Ongoing Evaluation Server

We maintain an ongoing evaluation server with a public leaderboard that can be used year-round to benchmark your approach for probabilistic object detection.

IROS 2019 Competition Evaluation Server

We organise a workshop at IROS 2019 (8 November) on the topic of The Importance of Uncertainty in Deep Learning for Robotics. For that workshop, we will run a second round of the probabilistic object detection challenge. The competition evaluation server is now open for submissions until 10 October. You can also use our ongoing evaluation server along with the available validation and test-dev datasets to improve your algorithms.

CVPR 2019 Competition Evaluation Server

We organised a competition and workshop at CVPR 2019. Four participating teams presented their approaches and results. More details and links to their papers can be found on the workshop website.

How to Cite

When using the dataset and evaluation in your publications, please cite:

@inproceedings{hall2020probability,
  title={Probabilistic Object Detection: Definition and Evaluation},
  author={Hall, David and Dayoub, Feras and Skinner, John and Zhang, Haoyang and Miller, Dimity and Corke, Peter and Carneiro, Gustavo and Angelova, Anelia and S{\"u}nderhauf, Niko},
  booktitle={IEEE Winter Conference on Applications of Computer Vision (WACV)},
  year={2020}
}

What is Probabilistic Object Detection?

For robotics applications, detections must not just provide information about where and what an object is, but must also provide a measure of spatial and semantic uncertainty. Failing to do so can lead to catastrophic consequences from over- or under-confident detections.

Semantic uncertainty can be provided as a categorical distribution over class labels. Spatial uncertainty in the context of object detection can be expressed by augmenting the commonly used bounding box format with covariances for its corner points. That is, a bounding box is represented by two two-dimensional Gaussian distributions, one for the top-left and one for the bottom-right corner. See below for an illustration.

Left: Probabilistic object detections provide bounding box corners as Gaussians (corner point with covariance). Right: This results in a per-pixel probability of belonging to the detected object. Our evaluation takes this spatial uncertainty into account.
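The per-pixel interpretation above can be sketched in a few lines. The snippet below is an illustrative simplification, not the official evaluation code: it assumes independent, axis-aligned Gaussians for each corner (diagonal covariance), whereas a full implementation would handle arbitrary covariance matrices. The function name `box_heatmap` is our own.

```python
import numpy as np
from math import erf

def gaussian_cdf(x, mean, std):
    # Gaussian CDF via the error function; np.vectorize applies it elementwise.
    z = (x - mean) / (std * np.sqrt(2.0))
    return 0.5 * (1.0 + np.vectorize(erf)(z))

def box_heatmap(tl_mean, tl_std, br_mean, br_std, height, width):
    """Per-pixel probability that a pixel lies inside a probabilistic box.

    tl_mean/br_mean are (x, y) corner means; tl_std/br_std are per-axis
    standard deviations (i.e. a diagonal corner covariance).
    """
    ys, xs = np.mgrid[0:height, 0:width]
    # P(pixel is right of the top-left corner) * P(pixel is left of the bottom-right corner)
    p_x = gaussian_cdf(xs, tl_mean[0], tl_std[0]) * (1.0 - gaussian_cdf(xs, br_mean[0], br_std[0]))
    # Same along the vertical axis.
    p_y = gaussian_cdf(ys, tl_mean[1], tl_std[1]) * (1.0 - gaussian_cdf(ys, br_mean[1], br_std[1]))
    return p_x * p_y

heatmap = box_heatmap((10.0, 10.0), (1.0, 1.0), (40.0, 40.0), (1.0, 1.0), 50, 50)
```

Pixels deep inside the box get probabilities near 1, pixels far outside get probabilities near 0, and pixels near an uncertain corner get intermediate values.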

Datasets

For this challenge, we use realistic simulated data from a domestic robot scenario. The dataset contains scenes with cluttered surfaces, and day and night lighting conditions. We simulate domestic service robots of multiple sizes, resulting in sequences with three different camera heights above the ground plane.


We maintain three dataset splits:

All datasets use the same subset of the Microsoft COCO classes:
['bottle', 'cup', 'knife', 'bowl', 'wine glass', 'fork', 'spoon', 'banana', 'apple', 'orange', 'cake', 'potted plant', 'mouse', 'keyboard', 'laptop', 'cell phone', 'book', 'clock', 'chair', 'dining table', 'couch', 'bed', 'toilet', 'television', 'microwave', 'toaster', 'refrigerator', 'oven', 'sink', 'person']
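A full categorical distribution over these 30 classes can be produced from raw detector scores, for example with a softmax. This is a minimal sketch of that conversion, not a prescribed part of the submission format:

```python
import numpy as np

CLASSES = ['bottle', 'cup', 'knife', 'bowl', 'wine glass', 'fork', 'spoon',
           'banana', 'apple', 'orange', 'cake', 'potted plant', 'mouse',
           'keyboard', 'laptop', 'cell phone', 'book', 'clock', 'chair',
           'dining table', 'couch', 'bed', 'toilet', 'television',
           'microwave', 'toaster', 'refrigerator', 'oven', 'sink', 'person']

def softmax(logits):
    # Numerically stable softmax: subtract the max before exponentiating.
    z = np.asarray(logits, dtype=float)
    e = np.exp(z - z.max())
    return e / e.sum()

# Example: raw scores favouring 'bottle' become a distribution over all classes.
logits = np.zeros(len(CLASSES))
logits[CLASSES.index('bottle')] = 3.0
probs = softmax(logits)
```

Unlike reporting only the winning label, the resulting vector keeps probability mass on the competing classes, which is exactly the semantic uncertainty the evaluation rewards.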

Example scenes from the dataset.
Scenes from the validation dataset with labeled objects.

New Evaluation Measure - PDQ

We developed a new probability-based detection quality (PDQ) evaluation measure for this challenge; please see the arXiv paper for more details.

PDQ is a new visual object detector evaluation measure which not only assesses detection quality, but also accounts for the spatial and label uncertainties produced by object detection systems. Current evaluation measures such as mean average precision (mAP) do not take these two aspects into account, accepting detections with no spatial uncertainty and using only the label with the winning score instead of a full class probability distribution to rank detections.

To overcome these limitations, we propose the probability-based detection quality (PDQ) measure which evaluates both spatial and label probabilities, requires no thresholds to be predefined, and optimally assigns groundtruth objects to detections.
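The optimal assignment of ground-truth objects to detections can be illustrated with a tiny brute-force search over a matrix of pairwise quality scores. This is only a sketch for small examples (the function name `optimal_assignment` is ours); real evaluation code would use the Hungarian algorithm, e.g. scipy.optimize.linear_sum_assignment.

```python
import numpy as np
from itertools import permutations

def optimal_assignment(quality):
    """Brute-force one-to-one assignment maximising total pairwise quality.

    quality[i, j] is the pairwise quality of matching ground truth i to
    detection j (assumes no more ground truths than detections).
    """
    n_gt, n_det = quality.shape
    best_score, best_perm = -1.0, None
    for perm in permutations(range(n_det), n_gt):
        # perm[i] is the detection assigned to ground truth i.
        score = sum(quality[i, j] for i, j in enumerate(perm))
        if score > best_score:
            best_score, best_perm = score, perm
    return best_perm, best_score

# Two ground truths, three detections: the best total is gt0->det0, gt1->det1.
quality = np.array([[0.9, 0.1, 0.0],
                    [0.2, 0.7, 0.1]])
assignment, total = optimal_assignment(quality)
```

Because the assignment maximises total quality rather than greedily matching by confidence, no fixed overlap threshold needs to be predefined.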

Our experimental evaluation shows that PDQ rewards detections with accurate spatial probabilities and explicitly evaluates label probability to determine detection quality. PDQ aims to encourage the development of new object detection approaches that provide meaningful spatial and label uncertainty measures.
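To convey the structure of the measure, here is a heavily simplified sketch of how a spatial and a label quality could be combined into one pairwise score via a geometric mean. The helper names and the simplified spatial term are ours; the exact definitions (including the penalty for probability mass placed on background pixels) are in the arXiv paper.

```python
import numpy as np

def label_quality(label_probs, gt_class_index):
    # Probability the detector assigned to the ground-truth class,
    # taken from the full categorical distribution.
    return float(label_probs[gt_class_index])

def spatial_quality(pixel_probs_fg):
    # Simplified: geometric mean of the per-pixel probabilities that
    # ground-truth object pixels belong to the detection. The full PDQ
    # also penalises probability assigned to background pixels.
    p = np.clip(np.asarray(pixel_probs_fg, dtype=float), 1e-14, 1.0)
    return float(np.exp(np.mean(np.log(p))))

def pairwise_quality(label_probs, gt_class_index, pixel_probs_fg):
    # Geometric mean of the two terms: if either quality is zero,
    # the whole pairwise score collapses to zero.
    return float(np.sqrt(label_quality(label_probs, gt_class_index)
                         * spatial_quality(pixel_probs_fg)))

q = pairwise_quality(np.array([0.8, 0.1, 0.1]), 0, [0.9, 0.95, 0.99])
```

The geometric mean is what makes both kinds of uncertainty matter: a confident label cannot compensate for a spatially hopeless box, and vice versa.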

Organisers, Support, and Acknowledgements

Stay in touch and follow us on Twitter for news and announcements: @robVisChallenge.

Niko Sünderhauf
Queensland University of Technology
Feras Dayoub
Queensland University of Technology
David Hall
Queensland University of Technology
John Skinner
Queensland University of Technology
Haoyang Zhang
Queensland University of Technology



The Robotic Vision Challenges organisers are with the Australian Centre for Robotic Vision at Queensland University of Technology (QUT) in Brisbane, Australia.

This project was supported by a Google Faculty Research Award to Niko Sünderhauf in 2018.

Supporters

We thank the following supporters for their valuable input and engaging discussions.

Gustavo Carneiro
University of Adelaide
Anelia Angelova
Google Brain
Anton van den Hengel
University of Adelaide