The Probabilistic Object Detection Challenge

Overview

Our first challenge requires participants to detect objects in video data produced by high-fidelity simulations. Its novelty is that participants are rewarded for providing accurate estimates of both spatial and semantic uncertainty for every detection, expressed in the form of probabilistic bounding boxes.

Accurate spatial and semantic uncertainty estimates are rewarded by our newly developed probability-based detection quality (PDQ) measure. Full details about this new measure are available in our arXiv paper.

We invite anyone interested in object detection who appreciates a good challenge to participate in the competition, so that together we can push the state of the art in object detection in directions better suited to robotics applications. We also appreciate any and all feedback about the challenge itself and look forward to hearing from you.

Challenge Participation and Presentation of Results

We maintain two evaluation servers on CodaLab:

CVPR 2019 Competition Evaluation Server

We are organising a competition and workshop at CVPR 2019 in June. Participants can present their results there, and we will announce the challenge winners and distribute $5000 AUD in prize money (sponsored by the Australian Centre for Robotic Vision). Please head to our competition evaluation server to participate, download the training/validation and test datasets, and find further information about the dataset and submission format.

Ongoing Evaluation Server

We maintain an ongoing evaluation server with a public leaderboard that can be used year-round to benchmark your approach for probabilistic object detection.

How to Cite

When using the dataset and evaluation in your publications, please cite:

@article{hall2018probability,
  title={Probabilistic Object Detection: Definition and Evaluation},
  author={Hall, David and Dayoub, Feras and Skinner, John and Corke, Peter and Carneiro, Gustavo and Angelova, Anelia and S{\"u}nderhauf, Niko},
  journal={arXiv preprint arXiv:1811.10800},
  year={2018}
}

What is Probabilistic Object Detection?

For robotics applications, a detection must not only indicate where an object is and what it is, but must also provide a measure of spatial and semantic uncertainty. Failing to do so can lead to catastrophic consequences from over- or under-confident detections.

Semantic uncertainty can be provided as a categorical distribution over the class labels. Spatial uncertainty in the context of object detection can be expressed by augmenting the commonly used bounding box format with covariances for its corner points; that is, a bounding box is represented by two Gaussian distributions, one per corner point. See below for an illustration.

Left: Probabilistic object detections provide bounding box corners as Gaussians (corner point with covariance). Right: This results in a per-pixel probability of belonging to the detected object. Our evaluation takes this spatial uncertainty into account.
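
To make this representation concrete, the sketch below shows one way to turn the two corner Gaussians into a per-pixel probability of lying inside the box. This is an illustrative sketch only, not the official evaluation code; it assumes numpy and scipy are available and that the two Gaussians describe the top-left and bottom-right corners in pixel coordinates.

import numpy as np
from scipy.stats import multivariate_normal

def pixel_in_box_probability(pixel, tl_mean, tl_cov, br_mean, br_cov):
    """Probability that `pixel` (x, y) lies inside a probabilistic box whose
    top-left and bottom-right corners are Gaussian distributed."""
    x, y = pixel
    # P(top-left corner lies up and to the left of the pixel)
    p_tl = multivariate_normal.cdf([x, y], mean=tl_mean, cov=tl_cov)
    # P(bottom-right corner lies down and to the right of the pixel),
    # computed via the CDF of the negated corner distribution
    p_br = multivariate_normal.cdf([-x, -y],
                                   mean=[-br_mean[0], -br_mean[1]],
                                   cov=br_cov)
    return p_tl * p_br

# Example: a 100 x 60 pixel box whose corners each have ~3 px standard deviation
tl_mean, tl_cov = [50.0, 40.0], np.eye(2) * 9.0
br_mean, br_cov = [150.0, 100.0], np.eye(2) * 9.0
print(pixel_in_box_probability((100, 70), tl_mean, tl_cov, br_mean, br_cov))  # well inside the box, close to 1
print(pixel_in_box_probability((50, 40), tl_mean, tl_cov, br_mean, br_cov))   # at the expected top-left corner, ~0.25

Note that at the expected location of a corner the probability is roughly 0.25 for an uncorrelated corner covariance, since each coordinate of that corner has a 50% chance of falling on either side of the pixel.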

Datasets

For this challenge, we use realistic simulated data from a domestic robot scenario. The dataset contains scenes with cluttered surfaces under both day and night lighting conditions. We simulate domestic service robots of multiple sizes, resulting in sequences captured from three different camera heights above the ground plane.

We maintain three dataset splits: training, validation, and test.

All datasets use the same subset of the Microsoft COCO classes:
['bottle', 'cup', 'knife', 'bowl', 'wine glass', 'fork', 'spoon', 'banana', 'apple', 'orange', 'cake', 'potted plant', 'mouse', 'keyboard', 'laptop', 'cell phone', 'book', 'clock', 'chair', 'dining table', 'couch', 'bed', 'toilet', 'television', 'microwave', 'toaster', 'refrigerator', 'oven', 'sink', 'person']
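
For illustration, the semantic uncertainty of a detection is then simply a categorical distribution over these 30 classes. The snippet below is a hypothetical helper (not part of the challenge toolkit) that builds such a distribution from a detector's per-class scores; please refer to the evaluation servers for the exact submission format.

import numpy as np

# The 30 challenge classes, in the order listed above (assumed ordering)
CLASSES = ['bottle', 'cup', 'knife', 'bowl', 'wine glass', 'fork', 'spoon',
           'banana', 'apple', 'orange', 'cake', 'potted plant', 'mouse',
           'keyboard', 'laptop', 'cell phone', 'book', 'clock', 'chair',
           'dining table', 'couch', 'bed', 'toilet', 'television',
           'microwave', 'toaster', 'refrigerator', 'oven', 'sink', 'person']

def label_distribution(scores_by_name):
    """Build a full categorical distribution over all 30 classes from a
    dict of per-class scores, e.g. the softmax outputs of a detector."""
    probs = np.zeros(len(CLASSES))
    for name, score in scores_by_name.items():
        probs[CLASSES.index(name)] = score
    return probs / probs.sum()  # normalise so the distribution sums to 1

# A detection that is fairly sure it sees a cup, but might be seeing a bowl
print(label_distribution({'cup': 0.7, 'bowl': 0.2, 'mouse': 0.1}))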

Example scenes from the dataset.
Scenes from the validation dataset with labeled objects.

New Evaluation Measure - PDQ

We developed a new probability-based detection quality (PDQ) evaluation measure for this challenge; please see the arXiv paper for more details.

PDQ is a new visual object detector evaluation measure that not only assesses detection quality, but also accounts for the spatial and label uncertainties produced by object detection systems. Current evaluation measures such as mean average precision (mAP) do not take these two aspects into account: they accept detections with no spatial uncertainty and rank detections using only the label with the winning score rather than a full class probability distribution.

To overcome these limitations, we propose the probability-based detection quality (PDQ) measure, which evaluates both spatial and label probabilities, requires no predefined thresholds, and optimally assigns ground-truth objects to detections.
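
As a rough illustration of the assignment step, the sketch below uses the Hungarian algorithm to match ground-truth objects to detections so that the total pairwise quality is maximised, and then averages that quality over all true positives, false positives, and false negatives. This is a simplified, unofficial sketch (the function name pdq_like_score and the toy quality matrix are ours); the full definition of the pairwise spatial and label qualities is given in the arXiv paper.

import numpy as np
from scipy.optimize import linear_sum_assignment

def pdq_like_score(pairwise_quality):
    """Simplified sketch of PDQ-style scoring.
    pairwise_quality[i, j] is the combined spatial/label quality of matching
    ground-truth object i with detection j (a value in [0, 1])."""
    n_gt, n_det = pairwise_quality.shape
    # Hungarian algorithm finds the assignment maximising total quality
    rows, cols = linear_sum_assignment(pairwise_quality, maximize=True)
    matched_quality = pairwise_quality[rows, cols]
    total_quality = matched_quality.sum()
    # Assignments with zero quality are not real matches
    true_positives = int((matched_quality > 0).sum())
    false_negatives = n_gt - true_positives   # unmatched ground-truth objects
    false_positives = n_det - true_positives  # unmatched or spurious detections
    # Average over everything, so unmatched objects and detections drag the score down
    return total_quality / (true_positives + false_negatives + false_positives)

# Two ground-truth objects, three detections (one of them spurious)
quality = np.array([[0.9, 0.0, 0.1],
                    [0.0, 0.6, 0.0]])
print(pdq_like_score(quality))  # (0.9 + 0.6) / 3 = 0.5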

Our experimental evaluation shows that PDQ rewards detections with accurate spatial probabilities and explicitly evaluates label probability to determine detection quality. PDQ aims to encourage the development of new object detection approaches that provide meaningful spatial and label uncertainty measures.