Big benchmark challenges like ILSVRC or COCO have supported much of the remarkable progress in computer vision and deep learning in recent years.

We aim to recreate this success for robotic vision.

We develop a set of new benchmark challenges specifically for robotic vision, evaluating capabilities such as calibrated uncertainty estimation, continuous and active learning, and active vision.

We combine the variety and complexity of real-world data with the flexibility of synthetic graphics and physics engines.

Active Challenges

Probabilistic Object Detection Challenge

Our first challenge requires participants to detect objects in video data from high-fidelity simulation. As a novelty, our evaluation metric rewards accurate estimates of spatial and semantic uncertainty, expressed as probabilistic bounding boxes. We developed a new probability-based detection quality (PDQ) evaluation measure for this challenge; please see the arXiv paper for more details.
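To make the idea of a probabilistic bounding box concrete, here is a minimal sketch of how such a detection could be represented: each box corner as a Gaussian (mean plus covariance) to express spatial uncertainty, and a full probability distribution over classes to express semantic uncertainty. The class, function, and field names below are illustrative, not the official challenge format, and the pairwise quality shown only mirrors PDQ's geometric-mean combination of spatial and label quality; it does not reproduce the full foreground/background spatial term from the paper.

```python
import math
from dataclasses import dataclass

import numpy as np


@dataclass
class ProbabilisticDetection:
    """Illustrative detection with spatial and semantic uncertainty.

    Each corner is modelled as a 2D Gaussian: a mean (x, y) and a
    2x2 covariance. `label_probs` is a distribution over all classes.
    """
    top_left_mean: np.ndarray       # shape (2,)
    top_left_cov: np.ndarray        # shape (2, 2)
    bottom_right_mean: np.ndarray   # shape (2,)
    bottom_right_cov: np.ndarray    # shape (2, 2)
    label_probs: np.ndarray         # shape (num_classes,), sums to 1


def toy_pairwise_quality(spatial_quality: float, label_prob: float) -> float:
    """Geometric mean of a spatial score and a label probability.

    PDQ combines spatial and label quality this way per
    detection/ground-truth pair; `spatial_quality` here is a stand-in
    for PDQ's probability-based spatial term.
    """
    return math.sqrt(spatial_quality * label_prob)


det = ProbabilisticDetection(
    top_left_mean=np.array([10.0, 20.0]),
    top_left_cov=np.eye(2) * 4.0,   # ~2 px std-dev per coordinate
    bottom_right_mean=np.array([110.0, 180.0]),
    bottom_right_cov=np.eye(2) * 4.0,
    label_probs=np.array([0.8, 0.15, 0.05]),
)

# Quality for a hypothetical ground-truth object of class 0,
# assuming a spatial score of 0.9 for this pairing.
quality = toy_pairwise_quality(0.9, det.label_probs[0])
```

A detector that reports honest, well-calibrated covariances and label distributions can score higher under such a measure than one that outputs overconfident point estimates, which is exactly the behaviour the challenge aims to reward.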

To participate, and for more information about the data and submission format, please go to our Codalab page.

Example Data

For this challenge, we use simulated data and vary both the lighting conditions (day and night) and the camera height (to simulate domestic service robots of different sizes).

Coming Soon …

Stay tuned for more challenges in 2019, focussing on active vision and on active and continuous learning.


December 2018: We released our first Robotic Vision object detection challenge, requiring object detection on video data and rewarding accurate estimates of spatial and semantic uncertainty.

June 2018: We presented our initial ideas for new benchmarks and metrics at two workshops during CVPR and RSS. Thanks to all who engaged in discussions and shared their thoughts during the workshops on Real-World Challenges and New Benchmarks for Deep Learning in Robotic Vision at CVPR, and New Benchmarks, Metrics, and Competitions for Robotic Learning at RSS.

Stay in touch and follow us on Twitter for news and announcements: @robVisChallenge.


Big computer vision challenges and competitions like ILSVRC or COCO have had a significant influence on advances in object recognition, object detection, semantic segmentation, image captioning, and visual question answering in recent years. These challenges posed motivating problems to the research community and proposed datasets and evaluation metrics that allowed different approaches to be compared in a standardised way.

However, visual perception for robotics faces challenges that are not well covered or evaluated by the existing benchmarks. These challenges comprise calibrated uncertainty estimation, continuous learning for domain adaptation and incorporation of novel classes, active learning, and active vision.

There is currently a lack of meaningful standardised evaluation protocols and benchmarks for these research challenges. This is a significant roadblock for the evolution of robotic vision, and impedes reproducible and comparable research.

We believe that by posing a new robotic vision challenge to the research community, we can motivate computer vision and robotic vision researchers around the world to develop solutions that lead to more capable, more robust, and more widely applicable robotic vision systems.

Organisers, Support, and Acknowledgements


Niko Sünderhauf
Queensland University of Technology
Feras Dayoub
Queensland University of Technology
David Hall
Queensland University of Technology
John Skinner
Queensland University of Technology

The Robotic Vision Challenges organisers are with the Australian Centre for Robotic Vision. This project is supported by a Google Faculty Research Award to Niko Sünderhauf in 2018.


We thank the following supporters for their valuable input and engaging discussions.

Anton van den Hengel
University of Adelaide
Gustavo Carneiro
University of Adelaide
Anelia Angelova
Google Brain