Overview

Our workshop will discuss the current progress, applications, and limitations of robotic scene understanding and semantic simultaneous localization and mapping (SLAM). We are motivated by the rapidly growing body of new research that investigates how classical SLAM techniques and deep-learning-based visual object detection or segmentation can be combined in innovative ways and used to support scene understanding, navigation, and manipulation.

In addition, the workshop will host a new research challenge and competition: the Robotic Vision Scene Understanding Challenge, which evaluates how well a robotic vision system can understand the semantic and geometric aspects of its environment.

Call for Papers

We invite authors to submit contributed papers to the workshop. Topics of interest include, but are not limited to:

Author Instructions

Call for Participation in the Semantic SLAM Challenge

We will organise a challenge and competition for semantic SLAM and scene understanding in conjunction with the workshop. More information coming soon. Meanwhile, the video below provides an overview of what to expect.

Program

More information coming soon.

Confirmed invited speakers include Andrew Davison, Dieter Fox, Stefanie Tellex, and Cesar Cadena.

Organisers

The Robotic Vision Challenges organisers are with the Australian Centre for Robotic Vision at Queensland University of Technology (QUT), Monash University, the University of Adelaide, and Google AI.

Niko Sünderhauf
Queensland University of Technology
Feras Dayoub
Queensland University of Technology
Anelia Angelova
Google Brain
Alexander Toshev
Google Brain
Ronnie Clark
Imperial College London
Yasir Latif
University of Adelaide
Ian Reid
University of Adelaide
Jana Kosecka
George Mason University

Sponsors and Supporters