Please use this identifier to cite or link to this item: https://hdl.handle.net/2440/138420
Type: Journal article
Title: Semantic–geometric visual place recognition: a new perspective for reconciling opposing views
Author: Garg, S.
Suenderhauf, N.
Milford, M.
Citation: International Journal of Robotics Research, 2022; 41(6):573-598
Publisher: SAGE Publications
Issue Date: 2022
ISSN: 0278-3649 (print); 1741-3176 (online)
Statement of Responsibility: Sourav Garg, Niko Suenderhauf and Michael Milford
Abstract: Human drivers are capable of recognizing places from a previous journey even when viewing them from the opposite direction during the return trip under radically different environmental conditions, without needing to look back or employ a 360° camera or LIDAR sensor. Such navigation capabilities are attributed in large part to the robust semantic scene understanding capabilities of humans. However, for an autonomous robot or vehicle, achieving such human-like visual place recognition capability presents three major challenges: (1) dealing with a limited amount of commonly observable visual content when viewing the same place from the opposite direction; (2) dealing with significant lateral viewpoint changes caused by opposing directions of travel taking place on opposite sides of the road; and (3) dealing with a radically changed scene appearance due to environmental conditions such as time of day, season, and weather. Current state-of-the-art place recognition systems have only addressed these three challenges in isolation or in pairs, typically relying on appearance-based, deep-learnt place representations. In this paper, we present a novel, semantics-based system that for the first time solves all three challenges simultaneously. We propose a hybrid image descriptor that semantically aggregates salient visual information, complemented by appearance-based description, and augment a conventional coarse-to-fine recognition pipeline with keypoint correspondences extracted from within the convolutional feature maps of a pre-trained network. Finally, we introduce descriptor normalization and local score enhancement strategies for improving the robustness of the system. Using both existing benchmark datasets and extensive new datasets that for the first time combine the three challenges of opposing viewpoints, lateral viewpoint shifts, and extreme appearance change, we show that our system can achieve practical place recognition performance where existing state-of-the-art methods fail.
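For illustration only, the sketch below shows the general shape of the pipeline the abstract describes: unit-normalizing global descriptors, coarse retrieval by cosine similarity, and a simple local score enhancement over neighboring reference frames. It is not the authors' implementation; all function names, dimensions, and the smoothing scheme are hypothetical, and random vectors stand in for the semantically aggregated descriptors that the paper extracts from the convolutional feature maps of a pre-trained network.

    # Minimal, illustrative sketch (NumPy only); not the paper's code.
    import numpy as np

    def l2_normalize(x, axis=-1, eps=1e-12):
        """Descriptor normalization: scale each vector to unit L2 norm."""
        return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

    def coarse_retrieve(query_desc, reference_descs, top_k=5):
        """Coarse stage: rank reference images by cosine similarity of
        their global descriptors (unit-norm dot product)."""
        q = l2_normalize(query_desc)
        refs = l2_normalize(reference_descs)
        scores = refs @ q                      # cosine similarity per reference
        return np.argsort(-scores)[:top_k], scores

    def local_score_enhancement(scores, window=2):
        """Hypothetical local score enhancement: average match scores over
        neighboring reference frames, exploiting route continuity."""
        kernel = np.ones(2 * window + 1) / (2 * window + 1)
        return np.convolve(scores, kernel, mode="same")

    # Toy usage: 100 reference images with 512-D descriptors (arbitrary sizes).
    rng = np.random.default_rng(0)
    reference_descs = rng.standard_normal((100, 512))
    query_desc = reference_descs[42] + 0.1 * rng.standard_normal(512)  # noisy revisit

    candidates, scores = coarse_retrieve(query_desc, reference_descs)
    enhanced = local_score_enhancement(scores)
    print("top candidates:", candidates, "| best after enhancement:", int(np.argmax(enhanced)))

In the paper, the coarse candidates would then be verified by a finer stage using keypoint correspondences drawn from the network's feature maps; that stage is omitted here.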
Keywords: Visual place recognition; visual localization; deep learning; semantics
Rights: © The Author(s) 2019
DOI: 10.1177/0278364919839761
Grant ID: http://purl.org/au-research/grants/arc/CE140100016
http://purl.org/au-research/grants/arc/FT140101229
Published version: http://dx.doi.org/10.1177/0278364919839761
Appears in Collections: Australian Institute for Machine Learning publications

Files in This Item:
There are no files associated with this item.


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.