Type: Conference paper
Title: AffordanceNet: an end-to-end deep learning approach for object affordance detection
Author: Do, T.
Nguyen, A.
Reid, I.
Citation: IEEE International Conference on Robotics and Automation, 2018, pp.5882-5889
Publisher: IEEE
Issue Date: 2018
Series/Report no.: IEEE International Conference on Robotics and Automation ICRA
ISBN: 9781538630815
ISSN: 1050-4729
Conference Name: IEEE International Conference on Robotics and Automation (ICRA) (21 May 2018 - 25 May 2018 : Brisbane, Australia)
Statement of Responsibility: Thanh-Toan Do, Anh Nguyen, Ian Reid
Abstract: We propose AffordanceNet, a new deep learning approach to simultaneously detect multiple objects and their affordances from RGB images. Our AffordanceNet has two branches: an object detection branch to localize and classify the object, and an affordance detection branch to assign each pixel in the object to its most probable affordance label. The proposed framework employs three key components for effectively handling the multiclass problem in the affordance mask: a sequence of deconvolutional layers, a robust resizing strategy, and a multi-task loss function. The experimental results on the public datasets show that our AffordanceNet outperforms recent state-of-the-art methods by a fair margin, while its end-to-end architecture allows inference at a speed of 150 ms per image. This makes our AffordanceNet well suited for real-time robotic applications. Furthermore, we demonstrate the effectiveness of AffordanceNet in different testing environments and in real robotic applications. The source code is available at
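The abstract mentions a multi-task loss that trains the object detection and affordance branches jointly. A minimal sketch of that idea, assuming (hypothetically; this is not the authors' code) a weighted sum of classification, box regression, and per-pixel affordance mask terms, with the mask term being a multiclass softmax cross-entropy:

```python
import math

def pixel_affordance_loss(logits, label):
    """Softmax cross-entropy for one pixel over affordance classes
    (illustrates the multiclass affordance mask term)."""
    m = max(logits)  # subtract the max for numerical stability
    log_sum = m + math.log(sum(math.exp(z - m) for z in logits))
    return log_sum - logits[label]

def multi_task_loss(cls_loss, box_loss, mask_loss, w_mask=1.0):
    """Hypothetical combined loss: object classification + box
    regression + weighted affordance-mask loss."""
    return cls_loss + box_loss + w_mask * mask_loss
```

The weight `w_mask` is an illustrative knob for balancing the affordance branch against the detection branch; the paper itself should be consulted for the exact formulation.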
Rights: ©2018 IEEE
DOI: 10.1109/ICRA.2018.8460902
Appears in Collections:Aurora harvest 4
Australian Institute for Machine Learning publications
Computer Science publications

Files in This Item:
There are no files associated with this item.
