|Title:||AffordanceNet: an end-to-end deep learning approach for object affordance detection|
|Citation:||IEEE International Conference on Robotics and Automation, 2018, pp. 5882-5889|
|Series/Report no.:||IEEE International Conference on Robotics and Automation ICRA|
|Conference Name:||IEEE International Conference on Robotics and Automation (ICRA) (21 May 2018 - 25 May 2018 : Brisbane, Australia)|
|Author:||Thanh-Toan Do, Anh Nguyen, Ian Reid|
|Abstract:||We propose AffordanceNet, a new deep learning approach to simultaneously detect multiple objects and their affordances from RGB images. AffordanceNet has two branches: an object detection branch to localize and classify the object, and an affordance detection branch to assign each pixel in the object to its most probable affordance label. The proposed framework employs three key components for effectively handling the multiclass problem in the affordance mask: a sequence of deconvolutional layers, a robust resizing strategy, and a multi-task loss function. Experimental results on public datasets show that AffordanceNet outperforms recent state-of-the-art methods by a fair margin, while its end-to-end architecture allows inference at 150 ms per image. This makes AffordanceNet well suited for real-time robotic applications. Furthermore, we demonstrate the effectiveness of AffordanceNet in different testing environments and in real robotic applications. The source code is available at https://github.com/nqanh/affordance-net.|
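The abstract's multi-task loss combines three terms: object classification, bounding-box regression, and per-pixel affordance classification over the mask. The sketch below is a minimal illustration of that idea in plain NumPy; the specific loss functions (softmax cross-entropy, smooth-L1) and the weights `w_cls`, `w_box`, `w_mask` are assumptions for illustration, not the authors' implementation (which is available at the linked repository).

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax over the last axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_entropy(logits, labels):
    # mean negative log-likelihood; works for per-object class logits
    # (N, C) and per-pixel affordance logits (N, H, W, C) alike
    p = softmax(logits)
    flat_p = p.reshape(-1, p.shape[-1])
    flat_l = labels.reshape(-1)
    return float(-np.mean(np.log(flat_p[np.arange(flat_l.size), flat_l] + 1e-12)))

def smooth_l1(pred, target):
    # smooth-L1 (Huber) loss, common for box regression
    d = np.abs(pred - target)
    return float(np.mean(np.where(d < 1.0, 0.5 * d**2, d - 0.5)))

def multi_task_loss(cls_logits, cls_labels, box_pred, box_target,
                    mask_logits, mask_labels,
                    w_cls=1.0, w_box=1.0, w_mask=1.0):
    # weighted sum of the three task losses (weights are illustrative)
    return (w_cls * cross_entropy(cls_logits, cls_labels)
            + w_box * smooth_l1(box_pred, box_target)
            + w_mask * cross_entropy(mask_logits, mask_labels))

rng = np.random.default_rng(0)
loss = multi_task_loss(
    rng.normal(size=(4, 10)), rng.integers(0, 10, size=4),     # object class
    rng.normal(size=(4, 4)), rng.normal(size=(4, 4)),          # box regression
    rng.normal(size=(4, 14, 14, 7)),                           # affordance mask
    rng.integers(0, 7, size=(4, 14, 14)),
)
print(loss)
```

Because all three terms are differentiable, the whole two-branch network can be trained end to end with a single optimizer step, which is what enables the single-pass inference speed the abstract reports.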
|Appears in Collections:||Aurora harvest 4|
Australian Institute for Machine Learning publications
Computer Science publications
Files in This Item:
There are no files associated with this item.