|Title:||Actively seeking and learning from live data|
|Citation:||Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2019), 2019, vol. 2019-June, pp. 1940-1949|
|Publisher:||Computer Vision Foundation / IEEE|
|Series/Report no.:||IEEE Conference on Computer Vision and Pattern Recognition|
|Conference Name:||IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (15 Jun 2019 - 20 Jun 2019 : Long Beach, USA)|
|Author(s):||Damien Teney, Anton van den Hengel|
|Abstract:||One of the key limitations of traditional machine learning methods is their requirement for training data that exemplifies all the information to be learned. This is a particular problem for visual question answering methods, which may be asked questions about virtually anything. The approach we propose is a step toward overcoming this limitation by searching for the required information at test time. The resulting method dynamically utilizes data from an external source, such as a large set of questions/answers or images/captions. Concretely, we learn a set of base weights for a simple VQA model that are then adapted to a given question using information retrieved specifically for that question. The adaptation process leverages recent advances in gradient-based meta-learning, together with contributions for efficient retrieval and cross-domain adaptation. We surpass the state of the art on the VQA-CP v2 benchmark and demonstrate that our approach is intrinsically more robust to out-of-distribution test data. We also demonstrate the use of external non-VQA data, using the MS COCO captioning dataset to support the answering process. This approach opens a new avenue for open-domain VQA systems that interface with diverse sources of data.|
|Appears in Collections:||Australian Institute for Machine Learning publications|
Files in This Item:
There are no files associated with this item.
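The abstract describes adapting a set of learned base weights to each test question with a few gradient steps on question-specific data retrieved from an external source, in the style of gradient-based meta-learning. The following is a minimal illustrative sketch of that adaptation step only, not the authors' implementation; the model interface, batch field names, and the retrieval helper are hypothetical placeholders.

# Minimal sketch (assumptions noted below) of per-question adaptation of base
# weights via a few inner-loop gradient steps, MAML-style. The VQA model class,
# its call signature, the support-batch fields, and retrieve_support() are all
# hypothetical, chosen only to illustrate the idea from the abstract.
import copy
import torch
import torch.nn.functional as F


def adapt_to_question(base_model, support_batch, inner_lr=0.01, inner_steps=3):
    """Clone the base weights and take a few gradient steps on the
    support examples retrieved for the current question."""
    adapted = copy.deepcopy(base_model)
    optimizer = torch.optim.SGD(adapted.parameters(), lr=inner_lr)
    for _ in range(inner_steps):
        # Assumed interface: model(image_features, question_tokens) -> answer logits.
        logits = adapted(support_batch["image_features"], support_batch["question_tokens"])
        loss = F.cross_entropy(logits, support_batch["answer_labels"])
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return adapted


# Hypothetical test-time usage: retrieve related questions/answers or captions
# for the incoming question, adapt the base weights, then answer with the
# adapted model.
# support = retrieve_support(test_question, external_source)   # assumed helper
# model_q = adapt_to_question(base_model, support)
# answer = model_q(test_image_features, test_question_tokens).argmax(-1)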