Please use this identifier to cite or link to this item: https://hdl.handle.net/2440/135766
Type: Conference paper
Title: On the Value of Out-of-Distribution Testing: An Example of Goodhart's Law
Author: Teney, D.
Kafle, K.
Shrestha, R.
Abbasnejad, E.
Kanan, C.
Hengel, A.V.D.
Citation: Advances in neural information processing systems, 2020 / Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (ed./s), vol.33, pp.1-11
Publisher: Morgan Kaufmann
Publisher Place: San Francisco, CA, United States
Issue Date: 2020
Series/Report no.: Neural Information Processing Systems
ISBN: 9781713829546
ISSN: 1049-5258
Conference Name: Conference on Neural Information Processing Systems (NeurIPS) (6 Dec 2020 - 12 Dec 2020 : Virtual, Online)
Editor: Larochelle, H.
Ranzato, M.
Hadsell, R.
Balcan, M.F.
Lin, H.
Statement of Responsibility: Damien Teney, Kushal Kafle, Robik Shrestha, Ehsan Abbasnejad, Christopher Kanan, Anton van den Hengel
Abstract: Out-of-distribution (OOD) testing is increasingly popular for evaluating a machine learning system's ability to generalize beyond the biases of a training set. OOD benchmarks are designed to present a different joint distribution of data and labels between training and test time. VQA-CP has become the standard OOD benchmark for visual question answering, but we discovered three troubling practices in its current use. First, most published methods rely on explicit knowledge of the construction of the OOD splits. They often rely on "inverting" the distribution of labels, e.g. answering mostly "yes" when the common training answer was "no". Second, the OOD test set is used for model selection. Third, a model's in-domain performance is assessed after retraining it on in-domain splits (VQA v2) that exhibit a more balanced distribution of labels. These three practices defeat the objective of evaluating generalization, and put into question the value of methods specifically designed for this dataset. We show that embarrassingly-simple methods, including one that generates answers at random, surpass the state of the art on some question types. We provide short- and long-term solutions to avoid these pitfalls and realize the benefits of OOD evaluation.
Rights: Copyright status unknown
Published version: https://www.elsevier.com/books-and-journals/morgan-kaufmann
Appears in Collections: Aurora harvest 8
Australian Institute for Machine Learning publications

Files in This Item:
There are no files associated with this item.
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.