Please use this identifier to cite or link to this item: https://hdl.handle.net/2440/134021
Type: Thesis
Title: Relaxed Invariant Representation for Unsupervised Domain Adaptation
Author: Askari Lyarjdameh, Hossein
Issue Date: 2021
School/Discipline: School of Computer Science
Abstract: The success of supervised machine learning relies on the availability of large amounts of annotated training data from different domains, which is often costly to collect and unrealistic in many scenarios. Unsupervised domain adaptation (UDA) aims to overcome this problem by transferring predictive models trained on a labelled source domain to an unlabelled target domain, which requires resolving the distributional shift between domains. To bridge this distribution gap, recent advances in deep learning focus on learning representations that are invariant across domains. However, such an approach may fail to generalize well to target domains and may even considerably deteriorate adaptability, due to an inherent trade-off between adaptability and invariance. Building on advances in deep generative models, this thesis aims to relax the learning of invariant representations and to develop efficient algorithms for UDA. This thesis comprises two parts. The first part introduces the problem of learning invariant representations. In particular, we mathematically derive a lower bound on the joint probability distribution of the source and target domains as a framework for UDA, and theoretically discuss how this bound can be used to relax the invariance in representation learning. Following this motivation, in the second part we design a simple yet efficient algorithm to address the challenges of forcing too much invariance in domain distribution matching. We empirically show how the trade-off between adaptability and invariant representation can be mitigated with an invertible architecture between the representation and predictor models while learning the invariant representation. The experiments are run on public benchmark problems, and the results show that the proposed method relaxes the excessive invariance effectively and outperforms existing domain adaptation approaches.
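The abstract's key architectural idea is placing an invertible (normalizing-flow-style) map between the representation and the predictor, so that no information is irrecoverably discarded while matching domain distributions. As a minimal illustration of why invertibility preserves information, the sketch below implements an additive coupling layer (as in NICE/RealNVP); this is an illustrative example only, not the thesis's exact model, and the `shift` network here is a hypothetical toy function.

```python
import numpy as np

def coupling_forward(x, shift_fn):
    """Additive coupling: y = [x1, x2 + shift_fn(x1)].

    Splits the features in half; the second half is shifted by a
    function of the first, so the transform is exactly invertible
    regardless of how complex shift_fn is.
    """
    d = x.shape[-1] // 2
    x1, x2 = x[..., :d], x[..., d:]
    return np.concatenate([x1, x2 + shift_fn(x1)], axis=-1)

def coupling_inverse(y, shift_fn):
    """Exact inverse: x2 = y2 - shift_fn(y1), so no information is lost."""
    d = y.shape[-1] // 2
    y1, y2 = y[..., :d], y[..., d:]
    return np.concatenate([y1, y2 - shift_fn(y1)], axis=-1)

# Toy "network" for the shift; any function of x1 keeps invertibility.
rng = np.random.default_rng(0)
W = rng.standard_normal((2, 2))
shift = lambda h: np.tanh(h @ W)

x = rng.standard_normal((4, 4))          # a batch of 4 feature vectors
x_rec = coupling_inverse(coupling_forward(x, shift), shift)
assert np.allclose(x, x_rec)             # reconstruction is exact up to float error
```

Because such layers compose into a bijection, a representation passed through them can be matched across domains without collapsing the information the predictor needs, which is the intuition behind relaxing excessive invariance.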
Advisor: Carneiro, Gustavo
Reid, Ian
Dissertation Note: Thesis (MPhil) -- University of Adelaide, School of Computer Science, 2021
Keywords: Relaxed invariance
domain adaptation
normalizing flow
Provenance: This electronic version is made publicly available by the University of Adelaide in accordance with its open access policy for student theses. Copyright in this thesis remains with the author. This thesis may incorporate third party material which has been used by the author pursuant to Fair Dealing exceptions. If you are the owner of any included third party copyright material you wish to be removed from this electronic version, please complete the take down form located at: http://www.adelaide.edu.au/legals
Appears in Collections: Research Theses

Files in This Item:
File: Askari Lyarjdameh2021_MPhil.pdf | Size: 13.42 MB | Format: Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.