Please use this identifier to cite or link to this item: https://hdl.handle.net/2440/139919
Type: Conference paper
Title: Multi-Modal Learning With Missing Modality via Shared-Specific Feature Modelling
Author: Wang, H.
Chen, Y.
Ma, C.
Avery, J.C.
Hull, M.L.
Carneiro, G.
Citation: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023, pp.15878-15887
Publisher: IEEE
Issue Date: 2023
Series/Report no.: IEEE Conference on Computer Vision and Pattern Recognition
ISBN: 9798350301298
ISSN: 2575-7075
Conference Name: IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (17 Jun 2023 - 24 Jun 2023 : Vancouver, Canada)
Statement of Responsibility: Hu Wang, Yuanhong Chen, Congbo Ma, Jodie Avery, Louise Hull, Gustavo Carneiro
Abstract: The missing modality issue is critical but non-trivial for multi-modal models to solve. Current methods that aim to handle the missing modality problem in multi-modal tasks either deal with missing modalities only during evaluation or train separate models to handle specific missing-modality settings. In addition, these models are designed for specific tasks, so, for example, classification models are not easily adapted to segmentation tasks and vice versa. In this paper, we propose the Shared-Specific Feature Modelling (ShaSpec) method, which is considerably simpler and more effective than competing approaches that address the issues above. ShaSpec is designed to take advantage of all available input modalities during training and evaluation by learning shared and specific features to better represent the input data. This is achieved with a strategy that relies on auxiliary tasks based on distribution alignment and domain classification, in addition to a residual feature fusion procedure. Moreover, the design simplicity of ShaSpec enables its easy adaptation to multiple tasks, such as classification and segmentation. Experiments are conducted on both medical image segmentation and computer vision classification, with results indicating that ShaSpec outperforms competing methods by a large margin. For instance, on BraTS2018, ShaSpec improves the SOTA by more than 3% for enhancing tumour, 5% for tumour core and 3% for whole tumour.
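
To make the abstract's description concrete, below is a minimal sketch of shared-specific feature modelling with residual fusion and the two auxiliary objectives (distribution alignment and domain classification), written in PyTorch. The class name ShaSpecSketch, the encoder sizes, the mean-based fusion of available modalities, and the exact placement of each auxiliary loss are illustrative assumptions based only on the abstract, not the authors' released implementation.

```python
# Illustrative sketch only: loss placements, fusion rule and encoder shapes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ShaSpecSketch(nn.Module):
    def __init__(self, num_modalities: int, in_dim: int = 64, feat_dim: int = 128, num_classes: int = 2):
        super().__init__()
        # One specific encoder per modality, plus a single shared encoder.
        self.specific = nn.ModuleList(
            [nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU()) for _ in range(num_modalities)]
        )
        self.shared = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        # Auxiliary domain (modality) classifier and main task head.
        self.domain_head = nn.Linear(feat_dim, num_modalities)
        self.task_head = nn.Linear(feat_dim, num_classes)

    def forward(self, inputs, available):
        # inputs: list of tensors [B, in_dim], one per modality; available: list of bools.
        shared_feats, fused_feats, dom_logits, dom_labels = [], [], [], []
        for m, (x, ok) in enumerate(zip(inputs, available)):
            if not ok:
                continue  # missing modalities are simply skipped
            sh = self.shared(x)              # shared (modality-agnostic) feature
            sp = self.specific[m](x)         # modality-specific feature
            shared_feats.append(sh)
            fused_feats.append(sp + sh)      # residual feature fusion (assumed form)
            dom_logits.append(self.domain_head(sp))
            dom_labels.append(torch.full((x.size(0),), m, dtype=torch.long))

        # Average of fused features stands in for a task-specific fusion module.
        fused = torch.stack(fused_feats).mean(dim=0)
        task_logits = self.task_head(fused)

        # Auxiliary losses: align shared features across available modalities, and
        # make specific features predictive of their modality of origin.
        align_loss = torch.zeros(())
        if len(shared_feats) > 1:
            anchor = torch.stack(shared_feats).mean(dim=0).detach()
            align_loss = sum(F.mse_loss(sh, anchor) for sh in shared_feats) / len(shared_feats)
        domain_loss = F.cross_entropy(torch.cat(dom_logits), torch.cat(dom_labels))
        return task_logits, align_loss, domain_loss


# Usage: two of three modalities available for a batch of 4 samples.
model = ShaSpecSketch(num_modalities=3)
xs = [torch.randn(4, 64) for _ in range(3)]
logits, l_align, l_dom = model(xs, available=[True, False, True])
loss = F.cross_entropy(logits, torch.randint(0, 2, (4,))) + 0.1 * l_align + 0.1 * l_dom
loss.backward()
```

Because both the classifier head and the fusion step operate on generic feature vectors, the same skeleton can be adapted to a segmentation head, which is consistent with the task-agnostic design the abstract emphasises.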
Keywords: Multi-modal learning
Rights: © 2023, IEEE
DOI: 10.1109/CVPR52729.2023.01524
Grant ID: http://purl.org/au-research/grants/arc/FT190100525
Published version: https://ieeexplore.ieee.org/document/10204754
Appears in Collections: Australian Institute for Machine Learning publications

Files in This Item:
There are no files associated with this item.


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.