Please use this identifier to cite or link to this item: https://hdl.handle.net/2440/137563
Type: Conference paper
Title: UniMiSS: Universal Medical Self-supervised Learning via Breaking Dimensionality Barrier
Author: Xie, Y.
Zhang, J.
Xia, Y.
Wu, Q.
Citation: Lecture Notes in Artificial Intelligence, 2022 / Avidan, S., Brostow, G., Cisse, M., Farinella, G.M., Hassner, T. (ed./s), vol.13681 LNCS, pp.558-575
Publisher: Springer
Publisher Place: Online
Issue Date: 2022
Series/Report no.: Lecture Notes in Computer Science; 13681
ISBN: 9783031198021
ISSN: 0302-9743
1611-3349
Conference Name: European Conference on Computer Vision (ECCV) (23 Oct 2022 - 27 Oct 2022 : Tel Aviv)
Editor: Avidan, S.
Brostow, G.
Cisse, M.
Farinella, G.M.
Hassner, T.
Statement of Responsibility: Yutong Xie, Jianpeng Zhang, Yong Xia, and Qi Wu
Abstract: Self-supervised learning (SSL) opens up huge opportunities for medical image analysis, a field well known for its lack of annotations. However, aggregating massive (unlabeled) 3D medical images such as computerized tomography (CT) scans remains challenging due to high imaging costs and privacy restrictions. In this paper, we advocate bringing in a wealth of 2D images, such as chest X-rays, to compensate for the lack of 3D data, aiming to build a universal medical self-supervised representation learning framework called UniMiSS. The key problem is then how to break the dimensionality barrier, i.e., how to perform SSL with both 2D and 3D images. To achieve this, we design a pyramid U-like medical Transformer (MiT), composed of a switchable patch embedding (SPE) module and Transformers. The SPE module adaptively switches to either 2D or 3D patch embedding, depending on the input dimension. The embedded patches are converted into a sequence regardless of their original dimensions. The Transformers model the long-term dependencies in a sequence-to-sequence manner, thus enabling UniMiSS to learn representations from both 2D and 3D images. With the MiT as the backbone, we train UniMiSS in a self-distillation manner. We conduct extensive experiments on six 3D/2D medical image analysis tasks, including segmentation and classification. The results show that the proposed UniMiSS achieves promising performance on various downstream tasks, substantially outperforming ImageNet pre-training and other advanced SSL counterparts. Code is available at https://github.com/YtongXie/UniMiSS-code.
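The core idea of the SPE module described above (route a 2D or 3D input through a dimension-matched patchifier, then emit a token sequence of a shared embedding size) can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the real MiT uses learnable convolution-based embeddings inside a pyramid Transformer, and the function name, patch size, and random projection here are hypothetical.

```python
import numpy as np

def switchable_patch_embed(x, patch=4, dim=8, rng=None):
    """Minimal SPE sketch: accept a 2D (H, W) or 3D (D, H, W) array,
    cut it into non-overlapping patches with the branch matching its
    dimensionality, and project every patch to a shared embedding size
    so both inputs yield a (num_patches, dim) token sequence."""
    rng = rng or np.random.default_rng(0)
    if x.ndim == 2:                        # 2D branch, e.g. an X-ray
        h, w = (s // patch for s in x.shape)
        patches = (x[:h * patch, :w * patch]
                   .reshape(h, patch, w, patch)
                   .transpose(0, 2, 1, 3)
                   .reshape(h * w, patch * patch))
    elif x.ndim == 3:                      # 3D branch, e.g. a CT volume
        d, h, w = (s // patch for s in x.shape)
        patches = (x[:d * patch, :h * patch, :w * patch]
                   .reshape(d, patch, h, patch, w, patch)
                   .transpose(0, 2, 4, 1, 3, 5)
                   .reshape(d * h * w, patch ** 3))
    else:
        raise ValueError("expected a 2D or 3D array")
    # Stand-in for a learned projection: map each flattened patch to `dim`.
    proj = rng.standard_normal((patches.shape[1], dim))
    return patches @ proj                  # token sequence: (num_patches, dim)

tokens_2d = switchable_patch_embed(np.ones((16, 16)))     # 16 patches of 4x4
tokens_3d = switchable_patch_embed(np.ones((8, 16, 16)))  # 32 patches of 4x4x4
print(tokens_2d.shape, tokens_3d.shape)  # (16, 8) (32, 8)
```

Because both branches end in a sequence of same-width tokens, the downstream Transformer layers can process 2D and 3D inputs identically, which is what lets one network pre-train on both modalities.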
Keywords: Self-supervised learning; Cross-dimension; Medical image analysis; Transformer
Rights: © The Author(s), under exclusive license to Springer Nature Switzerland AG 2022
DOI: 10.1007/978-3-031-19803-8_33
Grant ID: http://purl.org/au-research/grants/arc/DE190100539
Published version: https://link.springer.com/book/10.1007/978-3-031-19803-8
Appears in Collections:Australian Institute for Machine Learning publications
Computer Science publications

Files in This Item:
There are no files associated with this item.

