Please use this identifier to cite or link to this item: https://hdl.handle.net/2440/137568
Type: Conference paper
Title: Self-supervised Mean Teacher for Semi-supervised Chest X-Ray Classification
Author: Liu, F.
Tian, Y.
Cordeiro, F.R.
Belagiannis, V.
Reid, I.
Carneiro, G.
Citation: Lecture Notes in Artificial Intelligence, 2021 / Lian, C., Cao, X., Rekik, I., Xu, X., Yan, P. (ed./s), vol.12966 LNIP, pp.426-436
Publisher: Springer International Publishing
Publisher Place: Switzerland
Issue Date: 2021
Series/Report no.: Lecture Notes in Computer Science; 12966
ISBN: 9783030875886
ISSN: 0302-9743
1611-3349
Conference Name: 12th International Workshop, Machine Learning in Medical Imaging (MLMI) (27 Sep 2021 - 27 Sep 2021 : Strasbourg, France)
Editor: Lian, C.
Cao, X.
Rekik, I.
Xu, X.
Yan, P.
Statement of Responsibility: Fengbei Liu, Yu Tian, Filipe R. Cordeiro, Vasileios Belagiannis, Ian Reid, and Gustavo Carneiro
Abstract: The training of deep learning models generally requires a large amount of annotated data for effective convergence and generalisation. However, obtaining high-quality annotations is a laborious and expensive process due to the need for expert radiologists in the labelling task. The study of semi-supervised learning in medical image analysis is then of crucial importance given that it is much less expensive to obtain unlabelled images than to acquire images labelled by expert radiologists. Essentially, semi-supervised methods leverage large sets of unlabelled data to enable better training convergence and generalisation than using only the small set of labelled images. In this paper, we propose Self-supervised Mean Teacher for Semi-supervised (S2MTS2) learning that combines self-supervised mean-teacher pre-training with semi-supervised fine-tuning. The main innovation of S2MTS2 is the self-supervised mean-teacher pre-training based on joint contrastive learning, which uses an infinite number of pairs of positive query and key features to improve the mean-teacher representation. The model is then fine-tuned using the exponential moving average teacher framework trained with semi-supervised learning. We validate S2MTS2 on the multi-label classification problems from Chest X-ray14 and CheXpert, and the multi-class classification from ISIC2018, where we show that it outperforms the previous SOTA semi-supervised learning methods by a large margin. Our code will be available upon paper acceptance.
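The two mechanisms the abstract names, the exponential-moving-average (EMA) teacher update and a contrastive loss over query/key pairs, can be sketched as below. This is a minimal illustration, not the authors' implementation: the function names, the temperature value, and the plain-Python feature vectors are assumptions for readability.

```python
import math

def ema_update(teacher, student, momentum=0.999):
    """Mean-teacher step: teacher <- m * teacher + (1 - m) * student,
    applied element-wise to the parameter lists."""
    return [momentum * t + (1.0 - momentum) * s
            for t, s in zip(teacher, student)]

def info_nce(query, pos_key, neg_keys, temperature=0.07):
    """Toy contrastive (InfoNCE-style) loss for one query feature:
    -log softmax of the positive pair against a queue of negatives."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    logits = [dot(query, pos_key) / temperature]
    logits += [dot(query, k) / temperature for k in neg_keys]
    # numerically stable log-sum-exp
    m = max(logits)
    log_sum = m + math.log(sum(math.exp(l - m) for l in logits))
    return log_sum - logits[0]
```

In a mean-teacher setup the student is trained by gradient descent while the teacher only receives EMA updates; a query drawn from the student matched against a teacher key of the same image forms the positive pair, and keys of other images form the negatives.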
Keywords: Semi-supervised learning; Chest X-ray; Self-supervised learning; Multi-label classification
Description: This is the 12th in a series of workshops on this topic in conjunction with the 24th International Conference on Medical Image Computing & Computer Assisted Intervention (MICCAI 2021)
Rights: © Springer Nature Switzerland AG 2021
DOI: 10.1007/978-3-030-87589-3_44
Grant ID: http://purl.org/au-research/grants/arc/DP180103232
http://purl.org/au-research/grants/arc/FT190100525
Published version: https://link.springer.com/book/10.1007/978-3-030-87589-3
Appears in Collections:Australian Institute for Machine Learning publications
Computer Science publications

Files in This Item:
There are no files associated with this item.

