Interpretable and Annotation-Efficient Learning for Medical Image Computing

A special issue of Machine Learning and Knowledge Extraction (ISSN 2504-4990).

Deadline for manuscript submissions: closed (31 December 2021)

Special Issue Editors


Prof. Dr. Jaime Cardoso
Guest Editor
INESC TEC and University of Porto, 4099-002 Porto, Portugal
Interests: deep learning; explainable machine learning; computer vision; medical image analysis

Mr. Nicholas Heller
Guest Editor
Department of Computer Science and Engineering, University of Minnesota, Minneapolis, MN 55455, USA
Interests: medical image analysis; semantic segmentation; data annotation; reproducibility; challenge design

Prof. Dr. Pedro Henriques Abreu
Guest Editor
Department of Informatics Engineering, Faculty of Sciences and Technology of Coimbra University and CISUC, Polo II, 3030-290 Coimbra, Portugal
Interests: deep learning; explainable machine learning; computer vision; medical image analysis

Prof. Dr. Ivana Išgum
Guest Editor
Amsterdam University Medical Centers, 1105 AZ Amsterdam, The Netherlands
Interests: deep learning; explainable machine learning; computer vision; medical image analysis

Prof. Dr. Diana Mateus
Guest Editor
Institut für Informatik, Technical University of Munich, 80333 München, Germany
Interests: medical image analysis; ultrasound modeling; microscopic images; shape analysis

Special Issue Information

Dear Colleagues,

As data-hungry methods continue to drive advances in medical imaging, the demand for high-quality annotated data to train and validate these methods continues to grow. Further, given the pressing need to address health disparities and to prevent learned systems from internalizing biases, thorough study and discussion of best practices in data collection and annotation have never been more important.

Additionally, the remarkable performance of current machine learning systems comes at the cost of opacity, and these systems often inherit biases from their training data, causing distrust and potentially limiting clinical acceptance. As these systems are increasingly introduced into critical domains, such as medical image computing and computer-assisted intervention, it becomes imperative to develop methodologies that allow insight into their decision making. Such methodologies would help physicians decide whether to follow and trust automatic decisions. Interpretable machine learning methods could also facilitate defining the legal and ethical framework for their clinical deployment.

For this Special Issue, we invite the authors of the very best works of the iMIMIC and LABELS Workshops at MICCAI 2020 to submit a substantially extended and revised version of their workshop paper. Each extended submission should contain at least 50% new material, e.g., technical extensions, more in-depth evaluations, or additional use cases, together with a revised title, abstract, and keywords.

This Special Issue is also open to new submissions that are in line with the themes of the two workshops, with special emphasis on medical imaging:

  • interpretability and model visualization techniques
  • local and textual explanations
  • uncertainty quantification
  • label crowdsourcing and validation
  • data augmentation and active learning
  • domain adaptation and transfer learning
  • modeling label uncertainty and training in the presence of noise

All submissions will undergo peer review according to the journal's standard procedures. At least two members of the technical committees will act as reviewers for each extended article submitted to this Special Issue; if needed, additional external reviewers will be invited to guarantee a high-quality review process.

Prof. Dr. Jaime Cardoso
Mr. Nicholas Heller
Prof. Dr. Pedro Henriques Abreu
Prof. Dr. Ivana Išgum
Prof. Dr. Diana Mateus
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website and completing the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Machine Learning and Knowledge Extraction is an international peer-reviewed open access quarterly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • explainable machine learning
  • medical image analysis
  • decision support system

Published Papers (4 papers)


Research

18 pages, 17579 KiB  
Article
Going to Extremes: Weakly Supervised Medical Image Segmentation
by Holger R. Roth, Dong Yang, Ziyue Xu, Xiaosong Wang and Daguang Xu
Mach. Learn. Knowl. Extr. 2021, 3(2), 507-524; https://doi.org/10.3390/make3020026 - 02 Jun 2021
Cited by 20
Abstract
Medical image annotation is a major hurdle for developing precise and robust machine-learning models. Annotation is expensive, time-consuming, and often requires expert knowledge, particularly in the medical field. Here, we suggest using minimal user interaction in the form of extreme point clicks to train a segmentation model which, in effect, can be used to speed up medical image annotation. An initial segmentation is generated based on the extreme points using the random walker algorithm. This initial segmentation is then used as a noisy supervision signal to train a fully convolutional network that can segment the organ of interest, based on the provided user clicks. Through experimentation on several medical imaging datasets, we show that the predictions of the network can be refined using several rounds of training with the prediction from the same weakly annotated data. Further improvements are shown using the clicked points within a custom-designed loss and attention mechanism. Our approach has the potential to speed up the process of generating new training datasets for the development of new machine-learning and deep-learning-based models for, but not exclusively, medical image analysis.
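To make the seeding step concrete, here is a minimal sketch (our illustration, not the authors' code) of how extreme point clicks can drive a random walker segmentation with scikit-image; the marker construction and background margin are simplifying assumptions.

```python
# A minimal sketch of seeding a random walker segmentation from extreme
# point clicks; marker construction and bg_margin are our assumptions.
import numpy as np
from skimage.segmentation import random_walker

def initial_segmentation(image, extreme_points, bg_margin=10):
    """image: 2D array; extreme_points: (row, col) clicks on the organ boundary."""
    markers = np.zeros(image.shape, dtype=np.uint8)
    rows, cols = zip(*extreme_points)
    markers[rows, cols] = 1  # foreground seeds at the clicked points
    # Background seeds outside the clicks' bounding box, dilated by bg_margin.
    r0 = max(min(rows) - bg_margin, 0)
    r1 = min(max(rows) + bg_margin, image.shape[0] - 1)
    c0 = max(min(cols) - bg_margin, 0)
    c1 = min(max(cols) + bg_margin, image.shape[1] - 1)
    box = np.zeros(image.shape, dtype=bool)
    box[r0:r1 + 1, c0:c1 + 1] = True
    markers[~box] = 2
    # The random walker propagates the seeds: label 1 = organ, label 2 = background.
    labels = random_walker(image.astype(float), markers, beta=130)
    return labels == 1
```

In the paper's pipeline, the resulting mask would then serve as the noisy supervision signal for training the fully convolutional segmentation network.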

18 pages, 4238 KiB  
Article
On the Scale Invariance in State of the Art CNNs Trained on ImageNet
by Mara Graziani, Thomas Lompech, Henning Müller, Adrien Depeursinge and Vincent Andrearczyk
Mach. Learn. Knowl. Extr. 2021, 3(2), 374-391; https://doi.org/10.3390/make3020019 - 03 Apr 2021
Cited by 13
Abstract
The widespread practice of pre-training Convolutional Neural Networks (CNNs) on large natural image datasets such as ImageNet causes the automatic learning of invariance to object scale variations. This, however, can be detrimental in medical imaging, where pixel spacing has a known physical correspondence and size is crucial to the diagnosis, for example, the size of lesions, tumors or cell nuclei. In this paper, we use deep learning interpretability to identify at what intermediate layers such invariance is learned. We train and evaluate different regression models on the PASCAL-VOC (Pattern Analysis, Statistical modeling and ComputAtional Learning-Visual Object Classes) annotated data to (i) separate the effects of the closely related yet different notions of image size and object scale, (ii) quantify the presence of scale information in the CNN in terms of the layer-wise correlation between input scale and feature maps in InceptionV3 and ResNet50, and (iii) develop a pruning strategy that reduces the invariance to object scale of the learned features. Results indicate that scale information peaks at central CNN layers and drops close to the softmax, where the invariance is reached. Our pruning strategy uses this to obtain features that preserve scale information. We show that the pruning significantly improves the performance on medical tasks where scale is a relevant factor, for example for the regression of breast histology image magnification. These results show that the presence of scale information at intermediate layers legitimates transfer learning in applications that require scale covariance rather than invariance and that the performance on these tasks can be improved by pruning off the layers where the invariance is learned. All experiments are performed on publicly available data and the code is available on GitHub.
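To illustrate the layer-wise probing idea, the following sketch fits a linear probe that regresses the known input scale from pooled intermediate activations of an ImageNet-pretrained ResNet50; the probed layer, pooling, and ridge probe are assumptions made for this sketch, not the authors' exact protocol.

```python
# Illustrative scale probe in the spirit of the paper (not the authors' code):
# a high R^2 at a layer indicates scale information is still present there.
import torch
import torchvision.models as models
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1).eval()
feats = {}
model.layer2.register_forward_hook(lambda m, i, o: feats.__setitem__("layer2", o))

def pooled_features(images):
    """images: float tensor (N, 3, H, W), ImageNet-normalised."""
    with torch.no_grad():
        model(images)
    return feats["layer2"].mean(dim=(2, 3)).numpy()  # global average pooling

def scale_r2(train_imgs, train_scales, test_imgs, test_scales):
    """Fit the probe on images with known object scales; return test R^2."""
    probe = Ridge(alpha=1.0).fit(pooled_features(train_imgs), train_scales)
    return r2_score(test_scales, probe.predict(pooled_features(test_imgs)))
```

In the paper's terms, layers where this correlation drops toward zero are where scale invariance has been learned, and are thus candidates for pruning.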

19 pages, 1445 KiB  
Article
Templated Text Synthesis for Expert-Guided Multi-Label Extraction from Radiology Reports
by Patrick Schrempf, Hannah Watson, Eunsoo Park, Maciej Pajak, Hamish MacKinnon, Keith W. Muir, David Harris-Birtill and Alison Q. O’Neil
Mach. Learn. Knowl. Extr. 2021, 3(2), 299-317; https://doi.org/10.3390/make3020015 - 24 Mar 2021
Cited by 6
Abstract
Training medical image analysis models traditionally requires large amounts of expertly annotated imaging data which is time-consuming and expensive to obtain. One solution is to automatically extract scan-level labels from radiology reports. Previously, we showed that, by extending BERT with a per-label attention mechanism, we can train a single model to perform automatic extraction of many labels in parallel. However, if we rely on pure data-driven learning, the model sometimes fails to learn critical features or learns the correct answer via simplistic heuristics (e.g., that “likely” indicates positivity), and thus fails to generalise to rarer cases which have not been learned or where the heuristics break down (e.g., “likely represents prominent VR space or lacunar infarct” which indicates uncertainty over two differential diagnoses). In this work, we propose template creation for data synthesis, which enables us to inject expert knowledge about unseen entities from medical ontologies, and to teach the model rules on how to label difficult cases, by producing relevant training examples. Using this technique alongside domain-specific pre-training for our underlying BERT architecture, i.e., PubMedBERT, we improve F1 micro from 0.903 to 0.939 and F1 macro from 0.512 to 0.737 on an independent test set for 33 labels in head CT reports for stroke patients. Our methodology offers a practical way to combine domain knowledge with machine learning for text classification tasks.
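The template idea can be sketched in a few lines; the templates, findings, and label names below are invented for the example (the paper draws entities from medical ontologies and expert-written rules).

```python
# Hypothetical illustration of template-based synthesis: slot templates are
# filled with entities to produce labelled training sentences. All template
# text, findings, and label names here are invented for the example.
import itertools

TEMPLATES = [
    ("There is evidence of {finding} in the {location}.", "positive"),
    ("No evidence of {finding}.", "negative"),
    ("Appearances likely represent {finding} or {other}.", "uncertain"),
]
FINDINGS = ["lacunar infarct", "prominent VR space", "haemorrhage"]
LOCATIONS = ["left basal ganglia", "right frontal lobe"]

def synthesise_examples():
    seen, examples = set(), []
    for text, label in TEMPLATES:
        for finding, other in itertools.permutations(FINDINGS, 2):
            for location in LOCATIONS:
                sentence = text.format(finding=finding, other=other, location=location)
                if sentence in seen:  # templates ignoring a slot repeat sentences
                    continue
                seen.add(sentence)
                labels = {finding: label}
                if "{other}" in text:
                    labels[other] = label  # both differentials share the uncertain label
                examples.append((sentence, labels))
    return examples
```

Synthetic examples like these are then mixed into the training data so the model sees difficult labelling rules, such as uncertainty over two differential diagnoses, that are rare in real reports.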

20 pages, 13641 KiB  
Article
Automatic Feature Selection for Improved Interpretability on Whole Slide Imaging
by Antoine Pirovano, Hippolyte Heuberger, Sylvain Berlemont, Saïd Ladjal and Isabelle Bloch
Mach. Learn. Knowl. Extr. 2021, 3(1), 243-262; https://doi.org/10.3390/make3010012 - 22 Feb 2021
Cited by 3
Abstract
Deep learning methods are widely used for medical applications to assist medical doctors in their daily routine. While performance reaches expert level, interpretability (highlighting how and what a trained model learned and why it makes a specific decision) is the next important challenge that deep learning methods must address to be fully integrated into the medical field. In this paper, we address the question of interpretability in the context of whole slide images (WSI) classification with the formalization of the design of WSI classification architectures and propose a piece-wise interpretability approach, relying on gradient-based methods, feature visualization and multiple instance learning context. After training two WSI classification architectures on the Camelyon-16 WSI dataset, highlighting the discriminative features learned, and validating our approach with pathologists, we propose a novel manner of computing interpretability slide-level heat-maps, based on the extracted features, that improves tile-level classification performance. We measure the improvement using the tile-level AUC, which we call the Localization AUC, and show an improvement of more than 0.2. We also validate our results with a RemOve And Retrain (ROAR) measure. Then, after studying the impact of the number of features used for heat-map computation, we propose a corrective approach, relying on activation colocalization of selected features, that improves the performance and stability of our proposed method.
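As a rough illustration of the feature-based heat-map computation, the sketch below scores each tile by the mean activation of a few selected discriminative features and places the scores back on the tile grid; the data layout and scoring rule are our assumptions, not the paper's exact formulation.

```python
# Illustrative sketch of a feature-based slide-level heat-map (not the
# authors' implementation): tiles are scored by the mean activation of a
# small set of selected discriminative features.
import numpy as np

def feature_heatmap(tile_embeddings, tile_coords, selected_features, grid_shape):
    """tile_embeddings: (n_tiles, n_features); tile_coords: (n_tiles, 2)
    grid positions; selected_features: indices of discriminative features."""
    heatmap = np.zeros(grid_shape, dtype=float)
    scores = tile_embeddings[:, selected_features].mean(axis=1)
    for (row, col), score in zip(tile_coords, scores):
        heatmap[row, col] = score
    # Normalise to [0, 1] for visualisation.
    rng = heatmap.max() - heatmap.min()
    return (heatmap - heatmap.min()) / rng if rng > 0 else heatmap
```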
