Transfer Learning Applications for Real-World Imaging Problems, 2nd Edition

A special issue of Journal of Imaging (ISSN 2313-433X).

Deadline for manuscript submissions: closed (31 October 2023) | Viewed by 4326

Special Issue Editors


Guest Editor
School of Computing, Edinburgh Napier University, Edinburgh EH11 4BN, UK
Interests: big data analytics; machine learning; computer vision; IoT; smart grids; distributed systems; software engineering

Guest Editor
Biomedical Sciences Group, Department of Neurosciences, Research Group Ophthalmology, KU Leuven, 3000 Leuven, Belgium
Interests: AI in medicine; ophthalmic neuroscience; biomedical signal processing; machine learning

Guest Editor
Computational BioMedicine Laboratory (CBML), Institute of Computer Science (ICS), Foundation for Research and Technology Hellas (FORTH), GR 70013 Heraklion, Crete, Greece
Interests: computer vision; image interpretation; machine learning

Guest Editor
School of Mathematics, Statistics and Astrophysics, Newcastle University, Newcastle NE1 7RU, UK
Interests: predictive modelling; penalised methods; machine learning

Guest Editor
Department of Electronics and Informatics, Vrije Universiteit Brussel, Pleinlaan 2, 1050 Brussels, Belgium
Interests: computer vision; machine learning; video understanding

Guest Editor
School of Computing, Edinburgh Napier University, Edinburgh EH11 4BN, UK
Interests: computer vision; deep learning; unsupervised domain adaptation; plant image analysis

Special Issue Information

Dear Colleagues,

Machine learning (ML) has been recognized as central to artificial intelligence (AI) for many decades. The question of how knowledge learned in one context can be reused and adapted in other, related contexts, however, has only attracted the attention of the wider ML research community over the past few years. In parallel (and sometimes preceding this), transfer learning has received increasing attention in other research areas, e.g., psychology.

In the deep learning context, problems are abstract concepts observed through data, which consist of instances and associated labels to learn from, while solutions are the parameters of the model learned for solving the problem. Transfer learning and domain adaptation refer to situations where a model is learned in one setting and exploited to improve generalization in another. The transfer process involves (a) a target task to be learned in a target context; (b) a set of solutions to source tasks (already learned in source contexts); and (c) the transfer of knowledge based on the similarity between the target and source tasks. This is commonly understood in a supervised learning context, where the input is the same but the target may be of a different nature. If there is significantly more data in the first setting, it may help to learn representations that generalize quickly, because many visual categories share low-level notions of edges, visual shapes, changes in lighting, etc. Recent works have focused on incorporating transfer learning into deep visual representations to combat the problem of insufficient training data. Pre-training CNNs on ImageNet or Places has become standard practice for many vision problems. However, the features learned by pre-trained models are not perfectly suited to the target learning task. Using the pre-trained network as a feature extractor, or fine-tuning it, has therefore become a common way to learn task-specific features, while extensive efforts have been made to understand transfer learning itself.

Therefore, this Special Issue welcomes new research contributions proposing novel (federated) transfer learning and domain adaptation approaches to real-world imaging problems, such as (but not limited to):

  • Medical imaging
  • Plant biology
  • Microscopy
  • Remote sensing
  • Hyperspectral imaging
  • Video surveillance
  • Human rights technology
  • COVID-19
  • Multi- and cross-modality

These application areas involve one or more machine learning tasks, such as classification, regression, segmentation, and detection.

Dr. Christos Chrysoulas
Dr. Eirini Christinaki
Dr. Grigorios E. Kalliatakis
Prof. Dr. Aris Perperoglou
Dr. Alexandros Stergiou
Dr. Mario Valerio Giuffrida
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Journal of Imaging is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • image analysis
  • deep learning
  • transfer learning
  • domain adaptation

Published Papers (2 papers)


Research

19 pages, 27203 KiB  
Article
Domain-Aware Few-Shot Learning for Optical Coherence Tomography Noise Reduction
by Deborah Pereg
J. Imaging 2023, 9(11), 237; https://doi.org/10.3390/jimaging9110237 - 30 Oct 2023
Cited by 1 | Viewed by 1124
Abstract
Speckle noise has long been an extensively studied problem in medical imaging. In recent years, there have been significant advances in leveraging deep learning methods for noise reduction. Nevertheless, adaptation of supervised learning models to unseen domains remains a challenging problem. Specifically, deep neural networks (DNNs) trained for computational imaging tasks are vulnerable to changes in the acquisition system’s physical parameters, such as: sampling space, resolution, and contrast. Even within the same acquisition system, performance degrades across datasets of different biological tissues. In this work, we propose a few-shot supervised learning framework for optical coherence tomography (OCT) noise reduction, that offers high-speed training (of the order of seconds) and requires only a single image, or part of an image, and a corresponding speckle-suppressed ground truth, for training. Furthermore, we formulate the domain shift problem for OCT diverse imaging systems and prove that the output resolution of a despeckling trained model is determined by the source domain resolution. We also provide possible remedies. We propose different practical implementations of our approach, verify and compare their applicability, robustness, and computational efficiency. Our results demonstrate the potential to improve sample complexity, generalization, and time efficiency, for coherent and non-coherent noise reduction via supervised learning models, that can also be leveraged for other real-time computer vision applications.

15 pages, 21134 KiB  
Article
Explainable Image Similarity: Integrating Siamese Networks and Grad-CAM
by Ioannis E. Livieris, Emmanuel Pintelas, Niki Kiriakidou and Panagiotis Pintelas
J. Imaging 2023, 9(10), 224; https://doi.org/10.3390/jimaging9100224 - 14 Oct 2023
Cited by 1 | Viewed by 2698
Abstract
With the proliferation of image-based applications in various domains, the need for accurate and interpretable image similarity measures has become increasingly critical. Existing image similarity models often lack transparency, making it challenging to understand the reasons why two images are considered similar. In this paper, we propose the concept of explainable image similarity, where the goal is the development of an approach, which is capable of providing similarity scores along with visual factual and counterfactual explanations. Along this line, we present a new framework, which integrates Siamese Networks and Grad-CAM for providing explainable image similarity and discuss the potential benefits and challenges of adopting this approach. In addition, we provide a comprehensive discussion about factual and counterfactual explanations provided by the proposed framework for assisting decision making. The proposed approach has the potential to enhance the interpretability, trustworthiness and user acceptance of image-based systems in real-world image similarity applications.
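The core idea of a Siamese network producing a similarity score can be sketched framework-free; the embedding function, weight matrix, and distance-to-score mapping below are illustrative assumptions, not the authors' architecture, and the Grad-CAM step (which would highlight the image regions driving the score) is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 16))  # shared embedding weights (hypothetical)

def embed(x):
    # Both branches of a Siamese network apply the SAME weights,
    # so similar inputs land near each other in embedding space.
    return np.tanh(W @ x)

def similarity(x1, x2):
    # Map the Euclidean distance between embeddings to a (0, 1] score.
    d = np.linalg.norm(embed(x1) - embed(x2))
    return 1.0 / (1.0 + d)

a = rng.standard_normal(16)
b = rng.standard_normal(16)
print(similarity(a, a))  # identical inputs give a score of 1.0
```

In the trained setting, the shared weights would be a CNN optimized with a contrastive or triplet loss, and Grad-CAM would then be applied to each branch to explain which regions of the two images contribute most to the score.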
