Special Issue "Transfer Learning Applications for Real-World Imaging Problems"

A special issue of Journal of Imaging (ISSN 2313-433X).

Deadline for manuscript submissions: closed (30 November 2021)

Special Issue Editors

Dr. Christos Chrysoulas
Guest Editor
School of Computing, Edinburgh Napier University, Edinburgh, UK
Interests: big data analytics; machine learning; computer vision; IoT; smart grids; distributed systems; software engineering
Dr. Mario Valerio Giuffrida
Guest Editor
School of Computing, Edinburgh Napier University, Edinburgh, UK
Interests: computer vision; deep learning; unsupervised domain adaptation; plant image analysis
Prof. Dr. Aris Perperoglou
Guest Editor
School of Mathematics, Statistics and Astrophysics, Newcastle University, Newcastle, UK
Interests: predictive modelling; penalised methods; machine learning
Dr. Grigorios Kalliatakis
Guest Editor
WMG | University of Warwick, Coventry, UK
Interests: computer vision; image interpretation; machine learning
Mr. Alexandros Stergiou
Guest Editor
Department of Information and Computing Sciences, Utrecht University, Utrecht, The Netherlands
Interests: computer vision; machine learning; video understanding

Special Issue Information

Dear Colleagues,

Machine learning (ML) has been recognized as central to artificial intelligence (AI) for many decades. The question of how knowledge learned in one context can be reused and adapted in other, related contexts, however, has only been brought to the attention of the wider ML research community over the past few years. In parallel (and sometimes preceding this work), transfer learning has been receiving increasing attention in other research areas, e.g., psychology.

In the deep learning context, problems are abstract concepts observed through data consisting of instances and associated labels to learn from, while solutions are the model parameters learned to solve the problem. Transfer learning and domain adaptation refer to the situation where a model is learned in one setting and exploited to improve generalization in another. The transfer process involves (a) a target task to be learned in a target context; (b) a set of solutions to the source tasks (already learned in the source contexts); and (c) the transfer of knowledge based on the similarity between the target and source tasks. This is commonly understood in a supervised learning context, where the input is the same but the target may be of a different nature. If significantly more data are available in the source setting, they may help to learn representations that generalize quickly, because many visual categories share low-level notions such as edges, visual shapes, and changes in lighting. Recent works have focused on incorporating transfer learning into deep visual representations to combat the problem of insufficient training data. Pre-training CNNs on ImageNet or Places has become standard practice for other vision problems. However, the features learned by a pre-trained model are not perfectly fitted to the target learning task. Using the pre-trained network as a feature extractor, or fine-tuning it, have therefore become frequently used methods to learn task-specific features, while extensive efforts have been made to understand transfer learning itself.
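The feature-extractor route described above can be sketched in a few lines. The snippet below is a toy illustration only: a frozen random projection stands in for a CNN backbone pre-trained on ImageNet or Places, the "images" and labels are synthetic, and only the final linear head is fitted (here in closed form via ridge regression, rather than the SGD training one would use in practice).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a pre-trained backbone: a frozen random projection.
# In a real pipeline this would be a CNN pre-trained on ImageNet or Places.
W_backbone = rng.normal(size=(64, 256))  # 64-dim "images" -> 256-dim features

def extract_features(x):
    """Frozen feature extractor: its parameters are NOT updated on the target task."""
    return np.maximum(x @ W_backbone, 0.0)  # ReLU features

# Synthetic target-task data: 200 "images", 2 classes.
X = rng.normal(size=(200, 64))
y = (X[:, 0] > 0).astype(float)

# Only the task-specific linear head is trained, on top of the frozen features
# (closed-form ridge regression for brevity).
F = extract_features(X)
head = np.linalg.solve(F.T @ F + 1e-2 * np.eye(F.shape[1]), F.T @ y)

preds = (extract_features(X) @ head > 0.5).astype(float)
accuracy = (preds == y).mean()
print(f"training accuracy of the transferred head: {accuracy:.2f}")
```

Fine-tuning differs from this sketch only in that the backbone parameters would also be updated, usually with a smaller learning rate than the new head.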

Therefore, this Special Issue welcomes new research contributions proposing novel (federated) transfer learning and domain adaptation approaches to real-world imaging problems, such as (but not limited to):

  • Medical imaging
  • Plant biology
  • Microscopy
  • Remote sensing
  • Hyperspectral imaging
  • Video surveillance
  • Human rights technology
  • COVID-19
  • Multi- and cross-modality

These applications involve one or more machine learning tasks, such as classification, regression, segmentation, and detection.

Dr. Christos Chrysoulas
Dr. Mario Valerio Giuffrida
Prof. Dr. Aris Perperoglou
Dr. Grigorios Kalliatakis
Mr. Alexandros Stergiou
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Journal of Imaging is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • image analysis
  • deep learning
  • transfer learning
  • domain adaptation

Published Papers (2 papers)


Research

Article
Natural Images Allow Universal Adversarial Attacks on Medical Image Classification Using Deep Neural Networks with Transfer Learning
J. Imaging 2022, 8(2), 38; https://doi.org/10.3390/jimaging8020038 - 04 Feb 2022
Abstract
Transfer learning from natural images is used in deep neural networks (DNNs) for medical image classification to achieve computer-aided clinical diagnosis. Although the adversarial vulnerability of DNNs hinders practical applications owing to the high stakes of diagnosis, adversarial attacks are expected to be limited because the training datasets (medical images), which are often required to mount such attacks, are generally unavailable for security and privacy reasons. Nevertheless, in this study, we demonstrated that adversarial attacks are also possible using natural images for medical DNN models with transfer learning, even if such medical images are unavailable; in particular, we showed that universal adversarial perturbations (UAPs) can be generated from natural images. UAPs from natural images are useful for both non-targeted and targeted attacks, and their performance was significantly higher than that of random controls. The use of transfer learning thus causes a security hole, which decreases the reliability and safety of computer-based disease diagnosis. Training the model from random initialization reduced the performance of UAPs from natural images; however, it did not completely remove the vulnerability. The vulnerability of DNNs to UAPs generated from natural images is expected to become a significant security threat.
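This is not the authors' attack, but a minimal toy illustration of why a single "universal" perturbation can affect many inputs at once: for a linear surrogate model, one shared perturbation shifts every input's score by the same constant. All data and the model below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear model standing in for a DNN classifier: predict +1 if w.x > 0, else -1.
w = rng.normal(size=100)
X = rng.normal(size=(500, 100))          # 500 synthetic "images"
clean_pred = np.sign(X @ w)

# One universal perturbation shared by ALL inputs: a step against w.
# For a linear model, (x + delta).w = x.w + delta.w, i.e. every score is
# shifted by the same constant, so a single delta can fool many inputs.
shift = 1.5 * np.abs(X @ w).max()        # large enough to flip every positive score
delta = -(shift / (w @ w)) * w
adv_pred = np.sign((X + delta) @ w)

fooling_rate = (adv_pred != clean_pred).mean()
print(f"fooling rate of the universal perturbation: {fooling_rate:.2f}")
```

Real UAPs against DNNs are found iteratively over a dataset (and the paper's point is that natural images suffice for that dataset), but the shared-perturbation idea is the same.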
(This article belongs to the Special Issue Transfer Learning Applications for Real-World Imaging Problems)

Article
Semi-Supervised Domain Adaptation for Holistic Counting under Label Gap
J. Imaging 2021, 7(10), 198; https://doi.org/10.3390/jimaging7100198 - 29 Sep 2021
Abstract
This paper proposes a novel approach to semi-supervised domain adaptation for holistic regression tasks, where a DNN predicts a continuous value y ∈ ℝ given an input image x. The current literature generally lacks domain adaptation approaches specific to this task, as most focus on classification. In the context of holistic regression, most real-world datasets exhibit not only a covariate (or domain) shift, but also a label gap: the target dataset may contain labels not included in the source dataset (and vice versa). We propose an approach tackling both covariate shift and label gap in a unified training framework. Specifically, a Generative Adversarial Network (GAN) is used to reduce covariate shift, and the label gap is mitigated via label normalisation. To avoid overfitting, we propose a stopping criterion that simultaneously exploits the Maximum Mean Discrepancy and the GAN Global Optimality condition. To restore the original label range that was previously normalised, a handful of annotated images from the target domain are used. Our experimental results, obtained on three different datasets, demonstrate that our approach drastically outperforms the state of the art across the board. Specifically, for the cell counting problem, the mean squared error (MSE) is reduced from 759 to 5.62; for the pedestrian dataset, our approach lowers the MSE from 131 to 1.47. For the last experimental setup, we borrowed a task from plant biology, i.e., counting the number of leaves in a plant, and ran two series of experiments, showing that the MSE is reduced from 2.36 to 0.88 (intra-species) and from 1.48 to 0.6 (inter-species).
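The Maximum Mean Discrepancy used in the stopping criterion is a kernel-based distance between two sample sets: it is small when the two feature distributions are close. A compact (biased) RBF-kernel estimate, on synthetic data and with an arbitrary bandwidth, might look like this:

```python
import numpy as np

def mmd2_rbf(X, Y, sigma=1.0):
    """Biased estimate of squared Maximum Mean Discrepancy with an RBF kernel.
    Small values suggest the two sample distributions are close."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
        return np.exp(-d2 / (2 * sigma ** 2))
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

rng = np.random.default_rng(2)
# Two samples from the SAME distribution vs. one with a mean shift
# (a crude stand-in for a covariate shift between source and target features).
same = mmd2_rbf(rng.normal(size=(200, 5)), rng.normal(size=(200, 5)))
shifted = mmd2_rbf(rng.normal(size=(200, 5)), rng.normal(loc=1.0, size=(200, 5)))
print(f"MMD^2 same: {same:.4f}, shifted: {shifted:.4f}")
```

In a stopping criterion such as the one described, training could be halted once the MMD between adapted source features and target features stops decreasing; the exact rule used in the paper also involves the GAN Global Optimality condition.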
(This article belongs to the Special Issue Transfer Learning Applications for Real-World Imaging Problems)
