Topic Editors

Prof. Dr. Jyh-Cheng Chen
Department of Biomedical Imaging and Radiological Sciences, National Yang-Ming Chiao-Tung University, Taipei 112, Taiwan
Prof. Dr. Kuangyu Shi
Department of Nuclear Medicine, Bern University Hospital, University of Bern, 3010 Bern, Switzerland

Applications of Image and Video Processing in Medical Imaging

Abstract submission deadline
28 February 2026
Manuscript submission deadline
30 April 2026
Viewed by 2621

Topic Information

Dear Colleagues,

We invite submissions of original research articles and reviews on topics such as image reconstruction, enhancement, anomaly detection, segmentation, motion correction, modelling, and computer-aided diagnosis. Emphasis is placed on the role of artificial intelligence, machine learning, and deep learning in improving the safety, practicality, and efficacy of medical imaging in clinical applications.

This Special Issue seeks interdisciplinary contributions spanning areas such as radiology, nuclear medicine, ultrasound, interventional imaging, and telemedicine. We welcome works presenting novel algorithms and new applications, as well as those addressing challenges in big data handling, privacy, and security.

Prof. Dr. Jyh-Cheng Chen
Prof. Dr. Kuangyu Shi
Topic Editors

Keywords

  • image and video processing
  • medical imaging
  • artificial intelligence
  • machine learning
  • deep learning

Participating Journals

Journal Name                               Impact Factor  CiteScore  Launched Year  First Decision (median)  APC
Applied Sciences                           2.5            5.5        2011           19.8 days                CHF 2400
Electronics                                2.6            6.1        2012           16.8 days                CHF 2400
Journal of Imaging                         3.3            6.7        2015           15.3 days                CHF 1800
Machine Learning and Knowledge Extraction  6.0            9.9        2019           25.5 days                CHF 1800
Information                                2.9            6.5        2010           18.6 days                CHF 1800
Big Data and Cognitive Computing           4.4            9.8        2017           24.5 days                CHF 1800
Signals                                    2.6            4.6        2020           22.9 days                CHF 1200

Preprints.org is a multidisciplinary platform offering a preprint service designed to facilitate the early sharing of your research. It supports and empowers your research journey from the very beginning.

MDPI Topics is collaborating with Preprints.org and has established a direct connection between MDPI journals and the platform. Authors are encouraged to take advantage of this opportunity by posting their preprints at Preprints.org prior to publication:

  1. Share your research immediately: Disseminate your ideas prior to publication and establish priority for your work.
  2. Safeguard your intellectual contribution: Protect your ideas with a time-stamped preprint that serves as proof of your research timeline.
  3. Boost visibility and impact: Increase the reach and influence of your research by making it accessible to a global audience.
  4. Gain early feedback: Receive valuable input and insights from peers before submitting to a journal.
  5. Ensure broad indexing: Preprints are indexed by Web of Science (Preprint Citation Index), Google Scholar, Crossref, SHARE, PrePubMed, Scilit, and Europe PMC.

Published Papers (3 papers)

49 pages, 5692 KiB
Review
Artificial Intelligence-Empowered Embryo Selection for IVF Applications: A Methodological Review
by Lazaros Moysis, Lazaros Alexios Iliadis, George Vergos, Sotirios P. Sotiroudis, Achilles D. Boursianis, Achilleas Papatheodorou, Konstantinos-Iraklis D. Kokkinidis, Mohammad Abdul Matin, Panagiotis Sarigiannidis, Ilias Siniosoglou, Vasileios Argyriou and Sotirios K. Goudos
Mach. Learn. Knowl. Extr. 2025, 7(2), 56; https://doi.org/10.3390/make7020056 - 16 Jun 2025
Viewed by 893
Abstract
In vitro fertilization (IVF) is a well-established and efficient assisted reproductive technology (ART). However, it requires a series of costly and non-trivial procedures, and the success rate still needs improvement. Thus, increasing the success rate, simplifying the process, and reducing costs are all essential challenges of IVF. These can be addressed by integrating artificial intelligence techniques, like deep learning (DL), with several aspects of the IVF process. DL techniques can help extract important features from the data, support decision making, and perform several other tasks, as architectures can be adapted to different problems. The emergence of AI in the medical field has seen a rise in DL-supported tools for embryo selection. In this work, recent advances in the use of AI and DL-based embryo selection for IVF are reviewed. The different architectures that have been considered so far for each task are presented. Furthermore, future challenges for artificial intelligence-based ARTs are outlined.
24 pages, 1224 KiB
Article
MDFormer: Transformer-Based Multimodal Fusion for Robust Chest Disease Diagnosis
by Xinlong Liu, Fei Pan, Hainan Song, Siyi Cao, Chunping Li and Tanshi Li
Electronics 2025, 14(10), 1926; https://doi.org/10.3390/electronics14101926 - 9 May 2025
Viewed by 609
Abstract
With the increasing richness of medical images and clinical data, abundant data support is provided for multimodal chest disease diagnosis methods. However, traditional multimodal fusion methods are often relatively simple, leading to insufficient exploitation of crossmodal complementary advantages. At the same time, existing multimodal chest disease diagnosis methods usually focus on two modalities, and their scalability is poor when extended to three or more modalities. Moreover, in practical clinical scenarios, missing modality problems often arise due to equipment limitations or incomplete data acquisition. To address these issues, this paper proposes a novel multimodal chest disease classification model, MDFormer. This model designs a crossmodal attention fusion mechanism, MFAttention, and combines it with the Transformer architecture to construct a multimodal fusion module, MFTrans, which effectively integrates medical imaging, clinical text, and vital signs data. When extended to multiple modalities, MFTrans significantly reduces model parameters. At the same time, this paper also proposes a two-stage masked enhancement classification and contrastive learning training framework, MECCL, which significantly improves the model's robustness and transferability. Experimental results show that MDFormer achieves a classification precision of 0.8 on the MIMIC dataset, and when 50% of the modality data are missing, the AUC can reach 85% of that of the complete data, outperforming models that did not use two-stage training.
25 pages, 6904 KiB
Article
A Weighted Facial Expression Analysis for Pain Level Estimation
by Parkpoom Chaisiriprasert and Nattapat Patchsuwan
J. Imaging 2025, 11(5), 151; https://doi.org/10.3390/jimaging11050151 - 9 May 2025
Viewed by 556
Abstract
Accurate assessment of pain intensity is critical, particularly for patients who are unable to verbally express their discomfort. This study proposes a novel weighted analytical framework that integrates facial expression analysis through action units (AUs) with a facial feature-based weighting mechanism to enhance the estimation of pain intensity. The proposed method was evaluated on a dataset comprising 4084 facial images from 25 individuals and demonstrated an average accuracy of 92.72% using the weighted pain level estimation model, in contrast to 83.37% achieved using conventional approaches. The observed improvements are primarily attributed to the strategic utilization of AU zones and expression-based weighting, which enable more precise differentiation between pain-related and non-pain-related facial movements. These findings underscore the efficacy of the proposed model in enhancing the accuracy and reliability of automated pain detection, especially in contexts where verbal communication is impaired or absent.