Topic Editors

Prof. Dr. Jyh-Cheng Chen
Department of Biomedical Imaging and Radiological Sciences, National Yang-Ming Chiao-Tung University, Taipei 112, Taiwan
Prof. Dr. Kuangyu Shi
Department of Nuclear Medicine, Bern University Hospital, University of Bern, 3010 Bern, Switzerland

Applications of Image and Video Processing in Medical Imaging

Abstract submission deadline: 28 February 2026
Manuscript submission deadline: 30 April 2026

Topic Information

Dear Colleagues,

We invite submissions of original research articles and reviews on topics such as image reconstruction, enhancement, anomaly detection, segmentation, motion correction, modelling, and computer-aided diagnosis. Emphasis is placed on the role of artificial intelligence, machine learning, and deep learning in improving the safety, practicality, and efficacy of medical imaging in clinical applications.

This Topic seeks interdisciplinary contributions spanning areas such as radiology, nuclear medicine, ultrasound, interventional imaging, and telemedicine. We welcome work presenting novel algorithms and new applications, as well as contributions addressing challenges in big data handling, privacy, and security.

Prof. Dr. Jyh-Cheng Chen
Prof. Dr. Kuangyu Shi
Topic Editors

Keywords

  • image and video processing
  • medical imaging
  • artificial intelligence
  • machine learning
  • deep learning

Participating Journals

Journal                                     Impact Factor  CiteScore  Launched  First Decision (median)  APC
Applied Sciences                            2.5            5.3        2011      18.4 days                CHF 2400
Electronics                                 2.6            5.3        2012      16.4 days                CHF 2400
Journal of Imaging                          2.7            5.9        2015      18.3 days                CHF 1800
Machine Learning and Knowledge Extraction   4.0            6.3        2019      20.8 days                CHF 1800
Information                                 2.4            6.9        2010      16.4 days                CHF 1600
Big Data and Cognitive Computing            3.7            7.1        2017      25.3 days                CHF 1800
Signals                                     -              3.2        2020      28.3 days                CHF 1000

Preprints.org is a multidisciplinary platform offering a preprint service designed to facilitate the early sharing of your research. It supports and empowers your research journey from the very beginning.

MDPI Topics is collaborating with Preprints.org and has established a direct connection between MDPI journals and the platform. Authors are encouraged to take advantage of this opportunity by posting their preprints at Preprints.org prior to publication:

  1. Share your research immediately: disseminate your ideas prior to publication and establish priority for your work.
  2. Safeguard your intellectual contribution: protect your ideas with a time-stamped preprint that serves as proof of your research timeline.
  3. Boost visibility and impact: increase the reach and influence of your research by making it accessible to a global audience.
  4. Gain early feedback: receive valuable input and insights from peers before submitting to a journal.
  5. Ensure broad indexing: preprints are indexed by Web of Science (Preprint Citation Index), Google Scholar, Crossref, SHARE, PrePubMed, Scilit, and Europe PMC.

Published Papers (2 papers)

24 pages, 1224 KiB  
Article
MDFormer: Transformer-Based Multimodal Fusion for Robust Chest Disease Diagnosis
by Xinlong Liu, Fei Pan, Hainan Song, Siyi Cao, Chunping Li and Tanshi Li
Electronics 2025, 14(10), 1926; https://doi.org/10.3390/electronics14101926 - 9 May 2025
Abstract
With the increasing richness of medical images and clinical data, abundant data support is now available for multimodal chest disease diagnosis. However, traditional multimodal fusion methods are often relatively simple, leaving crossmodal complementary information under-exploited. At the same time, existing multimodal chest disease diagnosis methods usually focus on two modalities and scale poorly to three or more. Moreover, in practical clinical scenarios, missing-modality problems often arise due to equipment limitations or incomplete data acquisition. To address these issues, this paper proposes a novel multimodal chest disease classification model, MDFormer. The model designs a crossmodal attention fusion mechanism, MFAttention, and combines it with the Transformer architecture to construct a multimodal fusion module, MFTrans, which effectively integrates medical imaging, clinical text, and vital-sign data. When extended to multiple modalities, MFTrans significantly reduces model parameters. The paper also proposes a two-stage masked enhancement classification and contrastive learning training framework, MECCL, which significantly improves the model's robustness and transferability. Experimental results show that MDFormer achieves a classification precision of 0.8 on the MIMIC dataset, and when 50% of the modality data are missing, its AUC reaches 85% of that obtained with complete data, outperforming models trained without the two-stage framework.
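
A rough sketch may help readers picture the fusion mechanism. The abstract names the building blocks (MFAttention, MFTrans) but not their internals, so the minimal PyTorch sketch below only illustrates the general idea of cross-modal attention fusion with one block shared across modality pairs; the layer sizes, token shapes, fusion order, and class count are assumptions, not the paper's actual architecture.

import torch
import torch.nn as nn

# Hypothetical sketch: MFAttention/MFTrans internals are not given in the
# abstract, so every dimension and design choice below is an assumption.
class CrossModalAttentionFusion(nn.Module):
    """One modality attends to another; a residual keeps its own features."""
    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, query_tokens, context_tokens):
        fused, _ = self.attn(query_tokens, context_tokens, context_tokens)
        return self.norm(query_tokens + fused)

class ThreeModalityClassifier(nn.Module):
    """Shares a single fusion block across modality pairs, so parameter
    count does not grow linearly as modalities are added."""
    def __init__(self, dim: int = 256, num_classes: int = 14):
        super().__init__()
        self.fuse = CrossModalAttentionFusion(dim)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, image_tok, text_tok, vitals_tok):
        x = self.fuse(image_tok, text_tok)    # image attends to clinical text
        x = self.fuse(x, vitals_tok)          # then to vital-sign embeddings
        return self.head(x.mean(dim=1))       # pool tokens, classify

# Toy usage with made-up shapes: batch of 2, per-modality token sequences.
model = ThreeModalityClassifier()
img = torch.randn(2, 49, 256)   # e.g. ViT patch features of a chest X-ray
txt = torch.randn(2, 32, 256)   # e.g. encoded clinical-report tokens
vit = torch.randn(2, 8, 256)    # e.g. embedded vital-sign measurements
logits = model(img, txt, vit)   # -> shape (2, 14)

Sharing one fusion block across pairs is one plausible way to keep the parameter count flat as modalities are added, which is the scalability property the abstract highlights.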

25 pages, 6904 KiB  
Article
A Weighted Facial Expression Analysis for Pain Level Estimation
by Parkpoom Chaisiriprasert and Nattapat Patchsuwan
J. Imaging 2025, 11(5), 151; https://doi.org/10.3390/jimaging11050151 - 9 May 2025
Abstract
Accurate assessment of pain intensity is critical, particularly for patients who are unable to verbally express their discomfort. This study proposes a novel weighted analytical framework that integrates facial expression analysis through action units (AUs) with a facial feature-based weighting mechanism to enhance the estimation of pain intensity. The proposed method was evaluated on a dataset comprising 4084 facial images from 25 individuals and demonstrated an average accuracy of 92.72% using the weighted pain level estimation model, in contrast to 83.37% achieved using conventional approaches. The observed improvements are primarily attributed to the strategic utilization of AU zones and expression-based weighting, which enable more precise differentiation between pain-related and non-pain-related facial movements. These findings underscore the efficacy of the proposed model in enhancing the accuracy and reliability of automated pain detection, especially in contexts where verbal communication is impaired or absent.
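
For readers unfamiliar with AU-based scoring, the sketch below illustrates the general idea of weighting action-unit intensities and mapping the weighted sum to a discrete pain level. The AU set follows the pain-relevant units common in the FACS literature, but the weights and thresholds are invented for illustration; they are not the paper's trained values.

import numpy as np

# Pain-relevant action units (brow lowerer, cheek raiser, lid tightener,
# nose wrinkler, upper-lip raiser, eye closure), per the FACS literature.
PAIN_AUS = ["AU4", "AU6", "AU7", "AU9", "AU10", "AU43"]

# Illustrative per-AU weights; the paper's zone weights are not public here.
WEIGHTS = np.array([1.5, 1.0, 1.0, 1.2, 1.2, 0.8])

def pain_score(au_intensities: dict[str, float]) -> float:
    """Weighted sum of AU intensities (each on a 0-5 FACS-style scale)."""
    x = np.array([au_intensities.get(au, 0.0) for au in PAIN_AUS])
    return float(WEIGHTS @ x)

def pain_level(score: float) -> int:
    """Map a continuous score to a discrete pain level (thresholds assumed)."""
    thresholds = [1.0, 4.0, 8.0, 12.0]   # boundaries for levels 0..4
    return int(np.searchsorted(thresholds, score, side="right"))

# Example: strong brow lowering plus eye closure yields level 3 of 4.
aus = {"AU4": 3.0, "AU6": 2.0, "AU7": 2.5, "AU43": 1.0}
s = pain_score(aus)
print(s, pain_level(s))

Weighting by AU zone, rather than summing raw intensities, is what lets such a scheme discount facial movements (e.g. smiling-related AUs) that overlap with pain expressions, which is the intuition behind the accuracy gain the abstract reports.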
