Transformer-Based Deep Learning in Medical Imaging and Healthy Sensors

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Biomedical Sensors".

Deadline for manuscript submissions: 25 December 2025 | Viewed by 6151

Special Issue Editors


Guest Editor
College of Artificial Intelligence, Tianjin University of Science and Technology, Tianjin 300457, China
Interests: machine learning; image processing; medical imaging

Special Issue Information

Dear Colleagues,

Transformer-based deep learning models, originally introduced for natural language processing, have recently shown significant potential in medical imaging and healthy sensors. Built around the self-attention mechanism, transformer-based models can dynamically weigh the importance of different parts of the input data and excel at capturing long-range dependencies. They are therefore well suited to modeling the complex structures in medical images and have been applied to a wide range of medical imaging tasks, including those based on medical imaging sensors. The application of transformer-based models in medical imaging and healthy sensors is advancing rapidly. Transformers are expected to play an increasingly important role in improving diagnostic accuracy, accelerating medical image processing workflows, and personalizing treatment plans, thereby driving a comprehensive revolution in medical imaging technology.
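To make the mechanism concrete, the following minimal NumPy sketch computes scaled dot-product self-attention over a toy sequence of patch features; the token count, dimensions, and random projection matrices are purely illustrative and do not correspond to any particular model discussed in this Special Issue.

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Scaled dot-product self-attention over a sequence of feature vectors.

    x            : (n_tokens, d_model) input, e.g. flattened image patches
    w_q, w_k, w_v: (d_model, d_k) projection matrices (learned in practice)
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v              # query/key/value projections
    scores = q @ k.T / np.sqrt(k.shape[-1])          # pairwise similarity between all tokens
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax: dynamic importance of each token
    return weights @ v                               # every output attends to every input token

# Toy usage: 16 "patch" tokens with 32-dimensional features
rng = np.random.default_rng(0)
x = rng.normal(size=(16, 32))
w_q, w_k, w_v = (0.1 * rng.normal(size=(32, 32)) for _ in range(3))
print(self_attention(x, w_q, w_k, w_v).shape)  # (16, 32)
```

Because every token attends to every other token, the output at one location can draw on information from anywhere in the image, which is what allows transformers to capture the long-range dependencies noted above.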

This Special Issue aims to compile original research reporting recent findings on the application of transformer-based deep learning models in medical imaging and healthy sensors.

Potential topics of this Special Issue include, but are not limited to, the following:

  • Disease diagnostics and detection.
  • Medical image segmentation.
  • Medical imaging sensors.
  • Medical image reconstruction and enhancement.
  • Multimodal fusion learning.
  • Interpretability and explainability of transformers in medical imaging.
  • Advanced medical imaging techniques.
  • Cross-domain transfer learning.
  • Medical image generation.
  • 3D medical imaging.
  • Real-time medical image analysis.

Dr. Steve Ling
Dr. Juan Lyu
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • transformer-based models
  • medical imaging
  • deep learning
  • self-attention mechanism
  • clinical applications
  • sensors

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (3 papers)

Research

21 pages, 712 KiB  
Article
MUF-Net: A Novel Self-Attention Based Dual-Task Learning Approach for Automatic Left Ventricle Segmentation in Echocardiography
by Juan Lyu, Jinpeng Meng, Yu Zhang and Sai Ho Ling
Sensors 2025, 25(9), 2704; https://doi.org/10.3390/s25092704 - 24 Apr 2025
Viewed by 92
Abstract
Left ventricular ejection fraction (LVEF) is a critical indicator for assessing cardiac function and diagnosing heart disease. LVEF can be derived by estimating the left ventricular volume from end-systolic and end-diastolic frames through echocardiography segmentation. However, current algorithms either focus primarily on single-frame segmentation, neglecting the temporal and spatial correlations between consecutive frames, or often fail to effectively address the inherent challenges posed by the low-contrast and fuzzy edges characteristic of echocardiography, thereby resulting in suboptimal segmentation outcomes. In this study, we propose a novel self-attention-based dual-task learning approach for automatic left ventricle segmentation. First, we introduce a multi-scale edge-attention U-Net to achieve supervised semantic segmentation of echocardiography. Second, an optical flow network is developed to capture the changes in the optical flow fields between frames in an unsupervised manner. These two tasks are then jointly trained using a temporal consistency mechanism to extract spatio-temporal features across frames. Experimental results demonstrate that our model outperforms existing segmentation methods. Our proposed method not only enhances the performance of semantic segmentation but also improves the consistency of segmentation between consecutive frames.
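The abstract above describes coupling a supervised segmentation branch with an unsupervised optical-flow branch through a temporal consistency mechanism. The PyTorch sketch below is only a generic illustration of that kind of joint objective, not the authors' implementation: `seg_net`, `flow_net`, and `warp` are hypothetical placeholders for a segmentation network, a flow estimator, and a flow-based resampling function.

```python
import torch
import torch.nn.functional as F

def dual_task_loss(seg_net, flow_net, warp, frame_prev, frame_curr, mask_curr, lam=0.5):
    """Generic joint loss: supervised segmentation + temporal consistency.

    Assumed (hypothetical) interfaces:
      seg_net(frame)             -> (B, C, H, W) per-pixel class logits
      flow_net(frame_a, frame_b) -> (B, 2, H, W) dense optical flow from a to b
      warp(tensor, flow)         -> tensor resampled along the flow field
    """
    logits_curr = seg_net(frame_curr)
    logits_prev = seg_net(frame_prev)

    # Supervised term on the annotated frame (e.g. end-systolic or end-diastolic)
    supervised = F.cross_entropy(logits_curr, mask_curr)

    # Unsupervised term: the previous prediction, warped by the estimated flow,
    # should agree with the current prediction
    flow = flow_net(frame_prev, frame_curr)
    warped_prev = warp(torch.softmax(logits_prev, dim=1), flow)
    consistency = F.mse_loss(warped_prev, torch.softmax(logits_curr, dim=1))

    return supervised + lam * consistency
```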

22 pages, 4472 KiB  
Article
Epilepsy Prediction and Detection Using Attention-CssCDBN with Dual-Task Learning
by Weizheng Qiao, Xiaojun Bi, Lu Han and Yulin Zhang
Sensors 2025, 25(1), 51; https://doi.org/10.3390/s25010051 - 25 Dec 2024
Cited by 1 | Viewed by 937
Abstract
Epilepsy is a group of neurological disorders characterized by epileptic seizures, and it affects tens of millions of people worldwide. Currently, the most effective diagnostic method employs the monitoring of brain activity through electroencephalogram (EEG). However, it is critical to predict epileptic seizures in patients prior to their onset, allowing for the administration of preventive medications before the seizure occurs. As a pivotal application of artificial intelligence in medical treatment, learning the features of EEGs for epilepsy prediction and detection remains a challenging problem, primarily due to the presence of intra-class and inter-class variations in EEG signals. In this study, we propose the spatio-temporal EEGNet, which integrates contractive slab and spike convolutional deep belief network (CssCDBN) with a self-attention architecture, augmented by dual-task learning to address this issue. Initially, our model was designed to extract high-order and deep representations from EEG spectrum images, enabling the simultaneous capture of spatial and temporal information. Furthermore, EEG-based verification aids in reducing intra-class variation by considering the time correlation of the EEG during the fine-tuning stage, resulting in easier inference and training. The results demonstrate the notable efficacy of our proposed method. Our method achieved a sensitivity of 98.5%, a false-positive rate (FPR) of 0.041, a prediction time of 50.92 min during the epilepsy prediction task, and an accuracy of 94.1% during the epilepsy detection task, demonstrating significant improvements over current state-of-the-art methods.
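For readers less familiar with the quoted metrics, the sketch below shows one common segment-level definition of sensitivity and false-positive rate from binary predictions; it is illustrative only, and the paper itself may compute FPR differently (e.g. as false positives per hour of recording).

```python
import numpy as np

def sensitivity_and_fpr(y_true, y_pred):
    """Segment-level sensitivity and false-positive rate from binary labels.

    y_true, y_pred: arrays of 0/1 where 1 marks a (pre-)ictal segment.
    Note: seizure-prediction studies sometimes report FPR per hour of recording
    rather than the per-segment proportion computed here.
    """
    y_true = np.asarray(y_true, dtype=bool)
    y_pred = np.asarray(y_pred, dtype=bool)
    tp = np.sum(y_true & y_pred)    # seizures correctly flagged
    fn = np.sum(y_true & ~y_pred)   # seizures missed
    fp = np.sum(~y_true & y_pred)   # false alarms
    tn = np.sum(~y_true & ~y_pred)  # correctly ignored segments
    return tp / (tp + fn), fp / (fp + tn)

print(sensitivity_and_fpr([1, 1, 0, 0, 1, 0], [1, 1, 0, 1, 1, 0]))  # (1.0, 0.333...)
```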

Review

19 pages, 661 KiB  
Review
Next-Gen Medical Imaging: U-Net Evolution and the Rise of Transformers
by Chen Zhang, Xiangyao Deng and Sai Ho Ling
Sensors 2024, 24(14), 4668; https://doi.org/10.3390/s24144668 - 18 Jul 2024
Cited by 4 | Viewed by 4725
Abstract
The advancement of medical imaging has profoundly impacted our understanding of the human body and various diseases. It has led to the continuous refinement of related technologies over many years. Despite these advancements, several challenges persist in the development of medical imaging, including data shortages characterized by low contrast, high noise levels, and limited image resolution. The U-Net architecture has significantly evolved to address these challenges, becoming a staple in medical imaging due to its effective performance and numerous updated versions. However, the emergence of Transformer-based models marks a new era in deep learning for medical imaging. These models and their variants promise substantial progress, necessitating a comparative analysis to comprehend recent advancements. This review begins by exploring the fundamental U-Net architecture and its variants, then examines the limitations encountered during its evolution. It then introduces the Transformer-based self-attention mechanism and investigates how modern models incorporate positional information. The review emphasizes the revolutionary potential of Transformer-based techniques, discusses their limitations, and outlines potential avenues for future research.
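As background for the positional-information discussion mentioned in this review, the sketch below implements the fixed sinusoidal positional encoding from the original Transformer; many vision models instead use learned or relative position embeddings, so this is only one representative scheme.

```python
import numpy as np

def sinusoidal_positional_encoding(n_positions, d_model):
    """Fixed sinusoidal positional encodings from the original Transformer.

    Each position is mapped to a d_model-dimensional vector of sines and cosines
    at geometrically spaced frequencies, giving the otherwise order-agnostic
    self-attention layers access to token positions.
    """
    positions = np.arange(n_positions)[:, None]               # (n_positions, 1)
    dims = np.arange(0, d_model, 2)[None, :]                  # even feature indices
    angles = positions / np.power(10000.0, dims / d_model)    # (n_positions, d_model/2)
    pe = np.zeros((n_positions, d_model))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe

pe = sinusoidal_positional_encoding(n_positions=196, d_model=64)  # e.g. 14x14 image patches
print(pe.shape)  # (196, 64)
```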
