Multi-sensor Fusion in Medical Imaging, Diagnosis and Therapy

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Biomedical Sensors".

Deadline for manuscript submissions: 30 June 2025 | Viewed by 9735

Special Issue Editor


Dr. Xuming Zhang
Guest Editor
College of Life Science & Technology, Huazhong University of Science and Technology, Wuhan, China
Interests: medical image processing; artificial intelligence for medical diagnosis; surgical guidance; surgical robots

Special Issue Information

Dear Colleagues,

Multi-sensor fusion plays an important role in medical imaging, diagnosis and therapy. With the advancement of imaging modalities such as ultrasound, computed tomography (CT), magnetic resonance imaging (MRI) and positron emission tomography (PET), the fusion of data from different modalities has attracted wide interest among researchers. Effective fusion algorithms have a significant influence on the quality of the fused data, and thereby on the final diagnosis and therapy. Traditional fusion methods based on sparse representation and multi-scale decomposition have been explored in depth. Deep learning-based fusion methods can generally deliver efficient data fusion by combining convolutional neural networks or transformers with unsupervised loss functions. Beyond research on fusion methods themselves, great efforts have been made to apply fusion to disease diagnosis and therapy, such as PET-CT fusion for lung cancer detection and MRI-ultrasound fusion for targeted prostate biopsy. The aim of this Special Issue, titled “Multi-Sensor Fusion in Medical Imaging, Diagnosis and Therapy”, is therefore to collect high-quality research papers on multi-sensor fusion methods and their application to disease diagnosis and therapy.

Dr. Xuming Zhang
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • data fusion
  • sparse representation
  • multi-scale decomposition
  • machine learning
  • deep learning model
  • convolutional neural network
  • transformer
  • disease diagnosis and therapy

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (5 papers)


Research

26 pages, 21796 KiB  
Article
Design of a Cost-Effective Ultrasound Force Sensor and Force Control System for Robotic Extra-Body Ultrasound Imaging
by Yixuan Zheng, Hongyuan Ning, Eason Rangarajan, Aban Merali, Adam Geale, Lukas Lindenroth, Zhouyang Xu, Weizhao Wang, Philipp Kruse, Steven Morris, Liang Ye, Xinyi Fu, Kawal Rhode and Richard James Housden
Sensors 2025, 25(2), 468; https://doi.org/10.3390/s25020468 - 15 Jan 2025
Cited by 1 | Viewed by 1452
Abstract
Ultrasound imaging is widely valued for its safety, non-invasiveness, and real-time capabilities but is often limited by operator variability, affecting image quality and reproducibility. Robot-assisted ultrasound may provide a solution by delivering more consistent, precise, and faster scans, potentially reducing human error and healthcare costs. Effective force control is crucial in robotic ultrasound scanning to ensure consistent image quality and patient safety. However, existing robotic ultrasound systems rely heavily on expensive commercial force sensors or the integrated sensors of commercial robotic arms, limiting their accessibility. To address these challenges, we developed a cost-effective, lightweight, 3D-printed force sensor and a hybrid position–force control strategy tailored for robotic ultrasound scanning. The system integrates patient-to-robot registration, automated scanning path planning, and multi-sensor data fusion, allowing the robot to autonomously locate the patient, target the region of interest, and maintain optimal contact force during scanning. Validation was conducted using an ultrasound-compatible abdominal aortic aneurysm (AAA) phantom created from patient CT data and through testing on healthy volunteers. During a 1-min volunteer scan, 65% of the applied forces fell within the range that yields good image quality. Both volunteers reported no discomfort or pain during the whole procedure. These results demonstrate the potential of the system to provide safe, precise, and autonomous robotic ultrasound imaging in real-world conditions.
(This article belongs to the Special Issue Multi-sensor Fusion in Medical Imaging, Diagnosis and Therapy)
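As a rough illustration of the hybrid position–force control described in the abstract above, the sketch below advances a probe position along a planned path while a proportional term adjusts probe depth toward a target contact force. This is an editorial sketch under assumed conventions (the gains, the z-axis-as-probe-normal convention, and the function name are hypothetical, not taken from the paper):

```python
import numpy as np

def force_regulated_step(pos, f_meas, f_target, path_dir, kp=0.002, step=0.001):
    """One hybrid control step: move along the scan path (position control)
    while correcting probe depth in proportion to the contact-force error
    (force control). Units and gains are illustrative only."""
    pos = np.asarray(pos, dtype=float)
    path_dir = np.asarray(path_dir, dtype=float)
    # Positional motion: a fixed-size step along the planned path tangent.
    tangential = step * path_dir / np.linalg.norm(path_dir)
    # Force correction: if measured force is below target, push deeper
    # along the (assumed) probe normal, here the negative z axis.
    depth_correction = kp * (f_target - f_meas)
    normal = np.array([0.0, 0.0, -depth_correction])
    return pos + tangential + normal
```

A real controller would add force limits, filtering of the sensor signal, and integral/derivative terms, but the split into a tangential position command and a normal force command is the core of the hybrid strategy.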

12 pages, 750 KiB  
Article
Phonocardiogram (PCG) Murmur Detection Based on the Mean Teacher Method
by Yi Luo, Zuoming Fu, Yantian Ding, Xiaojian Chen and Kai Ding
Sensors 2024, 24(20), 6646; https://doi.org/10.3390/s24206646 - 15 Oct 2024
Viewed by 1798
Abstract
Cardiovascular diseases (CVDs) are among the primary causes of mortality globally, highlighting the critical need for early detection to mitigate their impact. Phonocardiograms (PCGs), which record heart sounds, are essential for the non-invasive assessment of cardiac function, enabling the early identification of abnormalities such as murmurs. Particularly in underprivileged regions with high birth rates, the absence of early diagnosis poses a significant public health challenge. In pediatric populations, the analysis of PCG signals is invaluable for detecting abnormal sound waves indicative of congenital and acquired heart diseases, such as septal defects and defective cardiac valves. In the PhysioNet 2022 challenge, the murmur score is a weighted accuracy metric that reflects detection accuracy based on clinical significance. In our research, we proposed a mean teacher method tailored for murmur detection, making full use of the PhysioNet 2022 and PhysioNet 2016 PCG datasets and achieving state-of-the-art (SOTA) performance with a murmur score of 0.82 and an AUC score of 0.90, providing an accessible, high-accuracy, non-invasive tool for early-stage CVD assessment, especially for low- and middle-income countries (LMICs).
(This article belongs to the Special Issue Multi-sensor Fusion in Medical Imaging, Diagnosis and Therapy)
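The mean teacher method referenced above maintains a "teacher" copy of the network whose weights are an exponential moving average (EMA) of the student's, and trains the student to stay consistent with the teacher's predictions on unlabeled data. A minimal sketch of those two ingredients, simplified to plain weight lists and a mean-squared consistency term (an editorial illustration, not the paper's implementation):

```python
import numpy as np

def ema_update(teacher_w, student_w, alpha=0.99):
    """Teacher weights track an exponential moving average of the
    student's weights; alpha controls how slowly the teacher moves."""
    return [alpha * t + (1.0 - alpha) * s for t, s in zip(teacher_w, student_w)]

def consistency_loss(p_student, p_teacher):
    """Unsupervised consistency term: mean squared difference between
    student and teacher predictions on the same (perturbed) input."""
    return float(np.mean((np.asarray(p_student) - np.asarray(p_teacher)) ** 2))
```

The student is trained on the labeled loss plus this consistency loss, and the teacher is refreshed with `ema_update` after each optimizer step; only the teacher is used at inference time.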

18 pages, 7717 KiB  
Article
Machine Learning-Empowered Real-Time Acoustic Trapping: An Enabling Technique for Increasing MRI-Guided Microbubble Accumulation
by Mengjie Wu and Wentao Liao
Sensors 2024, 24(19), 6342; https://doi.org/10.3390/s24196342 - 30 Sep 2024
Viewed by 1178
Abstract
Acoustic trapping, which uses ultrasound interference to ensnare bioparticles, has emerged as a versatile tool for the life sciences due to its non-invasive nature. Bolstered by magnetic resonance imaging’s advances in sensing acoustic interference and tracking drug carriers (e.g., microbubbles), acoustic trapping holds promise for increasing MRI-guided microbubble (MB) accumulation in target microvessels, improving drug carrier concentration. However, accurate trap generation remains challenging due to complex ultrasound propagation in tissues. Moreover, the MBs’ short lifetime demands high computational efficiency for trap position adjustments based on real-time MRI-guided carrier monitoring. To this end, we propose a machine learning-based model to modulate the transducer array. Our model delivers accurate predictions of both time-of-flight (ToF) and pressure amplitude, achieving low average prediction errors for ToF (−0.45 µs to 0.67 µs, with only a few isolated outliers) and amplitude (−0.34% to 1.75%). Compared with existing methods, our model enables rapid prediction (<10 ms), a four-order-of-magnitude improvement in computational efficiency. Validation results based on different transducer sizes and penetration depths support the model’s adaptability and potential for future ultrasound treatments.
(This article belongs to the Special Issue Multi-sensor Fusion in Medical Imaging, Diagnosis and Therapy)

24 pages, 2258 KiB  
Article
CIRF: Coupled Image Reconstruction and Fusion Strategy for Deep Learning Based Multi-Modal Image Fusion
by Junze Zheng, Junyan Xiao, Yaowei Wang and Xuming Zhang
Sensors 2024, 24(11), 3545; https://doi.org/10.3390/s24113545 - 30 May 2024
Cited by 1 | Viewed by 1733
Abstract
Multi-modal medical image fusion (MMIF) is crucial for disease diagnosis and treatment because the images reconstructed from signals collected by different sensors can provide complementary information. In recent years, deep learning (DL) based methods have been widely used in MMIF. However, these methods often adopt a serial fusion strategy without feature decomposition, causing error accumulation and confusion of characteristics across different scales. To address these issues, we have proposed the Coupled Image Reconstruction and Fusion (CIRF) strategy. Our method runs the image fusion and reconstruction branches in parallel, linking them through a common encoder. Firstly, CIRF uses the lightweight encoder to extract base and detail features, respectively, through the Vision Transformer (ViT) and Convolutional Neural Network (CNN) branches, where the two branches interact to supplement each other’s information. Then, the two types of features are fused separately via different blocks and finally decoded into fusion results. The loss function includes both the supervised loss from the reconstruction branch and the unsupervised loss from the fusion branch. As a whole, CIRF increases its expressivity by adding multi-task learning and feature decomposition. Additionally, we have explored the impact of image masking on the network’s feature extraction ability and validated the generalization capability of the model. Through experiments on three datasets, it has been demonstrated, both subjectively and objectively, that the images fused by CIRF exhibit appropriate brightness and smooth edge transitions, with more competitive evaluation metrics than those of several other traditional and DL-based methods.
(This article belongs to the Special Issue Multi-sensor Fusion in Medical Imaging, Diagnosis and Therapy)
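The base/detail feature decomposition mentioned in the CIRF abstract can be illustrated in its simplest image-domain form: a low-pass filter yields a "base" layer capturing large-scale structure, and the residual is the "detail" layer carrying edges and texture. The box-filter sketch below is only a conceptual stand-in for the paper's learned ViT/CNN decomposition:

```python
import numpy as np

def base_detail_split(img, k=3):
    """Crude base/detail decomposition of a 2D image: a k-by-k box-filter
    low-pass gives the 'base' component; the residual is the 'detail'."""
    img = np.asarray(img, dtype=float)
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")  # replicate border pixels
    base = np.empty_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            base[i, j] = padded[i:i + k, j:j + k].mean()
    return base, img - base
```

In a fusion pipeline, base layers from the two modalities would be merged with one rule (e.g., weighted averaging) and detail layers with another (e.g., max-absolute selection), before recombining; CIRF instead learns both the decomposition and the fusion end to end.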

23 pages, 7592 KiB  
Article
Rehabilitation Assessment System for Stroke Patients Based on Fusion-Type Optoelectronic Plethysmography Device and Multi-Modality Fusion Model: Design and Validation
by Liangwen Yan, Ze Long, Jie Qian, Jianhua Lin, Sheng Quan Xie and Bo Sheng
Sensors 2024, 24(9), 2925; https://doi.org/10.3390/s24092925 - 3 May 2024
Cited by 1 | Viewed by 2198
Abstract
This study aimed to propose a portable and intelligent rehabilitation evaluation system for digital stroke-patient rehabilitation assessment. Specifically, the study designed and developed a fusion device capable of emitting red, green, and infrared lights simultaneously for photoplethysmography (PPG) acquisition. Leveraging the different penetration depths and tissue reflection characteristics of these light wavelengths, the device can provide richer and more comprehensive physiological information. Furthermore, a Multi-Channel Convolutional Neural Network–Long Short-Term Memory–Attention (MCNN-LSTM-Attention) evaluation model was developed. This model, constructed based on multiple convolutional channels, facilitates the feature extraction and fusion of collected multi-modality data. Additionally, it incorporated an attention mechanism module capable of dynamically adjusting the importance weights of input information, thereby enhancing the accuracy of rehabilitation assessment. To validate the effectiveness of the proposed system, sixteen volunteers were recruited for clinical data collection and validation, comprising eight stroke patients and eight healthy subjects. Experimental results demonstrated the system’s promising performance metrics (accuracy: 0.9125, precision: 0.8980, recall: 0.8970, F1 score: 0.8949, and loss function: 0.1261). This rehabilitation evaluation system holds the potential for stroke diagnosis and identification, laying a solid foundation for wearable-based stroke risk assessment and stroke rehabilitation assistance.
(This article belongs to the Special Issue Multi-sensor Fusion in Medical Imaging, Diagnosis and Therapy)
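The attention module described above, which dynamically adjusts the importance weights of input information, reduces in its simplest form to a softmax-weighted sum of per-channel feature vectors, so that more informative channels dominate the fused representation. A minimal sketch (the scoring inputs and function names are illustrative, not the paper's architecture):

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1D score vector."""
    e = np.exp(np.asarray(x, dtype=float) - np.max(x))
    return e / e.sum()

def attention_pool(features, scores):
    """Fuse per-channel feature vectors (one row per channel) by a
    softmax-weighted sum, with weights given by the channel scores."""
    w = softmax(scores)                       # importance weight per channel
    return w @ np.asarray(features, dtype=float)  # weighted sum over channels
```

In the full model, the scores themselves would be produced by a small learned network from the channel features, letting the weighting adapt to each input rather than being fixed.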
