Diagnostic Biomedical Image Processing with Artificial Intelligence and Deep Learning

A special issue of Bioengineering (ISSN 2306-5354). This special issue belongs to the section "Biosignal Processing".

Deadline for manuscript submissions: closed (28 February 2025) | Viewed by 17481

Special Issue Editors


Guest Editor
Digital Medical Research Centre, Fudan University, Shanghai 200438, China
Interests: medical image processing; image-guided intervention; application of virtual and augmented reality technologies in medicine

Guest Editor
Digital Medical Research Centre, Fudan University, Shanghai 200438, China
Interests: artificial intelligence; medical image processing techniques; 3D computer vision; surgical navigation; computer-assisted surgical technologies such as surgical robots

Guest Editor
Digital Medical Research Centre, Fudan University, Shanghai 200438, China
Interests: artificial intelligence analysis of medical images; biophysical modeling

Special Issue Information

Dear Colleagues,

As technology continues to evolve, biomedical imaging, including radiographic imaging (CT, MRI, PET, SPECT, etc.), pathological imaging, ophthalmic imaging (optical coherence tomography (OCT), OCT angiography (OCTA), fundus photography, and fluorescein angiography), microscopy, protein imaging, and other related modalities, is playing an increasingly important role in assisting clinical disease diagnosis, treatment decisions, and scientific research. With the continuing growth of large-scale image datasets and ongoing advances in parallel graphics processing units, advanced image processing techniques, particularly the integration of biomedical imaging with AI, hold the potential to further improve diagnostic efficiency and accuracy. This progress is expected to significantly advance scientific research in the field of biomedical imaging.

Advanced image processing techniques, particularly AI-based methods represented by deep learning, have been widely applied to a variety of tasks in biomedical image analysis. These tasks range from image classification, segmentation, reconstruction, super-resolution, registration, and fusion to disease classification, lesion detection, and survival prediction. However, significant challenges for AI in biomedical image analysis remain to be resolved.

We are pleased to invite you to contribute to this Special Issue by publishing your experimental and theoretical results on new approaches and applications in biomedical imaging. This Special Issue focuses on articles and cutting-edge technology reviews that apply the most advanced techniques to biomedical image processing and its applications. Topics include constructing data-efficient deep learning models to reduce the demand for large datasets, establishing models with efficient data annotation, enhancing algorithm robustness and interpretability to create high-confidence models, and developing more efficient and advanced algorithms for specific tasks.

In this Special Issue, original research articles and reviews are welcome. Research areas may include (but are not limited to) the following:

  1. Advanced image processing techniques applied to biomedical imaging:

Image segmentation, image reconstruction, image super-resolution, image registration, image fusion.

  2. Advanced application technologies based on biomedical imaging:

Image and disease classification, object and lesion detection, organ region and marker localization, organ and structure segmentation, survival prediction, radiation therapy planning, assistive treatment, surgical navigation, innovative approaches in large model techniques, fusion of biomedical imaging and multimodal information.

  3. Data-efficient models based on biomedical imaging:

Training methods based on limited annotated data (unsupervised learning, semi-supervised learning, self-supervised learning, and weakly supervised learning), efficient domain adaptation models and approaches, efficient data annotation models and approaches.

  4. Applications of novel imaging and imaging techniques in biomedical and engineering fields:

Cutting-edge imaging techniques such as super-resolution imaging, fast image reconstruction and imaging techniques, emerging radiographic imaging, the latest imaging techniques for pathological and microscopic images, and the integration of virtual and augmented reality technologies with AI in biomedical imaging. Also, applications in the biomedical field that combine novel imaging with AI, including the use of protein imaging and digital signal images.

  5. Other relevant technical articles and state-of-the-art technology reviews in the field.

We look forward to receiving your contributions.

Prof. Dr. Zhijian Song
Prof. Dr. Manning Wang
Dr. Shuo Wang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Bioengineering is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • biomedical image
  • artificial intelligence
  • deep learning
  • image processing
  • medical imaging
  • computer-assisted diagnosis
  • pattern recognition
  • computer vision
  • bioengineering

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found on the MDPI website.

Published Papers (13 papers)


Research


12 pages, 1779 KiB  
Article
Deep Learning-Based Estimation of Myocardial Material Parameters from Cardiac MRI
by Yunhe Chen, Xiwen Zhang, Yongzhong Huo and Shuo Wang
Bioengineering 2025, 12(4), 433; https://doi.org/10.3390/bioengineering12040433 - 21 Apr 2025
Viewed by 118
Abstract
Background: Accurate estimation of myocardial material parameters is crucial to understand cardiac biomechanics and plays a key role in advancing computational modeling and clinical applications. Traditional inverse finite element (FE) methods rely on iterative optimization to infer these parameters, which is computationally expensive and time-consuming, limiting their clinical applicability. Methods: This study proposes a deep learning-based approach to rapidly and accurately estimate the left ventricular myocardial material parameters directly from routine cardiac magnetic resonance imaging (CMRI) data. A ResNet18-based model was trained on FEM-derived parameters from a dataset of 1288 healthy subjects. Results: The proposed model demonstrated high predictive accuracy on healthy subjects, achieving mean absolute errors of 0.0146 for Ca and 0.0139 for Cb, with mean relative errors below 5.00%. Additionally, we evaluated the model on a small pathological subset (including ARV and HCM cases). The results revealed that while the model maintained strong performance on healthy data, the prediction errors in the pathological samples were higher, indicating increased challenges in modeling diseased myocardial tissue. Conclusion: This study establishes a computationally efficient and accurate deep learning framework for estimating myocardial material parameters, eliminating the need for time-consuming iterative FE optimization. While the model shows promising performance on healthy subjects, further validation and refinement are required to address its limitations in pathological conditions, thereby paving the way for personalized cardiac modeling and improved clinical decision-making. Full article

11 pages, 2099 KiB  
Article
ACM-Assessor: An Artificial Intelligence System for Assessing Angle Closure Mechanisms in Ultrasound Biomicroscopy
by Yuyu Cong, Weiyan Jiang, Zehua Dong, Jian Zhu, Yuanhao Yang, Yujin Wang, Qian Deng, Yulin Yan, Jiewen Mao, Xiaoshuo Shi, Jiali Pan, Zixian Yang, Yingli Wang, Juntao Fang, Biqing Zheng and Yanning Yang
Bioengineering 2025, 12(4), 415; https://doi.org/10.3390/bioengineering12040415 - 14 Apr 2025
Viewed by 224
Abstract
Primary angle-closure glaucoma (PACG), characterized by angle closure (AC) with insidious and irreversible progression, requires precise assessment of AC mechanisms for accurate diagnosis and treatment. This study developed an artificial intelligence system, ACM-Assessor, to evaluate AC mechanisms in ultrasound biomicroscopy (UBM) images. A dataset of 8482 UBM images from 1160 patients was retrospectively collected. ACM-Assessor comprises models for pixel-to-physical spacing conversion, anterior chamber angle boundary segmentation, and scleral spur localization, along with three binary classification models to assess pupillary block (PB), thick peripheral iris (TPI), and anteriorly located ciliary body (ALCB). The integrated assessment model classifies AC mechanisms into pure PB, pure non-PB, multiple mechanisms (MM), and others. ACM-Assessor’s evaluation encompassed external testing (2266 images), human–machine competition and assisting beginners’ assessment (an independent test set of 436 images). ACM-Assessor achieved accuracies of 0.924 (PB), 0.925 (TPI), 0.947 (ALCB), and 0.839 (integrated assessment). In man–machine comparisons, the system’s accuracy was comparable to experts (p > 0.05). With model assistance, beginners’ accuracy improved by 0.117 for binary classification and 0.219 for integrated assessment. ACM-Assessor demonstrates expert-level accuracy and enhances beginners’ learning in UBM analysis. Full article

15 pages, 686 KiB  
Article
IDNet: A Diffusion Model-Enhanced Framework for Accurate Cranio-Maxillofacial Bone Defect Repair
by Xueqin Ji, Wensheng Wang, Xiaobiao Zhang and Xinrong Chen
Bioengineering 2025, 12(4), 407; https://doi.org/10.3390/bioengineering12040407 - 11 Apr 2025
Viewed by 259
Abstract
Cranio-maxillofacial bone defect repair poses significant challenges in oral and maxillofacial surgery due to the complex anatomy of the region and its substantial impact on patients’ physiological function, aesthetic appearance, and quality of life. Inaccurate reconstruction can result in serious complications, including functional impairment and psychological trauma. Traditional methods have notable limitations for complex defects, underscoring the need for advanced computational approaches to achieve high-precision personalized reconstruction. This study presents the Internal Diffusion Network (IDNet), a novel framework that integrates a diffusion model into a standard U-shaped network to extract valuable information from input data and produce high-resolution representations for 3D medical segmentation. A Step-Uncertainty Fusion module was designed to enhance prediction robustness by combining diffusion model outputs at each inference step. The model was evaluated on a dataset consisting of 125 normal human skull 3D reconstructions and 2625 simulated cranio-maxillofacial bone defects. Quantitative evaluation revealed that IDNet outperformed mainstream methods, including UNETR and 3D U-Net, across key metrics: Dice Similarity Coefficient (DSC), True Positive Rate (RECALL), and 95th percentile Hausdorff Distance (HD95). The approach achieved an average DSC of 0.8140, RECALL of 0.8554, and HD95 of 4.35 mm across seven defect types, substantially surpassing comparison methods. This study demonstrates the significant performance advantages of diffusion model-based approaches in cranio-maxillofacial bone defect repair, with potential implications for increasing repair surgery success rates and patient satisfaction in clinical applications. Full article
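For readers unfamiliar with the reported metrics, the Dice Similarity Coefficient (DSC) measures the overlap between a predicted mask and a reference mask. Below is a minimal, generic sketch (not the authors' implementation), with masks represented as sets of voxel coordinates:

```python
def dice_coefficient(pred, target):
    """Dice Similarity Coefficient (DSC) between two binary masks,
    each given as a set of voxel coordinates: 2|A ∩ B| / (|A| + |B|)."""
    denom = len(pred) + len(target)
    if denom == 0:
        return 1.0  # two empty masks agree perfectly
    return 2.0 * len(pred & target) / denom

# Toy 2D example: two overlapping 4x4 square "defect" masks.
a = {(r, c) for r in range(2, 6) for c in range(2, 6)}
b = {(r, c) for r in range(3, 7) for c in range(3, 7)}
d = dice_coefficient(a, b)  # 2*9 / (16 + 16) = 0.5625
```

HD95, by contrast, is a boundary-distance metric: the 95th percentile of surface distances between the two masks, which penalizes outlier boundary errors less harshly than the plain Hausdorff distance.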

22 pages, 1508 KiB  
Article
Dynamic Frequency-Decoupled Refinement Network for Polyp Segmentation
by Yao Tong, Jingxian Chai, Ziqi Chen, Zuojian Zhou, Yun Hu, Xin Li, Xuebin Qiao and Kongfa Hu
Bioengineering 2025, 12(3), 277; https://doi.org/10.3390/bioengineering12030277 - 11 Mar 2025
Viewed by 519
Abstract
Polyp segmentation is crucial for early colorectal cancer detection, but accurately delineating polyps is challenging due to their variations in size, shape, and texture and low contrast with surrounding tissues. Existing methods often rely solely on spatial-domain processing, which struggles to separate high-frequency features (edges, textures) from low-frequency ones (global structures), leading to suboptimal segmentation performance. We propose the Dynamic Frequency-Decoupled Refinement Network (DFDRNet), a novel segmentation framework that integrates frequency-domain and spatial-domain processing. DFDRNet introduces the Frequency Adaptive Decoupling (FAD) module, which dynamically separates high- and low-frequency components, and the Frequency Adaptive Refinement (FAR) module, which refines these components before fusing them with spatial features to enhance segmentation accuracy. Embedded within a U-shaped encoder–decoder framework, DFDRNet achieves state-of-the-art performance across three benchmark datasets, demonstrating superior robustness and efficiency. Our extensive evaluations and ablation studies confirm the effectiveness of DFDRNet in balancing segmentation accuracy with computational efficiency. Full article

14 pages, 968 KiB  
Article
FTSNet: Fundus Tumor Segmentation Network on Multiple Scales Guided by Classification Results and Prompts
by Shurui Bai, Zhuo Deng, Jingyan Yang, Zheng Gong, Weihao Gao, Lei Shao, Fang Li, Wenbin Wei and Lan Ma
Bioengineering 2024, 11(9), 950; https://doi.org/10.3390/bioengineering11090950 - 22 Sep 2024
Cited by 1 | Viewed by 1327
Abstract
The segmentation of fundus tumors is critical for ophthalmic diagnosis and treatment, yet it presents unique challenges due to the variability in lesion size and shape. Our study introduces Fundus Tumor Segmentation Network (FTSNet), a novel segmentation network designed to address these challenges by leveraging classification results and prompt learning. Our key innovation is the multiscale feature extractor and the dynamic prompt head. Multiscale feature extractors are proficient in eliciting a spectrum of feature information from the original image across disparate scales. This proficiency is fundamental for deciphering the subtle details and patterns embedded in the image at multiple levels of granularity. Meanwhile, a dynamic prompt head is engineered to engender bespoke segmentation heads for each image, customizing the segmentation process to align with the distinctive attributes of the image under consideration. We also present the Fundus Tumor Segmentation (FTS) dataset, comprising 254 pairs of fundus images with tumor lesions and reference segmentations. Experiments demonstrate FTSNet’s superior performance over existing methods, achieving a mean Intersection over Union (mIoU) of 0.8254 and mean Dice (mDice) of 0.9042. The results highlight the potential of our approach in advancing the accuracy and efficiency of fundus tumor segmentation. Full article

18 pages, 6243 KiB  
Article
Dual and Multi-Target Cone-Beam X-ray Luminescence Computed Tomography Based on the DeepCB-XLCT Network
by Tianshuai Liu, Shien Huang, Ruijing Li, Peng Gao, Wangyang Li, Hongbing Lu, Yonghong Song and Junyan Rong
Bioengineering 2024, 11(9), 874; https://doi.org/10.3390/bioengineering11090874 - 28 Aug 2024
Viewed by 1209
Abstract
Background and Objective: Emerging as a hybrid imaging modality, cone-beam X-ray luminescence computed tomography (CB-XLCT) has been developed using X-ray-excitable nanoparticles. In contrast to conventional bio-optical imaging techniques like bioluminescence tomography (BLT) and fluorescence molecular tomography (FMT), CB-XLCT offers the advantage of greater imaging depth while significantly reducing interference from autofluorescence and background fluorescence, owing to its utilization of X-ray-excited nanoparticles. However, due to the intricate excitation process and extensive light scattering within biological tissues, the inverse problem of CB-XLCT is fundamentally ill-conditioned. Methods: An end-to-end three-dimensional deep encoder-decoder network, termed DeepCB-XLCT, is introduced to improve the quality of CB-XLCT reconstructions. This network directly establishes a nonlinear mapping between the distribution of internal X-ray-excitable nanoparticles and the corresponding boundary fluorescent signals. To improve the fidelity of target shape restoration, the structural similarity loss (SSIM) was incorporated into the objective function of the DeepCB-XLCT network. Additionally, a loss term specifically for target regions was introduced to improve the network’s emphasis on the areas of interest. As a result, the inaccuracies in reconstruction caused by the simplified linear model used in conventional methods can be effectively minimized by the proposed DeepCB-XLCT method. Results and Conclusions: Numerical simulations, phantom experiments, and in vivo experiments with two targets were performed, revealing that the DeepCB-XLCT network enhances reconstruction accuracy regarding contrast-to-noise ratio and shape similarity when compared to traditional methods. In addition, the findings from the XLCT tomographic images involving three targets demonstrate its potential for multi-target CB-XLCT imaging. Full article

13 pages, 3003 KiB  
Article
Integrating Multi-Organ Imaging-Derived Phenotypes and Genomic Information for Predicting the Occurrence of Common Diseases
by Meng Liu, Yan Li, Longyu Sun, Mengting Sun, Xumei Hu, Qing Li, Mengyao Yu, Chengyan Wang, Xinping Ren and Jinlian Ma
Bioengineering 2024, 11(9), 872; https://doi.org/10.3390/bioengineering11090872 - 28 Aug 2024
Cited by 1 | Viewed by 1648
Abstract
As medical imaging technologies advance, these tools are playing a more and more important role in assisting clinical disease diagnosis. The fusion of biomedical imaging and multi-modal information is profound, as it significantly enhances diagnostic precision and comprehensiveness. Integrating multi-organ imaging with genomic information can significantly enhance the accuracy of disease prediction because many diseases involve both environmental and genetic determinants. In the present study, we focused on the fusion of imaging-derived phenotypes (IDPs) and polygenic risk score (PRS) of diseases from different organs including the brain, heart, lung, liver, spleen, pancreas, and kidney for the prediction of the occurrence of nine common diseases, namely atrial fibrillation, heart failure (HF), hypertension, myocardial infarction, asthma, type 2 diabetes, chronic kidney disease, coronary artery disease (CAD), and chronic obstructive pulmonary disease, in the UK Biobank (UKBB) dataset. For each disease, three prediction models were developed utilizing imaging features, genomic data, and a fusion of both, respectively, and their performances were compared. The results indicated that for seven diseases, the model integrating both imaging and genomic data achieved superior predictive performance compared to models that used only imaging features or only genomic data. For instance, the Area Under Curve (AUC) of HF risk prediction was increased from 0.68 ± 0.15 to 0.79 ± 0.12, and the AUC of CAD diagnosis was increased from 0.76 ± 0.05 to 0.81 ± 0.06. Full article
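The AUC values quoted here can be read as the probability that a randomly chosen diseased subject receives a higher risk score than a randomly chosen healthy one. A small illustrative sketch of that rank-based identity, using hypothetical scores rather than data from the study:

```python
def roc_auc(pos_scores, neg_scores):
    """AUC via the Wilcoxon rank-sum identity: the fraction of
    (positive, negative) pairs in which the positive scores higher
    (ties count as half)."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical risk scores for diseased (pos) and healthy (neg) subjects.
pos = [0.9, 0.8, 0.6]
neg = [0.7, 0.4, 0.3]
a = roc_auc(pos, neg)  # 8 of 9 pairs ranked correctly -> ~0.889
```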

16 pages, 4028 KiB  
Article
Synthesizing High b-Value Diffusion-Weighted Imaging of Gastric Cancer Using an Improved Vision Transformer CycleGAN
by Can Hu, Congchao Bian, Ning Cao, Han Zhou and Bin Guo
Bioengineering 2024, 11(8), 805; https://doi.org/10.3390/bioengineering11080805 - 8 Aug 2024
Viewed by 1494
Abstract
Background: Diffusion-weighted imaging (DWI), a pivotal component of multiparametric magnetic resonance imaging (mpMRI), plays a pivotal role in the detection, diagnosis, and evaluation of gastric cancer. Despite its potential, DWI is often marred by substantial anatomical distortions and sensitivity artifacts, which can hinder its practical utility. Presently, enhancing DWI’s image quality necessitates reliance on cutting-edge hardware and extended scanning durations. The development of a rapid technique that optimally balances shortened acquisition time with improved image quality would have substantial clinical relevance. Objectives: This study aims to construct and evaluate the unsupervised learning framework called attention dual contrast vision transformer cyclegan (ADCVCGAN) for enhancing image quality and reducing scanning time in gastric DWI. Methods: The ADCVCGAN framework, proposed in this study, employs high b-value DWI (b = 1200 s/mm2) as a reference for generating synthetic b-value DWI (s-DWI) from acquired lower b-value DWI (a-DWI, b = 800 s/mm2). Specifically, ADCVCGAN incorporates an attention mechanism CBAM module into the CycleGAN generator to enhance feature extraction from the input a-DWI in both the channel and spatial dimensions. Subsequently, a vision transformer module, based on the U-net framework, is introduced to refine detailed features, aiming to produce s-DWI with image quality comparable to that of b-DWI. Finally, images from the source domain are added as negative samples to the discriminator, encouraging the discriminator to steer the generator towards synthesizing images distant from the source domain in the latent space, with the goal of generating more realistic s-DWI. 
The image quality of the s-DWI is quantitatively assessed using metrics such as the peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), feature similarity index (FSIM), mean squared error (MSE), weighted peak signal-to-noise ratio (WPSNR), and weighted mean squared error (WMSE). Subjective evaluations of different DWI images were conducted using the Wilcoxon signed-rank test. The reproducibility and consistency of b-ADC and s-ADC, calculated from b-DWI and s-DWI, respectively, were assessed using the intraclass correlation coefficient (ICC). A statistical significance level of p < 0.05 was considered. Results: The s-DWI generated by the unsupervised learning framework ADCVCGAN scored significantly higher than a-DWI in quantitative metrics such as PSNR, SSIM, FSIM, MSE, WPSNR, and WMSE, with statistical significance (p < 0.001). This performance is comparable to the optimal level achieved by the latest synthetic algorithms. Subjective scores for lesion visibility, image anatomical details, image distortion, and overall image quality were significantly higher for s-DWI and b-DWI compared to a-DWI (p < 0.001). At the same time, there was no significant difference between the scores of s-DWI and b-DWI (p > 0.05). The consistency of b-ADC and s-ADC readings was comparable among different readers (ICC: b-ADC 0.87–0.90; s-ADC 0.88–0.89, respectively). The repeatability of b-ADC and s-ADC readings by the same reader was also comparable (Reader1 ICC: b-ADC 0.85–0.86, s-ADC 0.85–0.93; Reader2 ICC: b-ADC 0.86–0.87, s-ADC 0.89–0.92, respectively). Conclusions: ADCVCGAN shows excellent promise in generating gastric cancer DWI images. It effectively reduces scanning time, improves image quality, and ensures the authenticity of s-DWI images and their s-ADC values, thus providing a basis for assisting clinical decision making. Full article
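Of the image-quality metrics listed above, PSNR is the simplest: a log-scaled ratio of the maximum pixel value to the mean squared error against the reference image. A generic sketch over flattened pixel lists with hypothetical values (not the paper's code):

```python
import math

def mse(x, y):
    """Mean squared error between two equal-length pixel sequences."""
    return sum((a - b) ** 2 for a, b in zip(x, y)) / len(x)

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to ref."""
    m = mse(ref, test)
    return float("inf") if m == 0 else 10.0 * math.log10(max_val ** 2 / m)

# Hypothetical 4-pixel reference vs. synthesized intensities.
ref = [100.0, 120.0, 140.0, 160.0]
syn = [102.0, 118.0, 141.0, 157.0]
p = psnr(ref, syn)  # MSE = 4.5 -> about 41.6 dB
```

SSIM and FSIM additionally compare local structure rather than raw pixel differences, which is why they are usually reported alongside PSNR.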

20 pages, 3931 KiB  
Article
Novel Hybrid Quantum Architecture-Based Lung Cancer Detection Using Chest Radiograph and Computerized Tomography Images
by Jason Elroy Martis, Sannidhan M S, Balasubramani R, A. M. Mutawa and M. Murugappan
Bioengineering 2024, 11(8), 799; https://doi.org/10.3390/bioengineering11080799 - 7 Aug 2024
Cited by 3 | Viewed by 2570
Abstract
Lung cancer, the second most common type of cancer worldwide, presents significant health challenges. Detecting this disease early is essential for improving patient outcomes and simplifying treatment. In this study, we propose a hybrid framework that combines deep learning (DL) with quantum computing to enhance the accuracy of lung cancer detection using chest radiographs (CXR) and computerized tomography (CT) images. Our system utilizes pre-trained models for feature extraction and quantum circuits for classification, achieving state-of-the-art performance in various metrics. Not only does our system achieve an overall accuracy of 92.12%, it also excels in other crucial performance measures, such as sensitivity (94%), specificity (90%), F1-score (93%), and precision (92%). These results demonstrate that our hybrid approach can more accurately identify lung cancer signatures compared to traditional methods. Moreover, the incorporation of quantum computing enhances processing speed and scalability, making our system a promising tool for early lung cancer screening and diagnosis. By leveraging the strengths of quantum computing, our approach surpasses traditional methods in terms of speed, accuracy, and efficiency. This study highlights the potential of hybrid computational technologies to transform early cancer detection, paving the way for wider clinical applications and improved patient care outcomes. Full article
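The sensitivity, specificity, precision, and F1 figures above all derive from the four confusion-matrix counts. A generic sketch with hypothetical counts (not the study's data):

```python
def binary_metrics(tp, fp, tn, fn):
    """Standard binary-classification metrics from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)          # recall / true positive rate
    specificity = tn / (tn + fp)          # true negative rate
    precision = tp / (tp + fp)            # positive predictive value
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "precision": precision, "f1": f1, "accuracy": accuracy}

# Hypothetical counts for a screen of 198 images.
m = binary_metrics(tp=94, fp=8, tn=90, fn=6)
```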

14 pages, 851 KiB  
Article
Preoperative Molecular Subtype Classification Prediction of Ovarian Cancer Based on Multi-Parametric Magnetic Resonance Imaging Multi-Sequence Feature Fusion Network
by Yijiang Du, Tingting Wang, Linhao Qu, Haiming Li, Qinhao Guo, Haoran Wang, Xinyuan Liu, Xiaohua Wu and Zhijian Song
Bioengineering 2024, 11(5), 472; https://doi.org/10.3390/bioengineering11050472 - 9 May 2024
Cited by 2 | Viewed by 2097
Abstract
In the study of the deep learning classification of medical images, deep learning models are applied to analyze images, aiming to achieve the goals of assisting diagnosis and preoperative assessment. Currently, most research classifies and predicts normal and cancer cells by inputting single-parameter images into trained models. However, for ovarian cancer (OC), identifying its different subtypes is crucial for predicting disease prognosis. In particular, the need to distinguish high-grade serous carcinoma from clear cell carcinoma preoperatively through non-invasive means has not been fully addressed. This study proposes a deep learning (DL) method based on the fusion of multi-parametric magnetic resonance imaging (mpMRI) data, aimed at improving the accuracy of preoperative ovarian cancer subtype classification. By constructing a new deep learning network architecture that integrates various sequence features, this architecture achieves the high-precision prediction of the typing of high-grade serous carcinoma and clear cell carcinoma, achieving an AUC of 91.62% and an AP of 95.13% in the classification of ovarian cancer subtypes. Full article
13 pages, 2034 KiB  
Article
An Automated Video Analysis System for Retrospective Assessment and Real-Time Monitoring of Endoscopic Procedures (with Video)
by Yan Zhu, Ling Du, Pei-Yao Fu, Zi-Han Geng, Dan-Feng Zhang, Wei-Feng Chen, Quan-Lin Li and Ping-Hong Zhou
Bioengineering 2024, 11(5), 445; https://doi.org/10.3390/bioengineering11050445 - 30 Apr 2024
Viewed by 1747
Abstract
Background and Aims: Accurate recognition of endoscopic instruments facilitates quantitative evaluation and quality control of endoscopic procedures. However, no relevant research has been reported. In this study, we aimed to develop a computer-assisted system, EndoAdd, for automated endoscopic surgical video analysis based on our dataset of endoscopic instrument images. Methods: Large training and validation datasets containing 45,143 images of 10 different endoscopic instruments, and a test dataset of 18,375 images collected from several medical centers, were used in this research. Annotated image frames were used to train the state-of-the-art object detection model YOLO-v5 to identify the instruments. Based on the frame-level prediction results, we further developed a hidden Markov model to perform video analysis and generate heatmaps summarizing the videos. Results: EndoAdd achieved high accuracy (>97%) on the test dataset for all 10 endoscopic instrument types. The mean average accuracy, precision, recall, and F1-score were 99.1%, 92.0%, 88.8%, and 89.3%, respectively. The area under the curve exceeded 0.94 for all instrument types. Heatmaps of endoscopic procedures were generated for both retrospective and real-time analyses. Conclusions: We successfully developed an automated endoscopic video analysis system, EndoAdd, which supports retrospective assessment and real-time monitoring. It can be used for data analysis and quality control of endoscopic procedures in clinical practice.
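The step of layering a hidden Markov model over per-frame detector outputs can be sketched with a standard Viterbi decode: self-biased transition probabilities suppress single-frame detector flickers, yielding a temporally coherent instrument label per frame. This is a generic illustration, not EndoAdd's implementation; the transition matrix and state set are assumptions.

```python
import numpy as np

def viterbi(frame_probs, trans, init):
    """HMM smoothing of detector output. frame_probs[t, s] is the
    per-frame confidence that instrument s is visible at frame t;
    trans[i, j] is the transition probability between instruments;
    returns the most likely instrument sequence over the video."""
    T, S = frame_probs.shape
    log_p = np.log(frame_probs + 1e-12)   # emission log-likelihoods
    log_t = np.log(trans + 1e-12)
    score = np.log(init + 1e-12) + log_p[0]
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + log_t     # cand[i, j]: come from i, go to j
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0) + log_p[t]
    # Backtrack from the best final state.
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]
```

With sticky self-transitions (e.g. 0.95 on the diagonal), a single ambiguous frame is overridden by its temporal context, which is exactly what raw per-frame argmax cannot do.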
18 pages, 1620 KiB  
Article
Reference Data for Diagnosis of Spondylolisthesis and Disc Space Narrowing Based on NHANES-II X-rays
by John Hipp, Trevor Grieco, Patrick Newman, Vikas Patel and Charles Reitman
Bioengineering 2024, 11(4), 360; https://doi.org/10.3390/bioengineering11040360 - 8 Apr 2024
Viewed by 1868
Abstract
Robust reference data, representing a large and diverse population, are needed to objectively classify measurements of spondylolisthesis and disc space narrowing as normal or abnormal. The reference data should be open access to drive standardization across technology developers. The large collection of radiographs from the 2nd National Health and Nutrition Examination Survey was used to establish the reference data. A pipeline of neural networks and coded logic placed landmarks on the corners of all vertebrae, and these landmarks were used to calculate multiple disc space metrics. Descriptive statistics for nine spondylolisthesis (SPO) and disc metrics were tabulated and used to identify normal discs, and data from only the normal discs were used to construct the reference values. A spondylolisthesis index was developed that accounts for important variables. These reference data facilitate simplified and standardized reporting of multiple intervertebral disc metrics.
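The core geometric measurement behind a landmark-based slip metric can be sketched as follows: the anteroposterior offset of the superior vertebra's posterior corner, projected onto the inferior endplate and expressed as a percentage of endplate width. This is a minimal illustration only; the paper's spondylolisthesis index accounts for further variables, and the landmark names here are hypothetical.

```python
import numpy as np

def percent_slip(sup_post_inf, inf_post_sup, inf_ant_sup):
    """Percent anteroposterior slip of a vertebra relative to the one
    below, from (x, y) corner landmarks in image coordinates:
      sup_post_inf -- posterior-inferior corner of the superior vertebra
      inf_post_sup -- posterior-superior corner of the inferior vertebra
      inf_ant_sup  -- anterior-superior corner of the inferior vertebra
    """
    endplate = np.asarray(inf_ant_sup, float) - np.asarray(inf_post_sup, float)
    width = np.linalg.norm(endplate)
    offset = np.asarray(sup_post_inf, float) - np.asarray(inf_post_sup, float)
    # Project the offset onto the endplate direction, then normalize
    # by endplate width so the metric is size- and scale-invariant.
    slip = np.dot(offset, endplate / width)
    return 100.0 * slip / width
```

Normalizing by endplate width is what makes such a metric comparable across patients and imaging magnifications, which is a prerequisite for population reference data.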
Other


17 pages, 2935 KiB  
Systematic Review
23Na-MRI for Breast Cancer Diagnosis and Treatment Monitoring: A Scoping Review
by Taylor Smith, Minh Chau, Jordan Sims and Elio Arruzza
Bioengineering 2025, 12(2), 158; https://doi.org/10.3390/bioengineering12020158 - 6 Feb 2025
Viewed by 1034
Abstract
(1) Background: Variations in intracellular and extracellular sodium levels have been hypothesized to serve as biomarkers for tumour characterization and therapeutic response. While previous research has explored the feasibility of 23Na-MRI, a comprehensive review of its clinical utility in breast cancer is lacking. This scoping review aims to synthesize the existing literature on the potential role of 23Na-MRI in breast cancer diagnosis and treatment monitoring. (2) Methods: This review included English-language studies reporting on quantitative applications of 23Na-MRI in breast cancer. Systematic searches were conducted across PubMed, Emcare, Embase, Scopus, Google Scholar, Cochrane Library, and Medline. (3) Results: Seven primary studies met the inclusion criteria, highlighting the ability of 23Na-MRI to differentiate between malignant and benign breast lesions based on elevated total sodium concentration (TSC) in tumour tissues. 23Na-MRI also showed potential for early prediction of treatment response, with significant reductions in TSC observed in responders. However, the studies varied widely in their protocols, use of phantoms, field strengths, and contrast agent application, limiting inter-study comparability. (4) Conclusions: 23Na-MRI holds promise as a complementary imaging modality for breast cancer diagnosis and treatment monitoring. However, standardization of imaging protocols and technical optimization are essential before it can be translated into clinical practice.
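The phantom-based TSC quantification mentioned in the review typically reduces to a linear calibration: reference phantoms of known sodium concentration are imaged alongside the breast, a line is fit to their signals, and tissue signal is mapped back through it. A minimal sketch of that step, assuming signal linearity with concentration (the function name and interface are illustrative, not from any reviewed study):

```python
import numpy as np

def tsc_calibration(phantom_signals, phantom_conc):
    """Fit a linear calibration (signal = a * concentration + b) from
    reference phantoms of known sodium concentration (e.g. in mM), and
    return a function mapping measured 23Na signal to estimated total
    sodium concentration (TSC)."""
    a, b = np.polyfit(phantom_conc, phantom_signals, 1)
    return lambda signal: (np.asarray(signal, float) - b) / a
```

The review's observation that studies differ in phantom use and field strength is precisely why such per-scan calibration (rather than raw signal comparison) matters for inter-study comparability.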
