Search Results (84)

Search Parameters:
Journal = Information
Section = Biomedical Information and Health

22 pages, 625 KiB  
Article
A Procedure to Estimate Dose and Time of Exposure to Ionizing Radiation from the γ-H2AX Assay
by Yilun Cai, Yingjuan Zhang, Hannah Mancey, Stephen Barnard and Jochen Einbeck
Information 2025, 16(8), 672; https://doi.org/10.3390/info16080672 - 6 Aug 2025
Abstract
Accurately estimating the radiation dose received by an individual is essential for evaluating potential damage caused by exposure to ionizing radiation. Most retrospective dosimetry methods require the time since exposure to be known and rely on calibration curves specific to that time point. In this work, we introduce a novel method tailored to the γ-H2AX assay, a protein-based biomarker of radiation exposure, that enables the estimation of both the radiation dose and the time of exposure within a plausible post-exposure interval. Specifically, we extend calibration curves available at two distinct time points by incorporating the biological decay of foci, resulting in a model that captures the joint dependence of the foci count on both dose and time. We demonstrate the applicability of this approach using both real-world and simulated data.
(This article belongs to the Section Biomedical Information and Health)
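
To make the joint dose-time idea concrete, here is a minimal sketch (our own illustration, not the authors' procedure): a linear-quadratic dose response scaled by exponential foci decay, fitted by Poisson maximum likelihood to foci counts from two samples drawn a known interval apart. All coefficients and counts below are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

def mean_foci(dose, time, c0=0.1, alpha=3.0, beta=0.5, decay=0.05):
    """Mean foci per cell: linear-quadratic in dose, exponential decay in time."""
    return c0 + (alpha * dose + beta * dose**2) * np.exp(-decay * time)

def neg_log_lik(params, counts_a, counts_b, delta=24.0):
    """Poisson negative log-likelihood over two samples drawn delta hours apart."""
    dose, t = params
    mu_a, mu_b = mean_foci(dose, t), mean_foci(dose, t + delta)
    return -(np.sum(counts_a * np.log(mu_a) - mu_a)
             + np.sum(counts_b * np.log(mu_b) - mu_b))

rng = np.random.default_rng(0)
counts_a = rng.poisson(mean_foci(2.0, 6.0), size=200)   # first sample, t = 6 h
counts_b = rng.poisson(mean_foci(2.0, 30.0), size=200)  # second sample, 24 h later

fit = minimize(neg_log_lik, x0=[1.0, 1.0], args=(counts_a, counts_b),
               bounds=[(0.01, 10.0), (0.01, 48.0)])
print(f"estimated dose ≈ {fit.x[0]:.2f} Gy, time since exposure ≈ {fit.x[1]:.1f} h")
```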

16 pages, 2784 KiB  
Article
Development of Stacked Neural Networks for Application with OCT Data, to Improve Diabetic Retinal Health Care Management
by Pedro Rebolo, Guilherme Barbosa, Eduardo Carvalho, Bruno Areias, Ana Guerra, Sónia Torres-Costa, Nilza Ramião, Manuel Falcão and Marco Parente
Information 2025, 16(8), 649; https://doi.org/10.3390/info16080649 - 30 Jul 2025
Abstract
Background: Retinal diseases are becoming an important public health issue, with early diagnosis and timely intervention playing a key role in preventing vision loss. Optical coherence tomography (OCT) remains the leading non-invasive imaging technique for identifying retinal conditions. However, distinguishing between diabetic macular edema (DME) and macular edema resulting from retinal vein occlusion (RVO) can be particularly challenging, especially for clinicians without specialized training in retinal disorders, as both conditions manifest through increased retinal thickness. Due to the limited research exploring the application of deep learning methods, particularly for RVO detection using OCT scans, this study proposes a novel diagnostic approach based on stacked convolutional neural networks. This architecture aims to enhance classification accuracy by integrating multiple neural networks, enabling more robust feature extraction and improved differentiation between retinal pathologies. Methods: The VGG-16, VGG-19, and ResNet50 models were fine-tuned on the Kermany dataset to classify OCT images and were afterwards trained on a private OCT dataset. Four stacked models were then developed from these networks: VGG-16 with VGG-19, VGG-16 with ResNet50, VGG-19 with ResNet50, and finally all three networks together. The models' performance metrics include accuracy, precision, recall, F2-score, and area under the receiver operating characteristic curve (AUROC). Results: The stacked neural network using all three models achieved the best results, with an accuracy of 90.7%, a precision of 99.2%, a recall of 90.7%, and an F2-score of 92.3%. Conclusions: This study presents a novel method for distinguishing retinal diseases using stacked neural networks and aims to provide a reliable tool to help ophthalmologists improve diagnostic accuracy and speed.
(This article belongs to the Special Issue AI-Based Biomedical Signal Processing)
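
A rough PyTorch sketch of this kind of stacking, under our own assumptions (three classes, frozen fine-tuned backbones, a small meta-classifier over concatenated softmax outputs); it is not the paper's exact architecture.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 3  # e.g. DME, RVO-related edema, normal (assumed label set)

def backbone(arch):
    net = arch(weights=None)   # in practice: load the fine-tuned weights here
    if hasattr(net, "fc"):     # ResNet-style head
        net.fc = nn.Linear(net.fc.in_features, NUM_CLASSES)
    else:                      # VGG-style head
        net.classifier[6] = nn.Linear(4096, NUM_CLASSES)
    for p in net.parameters():
        p.requires_grad = False  # freeze the base learners
    return net.eval()

class StackedOCT(nn.Module):
    def __init__(self):
        super().__init__()
        self.bases = nn.ModuleList([
            backbone(models.vgg16), backbone(models.vgg19),
            backbone(models.resnet50),
        ])
        # Trainable meta-learner over the concatenated class probabilities.
        self.meta = nn.Sequential(
            nn.Linear(3 * NUM_CLASSES, 32), nn.ReLU(),
            nn.Linear(32, NUM_CLASSES),
        )

    def forward(self, x):
        probs = [net(x).softmax(dim=1) for net in self.bases]
        return self.meta(torch.cat(probs, dim=1))

logits = StackedOCT()(torch.randn(2, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 3])
```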

27 pages, 19258 KiB  
Article
A Lightweight Multi-Frequency Feature Fusion Network with Efficient Attention for Breast Tumor Classification in Pathology Images
by Hailong Chen, Qingqing Song and Guantong Chen
Information 2025, 16(7), 579; https://doi.org/10.3390/info16070579 - 6 Jul 2025
Abstract
The intricate and complex tumor cell morphology in breast pathology images is a key factor for tumor classification. This paper proposes a lightweight breast tumor classification model with multi-frequency feature fusion (LMFM) to tackle the problem of inadequate feature extraction and poor classification performance. The LMFM utilizes wavelet transform (WT) for multi-frequency feature fusion, integrating high-frequency (HF) tumor details with high-level semantic features to enhance feature representation. The network’s ability to extract irregular tumor characteristics is further reinforced by dynamic adaptive deformable convolution (DADC). The introduction of the token-based Region Focus Module (TRFM) reduces interference from irrelevant background information. At the same time, the incorporation of a linear attention (LA) mechanism lowers the model’s computational complexity and further enhances its global feature extraction capability. The experimental results demonstrate that the proposed model achieves classification accuracies of 98.23% and 97.81% on the BreaKHis and BACH datasets, with only 9.66 M parameters.
(This article belongs to the Section Biomedical Information and Health)
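
As a hedged illustration of wavelet-based multi-frequency fusion (the general WT idea, not the LMFM network itself), one can split an image into one low-frequency and three high-frequency sub-bands and stack them as extra feature channels:

```python
import numpy as np
import pywt

def wavelet_channels(img: np.ndarray) -> np.ndarray:
    """Return a (4, H/2, W/2) stack: approximation plus 3 detail sub-bands."""
    cA, (cH, cV, cD) = pywt.dwt2(img, "haar")   # single-level 2D DWT
    return np.stack([cA, cH, cV, cD])

patch = np.random.rand(224, 224)   # stand-in for a pathology image patch
feats = wavelet_channels(patch)
print(feats.shape)                 # (4, 112, 112)
```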

25 pages, 1863 KiB  
Review
Deep Learning Segmentation Techniques for Atherosclerotic Plaque on Ultrasound Imaging: A Systematic Review
by Laura De Rosa, Serena L’Abbate, Eduarda Mota da Silva, Mauro Andretta, Elisabetta Bianchini, Vincenzo Gemignani, Claudia Kusmic and Francesco Faita
Information 2025, 16(6), 491; https://doi.org/10.3390/info16060491 - 13 Jun 2025
Abstract
Background: Atherosclerotic disease is the leading global cause of death, driven by progressive plaque accumulation in the arteries. Ultrasound (US) imaging, both conventional (CUS) and intravascular (IVUS), is crucial for the non-invasive assessment of atherosclerotic plaques. Deep learning (DL) techniques have recently gained attention as tools to improve the accuracy and efficiency of image analysis in this domain. This paper reviews recent advancements in DL-based methods for the segmentation, classification, and quantification of atherosclerotic plaques in US imaging, focusing on their performance, clinical relevance, and translational challenges. Methods: A systematic literature search was conducted in the PubMed, Scopus, and Web of Science databases, following PRISMA guidelines. The review included peer-reviewed original articles published up to 31 January 2025 that applied DL models for plaque segmentation, characterization, and/or quantification in US images. Results: A total of 53 studies were included, with 72% focusing on carotid CUS and 28% on coronary IVUS. DL architectures such as UNet and attention-based networks were commonly used, achieving high segmentation accuracy with average Dice similarity coefficients of around 84%. Many models provided reliable quantitative outputs (such as total plaque area, plaque burden, and stenosis severity index), with correlation coefficients often exceeding R = 0.9 compared to manual annotations. Limitations included the scarcity of large, annotated, publicly available datasets; the lack of external validation; and the limited availability of open-source code. Conclusions: DL-based approaches show considerable promise for advancing atherosclerotic plaque analysis in US imaging. To facilitate broader clinical adoption, future research should prioritize methodological standardization, external validation, data and code sharing, and the integration of 3D US technologies.
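
Since the review's headline segmentation metric is the Dice similarity coefficient, a reference implementation may be useful; the masks below are synthetic stand-ins, not data from any reviewed study.

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0   # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(pred, truth).sum() / denom

a = np.zeros((64, 64)); a[16:48, 16:48] = 1   # hypothetical plaque masks
b = np.zeros((64, 64)); b[20:52, 16:48] = 1
print(f"Dice = {dice(a, b):.3f}")
```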

31 pages, 550 KiB  
Review
Advances in Application of Federated Machine Learning for Oncology and Cancer Diagnosis
by Mohammad Nasajpour, Seyedamin Pouriyeh, Reza M. Parizi, Meng Han, Fatemeh Mosaiyebzadeh, Yixin Xie, Liyuan Liu and Daniel Macêdo Batista
Information 2025, 16(6), 487; https://doi.org/10.3390/info16060487 - 12 Jun 2025
Abstract
Machine learning has brought about a revolutionary transformation in healthcare. It has traditionally been employed to create predictive models through training on locally available data. However, privacy concerns can impede the collection and integration of data from diverse sources, and a lack of sufficient data may in turn hinder the construction of accurate models, limiting the ability to produce meaningful outcomes. In the field of healthcare especially, collecting datasets centrally is challenging due to privacy concerns. Federated learning (FL) is a distributed machine learning approach designed for exactly such scenarios. It allows multiple devices hosted at different institutions, such as hospitals, to collaboratively train a global model without sharing raw data, and each device retains its data securely on local storage, addressing the challenges of time-consuming annotation and privacy concerns. In this paper, we conducted a comprehensive literature review aimed at identifying the most advanced federated learning applications in cancer research and clinical oncology analysis. Our main goal was to present a comprehensive overview of the development of federated learning in the field of oncology. Additionally, we discuss the challenges and future research directions.
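
For readers unfamiliar with FL mechanics, here is a toy FedAvg round, the canonical aggregation rule: each site trains on its own private data, and the server averages the parameters weighted by local dataset size. The model and data are stand-ins, not taken from any study in this review.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """A few epochs of logistic-regression SGD on one site's private data."""
    w = weights.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))      # predicted probabilities
        w -= lr * X.T @ (p - y) / len(y)      # gradient step
    return w

def fedavg(weights, sites):
    """One communication round: local training, then size-weighted averaging."""
    updates = [local_update(weights, X, y) for X, y in sites]
    sizes = np.array([len(y) for _, y in sites], dtype=float)
    return np.average(updates, axis=0, weights=sizes)

rng = np.random.default_rng(1)
sites = [(rng.normal(size=(50, 4)), rng.integers(0, 2, 50)) for _ in range(3)]
w = np.zeros(4)
for _ in range(10):   # ten communication rounds
    w = fedavg(w, sites)
print(w)
```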

14 pages, 1324 KiB  
Article
Preprocessing of Physician Notes by LLMs Improves Clinical Concept Extraction Without Information Loss
by Daniel B. Hier, Michael A. Carrithers, Steven K. Platt, Anh Nguyen, Ioannis Giannopoulos and Tayo Obafemi-Ajayi
Information 2025, 16(6), 446; https://doi.org/10.3390/info16060446 - 27 May 2025
Abstract
Clinician notes are a rich source of patient information, but often contain inconsistencies due to varied writing styles, abbreviations, medical jargon, grammatical errors, and non-standard formatting. These inconsistencies hinder their direct use in patient care and degrade the performance of downstream computational applications that rely on these notes as input, such as quality improvement, population health analytics, precision medicine, clinical decision support, and research. We present a large-language-model (LLM) approach to the preprocessing of 1618 neurology notes. The LLM corrected spelling and grammatical errors, expanded acronyms, and standardized terminology and formatting, without altering clinical content. Expert review of randomly sampled notes confirmed that no significant information was lost. To evaluate downstream impact, we applied an ontology-based NLP pipeline (Doc2Hpo) to extract biomedical concepts from the notes before and after editing. F1 scores for Human Phenotype Ontology extraction improved from 0.40 to 0.61, confirming our hypothesis that better inputs yielded better outputs. We conclude that LLM-based preprocessing is an effective error correction strategy that improves data quality at the level of free text in clinical notes. This approach may enhance the performance of a broad class of downstream applications that derive their input from unstructured clinical documentation.
(This article belongs to the Special Issue Biomedical Natural Language Processing and Text Mining)
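
A sketch of what such a preprocessing step can look like: the prompt wording is our assumption (not the authors' exact prompt), `complete` stands in for whatever chat-completion client is available, and the F1 helper mirrors the before/after concept-extraction comparison.

```python
# A hypothetical normalization prompt; `complete(prompt) -> str` is a
# placeholder for any chat-completion client, not a specific API.
PREPROCESS_PROMPT = (
    "Correct spelling and grammar, expand acronyms, and standardize "
    "terminology and formatting in the clinical note below. Do not add, "
    "remove, or alter any clinical information.\n\nNOTE:\n{note}"
)

def preprocess_note(note: str, complete) -> str:
    """Run one note through the LLM without changing clinical content."""
    return complete(PREPROCESS_PROMPT.format(note=note))

def f1(predicted: set, gold: set) -> float:
    """Set-level F1 between extracted and expert-annotated concept IDs."""
    if not predicted or not gold:
        return 0.0
    tp = len(predicted & gold)
    precision, recall = tp / len(predicted), tp / len(gold)
    return 2 * precision * recall / (precision + recall) if tp else 0.0

# Toy HPO-style ID sets illustrating the before/after comparison.
gold = {"HP:0001250", "HP:0002311"}
print(f1({"HP:0001250"}, gold))                # raw note: partial extraction
print(f1({"HP:0001250", "HP:0002311"}, gold))  # cleaned note: full extraction
```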

49 pages, 2038 KiB  
Review
A Review of Non-Fully Supervised Deep Learning for Medical Image Segmentation
by Xinyue Zhang, Jianfeng Wang, Jinqiao Wei, Xinyu Yuan and Ming Wu
Information 2025, 16(6), 433; https://doi.org/10.3390/info16060433 - 24 May 2025
Abstract
Medical image segmentation, a critical task in medical image analysis, aims to precisely delineate regions of interest (ROIs) such as organs, lesions, and cells, and is crucial for applications including computer-aided diagnosis, surgical planning, radiation therapy, and pathological analysis. While fully supervised deep learning methods have demonstrated remarkable performance in this domain, their reliance on large-scale, pixel-level annotated datasets—a significant label scarcity challenge—severely hinders their widespread deployment in clinical settings. Addressing this limitation, this review focuses on non-fully supervised learning paradigms, systematically investigating the application of semi-supervised, weakly supervised, and unsupervised learning techniques for medical image segmentation. We delve into the theoretical foundations, core advantages, typical application scenarios, and representative algorithmic implementations associated with each paradigm. Furthermore, this paper compiles and critically reviews commonly utilized benchmark datasets within the field. Finally, we discuss future research directions and challenges, offering insights for advancing the field and reducing dependence on extensive annotation.
(This article belongs to the Section Biomedical Information and Health)
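
One widely used recipe from the semi-supervised family is pseudo-labeling: train on the small labeled set, keep confident predictions on unlabeled data as extra labels, and retrain. A generic sklearn sketch follows, purely illustrative and not taken from any reviewed paper.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

def pseudo_label_round(X_lab, y_lab, X_unlab, threshold=0.95):
    """One pseudo-labeling round: confident predictions become extra labels."""
    clf = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)
    proba = clf.predict_proba(X_unlab)
    keep = proba.max(axis=1) >= threshold           # only confident samples
    X_new = np.vstack([X_lab, X_unlab[keep]])
    y_new = np.concatenate([y_lab, proba[keep].argmax(axis=1)])
    return LogisticRegression(max_iter=1000).fit(X_new, y_new)

X, y = make_classification(n_samples=200, random_state=0)
clf = pseudo_label_round(X[:20], y[:20], X[20:])    # 10% labeled, 90% unlabeled
print(f"accuracy on the unlabeled pool: {clf.score(X[20:], y[20:]):.2f}")
```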

13 pages, 3105 KiB  
Article
AI-Based Detection of Optical Microscopic Images of Pseudomonas aeruginosa in Planktonic and Biofilm States
by Bidisha Sengupta, Mousa Alrubayan, Manideep Kolla, Yibin Wang, Esther Mallet, Angel Torres, Ravyn Solis, Haifeng Wang and Prabhakar Pradhan
Information 2025, 16(4), 309; https://doi.org/10.3390/info16040309 - 14 Apr 2025
Abstract
Biofilms are resistant microbial cell aggregates that pose risks to the health and food industries and cause environmental contamination. The accurate and efficient detection and prevention of biofilms are challenging and demand interdisciplinary approaches. This multidisciplinary research reports the application of a deep learning-based artificial intelligence (AI) model for detecting biofilms produced by Pseudomonas aeruginosa with high accuracy. Aptamer DNA-templated silver nanoclusters (Ag-NCs) were used to prevent biofilm formation, which yielded images of the bacteria in the planktonic state. Large-volume bright-field images of bacterial biofilms were used to design the AI model. In particular, we used a U-Net with a ResNet encoder to segment the biofilm images for AI analysis. Different degrees of biofilm structure can be efficiently detected using ResNet18 and ResNet34 backbones. The potential applications of this technique are also discussed.
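
One common way to assemble the U-Net-with-ResNet-encoder combination described here is via the segmentation_models_pytorch package; the channel and class counts below are our assumptions for a single-class biofilm mask, not the paper's exact configuration.

```python
import torch
import segmentation_models_pytorch as smp

model = smp.Unet(
    encoder_name="resnet18",      # the paper also evaluates a resnet34 backbone
    encoder_weights="imagenet",   # pretrained encoder
    in_channels=3,                # bright-field RGB micrograph (assumed)
    classes=1,                    # biofilm vs. background mask
)
mask_logits = model(torch.randn(1, 3, 256, 256))
print(mask_logits.shape)          # torch.Size([1, 1, 256, 256])
```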

28 pages, 4137 KiB  
Article
Epidemic Modeling in Satellite Towns and Interconnected Cities: Data-Driven Simulation and Real-World Lockdown Validation
by Rafaella S. Ferreira, Wallace Casaca, João F. C. A. Meyer, Marilaine Colnago, Mauricio A. Dias and Rogério G. Negri
Information 2025, 16(4), 299; https://doi.org/10.3390/info16040299 - 8 Apr 2025
Abstract
Understanding the effectiveness of different quarantine strategies is crucial for controlling the spread of COVID-19, particularly in regions with limited data. This study presents a SCIRD-inspired model to simulate the transmission dynamics of COVID-19 in medium-sized cities and their surrounding satellite towns. Unlike previous works that focus primarily on large urban centers or homogeneous populations, our approach incorporates intercity mobility and evaluates the impact of spatially differentiated interventions. By analyzing lockdown strategies implemented during the first year of the pandemic, we demonstrate that short, localized lockdowns are highly effective in reducing virus propagation, while intermittent restrictions balance public health concerns with socioeconomic demands. A key contribution of this study is the validation of the epidemic model using real-world data from the 2021 lockdown that occurred in a medium-sized city, confirming its predictive accuracy and adaptability to different contexts. Additionally, we provide a detailed analysis of how mobility patterns between municipalities influence infection spread, offering a more comprehensive mathematical framework for decision-making. These findings advance the understanding of epidemic control in regions with sparse data and provide evidence-based insights to inform public health policies in similar contexts.
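
A toy compartmental sketch in the SCIRD spirit, reading the acronym as Susceptible, Confined, Infected, Recovered, Dead (our assumption) for a single city; the paper's model additionally couples cities through mobility terms, and all rates here are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

def scird(t, y, beta=0.4, q=0.05, gamma=0.1, mu=0.01):
    S, C, I, R, D = y
    N = S + C + I + R
    lockdown = q if 30 <= t <= 60 else 0.0    # short localized lockdown window
    dS = -beta * S * I / N - lockdown * S
    dC = lockdown * S                          # confined: removed from contact
    dI = beta * S * I / N - (gamma + mu) * I
    dR = gamma * I
    dD = mu * I
    return [dS, dC, dI, dR, dD]

sol = solve_ivp(scird, (0, 180), [99_000, 0, 1_000, 0, 0],
                t_eval=np.linspace(0, 180, 361))
print(f"peak infected ≈ {sol.y[2].max():,.0f}")
```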

15 pages, 2011 KiB  
Article
A Lightweight Neural Network for Cell Segmentation Based on Attention Enhancement
by Shuang Xia, Qian Sun, Yiheng Zhou, Zhaoyuxuan Wang, Chaoxing You, Kainan Ma and Ming Liu
Information 2025, 16(4), 295; https://doi.org/10.3390/info16040295 - 8 Apr 2025
Abstract
Deep neural networks have made significant strides in medical image segmentation tasks, but their large-scale parameters and high computational complexity limit their applicability on resource-constrained edge devices. To address this challenge, this paper introduces a lightweight nuclear segmentation network called Attention-Enhanced U-Net (AttE-Unet) for cell segmentation. AttE-Unet enhances the network’s feature extraction capabilities through an attention mechanism and combines the strengths of deep learning with traditional image filtering algorithms, while substantially reducing computational and storage demands. Experimental results on the PanNuke dataset demonstrate that AttE-Unet, despite a substantial reduction in model size—with the number of parameters and floating-point operations reduced to 1.57% and 0.1% of the original model’s, respectively—still maintains a high level of segmentation performance. Specifically, its F1 score and Intersection over Union (IoU) are 91.7% and 89.3% of the original model’s scores. Furthermore, deployment on a microcontroller (MCU) consumes only 2.09 MB of Flash and 1.38 MB of RAM, highlighting the model’s lightweight nature and its potential for practical deployment as a medical image segmentation solution on edge devices.
(This article belongs to the Special Issue Disease Diagnosis Based on Medical Images and Signals)
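
As a hedged illustration of the kind of lightweight attention such models rely on, here is a generic squeeze-and-excitation channel gate, not the exact AttE-Unet module; it reweights feature maps with only a handful of extra parameters.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                             # squeeze: global context
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.gate(x)                                  # reweight feature maps

x = torch.randn(1, 16, 64, 64)
print(ChannelAttention(16)(x).shape)   # torch.Size([1, 16, 64, 64])
```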

12 pages, 3677 KiB  
Article
Study on Radiation Protection Educational Tool Using Real-Time Scattering Radiation Distribution Calculation Method with Ray Tracing Technology
by Toshioh Fujibuchi
Information 2025, 16(4), 266; https://doi.org/10.3390/info16040266 - 26 Mar 2025
Abstract
In this study, we developed an application for radiation protection that calculates in real time the distribution of scattered radiation during fluoroscopy using ray tracing technology, assuming that most of the scattered radiation in the room originates from the patient and that this patient-derived scatter travels in straight lines. The directional vectors and energy information for the scattered radiation spreading from the patient’s body surface to the outside of the body were obtained via simulation in a virtual X-ray fluoroscopy room. Based on this information, the scattered dose distribution in the X-ray room was calculated. The ratio of the scattered doses calculated by this method to those obtained from a Monte Carlo simulation was mostly within the range of 0.7 to 1.8, except behind the X-ray machine. The scattered radiation distribution changed smoothly as the radiation protective plates were moved. When using protection plates with a high degree of freedom in their placement, it is not practical to measure the scattered radiation distribution each time. This application cannot be used for dose estimation for medical staff in clinical settings, because it does not take into account scattered radiation from sources other than the patient and its dose calculation accuracy is low. However, simple confirmation of the scattered radiation distribution and of changes in staff dose led to an intuitive understanding of the appropriate placement of the protection plates.
(This article belongs to the Special Issue Medical Data Visualization)
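
A bare-bones sketch of the stated assumption (patient-origin scatter traveling in straight lines): treat surface points as sources, apply inverse-square falloff along each ray, and attenuate rays that cross a protective plate. The geometry and transmission factor are invented for illustration.

```python
import numpy as np

# Scatter sources sampled on the patient surface (metres, 2D top view).
sources = np.array([[0.0, 0.0], [0.2, 0.1], [-0.2, 0.1]])
PLATE_X, PLATE_TRANSMISSION = 0.8, 0.1   # plate plane at x = 0.8 m, 10% passes

def dose_at(point):
    total = 0.0
    for s in sources:
        r2 = np.sum((point - s) ** 2)                 # inverse-square falloff
        crosses = min(s[0], point[0]) < PLATE_X < max(s[0], point[0])
        total += (PLATE_TRANSMISSION if crosses else 1.0) / r2
    return total

grid = [np.array([x, 1.0]) for x in np.linspace(0.2, 1.6, 8)]
print([round(dose_at(p), 2) for p in grid])
```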

16 pages, 1263 KiB  
Article
Identifying Heart Attack Risk in Vulnerable Population: A Machine Learning Approach
by Subhagata Chattopadhyay and Amit K Chattopadhyay
Information 2025, 16(4), 265; https://doi.org/10.3390/info16040265 - 26 Mar 2025
Abstract
The COVID-19 pandemic has significantly increased the incidence of post-infection cardiovascular events, particularly myocardial infarction, in individuals over 40. While the underlying mechanisms remain elusive, this study employs a hybrid machine learning approach to analyze epidemiological data, assessing 13 key heart attack risk factors and individuals’ susceptibility to them. Based on a unique dataset that combines demographic, biochemical, ECG, and thallium stress test data, this study aims to design, develop, and deploy a clinical decision support system. Assimilating outcomes from five clustering techniques applied to the ‘Kaggle heart attack risk’ dataset, the study identifies distinct subpopulations with varying risk profiles and then divides the population into ‘at-risk’ (AR) and ‘not-at-risk’ (NAR) groups. The GMM algorithm outperforms its competitors (with clustering accuracy and Silhouette coefficient scores of 84.24% and 0.2623, respectively). Subsequent analyses, employing Pearson correlation and linear regression as descriptors, reveal strong, statistically significant (p < 0.05) associations between the likelihood of experiencing a heart attack and the 13 risk factors studied. Our findings provide valuable insights into the development of targeted risk stratification and preventive strategies for high-risk individuals based on heart attack risk scores. The aggravated risk for postmenopausal patients indicates individual risk factors compromised by estrogen depletion, which may be further exacerbated by extraneous stressors such as anxiety and fear, aspects that have traditionally eluded data modeling predictions. The model can be repurposed to analyze the impact of COVID-19 on vulnerable populations.
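
The clustering step can be reproduced in miniature with scikit-learn: fit a Gaussian mixture to risk-factor vectors and score the partition with the silhouette coefficient. The data below are synthetic stand-ins for the 13 risk factors, not the Kaggle dataset.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0, 1, (100, 13)),    # "not-at-risk"-like group
               rng.normal(2, 1, (100, 13))])   # "at-risk"-like group

gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
labels = gmm.predict(X)
print(f"silhouette = {silhouette_score(X, labels):.4f}")
```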

27 pages, 2569 KiB  
Article
Cognitive Handwriting Insights for Alzheimer’s Diagnosis: A Hybrid Framework
by Shafiq Ul Rehman and Uddalak Mitra
Information 2025, 16(3), 249; https://doi.org/10.3390/info16030249 - 20 Mar 2025
Abstract
Alzheimer’s disease (AD) is a persistent neurologic disorder that has no cure. For a successful treatment to be implemented, it is essential to diagnose AD at an early stage, which may occur up to eight years before dementia manifests. In this regard, a new predictive machine learning model is proposed that works in two stages and takes advantage of both unsupervised and supervised learning approaches to provide a fast, affordable, yet accurate solution. The first stage involves fuzzy partitioning of a gold-standard dataset, DARWIN (Diagnosis AlzheimeR WIth haNdwriting), which consists of clinical features and is designed to detect Alzheimer’s disease through handwriting analysis. To determine the optimal number of clusters, four Clustering Validity Indices (CVIs) were averaged, yielding what we refer to as cognitive features. In the second stage, a predictive model is constructed exclusively from these cognitive features. In comparison to models relying on datasets of clinical attributes, models incorporating cognitive features showed substantial performance enhancements, ranging from 12% to 26%. Our proposed model surpassed all current state-of-the-art models, achieving a mean accuracy of 99%, mean sensitivity of 98%, mean specificity of 100%, mean precision of 100%, mean MCC and Cohen’s Kappa of 98%, and a mean AUC-ROC score of 99%. Hence, integrating the output of unsupervised learning into supervised machine learning models significantly improved their performance. In crafting early interventions for individuals at heightened risk of disease onset, our prognostic framework can aid in both the recruitment and advancement of clinical trials.
(This article belongs to the Special Issue Detection and Modelling of Biosignals)
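
The stage-one idea, choosing the cluster count by averaging several normalized validity indices, can be sketched as follows; we substitute k-means and three sklearn CVIs for the paper's fuzzy partitioning and four indices, so treat this as an analogy rather than a reproduction.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import (silhouette_score, calinski_harabasz_score,
                             davies_bouldin_score)

X, _ = make_blobs(n_samples=300, centers=3, random_state=0)
candidates = range(2, 7)
scores = []
for k in candidates:
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    scores.append([silhouette_score(X, labels),
                   calinski_harabasz_score(X, labels),
                   1.0 / davies_bouldin_score(X, labels)])  # invert: lower is better
scores = np.array(scores)
scores /= scores.max(axis=0)                 # normalize each index to [0, 1]
best_k = list(candidates)[scores.mean(axis=1).argmax()]
print(f"optimal number of clusters ≈ {best_k}")
```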

15 pages, 288 KiB  
Article
LLMs in Action: Robust Metrics for Evaluating Automated Ontology Annotation Systems
by Ali Noori, Pratik Devkota, Somya D. Mohanty and Prashanti Manda
Information 2025, 16(3), 225; https://doi.org/10.3390/info16030225 - 14 Mar 2025
Abstract
Ontologies are critical for organizing and interpreting complex domain-specific knowledge, with applications in data integration, functional prediction, and knowledge discovery. As the manual curation of ontology annotations becomes increasingly infeasible due to the exponential growth of biomedical and genomic data, natural language processing (NLP)-based systems have emerged as scalable alternatives. Evaluating these systems requires robust semantic similarity metrics that account for hierarchical and partially correct relationships often present in ontology annotations. This study explores the integration of graph-based and language-based embeddings to enhance the performance of semantic similarity metrics. Combining embeddings generated via Node2Vec and large language models (LLMs) with traditional semantic similarity metrics, we demonstrate that hybrid approaches effectively capture both structural and semantic relationships within ontologies. Our results show that combined similarity metrics outperform individual metrics, achieving high accuracy in distinguishing child–parent pairs from random pairs. This work underscores the importance of robust semantic similarity metrics for evaluating and optimizing NLP-based ontology annotation systems. Future research should explore the real-time integration of these metrics and advanced neural architectures to further enhance scalability and accuracy, advancing ontology-driven analyses in biomedical research and beyond.
(This article belongs to the Special Issue Biomedical Natural Language Processing and Text Mining)
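
At its core, the hybrid-metric idea reduces to blending two cosine similarities, one from a graph embedding and one from a text embedding. The embedding dictionaries below are random placeholders for Node2Vec and LLM outputs, and the HPO-style IDs are only examples.

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def combined_similarity(t1, t2, graph_emb, text_emb, w=0.5):
    """Weighted blend of structural (graph) and semantic (text) similarity."""
    return (w * cosine(graph_emb[t1], graph_emb[t2])
            + (1 - w) * cosine(text_emb[t1], text_emb[t2]))

rng = np.random.default_rng(0)
terms = ("HP:0000001", "HP:0000118")                    # example ontology IDs
graph_emb = {t: rng.normal(size=64) for t in terms}     # Node2Vec stand-in
text_emb = {t: rng.normal(size=384) for t in terms}     # LLM embedding stand-in
print(f"{combined_similarity(*terms, graph_emb, text_emb):.3f}")
```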

16 pages, 6070 KiB  
Article
MRF-Mixer: A Simulation-Based Deep Learning Framework for Accelerated and Accurate Magnetic Resonance Fingerprinting Reconstruction
by Tianyi Ding, Yang Gao, Zhuang Xiong, Feng Liu, Martijn A. Cloos and Hongfu Sun
Information 2025, 16(3), 218; https://doi.org/10.3390/info16030218 - 11 Mar 2025
Cited by 2
Abstract
MRF-Mixer is a novel deep learning method for magnetic resonance fingerprinting (MRF) reconstruction, offering 200× faster processing (0.35 s on CPU and 0.3 ms on GPU) and 40% higher accuracy (lower MAE) than dictionary matching. It develops a simulation-driven approach using complex-valued multi-layer perceptrons and convolutional neural networks to efficiently process MRF data, enabling generalization across sequence and acquisition parameters and eliminating the need for extensive in vivo training data. Evaluation on simulated and in vivo data showed that MRF-Mixer outperforms dictionary matching and existing deep learning methods for T1 and T2 mapping. In six-shot simulations, it achieved the highest PSNR (T1: 33.48, T2: 35.9) and SSIM (T1: 0.98, T2: 0.98) and the lowest MAE (T1: 28.8, T2: 4.97) and RMSE (T1: 72.9, T2: 13.67). In vivo results further demonstrate that single-shot reconstructions using MRF-Mixer matched the quality of multi-shot acquisitions, highlighting its potential to reduce scan times. These findings suggest that MRF-Mixer enables faster, more accurate multiparametric tissue mapping, substantially improving quantitative MRI for clinical applications by reducing acquisition time while maintaining imaging quality.
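
A sketch of the complex-valued dense layer such simulation-trained MRF networks build on: the complex product (W_r + iW_i)(x_r + ix_i) implemented with two real linear maps. Sizes are illustrative, not the MRF-Mixer configuration.

```python
import torch
import torch.nn as nn

class ComplexLinear(nn.Module):
    def __init__(self, n_in, n_out):
        super().__init__()
        self.wr = nn.Linear(n_in, n_out)   # real part of the weights
        self.wi = nn.Linear(n_in, n_out)   # imaginary part of the weights

    def forward(self, z):
        re, im = z.real, z.imag
        # (Wr + iWi)(re + i im) = (Wr re - Wi im) + i (Wr im + Wi re)
        return torch.complex(self.wr(re) - self.wi(im),
                             self.wr(im) + self.wi(re))

# A fingerprint of 500 complex time points mapped to a 64-unit hidden layer.
fingerprint = torch.randn(1, 500, dtype=torch.cfloat)
out = ComplexLinear(500, 64)(fingerprint)
print(out.shape, out.dtype)   # torch.Size([1, 64]) torch.complex64
```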
