Search Results (5,841)

Search Parameters:
Journal = Bioengineering

17 pages, 3130 KB  
Article
ColiFormer: A Transformer-Based Codon Optimization Model Balancing Multiple Objectives for Enhanced E. coli Gene Expression
by Saketh Baddam, Omar Emam, Abdelrahman Elfikky, Francesco Cavarretta, George Luka, Ibrahim Farag and Yasser Sanad
Bioengineering 2026, 13(1), 114; https://doi.org/10.3390/bioengineering13010114 (registering DOI) - 19 Jan 2026
Abstract
Codon optimization is widely used to improve heterologous gene expression in Escherichia coli. However, many existing methods focus primarily on maximizing the codon adaptation index (CAI) and neglect broader aspects of biological context. In this study, we present ColiFormer, a transformer-based codon optimization framework fine-tuned on 3676 high-expression E. coli genes curated from the NCBI database. Built on the CodonTransformer BigBird architecture, ColiFormer employs self-attention mechanisms and a mathematical optimization method (the augmented Lagrangian approach) to balance multiple biological objectives simultaneously, including CAI, GC content, tRNA adaptation index (tAI), RNA stability, and minimization of negative cis-regulatory elements. Based on in silico evaluations on 37,053 native E. coli genes and 80 recombinant protein targets commonly used in industrial studies, ColiFormer demonstrated significant improvements in CAI and tAI values, maintained GC content within biologically optimal ranges, and reduced inhibitory cis-regulatory motifs compared with established codon optimization approaches, while maintaining competitive runtime performance. These results represent computational predictions derived from standard in silico metrics; future experimental work is anticipated to validate these computational predictions in vivo. ColiFormer has been released as an open-source tool alongside the benchmark datasets used in this study. Full article
(This article belongs to the Section Biochemical Engineering)
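Two of the objectives ColiFormer balances, CAI and GC content, reduce to simple formulas: CAI is the geometric mean of per-codon relative-adaptiveness weights, and GC content is a base-count fraction. A minimal sketch follows; the codon weights below are invented for illustration, not the E. coli reference weights derived in the paper.

```python
import math

# Hypothetical relative-adaptiveness (w) values for a few codons; real
# weights come from a reference set of highly expressed E. coli genes.
WEIGHTS = {"CTG": 1.0, "CTA": 0.1, "GAA": 1.0, "GAG": 0.5}

def cai(codons):
    """Codon Adaptation Index: geometric mean of per-codon weights."""
    logs = [math.log(WEIGHTS[c]) for c in codons]
    return math.exp(sum(logs) / len(logs))

def gc_content(seq):
    """Fraction of G and C bases in a nucleotide string."""
    return sum(seq.count(b) for b in "GC") / len(seq)

seq = ["CTG", "GAA", "GAG"]
score = cai(seq)               # geometric mean of 1.0, 1.0, 0.5
gc = gc_content("".join(seq))  # G/C fraction of "CTGGAAGAG"
```

A multi-objective optimizer of the kind described would trade such terms off against tAI, RNA stability, and motif penalties under constraints.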

18 pages, 840 KB  
Article
Large Language Models Evaluation of Medical Licensing Examination Using GPT-4.0, ERNIE Bot 4.0, and GPT-4o
by Luoyu Lian, Xin Luo, Kavimbi Chipusu, Muhammad Awais Ashraf, Kelvin K. L. Wong and Wenjun Zhang
Bioengineering 2026, 13(1), 113; https://doi.org/10.3390/bioengineering13010113 (registering DOI) - 17 Jan 2026
Abstract
This study systematically evaluated the performance of three advanced large language models (LLMs)—GPT-4.0, ERNIE Bot 4.0, and GPT-4o—in the 2023 Chinese Medical Licensing Examination. Employing a dataset of 600 standardized questions, we analyzed the accuracy of each model in answering questions from three comprehensive sections: Basic Medical Comprehensive, Clinical Medical Comprehensive, and Humanities and Preventive Medicine Comprehensive. Our results demonstrate that both ERNIE Bot 4.0 and GPT-4o significantly outperformed GPT-4.0, achieving accuracies above the national pass mark. The study further examined the strengths and limitations of each model, providing insights into their applicability in medical education and potential areas for future improvement. These findings underscore the promise and challenges of deploying LLMs in multilingual medical education, suggesting a pathway towards integrating AI into medical training and assessment practices. Full article
(This article belongs to the Special Issue New Sights of Data Analysis and Digital Model in Biomedicine)
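The per-section accuracy analysis described above amounts to straightforward tallies over the 600 questions. A hedged sketch with invented counts (the models' actual scores and the real pass mark are not reproduced here):

```python
# Hypothetical (correct, total) tallies per exam section; illustrative only.
sections = {
    "Basic Medical Comprehensive": (55, 100),
    "Clinical Medical Comprehensive": (280, 400),
    "Humanities and Preventive Medicine Comprehensive": (70, 100),
}
PASS_MARK = 0.60  # assumed threshold for illustration

correct = sum(c for c, _ in sections.values())
total = sum(t for _, t in sections.values())
overall = correct / total                                  # 405 / 600
per_section = {name: c / t for name, (c, t) in sections.items()}
passed = overall >= PASS_MARK
```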

14 pages, 250 KB  
Article
Exploring an AI-First Healthcare System
by Ali Gates, Asif Ali, Scott Conard and Patrick Dunn
Bioengineering 2026, 13(1), 112; https://doi.org/10.3390/bioengineering13010112 (registering DOI) - 17 Jan 2026
Abstract
Artificial intelligence (AI) is now embedded across many aspects of healthcare, yet most implementations remain fragmented, task-specific, and layered onto legacy workflows. This paper does not review AI applications in healthcare per se; instead, it examines what an AI-first healthcare system would look like, one in which AI functions as a foundational organizing principle of care delivery rather than an adjunct technology. We synthesize evidence across ambulatory, inpatient, diagnostic, post-acute, and population health settings to assess where AI capabilities are sufficiently mature to support system-level integration and where critical gaps remain. Across domains, the literature demonstrates strong performance for narrowly defined tasks such as imaging interpretation, documentation support, predictive surveillance, and remote monitoring. However, evidence for longitudinal orchestration, cross-setting integration, and sustained impact on outcomes, costs, and equity remains limited. Key barriers include data fragmentation, workflow misalignment, algorithmic bias, insufficient governance, and lack of prospective, multi-site evaluations. We argue that advancing toward AI-first healthcare requires shifting evaluation from accuracy-centric metrics to system-level outcomes, emphasizing human-enabled AI, interoperability, continuous learning, and equity-aware design. Using hypertension management and patient journey exemplars, we illustrate how AI-first systems can enable proactive risk stratification, coordinated intervention, and continuous support across the care continuum. We further outline architectural and governance requirements, including cloud-enabled infrastructure, interoperability, operational machine learning practices, and accountability frameworks—necessary to operationalize AI-first care safely and at scale, subject to prospective validation, regulatory oversight, and post-deployment surveillance. 
This review contributes a system-level framework for understanding AI-first healthcare, identifies priority research and implementation gaps, and offers practical considerations for clinicians, health systems, researchers, and policymakers. By reframing AI as infrastructure rather than isolated tools, the AI-first approach provides a pathway toward more proactive, coordinated, and equitable healthcare delivery while preserving the central role of human judgment and trust. Full article
(This article belongs to the Special Issue AI and Data Science in Bioengineering: Innovations and Applications)
16 pages, 2231 KB  
Article
Evaluating Explainability: A Framework for Systematic Assessment of Explainable AI Features in Medical Imaging
by Miguel A. Lago, Ghada Zamzmi, Brandon Eich and Jana G. Delfino
Bioengineering 2026, 13(1), 111; https://doi.org/10.3390/bioengineering13010111 - 16 Jan 2026
Abstract
Explainability features are intended to provide insight into the internal mechanisms of an Artificial Intelligence (AI) device, but there is a lack of evaluation techniques for assessing the quality of provided explanations. We propose a framework to assess and report explainable AI features in medical images. Our evaluation framework for AI explainability is based on four criteria that relate to the particular needs in AI-enabled medical devices: (1) Consistency quantifies the variability of explanations to similar inputs; (2) plausibility estimates how close the explanation is to the ground truth; (3) fidelity assesses the alignment between the explanation and the model internal mechanisms; and (4) usefulness evaluates the impact on task performance of the explanation. Finally, we developed a scorecard for AI explainability methods in medical imaging that serves as a complete description and evaluation to accompany this type of device. We describe these four criteria and give examples on how they can be evaluated. As a case study, we use Ablation CAM and Eigen CAM to illustrate the evaluation of explanation heatmaps on the detection of breast lesions on synthetic mammographies. The first three criteria are evaluated for task-relevant scenarios. This framework establishes criteria through which the quality of explanations provided by medical devices can be quantified. Full article
(This article belongs to the Special Issue Explainable Artificial Intelligence (XAI) in Medical Imaging)
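Two of the four criteria lend themselves to compact metrics. The sketch below shows one plausible way to score consistency (stability of heatmaps under small input perturbations) and plausibility (overlap with a ground-truth mask); these are illustrative formulations, not the exact estimators used in the paper.

```python
import numpy as np

def consistency(heatmaps):
    """Criterion 1: mean pairwise Pearson correlation between explanation
    heatmaps produced for perturbed versions of the same input."""
    flat = [h.ravel() for h in heatmaps]
    n = len(flat)
    corrs = [np.corrcoef(flat[i], flat[j])[0, 1]
             for i in range(n) for j in range(i + 1, n)]
    return float(np.mean(corrs))

def plausibility_iou(heatmap, mask, thresh=0.5):
    """Criterion 2: IoU between the thresholded heatmap and a
    ground-truth lesion mask."""
    binary = heatmap >= thresh
    inter = np.logical_and(binary, mask).sum()
    union = np.logical_or(binary, mask).sum()
    return inter / union if union else 0.0

rng = np.random.default_rng(0)
base = rng.random((8, 8))                                  # toy heatmap
maps = [base + rng.normal(0, 0.01, base.shape) for _ in range(3)]
c = consistency(maps)          # near 1.0 for nearly identical maps
iou = plausibility_iou(base, base >= 0.5)
```

Fidelity and usefulness, by contrast, require interventions on the model and on reader performance, so they are harder to reduce to a one-liner.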

22 pages, 12812 KB  
Article
bFGF-Loaded PDA Microparticles Enhance Vascularization of Engineered Skin with a Concomitant Increase in Leukocyte Recruitment
by Britani N. Blackstone, Zachary W. Everett, Syed B. Alvi, Autumn C. Campbell, Emilio Alvalle, Olivia Borowski, Jennifer M. Hahn, Divya Sridharan, Dorothy M. Supp, Mahmood Khan and Heather M. Powell
Bioengineering 2026, 13(1), 110; https://doi.org/10.3390/bioengineering13010110 - 16 Jan 2026
Abstract
Engineered skin (ES) can serve as an advanced therapy for treatment of large full-thickness wounds, but delayed vascularization can cause ischemia, necrosis, and graft failure. To accelerate ES vascularization, this study assessed incorporation of polydopamine (PDA) microparticles loaded with different concentrations of basic fibroblast growth factor (bFGF) into collagen scaffolds, which were subsequently seeded with human fibroblasts to create dermal templates (DTs), and then keratinocytes to create ES. DTs and ES were evaluated in vitro and following grafting to full-thickness wounds in immunodeficient mice. In vitro, metabolic activity of DTs was enhanced with PDA+bFGF, though this increase was not observed following seeding with keratinocytes to generate ES. After grafting, ES with bFGF-loaded PDA microparticles displayed dose-dependent increases in CD31-positive vessel formation vs. PDA-only controls (p < 0.001 at day 7; p < 0.05 at day 14). Interestingly, ES containing PDA+bFGF microparticles exhibited an almost 3-fold increase in water loss through the skin and a less-organized basal keratinocyte layer at day 14 post-grafting vs. controls. This was associated with significantly increased inflammatory cell infiltrate vs. controls at day 7 in vivo (p < 0.001). The results demonstrate that PDA microparticles are a viable method for delivery of growth factors in ES. However, further investigation of bFGF concentrations, and/or investigation of alternative growth factors, will be required to promote vascularization while reducing inflammation and maintaining epidermal health. Full article

22 pages, 1464 KB  
Article
Optimal Recycling Ratio of Biodried Product at 12% Enhances Digestate Valorization: Synergistic Acceleration of Drying Kinetics, Nutrient Enrichment, and Energy Recovery
by Xiandong Hou, Hangxi Liao, Bingyan Wu, Nan An, Yuanyuan Zhang and Yangyang Li
Bioengineering 2026, 13(1), 109; https://doi.org/10.3390/bioengineering13010109 - 16 Jan 2026
Abstract
Rapid urbanization in China has driven annual food waste production to 130 million tons, posing severe environmental challenges for anaerobic digestate management. To resolve trade-offs among drying efficiency, resource recovery (fertilizer/fuel), and carbon neutrality by optimizing the biodried product (BDP) recycling ratio (0–15%), six BDP treatments were tested in 60 L bioreactors. Metrics included drying kinetics, product properties, and environmental–economic trade-offs. The results showed that 12% BDP achieved a peak temperature integral (514.13 °C·d), an optimal biodrying index (3.67), and shortened the cycle to 12 days. Furthermore, 12% BDP yielded total nutrients (N + P2O5 + K2O) of 4.19%, meeting the NY 525-2021 standard in China, while ≤3% BDP maximized fuel suitability with LHV > 5000 kJ·kg−1, compliant with CEN/TC 343 RDF standards. BDP recycling reduced global warming potential by 27.3% and eliminated leachate generation, mitigating groundwater contamination risks. The RDF pathway (12% BDP) achieved the highest NPV (USD 716,725), whereas organic fertilizer required farmland subsidies (USD 28.57/ton) to offset its low market value. A 12% BDP recycling ratio optimally balances technical feasibility, environmental safety, and economic returns, offering a closed-loop solution for global food waste valorization. Full article
(This article belongs to the Special Issue Anaerobic Digestion Advances in Biomass and Waste Treatment)
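The NPV figure behind the RDF-versus-fertilizer comparison rests on standard discounted cash flow. A minimal sketch with invented cash flows and discount rate (not the paper's cost model):

```python
def npv(rate, cashflows):
    """Net present value of yearly cash flows; cashflows[0] is the
    up-front (year-0) investment, usually negative."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# Illustrative numbers only: USD 500k capex, USD 120k/year for 10 years.
flows = [-500_000] + [120_000] * 10
value = npv(0.05, flows)   # positive => pathway is economically viable
```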

15 pages, 3826 KB  
Review
Artificial Authority: The Promise and Perils of LLM Judges in Healthcare
by Ariana Genovese, Lars Hegstrom, Srinivasagam Prabha, Cesar A. Gomez-Cabello, Syed Ali Haider, Bernardo Collaco, Nadia G. Wood and Antonio Jorge Forte
Bioengineering 2026, 13(1), 108; https://doi.org/10.3390/bioengineering13010108 - 16 Jan 2026
Abstract
Background: Large language models (LLMs) are increasingly integrated into clinical documentation, decision support, and patient-facing applications across healthcare, including plastic and reconstructive surgery. Yet, their evaluation remains bottlenecked by costly, time-consuming human review. This has given rise to LLM-as-a-judge, in which LLMs are used to evaluate the outputs of other AI systems. Methods: This review examines LLM-as-a-judge in healthcare with particular attention to judging architectures, validation strategies, and emerging applications. A narrative review of the literature was conducted, synthesizing LLM judge methodologies as well as judging paradigms, including those applied to clinical documentation, medical question-answering systems, and clinical conversation assessment. Results: Across tasks, LLM judges align most closely with clinicians on objective criteria (e.g., factuality, grammaticality, internal consistency), benefit from structured evaluation and chain-of-thought prompting, and can approach or exceed inter-clinician agreement, but remain limited for subjective or affective judgments and by dataset quality and task specificity. Conclusions: The literature indicates that LLM judges can enable efficient, standardized evaluation in controlled settings; however, their appropriate role remains supportive rather than substitutive, and their performance may not generalize to complex plastic surgery environments. Their safe use depends on rigorous human oversight and explicit governance structures. Full article
(This article belongs to the Section Biosignal Processing)

19 pages, 953 KB  
Article
Energy Measures as Biomarkers of SARS-CoV-2 Variants and Receptors
by Khawla Ghannoum Al Chawaf and Salim Lahmiri
Bioengineering 2026, 13(1), 107; https://doi.org/10.3390/bioengineering13010107 - 16 Jan 2026
Abstract
The COVID-19 outbreak has made it evident that the nature and behavior of SARS-CoV-2 require constant research and surveillance, owing to the high mutation rates that lead to variants. This work focuses on the statistical analysis of energy measures as biomarkers of SARS-CoV-2. The main purpose of this study is to determine which energy measure can differentiate between SARS-CoV-2 variants, human cell receptors (GRP78 and ACE2), and their combinations. The dataset includes energy measures for different biological structures categorized by variants, receptors, and combinations, representing the sequence of variants and receptors. A multiple analysis of variance (ANOVA) test for equality of means and a Bartlett test for equality of variances are applied to energy measures. Results from multiple ANOVA show (a) the presence of significant differences in energy across variants, receptors, and combinations, (b) that average energy is significant only for receptors and combinations, but not for variants, and (c) the absence of significant differences for standard deviation across variants or combinations, but significant differences across receptors. The results from the Bartlett tests show that (a) there are significant differences in the variances in energy across the variants and combinations, but not across receptors, (b) there are no significant differences in variances across any group (variants, receptors, combinations), and (c) there are no significant differences in variances for standard deviation of energy across variants, receptors, or combinations. In summary, it is concluded that energy and mean energy are the key biomarkers used to differentiate receptors and combinations. In addition, energy is the primary biomarker where variances differ across variants and combinations.
These findings can help to implement tailored interventions, address the SARS-CoV-2 issue, and contribute considerably to the global fight against the pandemic. Full article
(This article belongs to the Special Issue Data Modeling and Algorithms in Biomedical Applications)
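The statistical machinery here, one-way ANOVA for equality of means and Bartlett's test for equality of variances, is available in standard tools. The grouped energy values below are synthetic stand-ins for the paper's dataset, chosen so the two tests have something to detect.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic "energy" measures grouped by receptor (illustrative only):
# GRP78 and ACE2 differ in mean; the combination group differs in variance.
grp78 = rng.normal(10.0, 1.0, 30)
ace2 = rng.normal(12.0, 1.0, 30)
combo = rng.normal(11.0, 2.0, 30)

# One-way ANOVA: are the group means equal?
f_stat, p_anova = stats.f_oneway(grp78, ace2, combo)

# Bartlett test: are the group variances equal?
b_stat, p_bartlett = stats.bartlett(grp78, ace2, combo)
```

Small p-values reject equality of means (ANOVA) or of variances (Bartlett), which is the logic behind the biomarker conclusions above.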

12 pages, 964 KB  
Review
Jawbone Cavitations: Current Understanding and Conceptual Introduction of Covered Socket Residuum (CSR)
by Shahram Ghanaati, Anja Heselich, Johann Lechner, Robert Sader, Jerry E. Bouquot and Sarah Al-Maawi
Bioengineering 2026, 13(1), 106; https://doi.org/10.3390/bioengineering13010106 - 16 Jan 2026
Abstract
Jawbone cavitations have been described for decades under various terminologies, including neuralgia-inducing cavitational osteonecrosis (NICO) and fatty degenerative osteolysis of the jawbone (FDOJ). Their biological nature and clinical relevance remain controversial. The present review aimed to summarize the current understanding of jawbone cavitations, identify relevant research gaps, and propose a unified descriptive terminology. This narrative literature review was conducted using PubMed/MEDLINE, Google Scholar, and manual searches of relevant journals. The available evidence was qualitatively synthesized. The results indicate that most published data on jawbone cavitations are derived from observational, retrospective, and cohort studies, with etiological concepts largely based on histopathological findings. Recent three-dimensional radiological analyses suggest that intraosseous non-mineralized areas frequently observed at former extraction sites may represent a physiological outcome of socket collapse and incomplete ossification rather than a pathological condition. This review introduces Covered Socket Residuum (CSR) as a radiological descriptive term and clearly distinguishes it from pathological entities such as NICO and FDOJ. Recognition of CSR is clinically relevant, particularly in dental implant planning, where unrecognized non-mineralized areas may compromise primary stability. The findings emphasize the role of three-dimensional radiological assessment for diagnosis and implant planning and discuss preventive and therapeutic strategies, including Guided Open Wound Healing (GOWH™). Prospective controlled clinical studies are required to validate this concept and determine its clinical relevance. Full article
(This article belongs to the Section Regenerative Engineering)

41 pages, 5624 KB  
Article
Tackling Imbalanced Data in Chronic Obstructive Pulmonary Disease Diagnosis: An Ensemble Learning Approach with Synthetic Data Generation
by Yi-Hsin Ko, Chuan-Sheng Hung, Chun-Hung Richard Lin, Da-Wei Wu, Chung-Hsuan Huang, Chang-Ting Lin and Jui-Hsiu Tsai
Bioengineering 2026, 13(1), 105; https://doi.org/10.3390/bioengineering13010105 - 15 Jan 2026
Abstract
Chronic obstructive pulmonary disease (COPD) is a major health burden worldwide and in Taiwan, ranking as the third leading cause of death globally, and its prevalence in Taiwan continues to rise. Readmission within 14 days is a key indicator of disease instability and care efficiency, driven jointly by patient-level physiological vulnerability (such as reduced lung function and multiple comorbidities) and healthcare system-level deficiencies in transitional care. To mitigate the growing burden and improve quality of care, it is urgently necessary to develop an AI-based prediction model for 14-day readmission. Such a model could enable early identification of high-risk patients and trigger multidisciplinary interventions, such as pulmonary rehabilitation and remote monitoring, to effectively reduce avoidable early readmissions. However, medical data are commonly characterized by severe class imbalance, which limits the ability of conventional machine learning methods to identify minority-class cases. In this study, we used real-world clinical data from multiple hospitals in Kaohsiung City to construct a prediction framework that integrates data generation and ensemble learning to forecast readmission risk among patients with COPD. CTGAN and kernel density estimation (KDE) were employed to augment the minority class, and the impact of these two generation approaches on model performance was compared across different augmentation ratios. We adopted a stacking architecture composed of six base models as the core framework and conducted systematic comparisons against the baseline models XGBoost, AdaBoost, Random Forest, and LightGBM across multiple recall thresholds, different feature configurations, and alternative data generation strategies. Overall, the results show that, under high-recall targets, KDE combined with stacking achieves the most stable and superior overall performance relative to the baseline models.
We further performed ablation experiments by sequentially removing each base model to evaluate and analyze its contribution. The results indicate that removing KNN yields the greatest negative impact on the stacking classifier, particularly under high-recall settings where the declines in precision and F1-score are most pronounced, suggesting that KNN is most sensitive to the distributional changes introduced by KDE-generated data. This configuration simultaneously improves precision, F1-score, and specificity, and is therefore adopted as the final recommended model setting in this study. Full article
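KDE-based minority augmentation can be sketched as a smoothed bootstrap: resample real minority rows and add Gaussian jitter scaled by a bandwidth. This is a generic illustration with invented features and bandwidth, not the authors' exact CTGAN/KDE pipeline.

```python
import numpy as np

def kde_oversample(X_min, n_new, bandwidth=0.25, seed=0):
    """Gaussian-KDE style oversampling of the minority class: each
    synthetic row is a real minority row plus Gaussian jitter scaled
    by the per-feature standard deviation (smoothed bootstrap)."""
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, len(X_min), size=n_new)
    scale = bandwidth * X_min.std(axis=0)
    return X_min[idx] + rng.normal(0.0, scale, size=(n_new, X_min.shape[1]))

rng = np.random.default_rng(42)
# Toy minority class (e.g., 14-day readmissions), 40 rows x 2 features.
X_min = rng.normal([1.0, 5.0], [0.5, 2.0], size=(40, 2))
X_syn = kde_oversample(X_min, n_new=120)   # 3x augmentation ratio
```

The synthetic rows would then be appended to the training split only (never the test split) before fitting the stacked ensemble.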

22 pages, 4811 KB  
Article
MedSegNet10: A Publicly Accessible Network Repository for Split Federated Medical Image Segmentation
by Chamani Shiranthika, Zahra Hafezi Kafshgari, Hadi Hadizadeh and Parvaneh Saeedi
Bioengineering 2026, 13(1), 104; https://doi.org/10.3390/bioengineering13010104 - 15 Jan 2026
Abstract
Machine Learning (ML) and Deep Learning (DL) have shown significant promise in healthcare, particularly in medical image segmentation, which is crucial for accurate disease diagnosis and treatment planning. Despite their potential, challenges such as data privacy concerns, limited annotated data, and inadequate training data persist. Decentralized learning approaches such as federated learning (FL), split learning (SL), and split federated learning (SplitFed/SFL) address these issues effectively. This paper introduces “MedSegNet10,” a publicly accessible repository designed for medical image segmentation using split-federated learning. MedSegNet10 provides a collection of pre-trained neural network architectures optimized for various medical image types, including microscopic images of human blastocysts, dermatoscopic images of skin lesions, and endoscopic images of lesions, polyps, and ulcers. MedSegNet10 implements SplitFed versions of ten established segmentation architectures, enabling collaborative training without centralizing raw data and labels, reducing the computational load required at client sites. This repository supports researchers, practitioners, trainees, and data scientists, aiming to advance medical image segmentation while maintaining patient data privacy. Full article
(This article belongs to the Special Issue Medical Imaging Analysis: Current and Future Trends)
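The core SplitFed idea, a network cut into a client-side front and a server-side back so that only intermediate ("smashed") activations leave the client, can be sketched with a toy two-layer model. Layer sizes and weights here are arbitrary stand-ins for the repository's segmentation architectures.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny stand-in for a segmentation network split at a "cut layer".
W_client = rng.normal(size=(16, 8))   # front layer, stays on the client
W_server = rng.normal(size=(8, 4))    # remaining layers, on the server

def client_forward(x):
    """Client runs the front layers on private data; only these
    activations (the smashed data) are transmitted, never raw images."""
    return np.maximum(x @ W_client, 0.0)   # ReLU

def server_forward(act):
    """Server completes the forward pass on the received activations."""
    return act @ W_server

x = rng.normal(size=(2, 16))   # batch of private inputs on the client
smashed = client_forward(x)    # crosses the network boundary
out = server_forward(smashed)
```

In full SplitFed, gradients flow back across the same cut, and a federated server additionally averages the client-side weights across sites.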

47 pages, 1424 KB  
Article
Integrating the Contrasting Perspectives Between the Constrained Disorder Principle and Deterministic Optical Nanoscopy: Enhancing Information Extraction from Imaging of Complex Systems
by Yaron Ilan
Bioengineering 2026, 13(1), 103; https://doi.org/10.3390/bioengineering13010103 - 15 Jan 2026
Abstract
This paper examines the contrasting yet complementary approaches of the Constrained Disorder Principle (CDP) and Stefan Hell’s deterministic optical nanoscopy for managing noise in complex systems. The CDP suggests that controlled disorder within dynamic boundaries is crucial for optimal system function, particularly in biological contexts, where variability acts as an adaptive mechanism rather than being merely a measurement error. In contrast, Hell’s recent breakthrough in nanoscopy demonstrates that engineered diffraction minima can achieve sub-nanometer resolution without relying on stochastic (random) molecular switching, thereby replacing randomness with deterministic measurement precision. Philosophically, these two approaches are distinct: the CDP views noise as functionally necessary, while Hell’s method seeks to overcome noise limitations. However, both frameworks address complementary aspects of information extraction. The primary goal of microscopy is to provide information about structures, thereby facilitating a better understanding of their functionality. Noise is inherent to biological structures and functions and is part of the information in complex systems. This manuscript achieves integration through three specific contributions: (1) a mathematical framework combining CDP variability bounds with Hell’s precision measurements, validated through Monte Carlo simulations showing 15–30% precision improvements; (2) computational demonstrations with N = 10,000 trials quantifying performance under varying biological noise regimes; and (3) practical protocols for experimental implementation, including calibration procedures and real-time parameter optimization. The CDP provides a theoretical understanding of variability patterns at the system level, while Hell’s technique offers precision tools at the molecular level for validation. 
Integrating these approaches enables multi-scale analysis, allowing for deterministic measurements to accurately quantify the functional variability that the CDP theory predicts is vital for system health. This synthesis opens up new possibilities for adaptive imaging systems that maintain biologically meaningful noise while achieving unprecedented measurement precision. Specific applications include cancer diagnostics through chromosomal organization variability, neurodegenerative disease monitoring via protein aggregation disorder patterns, and drug screening by assessing cellular response heterogeneity. The framework comprises machine learning integration pathways for automated recognition of variability patterns and adaptive acquisition strategies. Full article
(This article belongs to the Section Biosignal Processing)
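As a generic illustration of the Monte Carlo style of analysis invoked above (not the paper's CDP model or its 15–30% figures), the sketch below estimates localization precision from repeated simulated measurements under two noise regimes; all parameter values are invented.

```python
import numpy as np

rng = np.random.default_rng(7)
N_TRIALS = 10_000    # same order as the trial count cited in the text
TRUE_POS = 5.0       # hypothetical molecular position (nm)

def localize(noise_sd, n_photons=100):
    """One simulated localization: mean of photon-limited readings."""
    return rng.normal(TRUE_POS, noise_sd, n_photons).mean()

for sd in (1.0, 2.0):
    estimates = np.array([localize(sd) for _ in range(N_TRIALS)])
    precision = estimates.std()   # approaches sd / sqrt(n_photons)
```

Comparing such precision estimates across noise regimes is the kind of quantification the integrated framework applies to biologically constrained variability.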
22 pages, 13863 KB  
Article
AI-Based Augmented Reality Microscope for Real-Time Sperm Detection and Tracking in Micro-TESE
by Mahmoud Mohamed, Ezaki Yuriko, Yuta Kawagoe, Kazuhiro Kawamura and Masashi Ikeuchi
Bioengineering 2026, 13(1), 102; https://doi.org/10.3390/bioengineering13010102 - 15 Jan 2026
Abstract
Non-obstructive azoospermia (NOA) is a severe male infertility condition characterized by extremely low or absent sperm production. In microdissection testicular sperm extraction (Micro-TESE) procedures for NOA, embryologists must manually search through testicular tissue under a microscope for rare sperm, a process that can take 1.8–7.5 h and impose significant fatigue and burden. This paper presents an augmented reality (AR) microscope system with AI-based image analysis to accelerate sperm retrieval in Micro-TESE. The proposed system integrates a deep learning model (YOLOv5) for real-time sperm detection in microscope images, a multi-object tracker (DeepSORT) for continuous sperm tracking, and a velocity calculation module for sperm motility analysis. Detected sperm positions and motility metrics are overlaid in the microscope’s eyepiece view via a microdisplay, providing immediate visual guidance to the embryologist. In experiments on seminiferous tubule sample images, the YOLOv5 model achieved a precision of 0.81 and recall of 0.52, outperforming previous classical methods in accuracy and speed. The AR interface allowed an operator to find sperm faster, roughly doubling the sperm detection rate (66.9% vs. 30.8%). These results demonstrate that the AR microscope system can significantly aid embryologists by highlighting sperm in real time and potentially shorten Micro-TESE procedure times. This application of AR and AI in sperm retrieval shows promise for improving outcomes in assisted reproductive technology. Full article
(This article belongs to the Special Issue Artificial Intelligence-Based Medical Imaging Processing)
18 pages, 1606 KB  
Review
Biologic Augmentation for Meniscus Repair: A Narrative Review
by Tsung-Lin Lee and Scott Rodeo
Bioengineering 2026, 13(1), 101; https://doi.org/10.3390/bioengineering13010101 - 15 Jan 2026
Abstract
Meniscal preservation is increasingly recognized as a critical determinant of long-term knee joint health, yet successful repair remains challenging due to the meniscus’s limited intrinsic healing capacity. The adult meniscus is characterized by restricted vascularity, low cellularity, a dense extracellular matrix, complex biomechanical loading, and a hostile post-injury intra-articular inflammatory environment—factors that collectively impair meniscus healing, particularly in the avascular zones. Over the past several decades, a wide range of biologic augmentation strategies have been explored to overcome these barriers, including synovial abrasion, fibrin clot implantation, marrow stimulation, platelet-derived biologics, cell-based therapies, scaffold coverage, and emerging biologic and biophysical interventions. This review summarizes the biological basis of meniscal healing, critically evaluates current and emerging biologic augmentation techniques, and integrates these approaches within a unified framework of vascular, cellular, matrix, biomechanical, and immunologic targets. Understanding and modulating the cellular and molecular mechanisms governing meniscal degeneration and repair may enable the development of more effective, mechanism-driven strategies to improve healing outcomes and reduce the risk of post-traumatic osteoarthritis. Full article
(This article belongs to the Special Issue Novel Techniques in Meniscus Repair)
14 pages, 1368 KB  
Article
Three-Dimensional Visualization and Detection of the Pulmonary Venous–Left Atrium Connection Using Artificial Intelligence in Fetal Cardiac Ultrasound Screening
by Reina Komatsu, Masaaki Komatsu, Katsuji Takeda, Naoaki Harada, Naoki Teraya, Shohei Wakisaka, Takashi Natsume, Tomonori Taniguchi, Rina Aoyama, Mayumi Kaneko, Kazuki Iwamoto, Ryu Matsuoka, Akihiko Sekizawa and Ryuji Hamamoto
Bioengineering 2026, 13(1), 100; https://doi.org/10.3390/bioengineering13010100 - 15 Jan 2026
Abstract
Total anomalous pulmonary venous connection (TAPVC) is one of the most severe congenital heart defects; however, prenatal diagnosis remains suboptimal. A normal fetal heart has a junction between the pulmonary veins (PV) and the left atrium (LA). In contrast, this junction is absent in patients with TAPVC. In the present study, we attempted to visualize and detect fetal PV-LA connections using artificial intelligence (AI) trained on the fetal cardiac ultrasound videos of 100 normal cases and six TAPVC cases. The PV-LA aggregate area was segmented using the following three-dimensional (3D) segmentation models: SegResNet, Swin UNETR, MedNeXt, and SegFormer3D. The Dice coefficient and 95% Hausdorff distance were used to evaluate segmentation performance. The mean values of the shortest PV-LA distance (PLD) and major axis angle (PLA) in each video were calculated. These methods demonstrated sufficient performance in visualizing and detecting the PV-LA connection. In terms of TAPVC screening performance, MedNeXt-PLD and SegResNet-PLA achieved mean area under the receiver operating characteristic curve values of 0.844 and 0.840, respectively. Overall, this study shows that our approach can support less-experienced examiners in capturing the PV-LA connection and has the potential to improve the prenatal detection rate of TAPVC. Full article
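For readers unfamiliar with the overlap metric used above, the Dice coefficient measures agreement between a predicted segmentation mask and its ground truth. A minimal NumPy sketch (illustrative only, not the study's evaluation code):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray) -> float:
    """Dice similarity between two binary masks of any shape (e.g. 3D volumes)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    total = pred.sum() + target.sum()
    if total == 0:
        return 1.0  # both masks empty: conventionally treated as perfect overlap
    return 2.0 * intersection / total

# Example: two 4x4x4 volumes, 32 voxels each, sharing 16 voxels.
a = np.zeros((4, 4, 4), dtype=bool)
b = np.zeros((4, 4, 4), dtype=bool)
a[0:2, :, :] = True
b[1:3, :, :] = True
print(dice_coefficient(a, b))  # → 0.5
```

A Dice value of 1.0 indicates perfect overlap and 0.0 indicates none; the 95% Hausdorff distance complements it by bounding boundary error rather than volumetric overlap.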