Search Results (608)

Search Parameters:
Keywords = genome-scale models

24 pages, 1459 KB  
Article
Genomic Predictors of Platinum Resistance and Survival in High-Grade Serous Ovarian Carcinoma: Insights from an Explorative Targeted Next-Generation Sequencing Analysis
by Carmela De Marco, Valentina Rocca, Simona Migliozzi, Claudia Veneziano, Francesca Gualtieri, Annamaria Cerantonio, Tahreem Arshad Butt, Gianluca Santamaria, Maria Teresa De Angelis, Annalisa Di Cello, Roberta Venturella, Fulvio Zullo and Giuseppe Viglietto
Cancers 2026, 18(9), 1390; https://doi.org/10.3390/cancers18091390 - 28 Apr 2026
Abstract
Background: High-grade serous ovarian carcinoma (HG-SOC) remains the most lethal gynecological malignancy, largely due to intrinsic or acquired resistance to platinum-based chemotherapy. Although large-scale sequencing studies have delineated the genomic landscape of HG-SOC, clinically actionable biomarkers predictive of platinum response and outcome are still lacking. This study aimed to identify genomic alterations associated with platinum sensitivity, resistance, or refractoriness, and to assess their prognostic relevance. Methods: Tumor DNA from 24 HG-SOC patients with optimal cytoreductive resection, classified as platinum-sensitive (n = 9), platinum-resistant (n = 8), or platinum-refractory (n = 7), underwent targeted next-generation sequencing of 409 cancer-associated genes. Somatic variants were filtered and classified for oncogenicity using established criteria incorporating predicted functional impact, REVEL scores, and population allele frequencies. Associations between mutational profiles, platinum response, and overall survival (OS) were evaluated using Kaplan–Meier and Cox regression analyses. Key findings were validated in the TCGA ovarian serous carcinoma (TCGA-OV) dataset using survival analyses. Results: A total of 1367 protein-altering somatic variants across 301 genes were identified. While TP53 mutations were ubiquitous, platinum-resistant and platinum-refractory tumors showed enrichment of pathogenic alterations affecting DNA repair, transcriptional regulation, epigenetic modification, and oncogenic signaling, including FANCA, ATF1, MAF, NCOA2, PIK3CA, and TET1. Mutations in these genes were associated with reduced overall survival in exploratory analyses (median 2.5–9 months vs. 27.5–45 months). Multivariate analysis identified FANCA and ATF1 as potential independent predictors in exploratory modeling.
In the TCGA-OV cohort, patients harboring pathogenic variants in a multi-gene panel derived from this study (excluding BRCA1/2) exhibited significantly worse survival compared with both BRCA1/2-mutated cases and the overall cohort. Conclusions: This exploratory study identifies a set of genomic alterations converging on transcriptional and epigenetic regulation, DNA repair, and oncogenic signaling that are associated with platinum resistance and adverse prognosis in HG-SOC. Independent validation in TCGA supports the potential clinical relevance of this mutational signature. These findings warrant further validation in larger prospective cohorts and functional studies to clarify their role as biomarkers of aggressive disease and therapeutic vulnerability. Full article
(This article belongs to the Special Issue Genetics and Epigenetics of Gynecological Cancer)
Show Figures

Figure 1
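The variant-classification step described in the abstract above (predicted functional impact, REVEL scores, population allele frequencies) can be sketched as a simple filter. The thresholds, field names, and example calls below are illustrative assumptions, not the authors' actual criteria or data:

```python
def filter_variants(variants, revel_cutoff=0.5, max_pop_af=0.01):
    """Keep protein-altering variants that are rare in the population and
    predicted damaging. Thresholds are illustrative, not the study's."""
    kept = []
    for v in variants:
        if not v["protein_altering"]:
            continue
        if v["pop_af"] is not None and v["pop_af"] >= max_pop_af:
            continue  # too common in the population to be a likely driver
        if v["revel"] is not None and v["revel"] < revel_cutoff:
            continue  # predicted benign by the missense score
        kept.append(v)
    return kept

calls = [  # hypothetical calls, not data from the study
    {"gene": "FANCA", "revel": 0.81, "pop_af": 0.0001, "protein_altering": True},
    {"gene": "TP53",  "revel": None, "pop_af": None,   "protein_altering": True},
    {"gene": "MUC16", "revel": 0.10, "pop_af": 0.2,    "protein_altering": True},
]
print([v["gene"] for v in filter_variants(calls)])  # → ['FANCA', 'TP53']
```

Variants without a REVEL score (for example truncating alleles) pass through the score check rather than being discarded, which is one common design choice.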

26 pages, 1379 KB  
Review
Epigenetic Variation in Plant Populations: DNA Methylation as a Driver of Phenotypic Diversity and Adaptation
by Jakub Sawicki, Wiktoria Czochór, Aniela Garbowska, Kamil Koczwara, Jerzy Andrzej Przyborowski, Natan Pupek, Paweł Sulima, Joanna Szablińska and Monika Szczecińska
Diversity 2026, 18(5), 259; https://doi.org/10.3390/d18050259 - 27 Apr 2026
Abstract
DNA methylation constitutes a primary layer of epigenetic regulation in plants, operating across three sequence contexts (CG, CHG, and CHH) through distinct enzymatic pathways. Over the past fifteen years, accumulating evidence has shown that DNA methylation varies substantially among individuals and populations of wild plants, sometimes independently of underlying genetic polymorphism. This variation can influence gene expression, transposable element activity, and phenotypic traits relevant to ecological adaptation. Population epigenetics, the study of methylation variation at the population scale, has matured from initial surveys using methylation-sensitive amplified fragment length polymorphism (MS-AFLP) into a discipline increasingly reliant on reduced-representation bisulfite sequencing (epiGBS, bsRADseq), whole-genome bisulfite sequencing (WGBS), enzymatic methyl-seq (EM-seq), and direct long-read detection by nanopore sequencing. These methodological advances are opening population epigenetics to non-model organisms across the full breadth of the plant phylogeny, from angiosperms and gymnosperms to ferns and bryophytes. We cover (i) the molecular machinery underlying plant DNA methylation, including the debated status of N6-methyladenine (6mA); (ii) empirical evidence for natural epigenetic variation in plant populations, spanning clonal, invasive, and outcrossing species; (iii) the methodological toolkit available for population-scale methylation profiling, with emphasis on approaches suitable for non-model taxa; and (iv) the ecological and evolutionary significance of population epigenetic variation, including transgenerational inheritance, stress memory, epigenetic clocks, conservation applications, and the emerging integration of epigenetics into the extended evolutionary synthesis. 
We identify critical knowledge gaps, particularly the near-complete absence of population-level epigenetic data for bryophytes, ferns, and lycophytes, and outline priorities for future research. Full article
(This article belongs to the Special Issue 2026 Feature Papers by Diversity's Editorial Board Members)
Show Figures

Figure 1
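The three sequence contexts mentioned above (CG, CHG, CHH, where H is A, C, or T) are determined by the two bases that follow each cytosine on a strand. A minimal classifier, written as a generic sketch rather than any published pipeline:

```python
def methylation_context(seq, i):
    """Classify the cytosine at position i of seq (forward strand) as
    'CG', 'CHG', or 'CHH' (H = A, C, or T); None if too close to the
    3' end for the context to be defined."""
    if seq[i] != "C":
        raise ValueError("position %d is not a cytosine" % i)
    if i + 1 < len(seq) and seq[i + 1] == "G":
        return "CG"
    if i + 2 >= len(seq):
        return None
    return "CHG" if seq[i + 2] == "G" else "CHH"

seq = "CCGTACAGCAT"  # toy sequence
contexts = [(i, methylation_context(seq, i)) for i, b in enumerate(seq) if b == "C"]
print(contexts)  # → [(0, 'CHG'), (1, 'CG'), (5, 'CHG'), (8, 'CHH')]
```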

22 pages, 1217 KB  
Article
The Missing Layer in Modern IT: Governance of Commitments, Not Just Compute and Data
by Rao Mikkilineni and William Patrick Kelly
Computers 2026, 15(5), 275; https://doi.org/10.3390/computers15050275 - 24 Apr 2026
Abstract
Contemporary enterprise IT operations are largely implemented on Shannon–Turing computing models in which programs execute read–compute–write cycles over data structures, while governance—fault handling, configuration control, auditability, continuity, and accounting—is applied externally through infrastructure platforms, observability stacks, and human operational processes. This separation scales analytical throughput but accumulates what we term coherence debt: locally expedient operational commitments whose provenance and revisability degrade over time until exposed by failures, security incidents, regulatory demands, or architectural transitions. This paper examines the evolution of operational computing models that integrate computation with regulation at two distinct levels. First, Distributed Intelligent Managed Elements (DIME) extend the classical Turing cycle toward a supervised execution loop—read–check-with-oracle–compute–write—by incorporating signaling overlays and FCAPS (Fault, Configuration, Accounting, Performance, and Security) supervision into computation in progress. Second, the Autopoietic Management and Orchestration System (AMOS), grounded in the General Theory of Information, the Burgin–Mikkilineni Thesis, and Deutsch’s epistemic framework, fully decouples process executors from governance by treating any Turing-equivalent engine as a replaceable execution substrate while elevating knowledge structures—encoded as local and global Digital Genomes—to first-class operational state within a governed knowledge network. Using a distributed microservice transaction testbed, we demonstrate how this approach operationalizes topology-as-data, a capability-oriented control plane, decoupled application-layer FCAPS independent of infrastructure management, and policy-selectable consistency/availability semantics.
Our results show that the principal benefit of AMOS is not circumventing theoretical constraints such as the Consistency, Availability, and Partition tolerance (CAP) theorem, but governing their trade-offs as explicit, auditable commitments with defined convergence pathways and controlled return to a coherent system state, thereby reducing coherence debt and improving operational reliability in distributed AI-enabled enterprise systems. Full article
(This article belongs to the Special Issue Cloud Computing and Big Data Mining)
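The supervised execution loop that DIME adds to the classical Turing cycle (read, check with oracle, compute, write) can be caricatured in a few lines. The oracle, policy, and key-value store below are toy stand-ins of my own, not the authors' DIME/AMOS implementation:

```python
def oracle_allows(state, policies):
    """Return True only if every policy predicate accepts the state."""
    return all(policy(state) for policy in policies)

def supervised_step(store, key, compute, policies):
    """One read-check-compute-write cycle with supervision before compute."""
    value = store[key]                                   # read
    if not oracle_allows({"key": key, "value": value}, policies):
        return ("blocked", key)                          # oracle vetoes the step
    result = compute(value)                              # compute
    store[key] = result                                  # write
    return ("committed", result)

store = {"balance": 100}
no_negative = lambda s: s["value"] >= 0                  # toy FCAPS-style policy
outcome = supervised_step(store, "balance", lambda v: v - 30, [no_negative])
print(outcome)  # → ('committed', 70)
```

The point of the pattern is that the check runs inside the cycle, on computation in progress, rather than as an external monitor after the write.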
22 pages, 1328 KB  
Review
Bridging Traditional Modeling and Artificial Intelligence in Measles Epidemiology: Methods, Applications, and Future Directions—A Narrative Review
by Andrei Florentin Baiasu, Alexandra-Daniela Rotaru-Zavaleanu, Ana-Maria Boldea, Mihai-Andrei Ruscu, Mircea-Sebastian Serbanescu and Lucretiu Radu
J. Clin. Med. 2026, 15(9), 3242; https://doi.org/10.3390/jcm15093242 - 24 Apr 2026
Abstract
Measles remains one of the most contagious infectious diseases globally and continues to pose substantial public health risks despite decades of effective vaccination. This narrative review examines both classical and contemporary computational approaches used for measles monitoring, prediction, and control, with particular attention given to the emerging role of artificial intelligence (AI). We synthesized findings from 46 studies; 31 focused directly on measles and 15 on methodologically relevant studies from related infectious diseases (COVID-19, influenza, malaria), selected through searches of PubMed, Scopus, Web of Science, IEEE Xplore, and preprint servers, conducted between June and December 2025. Traditional compartmental models (SIR, SEIR, MSEIR), statistical tools (ARIMA, SARIMA), and seroepidemiological analysis provide transparent, well-characterized frameworks for estimating transmission dynamics and simulating intervention scenarios. Spatial modeling, network analysis, and Monte Carlo simulations have added geographic granularity to outbreak characterization. More recently, AI and machine learning (ML) methods, including supervised algorithms (Random Forest, XGBoost, SVM), deep learning architectures (CNN, LSTM), and hybrid mechanistic ML models, have shown improved predictive performance by integrating multiple data sources: epidemiological records, demographic profiles, mobility patterns, and behavioral indicators. AI-based approaches appear most valuable for high-dimensional risk prediction and image-based diagnostic tasks, while classical models retain clear advantages for policy-oriented scenario analysis. However, no AI-based or hybrid model identified in this review has been adopted into routine national measles surveillance or used for vaccination policy decisions at scale. 
Important challenges remain: data quality varies across settings, model generalizability cannot be assumed, and computational infrastructure disparities limit deployment in high-burden regions. Explainable AI, federated learning, workforce training for model interpretation, and integration of vaccination registries with mobility and genomic surveillance data represent concrete future directions for strengthening computational support for measles elimination. Full article
(This article belongs to the Special Issue New Advances of Infectious Disease Epidemiology)
Show Figures

Figure 1
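The compartmental models cited above (SIR, SEIR, MSEIR) share the same flow structure; a minimal forward-Euler SEIR step, with purely illustrative parameters not fitted to measles data, looks like:

```python
def seir_step(s, e, i, r, beta, sigma, gamma, dt=1.0):
    """One forward-Euler step of the classic SEIR model (population fractions).
    beta: transmission rate, sigma: incubation rate, gamma: recovery rate."""
    new_exposed    = beta * s * i      # S -> E
    new_infectious = sigma * e         # E -> I
    new_recovered  = gamma * i         # I -> R
    return (s - dt * new_exposed,
            e + dt * (new_exposed - new_infectious),
            i + dt * (new_infectious - new_recovered),
            r + dt * new_recovered)

state = (0.99, 0.0, 0.01, 0.0)          # S, E, I, R at t = 0
for _ in range(100):
    state = seir_step(*state, beta=0.9, sigma=0.2, gamma=0.1, dt=0.5)
s, e, i, r = state
assert abs(s + e + i + r - 1.0) < 1e-9  # the four flows conserve the population
```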

18 pages, 835 KB  
Review
Genomic Resources and Gene Family Studies in Longan (Dimocarpus longan Lour.): Progress, Limitations, and Prospects
by Xiang Li, Liqin Liu, Xiaowen Hu, Shengyou Shi, Tianzi Li and Jiannan Zhou
Horticulturae 2026, 12(5), 513; https://doi.org/10.3390/horticulturae12050513 - 22 Apr 2026
Abstract
The rapid accumulation of genome-scale data has transformed plant biology from descriptive genetics to predictive and increasingly mechanistic genomics. Longan (Dimocarpus longan Lour.) is an economically important subtropical fruit tree in China and Southeast Asia, but compared with model plants and major temperate fruit crops, its genomic resources and functional studies have developed relatively late. Here, we review recent progress in longan genomics with emphasis on three interrelated areas: genome assembly and annotation, transcriptomic resources, and representative gene family studies associated with flowering, somatic embryogenesis, and transporter-mediated stress tolerance. The progression from the first draft genome of ‘Honghezi’ to the chromosome-scale assemblies of ‘Jidanben’ and ‘Shixia’ has substantially improved contiguity and gene annotation, thereby enabling population-genomic analysis, genome-wide gene family identification, and candidate-gene discovery. Available transcriptomic datasets further support studies of reproductive development, stress responses, and embryogenic competence, although cross-study integration remains limited. We also summarize how gene family analyses have advanced the current understanding of floral induction, continuous flowering, somatic embryogenesis, mineral transport, and sugar transport in longan. Importantly, the field is still dominated by cataloguing and expression-based inference, whereas causal validation, pan-genomic analysis, and multi-omics integration remain insufficient. We therefore argue that future progress in longan molecular breeding will depend on integrating high-quality genomic resources with functional validation, standardized comparative annotation, and improved transformation or regeneration systems. Full article
Show Figures

Figure 1

27 pages, 505 KB  
Article
An Information Theory of Persistent Homology: Entropy, the Data Processing Inequality, and Rate–Distortion Bounds for Topological Features
by Deepalakshmi Perumalsamy, Caleb Gunalan and Rajermani Thinakaran
Mathematics 2026, 14(8), 1385; https://doi.org/10.3390/math14081385 - 20 Apr 2026
Abstract
Background: Topological Data Analysis (TDA) captures multi-scale geometric features of data as persistence diagrams, yet no principled information-theoretic framework quantifies how much information those features carry, how efficiently they compress, or when they are informationally irreducible. Methods: We construct a measure-theoretic probability space over persistence diagram space using a Poisson-process reference measure, and define topological entropy (H-T), topological mutual information (I-T), and a topological rate–distortion function as the core objects of a new theory. Results: Four theorems with full proofs establish finite stability, axiomatic uniqueness, a Topological Data Processing Inequality, and a Rate–Distortion Theorem with an explicit closed-form formula under the Poisson model. A Rényi generalization of topological entropy is also established. Computational and practical implementation aspects—including finite-sample estimation, multi-parameter extension, and algorithmic realization—are addressed throughout the paper. Conclusions: This framework provides a rigorous measure-theoretic information-theoretic foundation for persistent homology, demonstrated on simulated brain connectivity and point cloud data, with applications to threshold selection, genomic classification bounds, and compressed sensing. Full article
Show Figures

Figure 1
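The paper's topological entropy H-T is measure-theoretic, but a widely used finite-diagram relative, persistent entropy (the Shannon entropy of normalized bar lengths), conveys the flavor. This is a generic sketch, not the construction in the paper:

```python
import math

def persistent_entropy(diagram):
    """Shannon entropy of the normalized bar lengths of a persistence
    diagram, given as (birth, death) pairs with death > birth."""
    lengths = [death - birth for birth, death in diagram]
    total = sum(lengths)
    probs = [length / total for length in lengths]
    return -sum(p * math.log(p) for p in probs)

# n equally persistent features maximize the entropy at log(n);
# one dominant bar drives it toward 0.
uniform = persistent_entropy([(0, 1), (0, 1), (0, 1)])   # log 3 ≈ 1.0986
skewed  = persistent_entropy([(0, 10), (0, 0.1), (0, 0.1)])
print(uniform, skewed)
```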

21 pages, 1322 KB  
Review
The Importance of the Physcomitrium patens Genome in the Evolutionary Genomics of Terrestrial Plants
by Anderson Franco da Cruz Lima, Wellington Bruno dos Santos Alves, Letícia Fernanda Presotti Matos, Yasmin Jansen Araujo, Michele Gomes de Morais, Giovanna Melo Nishitani, Stephan Machado Dohms and Marcelo Henrique Soller Ramada
Plants 2026, 15(8), 1261; https://doi.org/10.3390/plants15081261 - 20 Apr 2026
Abstract
Mosses (Bryophyta) comprise a group of terrestrial plants that colonized land more than 450 million years ago and play fundamental ecological and evolutionary roles, particularly in polar and peatland ecosystems. The sequencing of Physcomitrium patens marked a milestone in bryophyte genomics, establishing mosses as model organisms for evolutionary and functional studies. However, the recent advent of next-generation sequencing technologies has broadened genomic exploration beyond P. patens, unveiling the genetic diversity of additional bryophyte species. Notably, the genomes of Sphagnum fallax, Sphagnum magellanicum, the liverwort Marchantia polymorpha, and hornworts of the genus Anthoceros have provided new insights into carbon fixation mechanisms, ecological adaptations, and lineage-specific evolutionary traits. These advances have enabled large-scale comparative analyses and expanded the understanding of conserved and divergent genomic features among bryophytes. The integration of these datasets into public databases such as Phytozome and NCBI Genome has created a robust framework for investigating plant genome evolution and biotechnological potential. Altogether, the expanding genomic landscape of bryophytes reveals their remarkable evolutionary plasticity and underscores their importance as key models for studying adaptation, metabolism, and genomic innovation in terrestrial plants. Full article
(This article belongs to the Special Issue Bryophyte Biology, 2nd Edition)
Show Figures

Figure 1

28 pages, 5809 KB  
Article
PSMC-FAC: Automated Optimization of False-Negative Rate Corrections for Low-Coverage PSMC-Based Demographic Inference
by Francisco Iglesias-Santos, Alba Nieto, Sònia Casillas, Antonio Barbadilla and Carlos Sarabia
Biology 2026, 15(8), 631; https://doi.org/10.3390/biology15080631 - 16 Apr 2026
Abstract
Inferring demographic history from whole-genome data is a central objective in evolutionary and conservation genomics. However, the Pairwise Sequentially Markovian Coalescent (PSMC) framework, one of the most widely used demographic inference methods for whole-genome sequence data, is highly sensitive to sequencing coverage, with low coverage producing systematic underestimation of heterozygosity, which biases effective population size trajectories. Here, we present PSMC-FAC, an automated method designed to optimize false-negative rate correction in low-coverage genomes by minimizing geometric distances between FNR-corrected low-coverage trajectories and their corresponding high-coverage references. Whole-genome datasets from humans, gray wolves, and cattle were downsampled across multiple coverage levels and processed through standard demographic inference pipelines. Corrected trajectories, projected onto a common temporal grid, were compared using Hausdorff and discrete Fréchet distance metrics, and optimal correction factors were modeled as a function of sequencing depth using second-degree polynomial regression. Across species and demographic contexts, PSMC-FAC substantially improved concordance between low- and high-coverage trajectories and revealed highly predictable coverage-dependent correction patterns. Overall, PSMC-FAC provides a reproducible and mathematically grounded alternative to subjective correction approaches, enabling reliable demographic inference from moderate-coverage genomes and facilitating broader population-scale genomic analyses. Full article
(This article belongs to the Section Theoretical Biology and Biomathematics)
Show Figures

Figure 1
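The Hausdorff distance used above to compare corrected low-coverage trajectories against high-coverage references has a direct discrete form. The trajectories and the 1.25x rescaling below are made-up illustrations, not PSMC output:

```python
def hausdorff(A, B):
    """Symmetric discrete Hausdorff distance between two trajectories,
    each a list of (time, value) points."""
    def dist(p, q):
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
    def directed(P, Q):
        return max(min(dist(p, q) for q in Q) for p in P)
    return max(directed(A, B), directed(B, A))

ref  = [(0, 1.0), (1, 2.0), (2, 1.5)]    # high-coverage reference (made up)
low  = [(0, 0.8), (1, 1.6), (2, 1.2)]    # uncorrected low-coverage curve
corr = [(t, v * 1.25) for t, v in low]   # candidate FNR-style rescaling
print(hausdorff(ref, low), hausdorff(ref, corr))
```

In this toy case the rescaling maps the low-coverage curve exactly onto the reference, so the distance drops from 0.4 to 0; an optimizer would search over such correction factors to minimize this distance.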

33 pages, 5765 KB  
Article
Explainable Smart-Building Energy Consumption Forecasting and Anomaly Diagnosis Framework Based on Multi-Head Transformer and Dual-Stream Detection
by Yuanyu Cai, Dan Liao and Bin Liu
Appl. Sci. 2026, 16(8), 3836; https://doi.org/10.3390/app16083836 - 15 Apr 2026
Abstract
Fine-grained energy management in smart-campus buildings requires accurate load forecasting together with reliable and interpretable anomaly diagnosis. This study presents an integrated forecasting–diagnosis framework for building energy systems. Hourly energy demand is modeled using a Transformer-based sequence-to-sequence architecture, in which a domain-aware attention mechanism is introduced to separately represent historical consumption dynamics, environmental influences, and temporal regularities commonly observed in building energy use. Anomaly diagnosis is conducted through a dual-scale strategy that supports both the timely detection of abrupt abnormal events and the identification of gradual performance degradation. Short-term anomalies are detected from forecasting residuals using adaptive thresholds, while long-term anomalies are identified by comparing current residual patterns with same-season historical baselines and validating multi-window trends over a 48 h horizon. The two detection streams are jointly used to distinguish point, pattern, and composite anomalies. To support practical operation and maintenance, SHAP-based explanations are provided to interpret both energy predictions and detected anomalies. Case studies on two educational buildings from the Building Data Genome Project 2 demonstrate that the proposed framework achieves the best overall forecasting performance against both conventional baselines and stronger recent Transformer-based models, with mean absolute percentage errors of approximately 3%. The results indicate that the proposed framework provides a practical solution for data-driven energy monitoring and decision support in smart buildings. Full article
(This article belongs to the Special Issue Emerging Applications of AI and Machine Learning in Industry)
Show Figures

Figure 1
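The short-term detection stream, which flags forecasting residuals against an adaptive threshold, can be sketched with a rolling mean and standard deviation. The window size and k multiplier here are illustrative choices, not the paper's settings:

```python
import statistics

def flag_anomalies(residuals, window=24, k=3.0):
    """Flag residual r_t as anomalous when it lies more than k standard
    deviations from the mean of the preceding `window` residuals."""
    flags = []
    for t in range(len(residuals)):
        hist = residuals[max(0, t - window):t]
        if len(hist) < 2:
            flags.append(False)      # not enough history for a threshold
            continue
        mu = statistics.fmean(hist)
        sd = statistics.stdev(hist)
        flags.append(sd > 0 and abs(residuals[t] - mu) > k * sd)
    return flags

resid = [0.1, -0.2, 0.0, 0.15, -0.1, 0.05, 5.0, 0.1]  # synthetic residuals
print([t for t, f in enumerate(flag_anomalies(resid)) if f])  # → [6]
```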

16 pages, 3032 KB  
Article
A Novel Topology-Based Candidate Reaction Prediction Approach for Gap-Fillings of Genome-Scale Metabolic Models
by Jiajun Qu and Kai Wang
Metabolites 2026, 16(4), 258; https://doi.org/10.3390/metabo16040258 - 12 Apr 2026
Abstract
Background: Predicting and filling metabolic reaction gaps (gap-filling) is essential for reconstructing high-quality genome-scale metabolic models (GEMs). Currently, many optimization-based gap-filling methods rely on phenotypic data, while the performance of topology-based deep learning approaches still needs improvement. Methods: This paper proposes a novel topology-based approach (GHCN-SE) for predicting confidence scores of candidate reactions, which can be used for gap-filling of GEMs. The topological features of GEMs are fully extracted by simultaneously using graph and hypergraph convolutional networks, such that both pairwise associations of metabolites in the same reaction and higher-order interactions of metabolites within reactions can be captured. After feature fusion, we further employ a squeeze-and-excitation network to enhance the features. Results: Reaction prediction and reaction recovery experiments using 5-fold cross-validation on 108 high-quality BiGG GEMs show that the proposed GHCN-SE is superior to other related methods. An ablation study further demonstrates the contributions of the graph convolutional network, hypergraph convolutional network, and squeeze-and-excitation network in GHCN-SE. In addition, a visualization study illustrates the effectiveness of GHCN-SE. Conclusions: For potential applications in metabolic engineering, biomedicine, and related fields, GHCN-SE can be used to improve the phenotypic prediction accuracy of draft GEMs generated by automated reconstruction tools. Full article
(This article belongs to the Section Bioinformatics and Data Analysis)
Show Figures

Figure 1
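The distinction drawn above, pairwise associations versus higher-order interactions of metabolites within a reaction, is exactly the graph-versus-hypergraph view: each reaction is one hyperedge over its full metabolite set. A toy incidence-matrix construction over a hypothetical mini-network (not BiGG data):

```python
def incidence(reactions, metabolites):
    """Metabolite-reaction incidence matrix: rows are metabolites, columns
    are reactions, 1 where the metabolite takes part in the reaction."""
    row = {m: k for k, m in enumerate(metabolites)}
    H = [[0] * len(reactions) for _ in metabolites]
    for j, members in enumerate(reactions.values()):
        for m in members:
            H[row[m]][j] = 1
    return H

mets = ["glc", "g6p", "f6p", "atp", "adp"]
rxns = {  # toy glycolysis fragment; each reaction is one hyperedge
    "HEX1": {"glc", "atp", "g6p", "adp"},   # hexokinase
    "PGI":  {"g6p", "f6p"},                 # phosphoglucose isomerase
}
H = incidence(rxns, mets)
print(H)  # → [[1, 0], [1, 1], [0, 1], [1, 0], [1, 0]]
```

A plain graph would only record pairwise co-occurrence edges; the incidence matrix keeps the whole reaction intact, which is what a hypergraph convolution operates on.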

22 pages, 882 KB  
Review
Artificial Intelligence for Tuberculosis Screening and Detection: From Evidence to Policy and Implementation
by Hien Thi Thu Nguyen, Vang Le-Quy, Anh Tuan Dinh-Xuan and Linh Nhat Nguyen
Diagnostics 2026, 16(8), 1127; https://doi.org/10.3390/diagnostics16081127 - 9 Apr 2026
Abstract
Artificial intelligence (AI) is increasingly used to support tuberculosis (TB) screening and diagnosis, particularly through computer-aided detection (CAD) applied to chest radiography (CXR). However, the programmatic value of AI depends not only on diagnostic accuracy but also on implementation context, threshold calibration, and integration into diagnostic pathways. We conducted a narrative, state-of-the-art review of AI applications across the TB diagnosis pathway. Evidence was synthesized from World Health Organization policy documents, independent validation initiatives, and peer-reviewed studies published between 2010 and 2026, with a structured selection process aligned with PRISMA principles. CAD for CXR is the most mature AI application and is recommended by WHO for TB screening and triage among individuals aged ≥15 years in specific contexts. Across studies, CAD-CXR demonstrates sensitivity comparable to human readers, although performance varies by product, population, and imaging conditions, necessitating local threshold calibration. Evidence from implementation studies suggests improvements in screening efficiency and potential cost-effectiveness in high-burden settings. Other AI modalities, including computed tomography (CT)-based imaging analysis, point-of-care ultrasound interpretation, cough or stethoscope sound analysis, clinical risk models, and genomic resistance prediction, show promising but heterogeneous results, with most requiring further independent validation and prospective evaluation. AI has the potential to strengthen TB screening and diagnostic pathways, but its impact depends on integration into health systems and on evaluation using patient- and program-level outcomes rather than accuracy alone. A differentiated approach is needed, with responsible scale-up of policy-endorsed tools alongside rigorous evaluation of emerging technologies to support effective and equitable TB care. Full article
(This article belongs to the Special Issue Innovative Approaches to Tuberculosis Screening and Diagnosis)
Show Figures

Figure 1

18 pages, 692 KB  
Review
From Pixels to Prediction: Developing Integrated AI Foundation Models for Personalized Thyroid Cancer Care
by Jae Hyun Park, Younghyun Park, Yong Moon Lee, Sejung Yang and Jong Ho Yoon
Cancers 2026, 18(7), 1155; https://doi.org/10.3390/cancers18071155 - 3 Apr 2026
Abstract
Background: Thyroid cancer incidence continues to rise globally, yet current diagnostic methods, reliant on ultrasound-guided fine-needle aspiration, suffer from substantial inter-observer variability and indeterminate results. Objective: This review explores the transformative potential of integrated artificial intelligence (AI) foundation models in thyroid cancer management. We propose a paradigm shift using foundation models—large-scale, multimodal architectures pre-trained on diverse datasets—to bridge the gap between initial pixels and long-term prognostic prediction. Proposed Models: We introduce two integrated conceptual frameworks: ThyroSight-Prognos for high-precision assessment in specialized tertiary settings and SonoPredict-AI for cost-effective screening in primary care. Key Innovations: By synthesizing data from ultrasound, pathology (WSI), genomics, and clinical parameters through explainable AI (XAI), these models aim to reduce unnecessary surgeries and personalize treatment pathways. Challenges and Outlook: This paper addresses critical implementation challenges, including data heterogeneity, hardware requirements, and regulatory trust, ultimately providing a strategic blueprint for future multi-center prospective clinical validation to revolutionize thyroid care through precision oncology. Full article
(This article belongs to the Special Issue The Changing Paradigms in the Management of Thyroid Cancer)

35 pages, 1234 KB  
Article
EHMN 2026: A Thermodynamically Refined, SBML-Standardised Human Metabolic Network for Genome-Scale Analysis and QSP Integration
by Igor Goryanin, Leonid Slovianov, Stephen Checkley and Irina Goryanin
Metabolites 2026, 16(4), 236; https://doi.org/10.3390/metabo16040236 - 31 Mar 2026
Abstract
Background: Genome-scale metabolic models (GEMs) are foundational tools for systems biology, enabling quantitative interrogation of human metabolism across physiological and pathological states. However, many legacy reconstructions exhibit heterogeneous identifier usage, incomplete pathway integration, and limited thermodynamic refinement, constraining reproducibility, interoperability, and translational applicability. Methods: We present EHMN 2026, an update of the Edinburgh Human Metabolic Network. The reconstruction was refined through systematic identifier reconciliation using MetaNetX and ChEBI mappings, duplicate reaction consolidation, thermodynamic directionality assessment, and structured pathway annotation via Reactome. The final model was encoded in Systems Biology Markup Language (SBML) Level 3 Version 2 with the Flux Balance Constraints (FBC2) package, ensuring explicit gene–protein–reaction (GPR) representation and compatibility with modern constraint-based modelling toolchains. Results: EHMN 2026 comprises 11 compartments, 14,321 metabolites (species), and 22,642 reactions, supported by 3996 gene products. Of all reactions, 9638 (42.6%) contain GPR associations, linking metabolic transformations to 2887 unique Ensembl gene identifiers (ENSG). Pathway integration yielded 2194 unique Reactome identifiers, providing structured pathway-level organisation of metabolic functions. Thermodynamic refinement reduced infeasible energy-generating cycles and improved reaction directionality coherence while preserving global network connectivity. The reconstruction is fully SBML-compliant and portable across major modelling platforms. 
Compared with Recon3D and Human1, EHMN 2026 uniquely combines native Reactome reaction-level annotation, systematic MetaNetX identifier harmonisation, documented thermodynamic cycle elimination (37 cycles, 0 remaining), and an 11-compartment architecture supporting organelle-specific modelling—features designed for QSP and multi-layer integration applications. Conclusions: EHMN 2026 delivers a rigorously harmonised, thermodynamically refined, and pathway-annotated human metabolic reconstruction with enhanced annotation depth and standards-based interoperability. By combining genome-scale coverage with structured gene and pathway integration, the model establishes a robust computational backbone for reproducible metabolic analysis and provides a scalable foundation for future multi-layer systems pharmacology and integrative modelling frameworks.
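As a minimal illustration of the SBML encoding the abstract describes, the sketch below parses a toy SBML Level 3 Version 2 document with Python's standard library and counts its species and reactions. The toy model is illustrative only, not EHMN 2026 itself; a real genome-scale reconstruction would normally be loaded from file with a constraint-based toolchain such as COBRApy.

```python
# Count species and reactions in an SBML Level 3 Version 2 document
# using only the standard library (toy model, for illustration).
import xml.etree.ElementTree as ET

SBML_NS = "{http://www.sbml.org/sbml/level3/version2/core}"

toy_sbml = """<?xml version="1.0" encoding="UTF-8"?>
<sbml xmlns="http://www.sbml.org/sbml/level3/version2/core" level="3" version="2">
  <model id="toy">
    <listOfCompartments>
      <compartment id="c" constant="true"/>
    </listOfCompartments>
    <listOfSpecies>
      <species id="glc_c" compartment="c" constant="false"
               hasOnlySubstanceUnits="false" boundaryCondition="false"/>
      <species id="g6p_c" compartment="c" constant="false"
               hasOnlySubstanceUnits="false" boundaryCondition="false"/>
    </listOfSpecies>
    <listOfReactions>
      <reaction id="HEX1" reversible="false">
        <listOfReactants>
          <speciesReference species="glc_c" stoichiometry="1" constant="true"/>
        </listOfReactants>
        <listOfProducts>
          <speciesReference species="g6p_c" stoichiometry="1" constant="true"/>
        </listOfProducts>
      </reaction>
    </listOfReactions>
  </model>
</sbml>"""

root = ET.fromstring(toy_sbml)
species = root.findall(f".//{SBML_NS}species")
reactions = root.findall(f".//{SBML_NS}reaction")
print(len(species), len(reactions))  # 2 1
```

The same namespaced-element traversal scales to a full reconstruction; species/reaction counts like the 14,321 and 22,642 reported for EHMN 2026 are exactly this kind of tally over the SBML document.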

22 pages, 3370 KB  
Article
Phylogenetic Analyses of RdRp Region and VP1 Gene in Human Norovirus Genotype GII.17[P17] Variants
by Fuminori Mizukoshi, Yen Hai Doan, Asumi Hirata-Saito, Hiroyuki Tsukagoshi, Takumi Motoya, Ryusuke Kimura, Tomoko Takahashi, Yuriko Hayashi, Yuki Matsushima, Kei Miyakawa, Naomi Sakon, Kenji Sadamasu, Kazuhisa Yoshimura, Nobuhiro Saruki, Yoshiyuki Suzuki, Masashi Uema, Kosuke Murakami, Kazuhiko Katayama, Akihide Ryo, Tsutomu Kageyama and Hirokazu Kimura
Microorganisms 2026, 14(4), 770; https://doi.org/10.3390/microorganisms14040770 - 28 Mar 2026
Abstract
In this study, we investigated the long-term evolutionary dynamics of human norovirus GII.17[P17] using the RNA-dependent RNA polymerase (RdRp) region and the VP1 capsid gene, integrating phylogenetics, time-scaled inference, phylodynamics, and structure-based analyses. Maximum-likelihood phylogenies of both genomic regions consistently resolved four major clades (Clades 1–4). VP1 patristic-distance distributions indicated higher within-clade diversity in the phylogenetically basal Clades 1 and 3, whereas Clades 2 and 4 showed lower diversity, consistent with recent demographic expansion. Similarity-plot analysis identified pronounced variability in the VP1 P2 domain, while the S and P1 domains remained comparatively conserved, supporting P2 as the primary hotspot of diversification. Bayesian time-scaled analyses estimated the most recent common ancestor around 1993 (VP1) and 2000 (RdRp) and revealed two major lineages (Clade 1/2 and Clade 3/4), with the split between Clades 3 and 4 occurring around 2016–2017. Bayesian skyline plots showed a marked increase in effective population size after 2013, and substitution-rate estimates indicated faster evolution in VP1 than in RdRp, with higher VP1 rates in the Clade 3/4 lineage than in Clade 1/2. Capsid dimer modeling further mapped high-confidence conformational B-cell epitopes and positively selected residues predominantly to the distal surface of P2, with broadly conserved spatial patterns across clades. Compared with the Clade 1 reference (Kawasaki323), Clade 2 accumulated numerous P2 substitutions, whereas Clades 3 and 4 retained fewer changes and remained closer to Clade 1 at the amino-acid level. Together, these results suggest lineage turnover within GII.17[P17] driven by constrained diversification at the P2 surface, potentially contributing to the recent predominance of the Clade 3/4 lineage.
(This article belongs to the Special Issue Molecular Epidemiology and Bioinformatics in Pathogen Surveillance)
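The patristic distances used in the abstract above are simply sums of branch lengths along the tree path connecting two tips. A minimal sketch on a toy rooted tree (illustrative topology and branch lengths, not the GII.17[P17] phylogeny) makes the computation concrete:

```python
# Patristic distance between two leaves of a rooted tree: the sum of
# branch lengths on the path through their most recent common ancestor.
# Toy tree: root -> n1 -> (A, B); root -> C.
parent = {"A": "n1", "B": "n1", "n1": "root", "C": "root"}
branch_len = {"A": 0.02, "B": 0.03, "n1": 0.10, "C": 0.05}

def path_to_root(node):
    # Walk parent pointers up to the root, collecting the path.
    path = [node]
    while node in parent:
        node = parent[node]
        path.append(node)
    return path

def patristic(a, b):
    pa, pb = path_to_root(a), path_to_root(b)
    # First node on a's root path that also lies on b's is the MRCA.
    mrca = next(n for n in pa if n in set(pb))
    dist = sum(branch_len[n] for n in pa[:pa.index(mrca)])
    dist += sum(branch_len[n] for n in pb[:pb.index(mrca)])
    return dist

print(round(patristic("A", "B"), 4))  # siblings under n1: 0.02 + 0.03 = 0.05
print(round(patristic("A", "C"), 4))  # path runs through the root: 0.17
```

Distributions of such pairwise distances within a clade, as reported for VP1, summarise how much sequence divergence that clade has accumulated.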

18 pages, 2168 KB  
Review
Artificial Intelligence in Transcriptomics: From Human-in-the-Loop to Agentic AI
by Giulia Gentile, Giovanna Morello, Valentina La Cognata, Maria Guarnaccia and Sebastiano Cavallaro
J. Pers. Med. 2026, 16(4), 181; https://doi.org/10.3390/jpm16040181 - 27 Mar 2026
Abstract
To better understand the complexity of biological systems, research has shifted from a reductionist to a holistic approach, expanding the focus from single genes to a genome-scale view of gene activity and regulation. This is known as transcriptomics, a continuously growing field generating gene expression signatures from different technologies. A comparable paradigm shift has occurred in computational systems biology with the implementation of Artificial Intelligence (AI) learning models for gene expression analysis and integration. These models enable transcriptome-based profiling to address challenges of data heterogeneity, integration, and updating, assisting human researchers and enhancing their ability to retrieve, analyze, integrate, and generate data recursively, thanks to their intrinsic predictive, inferential, reinforcement, and generative capabilities. Additionally, while scientists worldwide are still learning how to leverage AI methods that keep the human in the loop, a new fundamental change is emerging: agentic AI, which can autonomously act and employ other AI methods to pursue its objectives. As a futuristic perspective, the proposed data analysis pipeline imagines agentic AI systems allowing the automated retrieval and pre-processing of heterogeneous transcriptomics data, analysis and integration with other omics datasets, performed with an incremental updating and recurrent analysis (IURA) model that could allow the detection of guideline updates (e.g., disease reclassification) and the generation of new hypotheses, such as candidate biomarkers or transcriptome–phenotype correlations. Since personalized medicine could derive profound benefits from its use, this scenario also raises important considerations regarding the advantages and concerns associated with the use of scientific AI agents in research and clinical practice.
