Search Results (2,003)

Search Parameters:
Keywords = lasso

15 pages, 1098 KB  
Systematic Review
Shifts with Nights and Migraine Prevalence Among Nurses: A Systematic Review and Meta-Analysis
by Piedad Gómez-Torres, Azahara Ruger-Navarrete, Laura Lasso-Olayo, Isabel Blázquez-Ornat, David Peña-Otero and Sergio Galarreta-Aperte
Healthcare 2026, 14(6), 774; https://doi.org/10.3390/healthcare14060774 - 19 Mar 2026
Abstract
Background: Fixed night work and rotating schedules including nights may contribute to migraine via sleep disruption and circadian misalignment, but evidence is inconsistent and definitions vary. This systematic review and meta-analysis compared past-year migraine prevalence in nurses working night-inclusive schedules versus day-only or non-night schedules. Methods: Following PRISMA 2020 and registered in PROSPERO (CRD420261304288), we searched PubMed, Scopus, Web of Science, CINAHL, and the Cochrane Library from inception to 3 February 2026 (English/Spanish). Observational studies in nurses (≥18 years) reporting past-year migraine prevalence by shift pattern were eligible. All included studies assessed past-year prevalence; pooled PRs reflect 1-year prevalence. Crude prevalence ratios (PRs) were calculated from contingency tables and pooled quantitatively. Risk of bias was assessed with the JBI prevalence checklist. Results: We identified 54 records; 4 studies were included (N = 3843), of which 3323 participants contributed to the comparative meta-analysis because complete disaggregated data were available to construct contingency tables. The pooled association between night-inclusive schedules and migraine prevalence was not statistically significant (PR = 0.95, 95% CI 0.82–1.10; I² = 0%). Secondary intensity contrasts were inconclusive (high vs. low: PR = 1.24, 95% CI 0.46–3.36; high vs. zero nights: PR = 0.85, 95% CI 0.38–1.93). Conclusions: Current nurse-specific evidence does not show a statistically significant difference in migraine prevalence between night-inclusive and non-night schedules; however, the small evidence base and limited generalizability preclude firm conclusions. Future longitudinal studies are needed to clarify this association. Full article
(This article belongs to the Special Issue Innovative Approaches to Healthcare Worker Wellbeing)
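The fixed-effect pooling step behind results like the PR above combines study-level prevalence ratios on the log scale with inverse-variance weights. A minimal sketch, using three hypothetical study estimates (not the reviewed studies' data):

```python
import math

def pool_log_ratios(estimates):
    """Fixed-effect (inverse-variance) pooling of ratio estimates.
    `estimates` holds (pr, ci_low, ci_high) tuples; the standard error
    of each log-PR is recovered from the width of its 95% CI."""
    num = den = 0.0
    for pr, lo, hi in estimates:
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
        w = 1.0 / se ** 2
        num += w * math.log(pr)
        den += w
    log_pooled = num / den
    se_pooled = math.sqrt(1.0 / den)
    return (math.exp(log_pooled),
            math.exp(log_pooled - 1.96 * se_pooled),
            math.exp(log_pooled + 1.96 * se_pooled))

# Hypothetical study-level PRs, not data from the review.
pooled_pr, ci_lo, ci_hi = pool_log_ratios([
    (0.90, 0.70, 1.16),
    (1.02, 0.80, 1.30),
    (0.93, 0.75, 1.15),
])
```

Pooling on the log scale keeps the ratio estimates symmetric around 1 before weighting; a random-effects model would add a between-study variance term to each weight.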

12 pages, 1294 KB  
Article
A Nomogram for Early Prediction of Inflammation, Catabolism, and Immunosuppression Syndrome in Critically Ill Patients
by Valery Likhvantsev, Levan Berikashvili, Mikhail Yadgarov, Alexey Yakovlev and Artem Kuzovlev
Diagnostics 2026, 16(6), 918; https://doi.org/10.3390/diagnostics16060918 - 19 Mar 2026
Abstract
Background: Chronic critical illness (CCI) affects ~7.6% of ICU patients worldwide and is associated with poor outcomes, including 25% in-hospital and 50% one-year mortality. A proposed key mechanism is the inflammation-immunosuppression-catabolism (ICS) triad, which contributes to multiple organ failure and independently increases mortality. Although early identification of ICS could improve risk stratification, no clinically applicable predictive tool currently exists. This study aimed to develop and validate a prognostic nomogram to predict ICS development in ICU (Intensive Care Unit) patients. Methods: This real-world analysis used electronic health records from the Russian Intensive Care Dataset (RICD). ICS was defined as C-reactive protein > 20 mg/L, albumin < 30 g/L, and lymphocyte count < 0.8 × 10⁹/L. Variables with >30% missing data were excluded, and remaining missing values were handled by multiple imputation. A Cox proportional hazards model was used to construct the nomogram. Internal validation was performed using an 8:2 training–validation split. Results: Among 1963 eligible patients, 540 (27.5%) developed ICS. LASSO (Least Absolute Shrinkage and Selection Operator) regression identified nine significant predictors: age, body mass index, SOFA (Sequential Organ Failure Assessment) and FOUR (Full Outline of UnResponsiveness) scores at admission, pneumonia and anemia at admission, platelet count, total protein, and creatinine. The nomogram showed good discrimination, with C-indices of 0.763 (95% CI: 0.741–0.783) in the training set and 0.735 (95% CI: 0.689–0.784) in the validation set. At the optimal cutoff, sensitivity was 0.75, specificity was 0.63, positive predictive value was 0.43, and negative predictive value was 0.87. Conclusions: This study presents the first nomogram for predicting ICS in ICU patients, using nine admission variables to reliably identify low-risk individuals. Further external validation is required. Full article
(This article belongs to the Section Clinical Laboratory Medicine)
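LASSO-based predictor screening of the kind used here can be sketched with scikit-learn; the cohort size, predictor count, and effect sizes below are synthetic stand-ins, not the RICD data:

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in: 500 "patients", 20 candidate predictors, of which
# only the first four truly drive the outcome.
rng = np.random.default_rng(0)
n, p = 500, 20
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:4] = [1.5, -1.0, 0.8, 0.6]
y = X @ beta + rng.normal(scale=0.5, size=n)

# Standardize first, then let cross-validation pick the penalty strength;
# coefficients shrunk exactly to zero are the dropped predictors.
Xs = StandardScaler().fit_transform(X)
lasso = LassoCV(cv=5, random_state=0).fit(Xs, y)
selected = np.flatnonzero(np.abs(lasso.coef_) > 1e-6)
```

Standardizing before fitting matters because the L1 penalty is scale-sensitive: unscaled predictors with large variance would be penalized less.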

20 pages, 17077 KB  
Article
Comparative Analysis of Machine Learning Algorithms to Predict Municipal Solid Waste
by Pedro Aguilar-Encarnacion, Pedro Peñafiel-Arcos, Marcos Barahona Morales and Wilson Chango
Computation 2026, 14(3), 72; https://doi.org/10.3390/computation14030072 - 19 Mar 2026
Abstract
The management of municipal solid waste in intermediate cities exhibits high daily variability and source heterogeneity, which hinders operational sizing and material recovery. Reliable predictions are required from heterogeneous and often-scarce data. However, studies that compare multiple machine learning algorithms with temporal validation on short time series in intermediate cities are still limited. This study compares fourteen machine learning algorithms to predict the daily generation of organic and inorganic waste in La Joya de los Sachas, Ecuador, formulating the problem as a multi-output regression task. An adapted CRISP-DM design was employed, using primary data from a waste characterization campaign, temporal feature engineering, variable encoding, and an expanding-window backtesting protocol against lag-7 persistence and ARIMA. Tree-based ensembles achieved the best performance. AdaBoost provided the best organic forecasts (R² = 0.985, RMSE = 0.081, MAE = 0.061 in rate space), while Random Forest was best for inorganic (R² = 0.965, RMSE = 0.049, MAE = 0.040). Linear models were stable but slightly inferior, and other approaches (SVR, KNN, MLP, Lasso, ElasticNet) showed lower generalization capacity. The study provides a multi-output regression protocol with temporal validation for municipal contexts with short time series, comparative evidence across fourteen algorithms, and a conversion from rates to kilograms for operational use. Full article
(This article belongs to the Section Computational Engineering)
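The lag-7 persistence baseline in a walk-forward backtest can be sketched in a few lines; the synthetic daily series and window sizes are assumptions, and a real model (rather than persistence, which needs no fitting) would be refit on the expanding window `y[:i]` at each step:

```python
import numpy as np

rng = np.random.default_rng(42)
days = 120
t = np.arange(days)
# Synthetic daily waste-generation series with a weekly cycle plus noise.
series = 5.0 + np.sin(2 * np.pi * t / 7) + rng.normal(scale=0.1, size=days)

def backtest_persistence(y, start=60, lag=7):
    """Walk forward one day at a time from `start`, predict y[i] with
    y[i - lag] (the lag-7 persistence baseline), and return the mean
    absolute error over the held-out period."""
    errors = [abs(y[i] - y[i - lag]) for i in range(start, len(y))]
    return float(np.mean(errors))

mae = backtest_persistence(series)
```

Because the series repeats with period 7, this baseline is strong here; beating it is the minimal bar any learned forecaster must clear.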

16 pages, 1951 KB  
Article
A Pathomics-Based Prognostic Model for Disease-Free Survival in Resected Gastric Cancer
by Liyun Zheng, Zhiying Jin, Fazong Wu, Shiman Zhu, Yeyu Zhang, Li Chen, Wanbin Chen, Chaoming Huang, Lingyi Zhu, Shiji Fang, Zijian Zhu, Qi Huang, Minjiang Chen, Zhongwei Zhao, Weiwen Li and Shimiao Cheng
Cancers 2026, 18(6), 993; https://doi.org/10.3390/cancers18060993 - 19 Mar 2026
Abstract
Objectives: This study aims to develop and validate a prognostic risk model by integrating pathomics features with clinical variables to predict disease-free survival (DFS) in patients with gastric cancer (GC). Methods: Patients with GC who were pathologically diagnosed and subsequently treated with curative gastrectomy and D2 lymphadenectomy at the Fifth Affiliated Hospital of Wenzhou Medical University between January 2017 and April 2023 were retrospectively enrolled and assigned to a training cohort (n = 275) and an independent validation cohort (n = 118). Pathomics features were extracted from pathological images, and LASSO-Cox regression was used to identify pathomics features significantly associated with DFS. The selected pathomics features were integrated with clinical factors to create a prognostic model. Predictive accuracy was evaluated using time-dependent ROC analysis, and the model’s performance was compared with the clinic-only and pathomics-only models. A nomogram was constructed to provide individualized DFS predictions. Results: A total of 16 pathomics features were selected, and the cut-off for the pathomics scores was set at 0.27. High-risk patients exhibited significantly worse DFS compared to low-risk patients in both the training cohort (HR = 4.57, 95% CI: 3.118–6.697, p < 0.0001) and the validation cohort (HR = 2.264, 95% CI: 1.255–4.083, p < 0.0001). The clinic–pathomics model demonstrated strong predictive performance in both cohorts, with AUCs for 1-, 3-, and 5-year survival of 0.832, 0.821, and 0.851 in the training cohort, and 0.671, 0.702, and 0.682 in the validation cohort. The nomogram, incorporating the pathomics score, T stage, differentiation degree, and ECOG performance status, showed high calibration accuracy, as confirmed by calibration plots, and outperformed both the clinic-only and pathomics-only models in decision curve analysis. 
Conclusions: A clinic–pathomics model integrating pathomics features with clinical data provides a reliable tool for DFS prediction in patients with GC, which facilitates individualized DFS predictions and personalized treatment strategies. Full article
(This article belongs to the Section Cancer Pathophysiology)

25 pages, 12954 KB  
Article
From a Multi-Omics Signature to a Therapeutic Candidate: Computational Prediction and Experimental Validation in Liver Fibrosis
by Yingying Qin, Shuoshuo Ma, Haoyuan Hong, Deyuan Zhong, Yuxin Liang, Yuhao Su, Yahui Chen, Xing Chen, Yizhun Zhu and Xiaolun Huang
Pharmaceuticals 2026, 19(3), 495; https://doi.org/10.3390/ph19030495 - 17 Mar 2026
Abstract
Background: Advanced liver fibrosis (LF) is a major determinant of prognosis across chronic liver diseases. Current biomarkers are often etiology-specific and lack cross-cohort robustness. Shared molecular drivers across etiologies remain incompletely defined, and effective anti-fibrotic therapies are limited. Methods: We developed a multi-algorithm consensus machine-learning framework to derive a robust LF progression signature. In the training non-alcoholic fatty liver disease (NAFLD) cohort GSE213621 (n = 368), samples were formulated as a binary classification task (mild fibrosis, F0–F2; advanced fibrosis, F3–F4). Candidate genes were screened in parallel using Boruta, Least Absolute Shrinkage and Selection Operator (LASSO), random forest, and eXtreme Gradient Boosting (XGBoost). Genes selected by at least two algorithms were defined as a high-consensus pool, and genes consistently selected by all four algorithms were prioritized to construct a core signature. Model performance was evaluated by stratified cross-validation in the training cohort and externally validated in four independent cohorts of different etiologies (GSE49541, GSE84044, GSE130970, and GSE276114). Cellular sources of signature genes were characterized using single-cell RNA sequencing (scRNA-seq) datasets GSE136103 (human) and GSE172492 (mouse). For therapeutic discovery, the high-consensus expression profile was queried against the Connectivity Map (CMap) to prioritize compounds predicted to reverse the fibrotic transcriptional program. Withaferin A (WFA) was selected for experimental validation in a carbon tetrachloride (CCl4)-induced mouse LF model and in the transforming growth factor-β1 (TGF-β1)-stimulated human hepatic stellate cell line LX-2. Bulk liver RNA-seq profiling was performed to interrogate WFA-associated molecular changes in vivo. 
Results: We identified a six-gene signature (CLEC4M, COL25A1, ITGBL1, NALCN, PAPPA, and PEG3) that discriminated advanced from mild fibrosis, achieving a mean AUC of 0.890 in internal cross-validation and an average AUC of 0.864 across external validation cohorts. scRNA-seq analysis revealed cell-type-specific expression with prominent enrichment in fibroblast populations. In vivo, WFA markedly attenuated CCl4-induced fibrosis (p < 0.05) and reversed 1314 fibrosis-associated differentially expressed genes (adjusted p < 0.05), which were enriched in fatty acid metabolism and PPAR signaling, as well as extracellular matrix (ECM)–receptor interaction and focal adhesion (adjusted p < 0.05). In vitro, WFA suppressed TGF-β1-induced LX-2 activation, reducing α-SMA and Fibronectin expression (p < 0.05). Conclusions: We report a six-gene signature that robustly predicts advanced LF across etiologies, define its cellular context using single-cell atlases, and validate the anti-fibrotic activity of WFA in both in vivo and in vitro models. Bulk liver RNA-seq and cellular evidence further suggest that WFA-associated effects are linked to lipid metabolic programs, ECM remodeling, and attenuation of hepatic stellate cell activation. Full article
(This article belongs to the Section Medicinal Chemistry)

21 pages, 11307 KB  
Article
A Symmetry-Preserving Extrapolated Primal-Dual Hybrid Gradient Method for Saddle-Point Problems
by Xiayang Zhang, Wenzhuo Li, Bowen Chang, Wei Liu and Shiyu Zhang
Axioms 2026, 15(3), 219; https://doi.org/10.3390/axioms15030219 - 16 Mar 2026
Abstract
The primal-dual hybrid gradient (PDHG) method is widely used for convex–concave saddle-point problems, yet its extrapolated variants are typically asymmetric because only one side is extrapolated. We propose a symmetry-preserving refinement, E-PDHG, which performs dual-side extrapolation followed by an explicit correction step. Under standard step-size conditions, we establish global convergence for all η ∈ (−1, 1) and derive a pointwise (non-ergodic) O(1/t) rate for the last iterate. The method does not improve the asymptotic complexity order of PDHG; instead, it enlarges the practically stable parameter region while retaining the same per-iteration cost. Numerical experiments on image deblurring/inpainting and additional machine learning benchmarks (logistic regression and LASSO) demonstrate improved finite-iteration stability and efficiency. Full article
(This article belongs to the Section Mathematical Analysis)
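For the LASSO benchmark mentioned above, the baseline scheme that E-PDHG refines is standard Chambolle–Pock PDHG with single-side extrapolation. A minimal sketch (step sizes, θ = 1, and problem sizes are illustrative; this is the classical method, not the paper's E-PDHG):

```python
import numpy as np

def pdhg_lasso(A, b, lam, iters=2000):
    """Chambolle-Pock PDHG for min_x 0.5*||Ax - b||^2 + lam*||x||_1,
    via the saddle form min_x max_y <Ax, y> - f*(y) + lam*||x||_1
    with f(z) = 0.5*||z - b||^2, so f*(y) = 0.5*||y||^2 + <b, y>."""
    m, n = A.shape
    L = np.linalg.norm(A, 2)       # spectral norm of A
    tau = sigma = 0.99 / L         # ensures tau * sigma * L**2 < 1
    x = np.zeros(n)
    x_bar = np.zeros(n)
    y = np.zeros(m)
    for _ in range(iters):
        # Dual step: closed-form prox of sigma * f*.
        y = (y + sigma * (A @ x_bar - b)) / (1.0 + sigma)
        x_old = x
        # Primal step: soft-thresholding, the prox of tau * lam * ||.||_1.
        v = x - tau * (A.T @ y)
        x = np.sign(v) * np.maximum(np.abs(v) - tau * lam, 0.0)
        # Single-side extrapolation with theta = 1 (the asymmetry the
        # abstract refers to: only the primal iterate is extrapolated).
        x_bar = 2.0 * x - x_old
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(40, 10))
x_true = np.zeros(10)
x_true[:3] = [2.0, -1.5, 1.0]
b = A @ x_true
x_hat = pdhg_lasso(A, b, lam=0.1)
```

With noiseless data and a small penalty, the recovered support matches the three nonzero entries of `x_true`.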

16 pages, 6943 KB  
Article
Integration of RNA Editing into Multiomics Machine Learning Models for Predicting Drug Responses in Breast Cancer Patients
by Yanara A. Bernal, Alejandro Blanco, Karen Oróstica, Iris Delgado and Ricardo Armisén
Biomedicines 2026, 14(3), 665; https://doi.org/10.3390/biomedicines14030665 - 14 Mar 2026
Abstract
Background: The integration of multi-omics data, such as genomics and transcriptomics, into artificial intelligence models has advanced precision medicine. However, their clinical applicability remains limited due to model complexity. We integrated DNA mutation, RNA expression, and A>I(G) RNA editing data to develop a predictive model for drug response in breast cancer. Methods: We analyzed 104 patients from the Breast Cancer Genome-Guided Therapy Study (ClinicalTrials.gov: NCT02022202). Clinical variables, gene expression, tumor and germline DNA variants, and RNA editing features were integrated into machine learning models to predict therapy response. Generalized linear models (GLM), random forest (RF), and support vector machines (SVM) were trained and evaluated across multiple random 70/30 train-test splits. Feature selection was performed exclusively within the training set using LASSO regularization. Model performance was assessed using the F1-score on independent test sets. The additive effect of RNA editing was evaluated using paired comparisons across identical train/test splits. Results: We characterized the cohort using clinical, mutational, transcriptomic, and RNA editing profiles in 69 non-responders and 35 responders. Across repeated splits, adding RNA editing frequently maintained or modestly improved predictive performance, particularly in expression-based models, with paired analyses showing a statistically significant increase in F1-score. Conclusions: RNA editing represents a complementary molecular layer that can enhance multi-omic models for therapy response prediction in breast cancer, supporting further investigation of epitranscriptomic features in precision oncology. Full article
(This article belongs to the Special Issue Bioinformatics Analysis of RNA for Human Health and Disease)

26 pages, 2590 KB  
Article
A Machine Learning Framework for the Reconstruction of Composite Fatigue and Fracture Properties: A Synthetic Data Study
by Saurabh Tiwari and Aman Gupta
Materials 2026, 19(6), 1131; https://doi.org/10.3390/ma19061131 - 14 Mar 2026
Abstract
This study presents a machine learning framework for the reconstruction of fatigue life and fracture toughness in natural fiber-reinforced composites, evaluating the predictive accuracy of six regression algorithms—Random Forest, Gradient Boosting, Support Vector Machine, Neural Network, Ridge Regression, and Lasso Regression—using a controlled synthetic dataset of 600 samples generated from established Basquin fatigue and Rule of Mixtures fracture equations, incorporating stochastic noise calibrated to experimental scatter (CV = 15–50%), with log-normal noise standard deviation of 0.20 for fatigue life and Gaussian noise standard deviation of 0.15 for fracture toughness. The dataset encompasses eight natural fiber types (flax, jute, sisal, hemp, bamboo, coconut, banana, and pineapple) and five matrix systems (epoxy, polyester, PLA, vinyl ester, and polyurethane). Models were evaluated using a 70-15-15 train–validation–test split with 5-fold cross-validation and exhaustive grid search hyperparameter optimisation. Gradient Boosting achieved R² = 0.93 for fatigue life and Stacking Ensemble achieved R² = 0.87 for fracture toughness, representing 97% and 89% of their respective noise-ceiling values (theoretical maximum R² of 0.96 and 0.98 given the programmed noise levels). The ML models perform supervised function approximation—learning to reconstruct the programmed generation equations rather than discovering novel physical composite behaviour—and function as automated surrogates for the governing equations. Feature importance analysis identified engineered composite indicators, stress amplitude, and fiber length as the most influential parameters. The framework provides a reproducible ML evaluation pipeline as a methodological template for future experimental composite studies. Full article
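Generating Basquin-law fatigue-life samples with log-normal scatter, in the spirit of the synthetic dataset described above, can be sketched as follows; the Basquin coefficients and stress range are made-up illustrative values, not material data from the study:

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative Basquin parameters (assumptions, not data from the study).
sigma_f, b_exp = 400.0, -0.12       # fatigue strength coefficient / exponent
stress = rng.uniform(80.0, 250.0, size=600)   # stress amplitude, MPa

# Basquin law: sigma_a = sigma_f * (2N)^b  =>  N = 0.5 * (sigma_a / sigma_f)^(1/b)
n_clean = 0.5 * (stress / sigma_f) ** (1.0 / b_exp)
# Multiplicative log-normal scatter with sd 0.20 in log space, as stated above.
n_noisy = n_clean * rng.lognormal(mean=0.0, sigma=0.20, size=600)
```

Because the noise is applied multiplicatively in log space, the programmed generation law stays exactly recoverable by regression on log-transformed targets, which is what makes the noise-ceiling R² computable.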

22 pages, 4100 KB  
Article
Explainable Machine Learning-Based Urban Waterlogging Prediction Framework
by Yinghua Deng and Xin Lu
Urban Sci. 2026, 10(3), 156; https://doi.org/10.3390/urbansci10030156 - 13 Mar 2026
Abstract
Urban waterlogging has become a critical challenge to urban sustainability under the combined pressures of rapid urbanization and increasingly frequent extreme weather events. However, traditional predictive models struggle to achieve real-time, point-specific early warning effectively, primarily due to the interference of redundant high-dimensional data and the inability to handle severe data imbalance. This study proposes a lightweight and interpretable machine learning framework for real-time waterlogging hotspot prediction, based on a multi-dimensional feature space. Specifically, we implement a Lasso-based mechanism to distill 37 multi-source variables into five core determinants. This process effectively isolates dominant environmental drivers while filtering noise. To further overcome the recall bottleneck, we propose a Synthetic Minority Over-sampling Technique based on Weighted Distance and Cleaning (SMOTE-WDC) algorithm that incorporates weighted feature distances and density-based noise cleaning. Validating the framework on datasets from Shenzhen (2023–2024), we demonstrate that the Gradient Boosting Decision Tree (GBDT) model integrated with this strategy achieves optimal performance using only five features, yielding an F1-score of 0.808 and an Area Under the Precision-Recall Curve (AUC-PR) of 0.895. Notably, a Recall of 0.882 is attained, representing a 4.6% improvement over the baseline. This study contributes a cost-effective, high-sensitivity approach to disaster risk reduction, advancing predictive urban waterlogging management. Full article
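The interpolation step that plain SMOTE performs — and that SMOTE-WDC extends with weighted distances and density-based cleaning — can be sketched in a few lines. This is the textbook variant on toy data, not the paper's algorithm:

```python
import numpy as np

def smote_naive(X_min, n_new, k=5, rng=None):
    """Plain SMOTE interpolation: each synthetic point lies on the
    segment between a minority sample and one of its k nearest
    minority neighbours."""
    if rng is None:
        rng = np.random.default_rng(0)
    out = np.empty((n_new, X_min.shape[1]))
    for i in range(n_new):
        j = rng.integers(len(X_min))
        d = np.linalg.norm(X_min - X_min[j], axis=1)
        nbrs = np.argsort(d)[1:k + 1]       # skip the point itself
        nb = X_min[rng.choice(nbrs)]
        # Random convex combination of the sample and its neighbour.
        out[i] = X_min[j] + rng.random() * (nb - X_min[j])
    return out

rng = np.random.default_rng(3)
minority = rng.normal(loc=2.0, size=(30, 4))
synthetic = smote_naive(minority, n_new=60, rng=rng)
```

Because every synthetic point is a convex combination of two minority samples, the oversampled class stays inside its original feature envelope; weighting the distances and cleaning noisy regions, as SMOTE-WDC does, changes where along these segments new points concentrate.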

16 pages, 625 KB  
Article
Benchmarking Training Emissions of Regression Models for Vehicle CO₂ Prediction
by Mahmut Turhan, Murat Emeç and Muzaffer Ertürk
Sustainability 2026, 18(6), 2830; https://doi.org/10.3390/su18062830 - 13 Mar 2026
Abstract
The urgency of climate action has intensified the use of machine learning (ML) to predict vehicular CO₂ emissions; however, the training of machine learning models also generates computational emissions that are seldom reported. This study addresses a paradox central to Green AI: can carbon-intensive algorithms be justified for predicting carbon emissions? Using a public dataset of 7385 light-duty vehicles, we trained nine widely used regression models spanning simple linear baselines, polynomial and regularised linear methods, tree-based learners, ensembles, and a neural network. All experiments were instrumented with CodeCarbon to quantify real-time training footprints under a grid carbon intensity of 450 g CO₂/kWh. Across models, test performance ranged from R² = 0.72 to 0.99, yet training emissions varied by four orders of magnitude, from 0.001 g CO₂ (simple linear regression) to 2.3 g CO₂ (XGBoost). Although XGBoost achieved the highest accuracy (R² = 0.9947), it emitted approximately 2300× more CO₂ than regularised polynomial linear models for only a 0.39-point gain in R². Pareto analysis identifies Lasso and Ridge regression with degree-4 polynomial features as sustainability-optimal, reaching R² = 0.9908 at ~0.004 g CO₂. To unify predictive and environmental efficiency, we introduce Accuracy-per-Gram (APG = R²/CO₂) and Marginal Emissions Cost (MEC = ΔCO₂/ΔR²), demonstrating a steep efficiency cliff beyond regularised linear models. At the fleet scale (100 million vehicles with daily retraining), algorithm choice implies ~84 t CO₂/year for XGBoost versus ~0.15 t for Lasso, highlighting the potential climate cost of marginal accuracy gains. We provide a reproducible carbon-tracking pipeline, Green-AI evaluation metrics, and deployment guidance, arguing that computational sustainability must co-determine model selection for emissions-related ML systems.
Most critically, we identify a clear accuracy–carbon emission Pareto frontier, demonstrating that regularised polynomial linear models lie on the sustainability-optimal boundary, while widely used ensemble methods such as XGBoost sit beyond an “efficiency cliff,” where marginal accuracy improvements incur disproportionately high carbon costs. Full article
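The two metrics defined in the abstract are simple ratios; the sketch below applies them to the XGBoost and Lasso figures reported above (R² and grams of CO2 taken from the text):

```python
def apg(r2, co2_g):
    """Accuracy-per-Gram: R^2 per gram of training CO2."""
    return r2 / co2_g

def mec(r2_a, co2_a, r2_b, co2_b):
    """Marginal Emissions Cost of model A over model B:
    extra grams of CO2 per unit of R^2 gained."""
    return (co2_a - co2_b) / (r2_a - r2_b)

# Figures reported in the abstract.
apg_lasso = apg(0.9908, 0.004)   # regularised polynomial Lasso
apg_xgb = apg(0.9947, 2.3)       # XGBoost
mec_xgb_over_lasso = mec(0.9947, 2.3, 0.9908, 0.004)
```

The asymmetry is stark: the Lasso pipeline buys roughly 248 R² units per gram versus about 0.43 for XGBoost, and the marginal cost of XGBoost's extra accuracy runs to hundreds of grams per unit of R² — the "efficiency cliff" the abstract describes.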

21 pages, 4894 KB  
Article
Proposed Role of Circadian Clock Genes in Pathogenesis of HCC: Molecular Subtyping and Characterization
by Zhikui Lu, Yi Zhou, Jian Luo, Zhicheng Liu and Zhenyu Xiao
Biomedicines 2026, 14(3), 645; https://doi.org/10.3390/biomedicines14030645 - 12 Mar 2026
Abstract
Background: Hepatocellular carcinoma (HCC) stands as a prevalent global health issue with increasing incidence and mortality rates. HCC exhibits profound molecular and clinical heterogeneity, which limits the effectiveness of current therapeutic strategies. Circadian rhythm disruption has been implicated in metabolic reprogramming, proliferation, and immune modulation in cancer, but its role in shaping HCC heterogeneity remains poorly defined. Methods: Four public HCC transcriptomic cohorts (TCGA-LIHC, CHCC, LIRI, LICA) were integrated using RMA normalization and ComBat for batch correction. Consensus clustering based on 31 core circadian clock genes (CCGs) identified robust molecular subtypes. Multi-omics characterization—including genomic alterations, pathway activity (GSEA/GSVA), immune microenvironment profiling (CIBERSORT, EPIC, MCP-counter, xCell), and drug-sensitivity prediction (pRRophetic/oncoPredict)—was performed to delineate subtype-specific biological properties. A nine-gene CCG-based RiskScore model was constructed using LASSO Cox regression to internally validate subtype robustness and intra-subtype risk stratification. Results: Using consensus clustering of 31 core CCGs in TCGA-LIHC and three independent validation cohorts (CHCC, LIRI, LICA), we identified three reproducible subtypes—Cluster-1 (metabolic–quiescent), Cluster-2 (transition–intermediate), and Cluster-3 (proliferation–inflammatory)—which were recapitulated across cohorts and showed distinct overall survival (Cluster-3 worst; log-rank p values significant across datasets). Multi-omic characterization revealed that Cluster-3 exhibits the highest tumor mutational burden and CNV burden with enrichment of TP53/AXIN1/TERT alterations, strong activation of cell-cycle, E2F, and G2M programs, and an immune-hot yet immunosuppressed microenvironment enriched for TAMs, Tregs and MDSCs.
By contrast, Cluster-1 shows relative genomic stability, dominant hepatic metabolic signatures (fatty-acid oxidation, bile-acid and xenobiotic metabolism) and an immune-cold phenotype. Single-cell mapping linked ALAS1 expression to malignant hepatocytes predominating in Cluster-1, whereas NONO and CSNK1D localized to stromal (CAFs/TECs) and both malignant/immune compartments, respectively, in Cluster-3, providing a cellular mechanism for subtype-specific metabolism, angiogenesis and immune modulation. Finally, a nine-gene CCG-based RiskScore validated prognostic stratification, and drug-sensitivity predictions indicated subtype-specific therapeutic vulnerabilities (notably increased predicted TKI sensitivity in Cluster-3). Conclusion: This study proposes a robust circadian rhythm-based molecular classification of hepatocellular carcinoma, revealing three biologically and clinically distinct subtypes characterized by divergent genomic alterations, metabolic programs, immune microenvironment states, and prognostic patterns. By integrating bulk and single-cell transcriptomic data, we identify subtype-specific roles of key circadian regulators—including ALAS1, NONO, and CSNK1D—in shaping tumor metabolism, proliferation, stromal remodeling, and immune suppression. These findings highlight circadian dysregulation as a potential upstream factor associated with HCC heterogeneity and provide a conceptual framework for developing subtype-tailored mechanistic studies and circadian-informed therapeutic strategies. Full article
(This article belongs to the Section Molecular Genetics and Genetic Diseases)

17 pages, 1093 KB  
Article
A LASSO-Based Nomogram for Predicting Focal Complications in Brucellosis: A Multicenter Retrospective Cohort Study
by Enes Dalmanoğlu, Sevda Ozdemir Al and Ünsal Bağın
J. Clin. Med. 2026, 15(6), 2180; https://doi.org/10.3390/jcm15062180 - 12 Mar 2026
Abstract
Background: Up to one-third of brucellosis patients develop focal organ involvement, contributing to increased morbidity and therapeutic failure, yet no clinically validated instrument exists to stratify risk at presentation. Methods: In this three-center retrospective cohort from Türkiye (2015–2025), 355 adults with confirmed brucellosis were enrolled. Thirty-two candidate variables spanning demographics, comorbidities, symptoms, routine laboratory values, and composite inflammation indices underwent LASSO-penalized regression with 10-fold cross-validation for predictor selection, after which a nomogram was constructed and internally validated via 1000-iteration bootstrap resampling. Results: Ninety-two patients (25.9%) developed focal complications. Five predictors were retained by LASSO—prognostic nutritional index (PNI), erythrocyte sedimentation rate (ESR), C-reactive protein (CRP), chronic disease stage, and hypertension—and combined with age and sex (retained a priori) into a seven-predictor nomogram. PNI was the strongest contributor (OR = 0.901, 95% CI: 0.857–0.948). Apparent C-statistic reached 0.782 (optimism-corrected 0.762), with a calibration slope of 0.894 and Brier score of 0.154. Decision curve analysis indicated net clinical benefit over the 5–55% threshold probability range. Conclusions: This PNI-anchored LASSO nomogram offers a practical bedside risk stratification instrument for brucellosis-related focal involvement. Prospective external validation across geographically diverse endemic regions is warranted before clinical adoption. Full article
(This article belongs to the Section Infectious Diseases)
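The predictor-selection step this abstract describes (LASSO-penalized regression with 10-fold cross-validation over 32 candidate variables, binary outcome) can be sketched as follows. Everything here is a synthetic stand-in, not the study's cohort: the data, the effect sizes, and which columns are "true" predictors are invented for illustration.

```python
# Minimal sketch: L1-penalized logistic regression with 10-fold CV
# for predictor selection on a binary outcome. Synthetic data only.
import numpy as np
from sklearn.linear_model import LogisticRegressionCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n, p = 355, 32                      # cohort size / candidate variables
X = rng.normal(size=(n, p))

# Outcome driven by a few "true" predictors (columns 0-2); in the
# study these would correspond to variables such as PNI, ESR, CRP.
logits = 1.2 * X[:, 0] - 0.8 * X[:, 1] + 0.6 * X[:, 2] - 1.0
y = rng.binomial(1, 1 / (1 + np.exp(-logits)))

Xs = StandardScaler().fit_transform(X)
lasso = LogisticRegressionCV(
    Cs=20, cv=10, penalty="l1", solver="liblinear", scoring="neg_log_loss"
).fit(Xs, y)

# Predictors with nonzero coefficients survive the penalty.
selected = np.flatnonzero(lasso.coef_.ravel() != 0)
print("retained predictor columns:", selected)
```

The retained columns would then feed a refit (unpenalized) logistic model from which nomogram points are drawn; that second step is omitted here.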
30 pages, 954 KB  
Article
Poisson Mixed-Effects Count Regression Model Based on Double SCAD Penalty and Its Simulation Study
by Keqian Li, Xueni Ren, Hanfang Li and Youxi Luo
Axioms 2026, 15(3), 214; https://doi.org/10.3390/axioms15030214 - 12 Mar 2026
Abstract
This paper focuses on variable selection and parameter estimation for mixed-effects Poisson count regression models. To simultaneously select important variables in both fixed effects and random effects, we propose a double-penalized Poisson count regression model with the Smoothly Clipped Absolute Deviation (SCAD) penalty imposed on both components. To estimate the unknown parameters, we develop a new iterative algorithm called the Double SCAD–Local Quadratic Approximation (DSCAD-LQA) algorithm. Under regularity conditions, the consistency and Oracle property of the proposed estimator are established. Simulation studies are conducted under two types of penalty parameter selection criteria: the Schwarz Information Criterion (SIC) and the Generalized Approximate Cross-Validation (GACV). We evaluate the performance of the proposed method under different levels of correlation among explanatory variables and different covariance structures of random effects. Comparisons are also carried out with the non-penalized model, the single-penalized model, and the double LASSO-penalized model. The results demonstrate that the proposed double SCAD penalty method performs better than the other three methods in terms of important variable selection and coefficient estimation, and is especially effective for sparse models. Full article
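For reference, the SCAD penalty the paper imposes on both the fixed and random effects has the standard piecewise form of Fan and Li (2001). A minimal elementwise implementation, with the conventional default a = 3.7, might look like this; note it is only the generic penalty function, not the paper's full DSCAD-LQA algorithm.

```python
import numpy as np

def scad_penalty(theta, lam, a=3.7):
    """Elementwise SCAD penalty; a = 3.7 is the conventional default
    suggested by Fan and Li (2001)."""
    t = np.abs(np.asarray(theta, dtype=float))
    small = lam * t                                          # |t| <= lam: L1-like
    mid = (2 * a * lam * t - t**2 - lam**2) / (2 * (a - 1))  # quadratic taper
    large = np.full_like(t, (a + 1) * lam**2 / 2)            # flat beyond a*lam
    return np.where(t <= lam, small,
                    np.where(t <= a * lam, mid, large))

# The penalty is continuous and flattens out, so large coefficients
# are essentially unshrunk -- the source of the Oracle property:
print(scad_penalty([0.0, 1.0, 10.0], lam=1.0))
```

Unlike the double LASSO penalty the paper compares against, which grows linearly in the coefficient magnitude, this flat tail is what lets SCAD estimate large effects without bias.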

10 pages, 1377 KB  
Article
Augmenting 16-Run Two-Level Non-Regular Fractional Factorial Designs
by Hanan Alqarni, Douglas C. Montgomery and Carly E. Metcalfe
Mathematics 2026, 14(6), 957; https://doi.org/10.3390/math14060957 - 12 Mar 2026
Abstract
When the resources for experimentation are limited, experimenters usually turn to the class of two-level fractional factorial designs. Resolution III fractional factorial designs are the smallest available, but they alias main effects with two-factor interactions. The class of Resolution IV designs avoids this and provides clear estimates of the main effects, assuming that three-factor and higher-order interactions are not active; however, some two-factor interactions remain aliased with each other. Resolution V designs have no aliasing between main effects and two-factor interactions, assuming that all higher-order interactions are inactive, but they are often too large for situations with six or more factors of interest. For example, with six factors, the only design capable of estimating all main effects and two-factor interactions has 32 runs. Consequently, resource restrictions often require experimenters to use smaller designs of lower resolution, typically Resolution IV. The resulting aliasing of effects often requires additional follow-up experimentation to de-alias all active effects. However, there are situations in which follow-up experiments are impossible to perform due to the unavailability of certain test resources. An alternative to using a 16-run Resolution IV design and a follow-up experiment is to use a design with more than 16 runs as the initial experiment. We investigated a strategy for initially augmenting a class of 16-run Resolution IV designs with either 4 or 8 runs. We use a simulation study to show that this augmentation strategy improves the ability to estimate active factors when standard analysis methods are employed. The analysis methods used in this study are Stepwise, LASSO, and Dantzig. Full article
(This article belongs to the Section D1: Probability and Statistics)

20 pages, 1559 KB  
Article
Prediction of Bulk Density in Laser Powder Bed Fusion of Pure Zinc Using Supervised Machine Learning
by Kristijan Šket, Snehashis Pal, Tomaž Brajlih, Igor Drstvenšek and Mirko Ficko
Metals 2026, 16(3), 309; https://doi.org/10.3390/met16030309 - 11 Mar 2026
Abstract
This work used machine learning to forecast product density and optimize the laser powder bed fusion (LPBF) process for parts made of pure zinc (Zn). A relative density of 90–97% (6.42–6.95 g/cm³) was obtained by varying combinations of key process parameters, including laser power, scanning speed, track overlapping, hatch spacing, and layer thickness. Machine learning provided models for density prediction and a better understanding of the influence of the input parameters. A SHapley Additive exPlanations (SHAP) analysis quantified the contributions of specific features, enhancing model interpretability. Fifty-one experimental runs were used to test several methods, including Bayesian ridge, CatBoost, elastic net, lasso, linear regression, random forest, ridge regression, and XGBoost. CatBoost performed best, with a test coefficient of determination (R²) of 0.893, a mean absolute percentage error (MAPE) of 0.010, and a root mean square error (RMSE) of 0.015. A feature importance analysis showed that laser power (49%) and scanning speed (42%) had the greatest influence, while hatch spacing (5%) and layer thickness (4%) had minimal impact on product density. Selecting a correctly optimized set of process parameters therefore determines the resulting density and can support more efficient LPBF process development. Full article
(This article belongs to the Special Issue Advances in Metal Additive Manufacturing: Process and Performance)
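The modeling workflow can be illustrated with a small synthetic example. Here random forest (one of the models the study compared) stands in for CatBoost, and permutation importance stands in for SHAP; the parameter ranges, effect sizes, and 51-run dataset below are invented for illustration, not the study's measurements.

```python
# Illustrative sketch: predict relative density from LPBF process
# parameters and rank feature influence. Synthetic data only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 51                                   # experimental runs, as in the study
power = rng.uniform(30, 80, n)           # laser power [W] (invented range)
speed = rng.uniform(300, 900, n)         # scanning speed [mm/s]
hatch = rng.uniform(0.05, 0.15, n)       # hatch spacing [mm]
layer = rng.uniform(0.02, 0.05, n)       # layer thickness [mm]

# Synthetic density driven mostly by power and speed, mimicking the
# reported importance ranking.
density = (0.90 + 0.0008 * power - 0.00005 * speed
           + rng.normal(scale=0.004, size=n))
X = np.column_stack([power, speed, hatch, layer])

X_tr, X_te, y_tr, y_te = train_test_split(X, density, random_state=0)
model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)

# Permutation importance on held-out data: drop in R^2 when a
# feature is shuffled.
imp = permutation_importance(model, X_te, y_te, n_repeats=20, random_state=0)
for name, score in zip(["power", "speed", "hatch", "layer"],
                       imp.importances_mean):
    print(f"{name}: {score:.3f}")
```

With only 51 runs, held-out estimates are noisy; the study's bootstrap/test-split details are not reproduced here.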
