Search Results (5,565)

Search Parameters:
Keywords = threshold optimization

19 pages, 1973 KB  
Article
Continuous Smartphone Authentication via Multimodal Biometrics and Optimized Ensemble Learning
by Chia-Sheng Cheng, Ko-Chien Chang, Hsing-Chung Chen and Chao-Lung Chou
Mathematics 2026, 14(2), 311; https://doi.org/10.3390/math14020311 - 15 Jan 2026
Abstract
The ubiquity of smartphones has transformed them into primary repositories of sensitive data; however, traditional one-time authentication mechanisms create a critical trust gap by failing to verify identity post-unlock. Our aim is to mitigate these vulnerabilities and align with the Zero Trust Architecture (ZTA) framework and philosophy of “never trust, always verify,” as formally defined by the National Institute of Standards and Technology (NIST) in Special Publication 800-207. This study introduces a robust continuous authentication (CA) framework leveraging multimodal behavioral biometrics. A dedicated application was developed to synchronously capture touch, sliding, and inertial sensor telemetry. For feature modeling, a heterogeneous deep learning pipeline was employed to capture modality-specific characteristics, utilizing Convolutional Neural Networks (CNNs) for sensor data, Long Short-Term Memory (LSTM) networks for curvilinear sliding, and Gated Recurrent Units (GRUs) for discrete touch. To resolve performance degradation caused by class imbalance in Zero Trust environments, a Grid Search Optimization (GSO) strategy was applied to optimize a weighted voting ensemble, identifying the global optimum for decision thresholds and modality weights. Empirical validation on a dataset of 35,519 samples from 15 subjects demonstrates that the optimized ensemble achieves a peak accuracy of 99.23%. Sensor kinematics emerged as the primary biometric signature, followed by touch and sliding features. This framework enables high-precision, non-intrusive continuous verification, bridging the critical security gap in contemporary mobile architectures. Full article
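The grid-search step described in the abstract above — jointly optimizing modality weights and the decision threshold of a weighted voting ensemble — can be sketched as follows. This is a minimal illustration on synthetic per-modality scores for three modalities (sensor, touch, slide), not the authors' 35,519-sample dataset; the score distributions, grid resolution, and accuracy objective are all assumptions.

```python
import itertools
import random

random.seed(0)

# Synthetic per-modality match scores in [0, 1] for three modalities
# (sensor, touch, slide); label 1 = genuine user, 0 = impostor.
# Illustrative stand-ins, not the paper's data.
def make_sample(genuine):
    base = 0.75 if genuine else 0.35
    return [min(1.0, max(0.0, random.gauss(base, 0.15))) for _ in range(3)]

samples = [(make_sample(g), g) for g in ([1] * 60 + [0] * 60)]

def accuracy(weights, threshold):
    correct = 0
    for scores, label in samples:
        fused = sum(w * s for w, s in zip(weights, scores))
        correct += int((fused >= threshold) == label)
    return correct / len(samples)

# Exhaustive grid over modality weights (constrained to sum to 1)
# and the fused decision threshold.
grid = [i / 10 for i in range(11)]
best = max(
    ((w1, w2, round(1 - w1 - w2, 10), t)
     for w1, w2 in itertools.product(grid, grid) if w1 + w2 <= 1.0
     for t in grid),
    key=lambda cfg: accuracy(cfg[:3], cfg[3]),
)
print("best weights:", best[:3], "threshold:", best[3],
      "accuracy:", accuracy(best[:3], best[3]))
```

With well-separated synthetic scores, the exhaustive search recovers a near-perfect operating point; on real, imbalanced authentication data the objective would typically be a class-weighted metric rather than raw accuracy.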
23 pages, 5085 KB  
Article
Carbon Reduction Benefits and Economic Performance Analysis of Lattice Structural Systems Utilizing Small-Diameter Round Timber as the Primary Material
by Ying Wu, Jianmei Wu, Hongpeng Xu, Jiayi Li and Yuncheng Ji
Buildings 2026, 16(2), 372; https://doi.org/10.3390/buildings16020372 - 15 Jan 2026
Abstract
To address the imbalance between the “ecological advantage” and “economic benefit” of wooden structure buildings, this study examines two structural construction methods utilizing inexpensive and readily available small-diameter round timber as the primary material. It demonstrates the advantages of these two structural systems in terms of material consumption, life cycle carbon emissions, and economic efficiency. Through the research methods and processes of “Preliminary analysis–Proposing the construction system–The feasibility analysis of structural technology–Efficiency assessment”, the sustainable wood structure technical system suitable for the development of China is explored. The main conclusions are as follows: (1) Employing the preliminary analysis method, this paper examines and analyzes construction cases that primarily utilize small-diameter round timber as the main material. It delineates specific construction types based on the characteristics of small-diameter round timber. Additionally, it technically reconstructs the methodology for utilizing small-diameter round timber. (2) Two lattice structural systems are proposed, leveraging the mechanical properties and fundamental morphological characteristics of inexpensive and readily available small-diameter round timber of fast-growing Northeast larch. The technical feasibility of these two small-diameter log structure systems is validated through simulation analysis of their spatial threshold suitability. (3) This study conducted a comprehensive comparison between the two small-diameter round timber structural systems and the conventional grain-parallel glued laminated timber (Cross-Laminated Timber) frame structural systems. The analysis was performed from three perspectives. As the primary structural material, grain-parallel glued laminated timber frame structural systems exhibit significant advantages in terms of timber utilization per unit area of the structural system.
From a life cycle carbon emission analysis perspective, compared to grain-parallel glued laminated timber frame structures, small-diameter round timber structures can achieve carbon emission reductions ranging from 79.19% to 97.74%. Additionally, the unit area cost of small-diameter round timber structures is reduced by 21.02% to 40.42% relative to grain-parallel glued laminated timber frame structures. Consequently, it can be concluded that small-diameter round timber structural systems possess technical feasibility and construction advantages for small and medium-sized buildings, offering practical value in optimizing technical systems to meet the objective needs of ecological construction. Full article
(This article belongs to the Section Architectural Design, Urban Science, and Real Estate)
22 pages, 15052 KB  
Article
Bi-Level Decision-Making for Commercial Charging Stations in Demand Response Considering Nonlinear User Satisfaction
by Weiqing Sun, En Xie and Wenwei Yang
Sustainability 2026, 18(2), 907; https://doi.org/10.3390/su18020907 - 15 Jan 2026
Abstract
With the widespread adoption of electric vehicles, commercial charging stations (CCS) have grown rapidly as a core component of charging infrastructure. Due to the concentrated and high-power charging load characteristics of CCS, a ‘peak on peak’ phenomenon can occur in the power distribution network. Demand response (DR) serves as an important and flexible regulation tool for power systems, offering a new approach to addressing this issue. However, when CCS participates in DR, it faces a dual dilemma between operational revenue and user satisfaction. To address this, this paper proposes a bi-level, multi-objective framework that co-optimizes station profit and nonlinear user satisfaction. An asymmetric sigmoid mapping is used to capture threshold effects and diminishing marginal utility. Uncertainty in users’ charging behaviors is evaluated using a Monte Carlo scenario simulation together with chance constraints enforced at a 0.95 confidence level. The model is solved using the fast non-dominated sorting genetic algorithm, NSGA-II, and the compromise optimal solution is identified via the entropy-weighted Technique for Order Preference by Similarity to an Ideal Solution (TOPSIS). Case studies show robust peak shaving with a 6.6 percent reduction in the daily maximum load, high satisfaction with a mean of around 0.96, and higher revenue with an improvement of about 12.4 percent over the baseline. Full article
(This article belongs to the Section Energy Sustainability)
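The compromise-selection step named in the abstract above — entropy-weighted TOPSIS over a set of Pareto-optimal solutions — can be sketched in a few lines. The four-point Pareto front of (station profit, mean user satisfaction) pairs below is invented for illustration; both criteria are treated as benefit criteria.

```python
import math

# Hypothetical Pareto front of (profit, satisfaction) trade-offs.
front = [(1200.0, 0.90), (1100.0, 0.95), (1000.0, 0.97), (900.0, 0.98)]
n, m = len(front), len(front[0])
cols = list(zip(*front))

# Entropy weighting: criteria with more dispersed values get more weight.
weights = []
for col in cols:
    total = sum(col)
    p = [v / total for v in col]
    e = -sum(pi * math.log(pi) for pi in p) / math.log(n)
    weights.append(1 - e)
s = sum(weights)
weights = [w / s for w in weights]

# TOPSIS: vector-normalize, apply weights, rank by relative closeness
# to the ideal point (far from the worst, near the best).
norms = [math.sqrt(sum(v * v for v in col)) for col in cols]
wn = [[weights[j] * row[j] / norms[j] for j in range(m)] for row in front]
best_pt = [max(col) for col in zip(*wn)]
worst_pt = [min(col) for col in zip(*wn)]

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

closeness = [dist(r, worst_pt) / (dist(r, best_pt) + dist(r, worst_pt))
             for r in wn]
compromise = front[closeness.index(max(closeness))]
print("compromise solution:", compromise)
```

In the paper's pipeline this selection would run on the NSGA-II output rather than a hand-written front, but the entropy-weight and closeness computations are the same.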
30 pages, 7257 KB  
Article
Water Surface Ratio and Inflow Rate of Paddy Polder Under the Stella Nitrogen Cycle Model
by Yushan Jiang, Junyu Hou, Fanyu Zeng, Jilin Cheng and Liang Wang
Sustainability 2026, 18(2), 897; https://doi.org/10.3390/su18020897 - 15 Jan 2026
Abstract
To address the challenge of optimizing hydrological parameters for nitrogen pollution control in paddy polders, this study coupled the Stella eco-dynamics model with an external optimization algorithm and developed a nonlinear programming framework using the water surface ratio and inflow rate as decision variables and the maximum nitrogen removal rate as the objective function. The simulation and optimization conducted for the Hongze Lake polder area indicated that the model exhibited strong robustness, as verified through Monte Carlo uncertainty analysis, with coefficients of variation (CV) of nitrogen outlet concentrations all below 3%. Under the optimal regulation scheme, the maximum nitrogen removal rates (η1, η2, and η4) during the soaking, tillering, and grain-filling periods reached 98.86%, 98.74%, and 96.26%, respectively. The corresponding optimal inflow rates (Q*) were aligned with the lower threshold limits of each growth period (1.20, 0.80, and 0.50 m3/s). The optimal channel water surface ratios (A1*) were 3.81%, 3.51%, and 3.34%, respectively, while the optimal pond water surface ratios (A2*) were 19.94%, 16.30%, and 17.54%, respectively. Owing to the agronomic conflict between “water retention without drainage” and concentrated fertilization during the heading period, the maximum nitrogen removal rate (η3) during this stage was only 37.34%. The optimal channel water surface ratio (A1*) was 2.37%, the pond water surface ratio (A2*) was 19.04%, and the outlet total nitrogen load increased to 8.39 mg/L. Morphological analysis demonstrated that nitrate nitrogen and organic nitrogen dominated the outlet water body. The “simulation–optimization” coupled framework established in this study can provide quantifiable decision-making tools and methodological support for the precise control and sustainable management of agricultural non-point source pollution in the floodplain area. Full article
21 pages, 830 KB  
Article
Predicting Breast Cancer Mortality Using SEER Data: A Comparative Analysis of L1-Logistic Regression and Neural Networks
by Mayra Cruz-Fernandez, Francisco Antonio Castillo-Velásquez, Carlos Fuentes-Silva, Omar Rodríguez-Abreo, Rafael Rojas-Galván, Marcos Avilés and Juvenal Rodríguez-Reséndiz
Technologies 2026, 14(1), 66; https://doi.org/10.3390/technologies14010066 - 15 Jan 2026
Abstract
Breast cancer remains a leading cause of mortality among women worldwide, motivating the development of transparent and reproducible risk models for clinical decision making. Using the open-access SEER Breast Cancer dataset (November 2017 release), we analyzed 4005 women diagnosed between 2006 and 2010 with infiltrating duct and lobular carcinoma (ICD-O-3 8522/3). Thirty-one clinical and demographic variables were preprocessed with one-hot encoding and z-score standardization, and the lymph node ratio was derived to characterize metastatic burden. Two supervised models, L1-regularized logistic regression and a feedforward artificial neural network, were compared under identical preprocessing, fixed 60/20/20 data splits, and stratified five-fold cross-validation. To define clinically meaningful endpoints and handle censoring, we reformulated mortality prediction as fixed-horizon classification at 3 and 5 years, and evaluated discrimination, calibration, and operating thresholds. Logistic regression demonstrated consistently strong performance, achieving test ROC-AUC values of 0.78 at 3 years and 0.75 at 5 years, with substantially superior calibration (Brier score less than or equal to 0.12, ECE less than or equal to 0.03). A structured hyperparameter search with repeated-seed evaluation identified optimal neural network architectures for each horizon, yielding test ROC-AUC values of 0.74 at 3 years and 0.73 at 5 years, but with markedly poorer calibration (ECE 0.19 to 0.23). Bootstrap analysis showed no significant AUC difference between models at 3 years, but logistic regression exhibited greater stability across folds and lower sensitivity to feature pruning. 
Overall, L1-regularized logistic regression provides competitive discrimination (ROC-AUC 0.75 to 0.78), markedly superior probability calibration (ECE below 0.03 versus 0.19 to 0.23 for the neural network), and approximately 40% lower cross-validation variance, supporting its use for scalable screening, risk stratification, and triage workflows on structured registry data. Full article
(This article belongs to the Section Assistive Technologies)
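The three metrics the abstract above compares — ROC-AUC for discrimination, Brier score and expected calibration error (ECE) for calibration — can all be computed directly from predicted probabilities. The probabilities and outcomes below are synthetic, and the equal-width ten-bin ECE is one common convention, not necessarily the paper's exact definition.

```python
import random

random.seed(1)

# Synthetic predicted probabilities and binary outcomes (not SEER data).
y = [1] * 40 + [0] * 60
p = [min(1.0, max(0.0, random.gauss(0.7 if yi else 0.35, 0.15))) for yi in y]

def roc_auc(y, p):
    """Probability a random positive outranks a random negative (ties count 0.5)."""
    pos = [pi for pi, yi in zip(p, y) if yi]
    neg = [pi for pi, yi in zip(p, y) if not yi]
    wins = sum((a > b) + 0.5 * (a == b) for a in pos for b in neg)
    return wins / (len(pos) * len(neg))

def brier(y, p):
    """Mean squared error between predicted probability and outcome."""
    return sum((pi - yi) ** 2 for pi, yi in zip(p, y)) / len(y)

def ece(y, p, bins=10):
    """Expected calibration error over equal-width probability bins."""
    total = 0.0
    for b in range(bins):
        lo, hi = b / bins, (b + 1) / bins
        idx = [i for i, pi in enumerate(p)
               if lo <= pi < hi or (b == bins - 1 and pi == 1.0)]
        if idx:
            conf = sum(p[i] for i in idx) / len(idx)
            acc = sum(y[i] for i in idx) / len(idx)
            total += len(idx) / len(y) * abs(conf - acc)
    return total

print(f"AUC={roc_auc(y, p):.3f}  Brier={brier(y, p):.3f}  ECE={ece(y, p):.3f}")
```

A model can score well on AUC yet poorly on ECE (the neural network's pattern in the abstract): ranking can be right while the probability magnitudes are systematically over- or under-confident.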
21 pages, 2947 KB  
Article
HFSOF: A Hierarchical Feature Selection and Optimization Framework for Ultrasound-Based Diagnosis of Endometrial Lesions
by Yongjun Liu, Zihao Zhang, Tongyu Chai and Haitong Zhao
Biomimetics 2026, 11(1), 74; https://doi.org/10.3390/biomimetics11010074 - 15 Jan 2026
Abstract
Endometrial lesions are common in gynecology, exhibiting considerable clinical heterogeneity across different subtypes. Although ultrasound imaging is the preferred diagnostic modality due to its noninvasive, accessible, and cost-effective nature, its diagnostic performance remains highly operator-dependent, leading to subjectivity and inconsistent results. To address these limitations, this study proposes a hierarchical feature selection and optimization framework for endometrial lesions, aiming to enhance the objectivity and robustness of ultrasound-based diagnosis. Firstly, Kernel Principal Component Analysis (KPCA) is employed for nonlinear dimensionality reduction, retaining the top 1000 principal components. Secondly, an ensemble of three filter-based methods—information gain, chi-square test, and symmetrical uncertainty—is integrated to rank and fuse features, followed by thresholding with Maximum Scatter Difference Linear Discriminant Analysis (MSDLDA) for preliminary feature selection. Finally, the Whale Migration Algorithm (WMA) is applied to population-based feature optimization and classifier training under the constraints of a Support Vector Machine (SVM) and a macro-averaged F1 score. Experimental results demonstrate that the proposed closed-loop pipeline of “kernel reduction—filter fusion—threshold pruning—intelligent optimization—robust classification” effectively balances nonlinear structure preservation, feature redundancy control, and model generalization, providing an interpretable, reproducible, and efficient solution for intelligent diagnosis in small- to medium-scale medical imaging datasets. Full article
(This article belongs to the Special Issue Bio-Inspired AI: When Generative AI and Biomimicry Overlap)
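The filter-fusion idea in the abstract above — scoring features under several criteria, then fusing the per-criterion rankings before threshold pruning — can be sketched with a simple average-rank fusion. The feature names, scores, and pruning threshold here are all invented, and the paper's actual pipeline uses MSDLDA for the thresholding step rather than a fixed cut-off.

```python
# Hypothetical filter scores (higher = more relevant) for five features
# under three criteria, standing in for information gain, chi-square,
# and symmetrical uncertainty.
scores = {
    "f1": (0.9, 0.7, 0.8),
    "f2": (0.2, 0.9, 0.3),
    "f3": (0.8, 0.8, 0.9),
    "f4": (0.1, 0.2, 0.1),
    "f5": (0.5, 0.4, 0.6),
}

def ranks(values):
    """Rank positions, 0 = best (highest score)."""
    order = sorted(range(len(values)), key=lambda i: -values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order):
        r[i] = rank
    return r

names = list(scores)
per_criterion = [ranks([scores[n][j] for n in names]) for j in range(3)]

# Fuse by averaging each feature's rank across the three criteria.
fused = {n: sum(per_criterion[j][i] for j in range(3)) / 3
         for i, n in enumerate(names)}

# Keep features whose mean rank beats an assumed pruning threshold.
keep = [n for n, r in sorted(fused.items(), key=lambda kv: kv[1]) if r < 2.0]
print("fused ranks:", fused)
print("kept:", keep)
```

Rank fusion makes heterogeneous filter scores comparable without rescaling them, since only each criterion's ordering survives.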
50 pages, 12973 KB  
Article
Deepening the Diagnosis: Detection of Midline Shift Using an Advanced Deep Learning Architecture
by Tuğrul Hakan Gençtürk, İsmail Kaya and Fidan Kaya Gülağız
Appl. Sci. 2026, 16(2), 890; https://doi.org/10.3390/app16020890 - 15 Jan 2026
Abstract
Midline shift (MLS) is one of the conditions that strongly affects mortality and prognosis in critical neurological emergencies such as traumatic brain injury (TBI). In particular, MLS over 5 mm requires urgent diagnosis and treatment. Despite widespread tomography imaging capabilities, the lack of radiologists capable of interpreting the images causes delays in the diagnosis process. Therefore, there is a need for AI-supported diagnostic systems specifically tailored to the field for MLS detection. However, the lack of open, disorder-specific datasets in the literature has limited research in the field and hindered the ability to make comparisons against a reliable reference point. As a result, the current state of deep learning (DL) methods in the field is not sufficiently addressed. Within the scope of this study, a DL architecture is proposed for MLS detection as a classification task, with millimeter-scale MLS measurements used for evaluation and stratified analysis. This process also comprehensively addresses the status of MLS detection in contemporary DL architecture. Furthermore, to address the lack of open datasets in the literature, two publicly available datasets originally collected with a primary focus on TBI have been annotated for MLS detection. The proposed model was tested on two different open datasets and achieved mean sensitivity values of 0.9467–0.9600 for the Radiological Society of North America (RSNA) dataset and 0.8623–0.8984 for the CQ500 dataset in detecting MLS presence above 5 mm across two different scenarios. It achieved a mean Area Under the Curve-Receiver Operating Characteristic (AUC-ROC) value of 0.9219–0.9816 for the RSNA dataset and 0.9443–0.9690 for the CQ500 dataset. The aim of the study is to detect not only emergency cases but also small MLSs independent of quantity for patient follow-up, so the overall performance of the proposed model (MLS present/absent) was calculated without an MLS quantity threshold.
Mean F1 Score values of 0.7403 for the RSNA dataset and 0.7271 for the CQ500 dataset were obtained, along with mean AUC-ROC values of 0.8941 for the RSNA dataset and 0.9301 for the CQ500 dataset. The study presents a clinically applicable, optimized, fast, reliable, up-to-date, and successful DL solution for the rapid diagnosis of MLS, intervention in emergencies, and monitoring of small MLS. It also contributes to the literature by enabling a high level of reproducibility in the scientific community with labeled open data. Full article
(This article belongs to the Special Issue Artificial Intelligence in Medicine and Healthcare—2nd Edition)
31 pages, 3317 KB  
Review
Reactive Oxygen Species in Embryo Development: Sources, Impacts, and Implications for In Vitro Culture Systems
by Sajuna Sunuwar and Yun Seok Heo
Life 2026, 16(1), 136; https://doi.org/10.3390/life16010136 - 15 Jan 2026
Abstract
Reactive oxygen species (ROS) are essential regulators of fertilization and early embryo development in mammals, including humans and various animal models, but they exert detrimental effects when produced in excess. In assisted reproductive technologies (ART), particularly in vitro fertilization (IVF), exposure to non-physiological conditions increases oxidative stress (OS), impairing gamete quality, embryo viability, and clinical outcomes. This review synthesizes experimental and clinical studies describing the endogenous and exogenous sources of ROS relevant to embryo development in IVF. Endogenous ROS arise from intrinsic metabolic pathways such as oxidative phosphorylation, NADPH oxidase, and xanthine oxidase. Exogenous sources include suboptimal laboratory conditions characterized by factors such as high oxygen tension, temperature shifts, pH instability, light exposure, media composition, osmolarity, and cryopreservation procedures. Elevated ROS disrupt oocyte fertilization, embryonic cleavage, compaction, blastocyst formation, and implantation by inducing DNA fragmentation, lipid peroxidation, mitochondrial dysfunction, and apoptosis. In addition, the review highlights how parental health factors establish the initial redox status of gametes, which influences subsequent embryo development in vitro. While antioxidant supplementation and optimized culture conditions can mitigate oxidative injury, the precise optimal redox environment remains a subject of ongoing research. This review emphasizes that future research should focus on defining specific redox thresholds and developing reliable, non-invasive indicators of embryo oxidative status to improve the success rates of ART. Full article
(This article belongs to the Special Issue Advances in Livestock Breeding, Nutrition and Metabolism)
15 pages, 1247 KB  
Case Report
Off-Label Ustekinumab and Vedolizumab in Pediatric Anti-TNFα Refractory IBD: Therapeutic Drug Monitoring Insights from a Case Series
by Stefania Cheli, Giulia Mosini, Vera Battini, Carla Carnovale, Sonia Radice, Marta Lebiu, Alessandro Cattoni, Giovanna Zuin and Emilio Clementi
Pharmaceuticals 2026, 19(1), 154; https://doi.org/10.3390/ph19010154 - 15 Jan 2026
Abstract
Background: Vedolizumab and ustekinumab are increasingly used off-label in pediatric inflammatory bowel disease (IBD) unresponsive or refractory to anti–TNFα therapy. Despite their increasing use in clinical practice, evidence in the pediatric population remains limited, especially regarding therapeutic exposure thresholds and the clinical utility of therapeutic drug monitoring (TDM). Methods: We report a series of five pediatric cases with Crohn’s disease or ulcerative colitis treated with ustekinumab or vedolizumab after anti-TNFα failure. Trough drug concentrations, anti-drug antibodies (ADAs), clinical scores (PCDAI/PUCAI), biomarkers (fecal calprotectin, C-reactive protein), and endoscopic findings were assessed longitudinally. Results: In all cases, we observed recurrent discordance between clinical indices (PCDAI/PUCAI), biochemical markers, and endoscopic activity. Clinical improvement frequently correlated with trough concentrations above commonly cited adult-derived reference ranges (>15 µg/mL for vedolizumab; >3 µg/mL for ustekinumab), although this alignment was not uniform across patients. Notably, one patient developed high-titre ADAs with undetectable ustekinumab levels, yet remained clinically stable, suggesting substantial interindividual variability in pharmacokinetics, immunogenicity, and disease control. Conclusions: Ustekinumab and vedolizumab are promising off-label options for pediatric refractory IBD. In this case series, TDM contributed to the interpretation of pharmacokinetic variability and immunogenicity, offering contextual insights that may support dose adjustments and therapeutic decision-making. Integrating TDM with clinical, biochemical, and endoscopic monitoring may help optimize individualized treatment in this complex and vulnerable patient group. Full article
(This article belongs to the Special Issue Pharmacotherapy of Inflammatory Bowel Disease, 2nd Edition)
13 pages, 536 KB  
Article
Multi-Marker Evaluation of Creatinine, Cystatin C and β2-Microglobulin for GFR Estimation in Stage 3–4 CKD Using the 2021 CKD-EPI Equations
by Nurulamin Abu Bakar, Nurul Izzati Hamzan, Siti Nurwani Ahmad Ridzuan, Izatus Shima Taib, Zariyantey Abdul Hamid, Anasufiza Habib and Noor Hafizah Hassan
Int. J. Mol. Sci. 2026, 27(2), 862; https://doi.org/10.3390/ijms27020862 - 15 Jan 2026
Abstract
Chronic kidney disease (CKD) is a progressive disease in which accurate estimation of glomerular filtration rate (GFR) is essential for staging and guiding therapy. Serum creatinine is widely used but influenced by non-renal factors, while cystatin C and β2-microglobulin (β2M) may provide complementary information related to filtration and tubular or inflammatory factors. This study compared the discriminatory performance of creatinine, cystatin C and β2M for separating CKD stage 3 from stage 4 within the 2021 CKD-EPI eGFR framework in 45 adults with CKD stages 3–4. CKD stage classification was defined using the 2021 CKD-EPI creatinine and creatinine–cystatin C equations (eGFRcr, eGFRcr–cys) with a threshold of 30 mL/min/1.73 m2. Receiver operating characteristic (ROC) analysis evaluated each marker’s ability to distinguish moderate from severe CKD. Creatinine showed high diagnostic accuracy (AUC up to 0.98). Cystatin C achieved 100% specificity at the optimal cut-off for severe CKD and showed comparable diagnostic accuracy to creatinine under the eGFRcr–cys framework (AUC 0.978 vs. 0.957). β2M demonstrated AUCs up to 0.97, with sensitivity and specificity above 90%. These findings support a multi-marker evaluation within the 2021 CKD-EPI-based staging, rather than validation against measured GFR. Larger studies incorporating measured GFR and relevant clinical confounders are warranted. Full article
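The optimal cut-off selection described in the abstract above can be illustrated with Youden's J statistic (sensitivity + specificity − 1), a common choice for ROC-based thresholding, though the abstract does not name the criterion the authors used. The marker values below are fabricated and perfectly separated, which real stage 3 vs. stage 4 data would not be.

```python
# Hypothetical biomarker values (higher = worse kidney function);
# groups stand in for CKD stage 4 (severe) vs. stage 3 (moderate).
severe = [2.8, 3.1, 2.5, 3.4, 2.9, 3.0, 2.7]
moderate = [1.6, 1.9, 2.1, 1.8, 2.0, 1.7, 2.2]

def sens_spec(cutoff):
    """Sensitivity/specificity when values >= cutoff are called severe."""
    sens = sum(v >= cutoff for v in severe) / len(severe)
    spec = sum(v < cutoff for v in moderate) / len(moderate)
    return sens, spec

# Maximize Youden's J over every observed value as a candidate cut-off.
candidates = sorted(severe + moderate)
best_cut = max(candidates, key=lambda c: sum(sens_spec(c)) - 1)
sens, spec = sens_spec(best_cut)
print(f"optimal cut-off={best_cut}  sensitivity={sens:.2f}  specificity={spec:.2f}")
```

The "100% specificity at the optimal cut-off" reported for cystatin C corresponds to choosing a threshold above every moderate-stage value, as the toy example shows.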
22 pages, 1943 KB  
Article
Repairing the Urban Metabolism: A Dynamic Life-Cycle and HJB Optimization Model for Resolving Spatio-Temporal Conflicts in Shared Parking Systems
by Jiangfeng Li, Jianlong Xiang, Fujian Chen, Longxin Zeng, Haiquan Wang, Yujie Li and Zhongyi Zhai
Systems 2026, 14(1), 91; https://doi.org/10.3390/systems14010091 - 14 Jan 2026
Abstract
Urban shared parking systems represent a complex socio-technical challenge. Despite vast potential, utilization remains persistently low (<15%), revealing a critical policy failure. To address this, this study develops a dynamic system framework based on Life-Cycle Cost (LCC) and Hamilton-Jacobi-Bellman (HJB) optimization to analyze and calibrate the key policy levers influencing owner participation timing (T*). The model, resolved using finite difference methods, captures the system’s non-linear threshold effects by simulating critical system parameters, including system instability (price volatility, σp), internal friction (management fee, wggt), and demand signals (transaction ratio, Q). Simulations reveal extreme non-linear system responses: a 100% increase in system instability (σp) delays participation by 325.5%. More critically, a 100% surge in internal friction (management fees) delays T* by 492% and triggers a 95% revenue collapse—demonstrating the risk of systemic collapse. Conversely, a 20% rise in the demand signal (Q) advances T* by 100% (immediate participation), indicating the system can be rapidly shifted to a new equilibrium by activating positive feedback loops. These findings support a sequenced calibration strategy: regulators must first manage instability via price stabilization, then counteract high friction with subsidies (e.g., 60%), and amplify demand loops. The LCC framework provides a novel dynamic decision support system for calibrating complex urban transportation systems, offering policymakers a tool for scenario testing to accelerate policy adoption and alleviate urban congestion. Full article
(This article belongs to the Section Complex Systems and Cybernetics)
14 pages, 5251 KB  
Article
Facade Unfolding and GANs for Rapid Visual Prediction of Indoor Daylight Autonomy
by Jiang An, Jiuhong Zhang, Xiaomeng Si, Mingxiao Ma, Chen Du, Xiaoqian Zhang, Longxuan Che and Zhiyuan Lin
Buildings 2026, 16(2), 351; https://doi.org/10.3390/buildings16020351 - 14 Jan 2026
Abstract
Achieving optimal daylighting is a cornerstone of sustainable architectural design, impacting energy efficiency and occupant well-being. Fast and accurate prediction during the conceptual phase is crucial but challenging. While physics-based simulations are accurate but slow, existing machine learning methods often rely on restrictive parametric inputs, limiting their application across free-form designs. This study presents a novel, geometry-agnostic framework that uses only building facade unfolding diagrams as input to a Generative Adversarial Network (GAN). Our core innovation is a 2D representation that preserves 3D facade geometry and orientation by “unfolding” it onto the floor plan, eliminating the need for predefined parameters or intermediate features during prediction. A Pix2pixHD model was trained, validated, and tested on a total of 720 paired diagram-simulation images (split 80:10:10). The model achieves high-fidelity visual predictions, with a mean Structural Similarity Index (SSIM) of 0.93 against RADIANCE/Daysim benchmarks. When accounting for the practical time of diagram drafting, the complete workflow offers a speedup of approximately 1.5 to 52 times compared to conventional simulation. This work provides architects with an intuitive, low-threshold tool for rapid daylight performance feedback in early-stage design exploration. Full article
(This article belongs to the Special Issue Daylighting and Environmental Interactions in Building Design)
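As a rough, self-contained illustration of the SSIM metric this abstract reports (mean 0.93 against RADIANCE/Daysim benchmarks), the sketch below computes a single-window (global) SSIM in NumPy. The two input arrays are synthetic stand-ins, not data from the paper; a real evaluation would normally use a windowed implementation such as `skimage.metrics.structural_similarity`.

```python
import numpy as np

def global_ssim(x, y, data_range=1.0):
    """Single-window SSIM (Wang et al., 2004) without the usual
    sliding Gaussian window; adequate for a quick global comparison."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    num = (2 * mx * my + c1) * (2 * cov + c2)
    den = (mx ** 2 + my ** 2 + c1) * (x.var() + y.var() + c2)
    return num / den

# Hypothetical 64x64 daylight-autonomy maps scaled to [0, 1]:
# a "simulation" gradient and a slightly perturbed "prediction".
simulated = np.linspace(0.0, 1.0, 64 * 64).reshape(64, 64)
noise = 0.02 * np.sin(np.arange(64 * 64)).reshape(64, 64)
predicted = np.clip(simulated + noise, 0.0, 1.0)

ssim = global_ssim(simulated, predicted)
```

Identical images score exactly 1.0; small perturbations pull the score slightly below it, which is why a mean of 0.93 over held-out test images indicates close visual agreement.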
21 pages, 4697 KB  
Article
High-Throughput, Quantitative Detection of Pseudoperonospora cubensis Sporangia in Cucumber by Flow Cytometry: A Tool for Early Disease Diagnosis
by Baoyu Hao, Siming Chen, Weiwen Qiu, Kaige Liu, Antonio Cerveró Domenech, Juan Antonio Benavente Fernandez, Jian Shen, Ming Li and Xinting Yang
Agronomy 2026, 16(2), 205; https://doi.org/10.3390/agronomy16020205 - 14 Jan 2026
Abstract
Cucumber downy mildew, caused by the obligate parasitic oomycete Pseudoperonospora cubensis [(Berkeley & M. A. Curtis) Rostovzev], is a major threat to global cucumber production. Effective disease management relies on rapid and accurate pathogen detection. However, due to the specialized parasitic nature of P. cubensis, conventional methods are often laborious, low-throughput, and inadequate, necessitating the development of a new approach for high-throughput sporangia counting. To address this limitation, we developed a rapid, high-throughput flow cytometry (FCM) assay for the direct quantification of P. cubensis sporangia. The optimal staining protocol involved adding 30 µL of 1000× diluted SYBR Green I to 500 µL of sporangial suspension and incubating at room temperature for 20 min. The flow cytometry parameters were set to a high sample loading speed with a 30 s acquisition time. Instrumental settings included an FL1 (green fluorescence) threshold of 8 × 10⁴ and an SSC (side scatter) threshold of 3 × 10⁵, with low gain. Validation against hemocytometer counts revealed a strong positive correlation (r = 0.8352). The assay demonstrated high reproducibility, with relative standard deviations (RSDs) ranging from 1.96% to 9.84%, and a detection limit of 1–10 sporangia/µL. Operator-dependent variability ranged from 8.85% to 18.79%. These results confirm that the established flow cytometry assay is a reliable and efficient tool for P. cubensis quantification, offering considerable potential for improving cucumber downy mildew monitoring and control strategies.
(This article belongs to the Section Pest and Disease Management)
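The reproducibility figures above are relative standard deviations. Assuming the usual definition (sample standard deviation divided by the mean, expressed as a percentage), a minimal sketch with hypothetical replicate counts, not the study's data:

```python
import numpy as np

def relative_std_dev(counts):
    """RSD (%): sample standard deviation (ddof=1) over the mean, x100."""
    c = np.asarray(counts, dtype=float)
    return c.std(ddof=1) / c.mean() * 100.0

# Hypothetical triplicate FCM counts (sporangia/uL) for one sample.
replicates = [102.0, 98.0, 105.0]
rsd = relative_std_dev(replicates)
```

An RSD in the low single digits, as in this hypothetical triplicate, sits at the better end of the 1.96%–9.84% range the assay reports.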
29 pages, 3701 KB  
Article
Intelligent Prediction Model for Icing of Asphalt Pavements in Cold Regions Oriented to Geothermal Deicing Systems
by Junming Mo, Ke Wu, Jiading Jiang, Lei Qu, Wenbin Wei and Jinfu Zhu
Processes 2026, 14(2), 294; https://doi.org/10.3390/pr14020294 - 14 Jan 2026
Abstract
To address traffic safety hazards from asphalt pavement icing in Xinjiang’s cold regions and inefficiencies of conventional deicing and imprecise geothermal deicing systems, this study focused on local asphalt surfaces. Using “outdoor qualitative screening and indoor quantitative verification”, key variables were identified via controlled tests and their coupling effects on the time to complete icing were quantified through an L16(4⁴) orthogonal test (a 4-factor, 4-level design encompassing 16 test groups). A Backpropagation (BP) neural network model (3 inputs, 5 hidden neurons, and a learning rate of 0.7) optimized with 64 datasets was established to predict the time to complete icing of asphalt pavements, achieving a prediction accuracy (PA) of 90.7% for the time to complete icing and a mean error of merely 0.71 min. Dynamic icing risk thresholds (high/medium/low) were established via K-means clustering and statistical tests, enabling data-driven precise activation and on-demand regulation of geothermal deicing systems. This resolves energy waste and deicing delays, offering technical support for efficient geothermal utilization in cold-region transportation infrastructure, and provides a scalable “factor screening + model prediction” framework for asphalt pavement anti-icing practice.
(This article belongs to the Special Issue Innovative Technologies and Processes in Geothermal Energy Systems)
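The abstract derives high/medium/low icing-risk thresholds via K-means clustering. A minimal 1-D sketch of that idea, on hypothetical icing times rather than the study's data, clusters times into three groups and places risk boundaries midway between adjacent cluster centers:

```python
import numpy as np

def kmeans_1d(values, k=3, iters=20):
    """Tiny 1-D k-means; quantile initialization keeps it deterministic."""
    x = np.asarray(values, dtype=float)
    centers = np.quantile(x, np.linspace(0.25, 0.75, k))
    for _ in range(iters):
        # Assign each value to its nearest center, then recompute means.
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        centers = np.array([x[labels == j].mean() if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return np.sort(centers)

# Hypothetical times to complete icing (minutes) from a test matrix.
times = [3, 4, 5, 12, 13, 14, 25, 27, 30]
centers = kmeans_1d(times)                  # one center per risk level
boundaries = (centers[:-1] + centers[1:]) / 2.0  # high/medium/low cutoffs
```

A new predicted icing time would then be compared against the two boundaries to pick the risk level that drives activation of the geothermal system.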
27 pages, 5583 KB  
Article
Influence of Filling Rate and Support Beam Optimization on Surface Subsidence in Sustainable Ultra-High-Water Backfill Mining: A Case Study
by Xuyang Chen, Xufeng Wang, Chenlong Qian, Dongdong Qin, Zechao Chang, Zhiwei Feng and Zhijun Niu
Sustainability 2026, 18(2), 854; https://doi.org/10.3390/su18020854 - 14 Jan 2026
Abstract
As a key sustainable green-mining technology, ultra-high-water backfill mining is widely used to control surface subsidence and sustain extraction of constrained coal seams. Focusing on the Hengjian coal mine in the Handan mining area, this study uses physical modeling and industrial tests to clarify surface subsidence under different filling rates and to identify the rock layers that hydraulic supports must control at various equivalent mining heights. A method is proposed to improve the filling rate by optimizing the thickness of the hydraulic support canopy through topological analysis. Results show that, compared with a filling rate of 85%, a 90% filling rate reduces subsidence of the basic roof, key layer, and surface by 51%, 57%, and 63%, respectively, while industrial trials verified that a higher filling rate significantly controls surface subsidence. The equivalent mining height thresholds for instability of the immediate roof and the high basic roof at the 2515 working face are 0.44 m and 1.26 m, respectively. Reducing the trailing beam thickness by 10 cm can theoretically raise the filling rate of the 2515 working face by about 2%, offering guidance for similar mines.