Search Results (321,318)

Search Parameters:
Keywords = factors

26 pages, 25020 KB  
Article
Assessing Ecological Vulnerability in the Northern Guangdong Mountains Using Deep Learning
by Wenwen Tong, Zongwang Yi, Hao Chen, Hong Liu, Jinghua Zhang, Wenlong Gao, Zining Liu and Yu Guo
Sustainability 2026, 18(9), 4472; https://doi.org/10.3390/su18094472 - 1 May 2026
Abstract
Ecological vulnerability assessment serves as a prerequisite for ecological governance, yet evaluating large-scale ecological vulnerability remains challenging. To address this challenge, this study integrates geological elements into ecological vulnerability assessment, taking the Ruyuan Area in the Northern Guangdong Mountains, China, as a case study. The area faces ecological hazards such as land desertification and soil erosion, indicating severe governance challenges. This study selected 14 ecological vulnerability factors and constructed assessment models based on Deep Neural Networks (DNNs) and Convolutional Neural Networks (CNNs). A total of 800 ecological vulnerability sampling points were obtained by combining field survey data with remote sensing imagery. The models were trained using binary vulnerability labels. The resulting continuous probability outputs were then classified into five vulnerability levels using the natural breaks method to generate the final ecological vulnerability map. It should be noted that the multi-level vulnerability map represents graded probability-based differentiation rather than supervised multi-class prediction. Model performance was validated using three metrics: the Area Under the Receiver Operating Characteristic Curve (AUC-ROC), Mean Absolute Error (MAE), and Root Mean Square Error (RMSE). The CNN model (AUC = 0.916) outperformed the DNN model (AUC = 0.895). According to the CNN-based classification results, non-vulnerable, slightly vulnerable, mildly vulnerable, moderately vulnerable, and highly vulnerable areas accounted for 36.19%, 22.85%, 14.24%, 12.31%, and 14.41% of the total area, respectively. High ecological vulnerability zones were concentrated in Daqiao, Luoyang, Dabu, and parts of Rucheng towns, with soil parent material and vegetation coverage identified as the main contributing factors, among which parent material was the most important.
This finding underscores the notable impact of geological factors on local ecological vulnerability. Based on these results, nine ecological–geological subareas were delineated, and targeted ecological protection and restoration recommendations were proposed. This study, employing machine learning techniques, constructed an ecological vulnerability assessment model incorporating geological elements, thereby providing scientific support for targeted ecological governance in the study area. Full article
(This article belongs to the Topic Water-Soil Pollution Control and Environmental Management)
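The three validation metrics named in the abstract above (AUC-ROC, MAE, RMSE) are straightforward to compute from scratch. As a rough illustration only, not the authors' code, here is a pure-Python sketch with invented binary vulnerability labels and model probabilities:

```python
import math

def auc_roc(labels, scores):
    """AUC via the Mann-Whitney U statistic: the probability that a random
    positive sample is scored above a random negative one (ties count 0.5)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def mae(y_true, y_pred):
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def rmse(y_true, y_pred):
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

# Toy example: binary vulnerability labels vs. predicted probabilities.
labels = [1, 1, 0, 0, 1, 0]
probs  = [0.9, 0.7, 0.4, 0.2, 0.6, 0.5]
print(auc_roc(labels, probs))  # 1.0 here: every positive outscores every negative
```

In practice a library implementation (e.g., scikit-learn's metrics) would be used; the hand-rolled version just makes the definitions explicit.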

25 pages, 2126 KB  
Article
Crying Wolf in Cyberspace: A Cybersecurity Dynamics Study of Alarm Fatigue Attacks
by Enrico Barbierato
Information 2026, 17(5), 434; https://doi.org/10.3390/info17050434 - 1 May 2026
Abstract
Modern cyber–physical infrastructures rely heavily on alarm and notification systems to direct human attention when abnormal conditions occur. These mechanisms support timely and safe responses by informing operators and occupants about potential hazards. At the same time, research in human factors has shown that repeated or excessive alerts can weaken vigilance, slow reactions, and reduce confidence in warning systems. This behavioral pattern is commonly described as alarm fatigue. This paper examines how that vulnerability can be exploited intentionally. We refer to this adversarial strategy as alarm poisoning: the deliberate injection of false or misleading alerts in order to increase alarm pressure, erode trust in the monitoring infrastructure, and degrade organizational responsiveness over time. To study this process, we develop a stochastic Cybersecurity Dynamics model representing the interaction among attackers, defenders, alarm infrastructure, and a population of employees. Employee behavior is modeled through evolving trust and fatigue levels, while the overall system is formulated as a continuous–time Markov chain and simulated using the Gillespie Stochastic Simulation Algorithm. A Monte–Carlo campaign is used to analyze the resulting socio–technical dynamics under alternative attacker strategies. The study evaluates time-dependent trust, fatigue, and alarm-pressure trajectories, the distribution of times to behavioral collapse, and defender timing through Trust–Resilience–Agility–Mitigation (TRAM) metrics. The revised analysis also includes replication-sufficiency diagnostics, one-at-a-time sensitivity analysis, and threshold-robustness checks for the collapse criterion. The results show that false alarms with high perceived severity drive alarm pressure upward and degrade trust faster than nuisance-dominated campaigns, even when the total fake-alarm intensity is held constant across strategies. 
Collapse timing remains highly variable across stochastic realizations, and a non-negligible fraction of runs do not reach the collapse threshold within the simulation horizon. Sensitivity analysis indicates that the main qualitative ranking of attacker strategies is robust across most tested perturbations, with fatigue recovery and defender escalation emerging as particularly influential mechanisms. Overall, the findings support the view that alarm poisoning is a credible socio–technical attack vector and highlight the importance of rapid mitigation, robust alarm management, and human-centered defensive design in cyber–physical security systems. Full article
(This article belongs to the Special Issue Generative AI for Data Privacy and Anomaly Detection)
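The abstract above simulates a continuous-time Markov chain with the Gillespie Stochastic Simulation Algorithm. As a loose illustration of the algorithm itself, not the paper's model, here is a minimal sketch on a toy birth-death "alarm pressure" counter; the two rates (fake-alarm injection, alarm clearing) and all numbers are invented:

```python
import random

def gillespie(x0, rates, horizon, seed=0):
    """Minimal Gillespie SSA for a birth-death process on one counter
    (here: outstanding alarm pressure). `rates` = (injection, recovery)."""
    rng = random.Random(seed)
    t, x = 0.0, x0
    traj = [(t, x)]
    lam_in, lam_out = rates
    while t < horizon:
        a1 = lam_in               # propensity: attacker injects a fake alarm
        a2 = lam_out * x          # propensity: one outstanding alarm is cleared
        a0 = a1 + a2
        t += rng.expovariate(a0)  # exponential waiting time to the next event
        if t >= horizon:
            break
        x += 1 if rng.random() < a1 / a0 else -1
        traj.append((t, x))
    return traj

traj = gillespie(x0=5, rates=(2.0, 0.5), horizon=50.0)
```

A Monte Carlo campaign like the paper's would repeat this over many seeds and aggregate the trajectories.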

28 pages, 3985 KB  
Article
Analysis and Prediction of Vegetation Phenological Changes in Changbai Mountain Nature Reserve Based on MODIS and PSO-LSSVM
by Anqi He, Jie Zhang, Lv Zhou, Fei Yang, Yanzhao Yang, Xianbin Wang, Xin Wang and Jiasi Yan
Appl. Sci. 2026, 16(9), 4452; https://doi.org/10.3390/app16094452 - 1 May 2026
Abstract
Vegetation phenology is a key indicator of ecosystem responses to climate change. This study investigates the spatiotemporal dynamics of vegetation phenology in the Changbai Mountain Nature Reserve from 2001 to 2025 and projects future changes under CMIP6 scenarios using a particle swarm optimization–least squares support vector machine (PSO-LSSVM) model. The results show that the start of the growing season (SOS) exhibits an advancing trend, while the end of the growing season (EOS) is delayed, leading to an overall extension of the length of the growing season (LOS). Spatially, phenological patterns are strongly controlled by elevation, with higher elevations characterized by later SOS and shorter LOS. Correlation analysis indicates that SOS is primarily driven by spring temperature, whereas EOS is influenced by both temperature and precipitation, showing more complex responses. Notably, a negative relationship between autumn temperature and EOS suggests that factors other than temperature may play an important role. Future projections reveal that phenological changes intensify under higher-emission scenarios. By the end of the 21st century, SOS is projected to advance by 0.8–3.6 days, EOS to be delayed by 0.8–7.4 days, and LOS to extend by 1.6–11.8 days. Vegetation-type-based analysis further demonstrates significant heterogeneity in phenological responses. These findings improve the understanding of vegetation phenology in mountain ecosystems and provide a useful reference for assessing ecosystem responses under future climate change. Full article
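A phenological trend such as an advancing SOS is typically quantified as an ordinary least-squares slope of day-of-year against calendar year. A small sketch with a hypothetical series (the data are invented, not the paper's):

```python
def trend_per_decade(years, doy):
    """Ordinary least-squares slope of a phenology series (day of year
    vs. calendar year), reported in days per decade."""
    n = len(years)
    my, md = sum(years) / n, sum(doy) / n
    num = sum((y - my) * (d - md) for y, d in zip(years, doy))
    den = sum((y - my) ** 2 for y in years)
    return 10.0 * num / den

# Hypothetical SOS series advancing 0.5 day/year -> about -5 days/decade.
years = list(range(2001, 2011))
sos   = [120.0 - 0.5 * (y - 2001) for y in years]
print(trend_per_decade(years, sos))  # -5.0
```

A negative slope means an earlier (advancing) SOS; a positive slope on an EOS series means a later (delayed) end of season.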
14 pages, 3920 KB  
Article
Evaluation of Mechanical Properties of Zirconia-Based Composites Designed for Biomedical Applications
by Agnieszka Wojteczko, Sebastian Komarek and Magdalena Ziąbka
Appl. Sci. 2026, 16(9), 4455; https://doi.org/10.3390/app16094455 - 1 May 2026
Abstract
In this study, bioceramic composites based on zirconia (ZrO2) were synthesized and characterized in terms of mechanical properties. Two zirconia powders with different grain sizes were used to prepare the composites. A commercial zirconia micropowder (Tosoh) was used as a base for the composites modified with bioactive glass (BG), copper-doped bioactive glass (BGCu), and hexagonal boron nitride (hBN), sintered at 1450 °C. The composites with the addition of hydroxyapatite, for which the sintering temperature was 1150 °C, were independently fabricated using a zirconia nanopowder prepared via co-precipitation and hydrothermal methods to achieve high densification and avoid hydroxyapatite decomposition. Mechanical performance of these composites was assessed with regard to biaxial flexural strength, Vickers hardness (HV), and fracture toughness (KIc). The reference 3Y-TZP material exhibited Vickers hardness (11.8 GPa) and fracture toughness (6.1 MPa∙m1/2) values typical for dense tetragonal zirconia ceramics. The addition of all bioactive phases resulted in significant alterations in mechanical properties. Specifically, incorporating 20 wt.% HAp led to a threefold decrease in hardness and a 40% reduction in fracture toughness, while increasing the HAp content to 40 wt.% further reduced these properties. Nonetheless, the fracture toughness of these composites remained higher than that of pure hydroxyapatite materials. The incorporation of BG and BGCu reduced the hardness values by 45% and 30%, respectively, compared to 3Y-TZP. The most significant deterioration of the properties was observed for the 3Y-TZP-hBN composite. The 3Y-TZP–BGCu composite exhibited fracture toughness (5.9 MPa∙m1/2) representing 95% of the toughness of pure zirconium dioxide, thereby showing the smallest loss of toughness among the composites with bioactive additives. 
A slightly lower fracture toughness value (5.3 MPa∙m1/2) was also observed in the composite with bioglass but lacking the copper additive. This, combined with a relatively small decrease in hardness in both cases, suggests high durability for implantology applications, thus marking these materials as the most promising among the composites studied. Full article
(This article belongs to the Special Issue Nanomaterials and Surface Science)

25 pages, 8598 KB  
Article
Do Data Factors Empower the Realization of Ecological Product Value? Evidence from China
by Hsu-Hua Lee and Ta-Yu Chung
Sustainability 2026, 18(9), 4464; https://doi.org/10.3390/su18094464 - 1 May 2026
Abstract
With the deepening construction of ecological civilization, the realization of ecological product value, referring to the value derived from ecosystems’ material goods, regulation, support, and cultural services, has become a strategic key point for national sustainable development. Data factors, distinguished from digital technologies as the actual resources used in production, exchange, and consumption, are becoming increasingly important as a new catalyst for empowering the realization of ecological product value. Drawing on panel data spanning 2011 to 2023 across China’s 31 provinces, this research employs the entropy weight method to construct evaluation indices for both the development of data factors and the realization of ecological product value, deriving weights from the data’s intrinsic variability. The effect of data factors on the realization of ecological product value is examined using a two-way fixed effects framework. Our outcomes are presented below. First, data factors can significantly promote the realization of ecological product value, and this conclusion is supported by a series of robustness checks and endogeneity treatments. Second, the mechanism analysis reveals that data factors empower the realization of ecological product value through new quality productive forces, energy consumption intensity, and innovation and entrepreneurship. Third, results from the threshold model suggest that the promoting effect of data factors on the realization of ecological product value is subject to a threshold constraint, characterized by diminishing marginal returns beyond this point. Fourth, regarding regional disparities, the results indicate that data factors primarily drive ecological product value realization in the central region, as it is at a critical stage of digital transformation, with a secondary effect in the east, while their influence in the western region remains insignificant. 
These findings provide important guidance for integrating data factors and ecological resources to achieve sustainable development. Full article

22 pages, 2363 KB  
Article
Machine Learning and Ranking-Based Evaluation for Prioritizing High-Potency Ionizable Lipids in LNP-Mediated RNA Delivery
by Mostafa Zahed, Maryam Skafyan and Morteza Rasoulianboroujeni
Algorithms 2026, 19(5), 353; https://doi.org/10.3390/a19050353 - 1 May 2026
Abstract
The application of machine learning (ML) models to accelerate the discovery of high-transfection-potency ionizable lipids has gained significant momentum in advancing lipid nanoparticle (LNP)-mediated RNA delivery. In the present study, we adopt a screening-oriented evaluation framework based on early-recognition ranking metrics tailored to high-throughput discovery. Model performance was assessed using the enrichment factor (EF), normalized discounted cumulative gain (NDCG), and HitRate at the top 10% of the ranked list, with uncertainty quantified via 1000 nonparametric bootstrap resamples. To assess robustness of conclusions, additional analyses were conducted at the top 1% and top 5% thresholds, reflecting increasingly stringent prioritization scenarios. Four predictive models—XGBoost, Random Forest, Elastic Net, and Quantile Regression Forest—were evaluated across three molecular feature representations, circular Morgan fingerprints, expert-crafted descriptors, and Grover graph embeddings, using a held-out test set. Across all models and thresholds, Morgan fingerprints consistently yielded superior early-recognition performance. The best-performing configuration—XGBoost with Morgan fingerprints—achieved EF@10% = 4.850 (95% CI [3.182, 6.818]), NDCG@10% = 0.628 (95% CI [0.234, 0.909]), and HitRate@10% = 0.493 (95% CI [0.318, 0.683]), corresponding to nearly fivefold enrichment over random selection and identification of highly potent lipids in approximately half of the prioritized candidates. Threshold-sensitivity analyses revealed that although stricter cutoffs (top 1% and top 5%) exhibit greater variability, the relative performance ordering of molecular representations remains stable. Bootstrap distributional comparisons further demonstrate that Morgan fingerprints provide not only higher but also more consistent screening performance than expert descriptors and Grover embeddings. 
Collectively, these results indicate that molecular representation—rather than model architecture—is the primary determinant of early-recognition performance in ionizable lipid discovery and that this conclusion is robust across multiple screening depths. Full article
(This article belongs to the Special Issue Integrating Machine Learning and Physics in Engineering and Biology)
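The enrichment factor and hit rate at a top fraction of a ranked list, used throughout the study above, are simple ranking statistics: hit rate is the fraction of true hits among the top-ranked candidates, and the enrichment factor is that fraction divided by the base rate of hits in the whole library. A sketch on a toy screen (scores and hit labels invented):

```python
def topk_metrics(scores, is_hit, frac=0.10):
    """Early-recognition metrics for a ranked screen: enrichment factor
    (EF) and hit rate within the top `frac` of the ranked list."""
    n = len(scores)
    k = max(1, int(n * frac))
    order = sorted(range(n), key=lambda i: scores[i], reverse=True)
    hits_top = sum(is_hit[i] for i in order[:k])
    hit_rate = hits_top / k
    base_rate = sum(is_hit) / n   # chance level for random selection
    return hit_rate / base_rate, hit_rate

# Toy ranked library of 20 candidates with 4 true hits, 2 ranked on top.
scores = [20 - i for i in range(20)]
is_hit = [1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0]
ef, hr = topk_metrics(scores, is_hit, frac=0.10)
print(ef, hr)  # 5.0 1.0
```

Bootstrap confidence intervals like those reported would come from recomputing these metrics over resampled test sets.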
22 pages, 3310 KB  
Review
Research on the Hippo Pathway in Cancer
by Fengqiu Dang, Shuhuan Dai, Tianqi Zhao, Rong Zhang, Long Chen and Yongxiang Zhao
Cells 2026, 15(9), 833; https://doi.org/10.3390/cells15090833 - 1 May 2026
Abstract
The Hippo pathway, a central pathway regulating cell proliferation, apoptosis, stem cell homeostasis and organ development, is closely associated with the onset and progression of tumors, metabolic reprogramming, drug resistance and immune evasion when it is abnormally inactivated. The pathway not only directly promotes tumor cell proliferation, maintains cancer stem cell properties, and mediates metabolic reprogramming and treatment resistance, but also reshapes the tumor microenvironment (TME) by regulating the formation, heterogeneity and function of cancer-associated fibroblasts (CAFs). Furthermore, it mediates tumor immunosuppression and immune evasion by modulating programmed death-ligand 1 (PD-L1) expression, T-cell function, macrophage polarization and cytokine secretion. At the same time, inflammatory cytokines, growth factors, metabolites and physical signals within the TME can negatively regulate the activity of the Hippo pathway, creating a pro-tumor positive feedback loop. This article provides a systematic review of the composition and regulation of the Hippo pathway, its mechanisms of action in the biological behavior of tumor cells and interactions within the tumor microenvironment, as well as progress in the development of drugs targeting this pathway. It offers a theoretical basis for a deeper understanding of the role of the Hippo pathway in tumors and for the development of novel anti-tumor therapeutic strategies. Full article
16 pages, 2959 KB  
Article
Optimization of Injection-Production Volumes in Underground Gas Storage Based on Improved Non-Dominated Sorting Genetic Algorithm II
by Xufeng Yang, Fayang Jin, Yu Fu and Chao Chen
Eng 2026, 7(5), 215; https://doi.org/10.3390/eng7050215 - 1 May 2026
Abstract
As critical infrastructure for seasonal natural gas peak-shaving, the operation of underground gas storage (UGS) must consider multiple factors including risk, economics, efficiency, and technology. Traditional UGS operation schemes are heavily dependent on subjective experience and lack intelligent methods to fully leverage historical data. This shortcoming leads to higher risks and increased compressor energy consumption. Taking S UGS as an example, the sensitivity factors of injection-production capacity are analyzed based on geological development and multi-cycle injection-production operation data. With injection-production rates as decision variables and while considering safety and economic factors, objective functions and constraints are defined from the formation, wellbore, and surface. The proposed injection and production cycles are both 15 days, and the total injection and production volumes are 1200 × 10⁴ m³ and 800 × 10⁴ m³. An optimization model was constructed using the improved NSGA-II (INSGA-II) algorithm and the TOPSIS method to determine the optimal gas injection-production volume allocation scheme. Compared with the initial scheme, the optimal injection-production volume allocation scheme reduces compressor energy consumption by 49.19% and 49.80% and formation pressure standard deviation by 78.88% and 77.21%, respectively. This effectively lowers injection-production energy consumption while improving safety, thereby ensuring the long-term safe and efficient operation of UGS. Full article
(This article belongs to the Section Chemical, Civil and Environmental Engineering)
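TOPSIS, used above to select the final scheme from the Pareto set, ranks alternatives by relative closeness to an ideal solution. A generic sketch with hypothetical scheme data and criteria (not the paper's actual decision matrix):

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives by TOPSIS: vector-normalize and weight the decision
    matrix, find the ideal and anti-ideal solutions, then score each
    alternative by relative closeness in [0, 1] (higher is better).
    `benefit[j]` is True for larger-is-better criteria."""
    m = len(matrix[0])
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(m)]
    v = [[weights[j] * row[j] / norms[j] for j in range(m)] for row in matrix]
    ideal = [max(col) if benefit[j] else min(col)
             for j, col in enumerate(zip(*v))]
    worst = [min(col) if benefit[j] else max(col)
             for j, col in enumerate(zip(*v))]
    scores = []
    for row in v:
        d_pos = math.sqrt(sum((x - i) ** 2 for x, i in zip(row, ideal)))
        d_neg = math.sqrt(sum((x - w) ** 2 for x, w in zip(row, worst)))
        scores.append(d_neg / (d_pos + d_neg))
    return scores

# Toy schemes: (energy use: cost criterion, safety margin: benefit criterion).
schemes = [[100.0, 0.9], [60.0, 0.8], [90.0, 0.3]]
s = topsis(schemes, weights=[0.5, 0.5], benefit=[False, True])
print(max(range(3), key=lambda i: s[i]))  # scheme 1 ranks best
```

In the paper's workflow the NSGA-II variant would generate the candidate schemes and TOPSIS would pick among them; this sketch only shows the latter step.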

31 pages, 44324 KB  
Article
Performance Evaluation of Post-Quantum Digital Signature in QPSK- and 16QAM-Based WDM Communication Systems
by Duaa J. Khalaf, Arwa A. Moosa and Tayseer S. Atia
Computers 2026, 15(5), 290; https://doi.org/10.3390/computers15050290 - 1 May 2026
Abstract
The integration of post-quantum digital signature (PQDS) algorithms into coherent wavelength-division multiplexing (WDM) optical networks introduces a non-negligible cryptographic overhead that fundamentally alters physical-layer performance characteristics. Unlike conventional studies that treat security and transmission independently, this work provides a cross-layer evaluation of PQDS-induced payload expansion and its direct impact on coherent optical system behavior under realistic, DSP-aligned conditions. A structured and reproducible evaluation framework is proposed to systematically analyze this interaction across multiple transmission scenarios, ranging from a single-channel QPSK baseline to a 16-channel WDM system employing both QPSK and 16QAM modulation formats. Key system parameters—including launch power, local oscillator power, bit rate, and fiber length—are jointly optimized, while performance is rigorously assessed in terms of bit error rate (BER), Q-factor, and maximum transmission reach. The results demonstrate a clear performance degradation trend driven by both spectral efficiency scaling and cryptographic payload expansion. The single-channel QPSK system achieves a maximum reach of 203 km, which decreases to 194 km in the 16-channel WDM QPSK configuration due to inter-channel interference and nonlinear effects. In contrast, the 16-channel WDM 16QAM system exhibits a significantly reduced reach of 103 km, reflecting its heightened sensitivity to noise, chromatic dispersion, and fiber nonlinearities. Furthermore, increased payload size associated with PQDS schemes is shown to exacerbate transmission impairments by extending frame duration and intensifying inter-channel interactions. These findings identify PQDS-induced overhead as a critical system-level constraint that directly governs transmission efficiency, scalability, and performance limits. 
The study highlights the necessity of cross-layer co-design strategies, where cryptographic mechanisms and physical-layer parameters are jointly optimized to enable efficient, reliable, and quantum-safe coherent optical communication systems. Full article
(This article belongs to the Special Issue Emerging Trends in Network Security and Applied Cryptography)
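The BER and Q-factor metrics in the abstract above are linked, for a Gaussian-noise channel, by the standard relation BER = 0.5·erfc(Q/sqrt(2)), with Q often quoted in dB as 20·log10(Q). A small illustrative sketch (the specific Q values are examples, not the paper's results):

```python
import math

def ber_from_q(q_linear):
    """Theoretical BER for Gaussian noise: BER = 0.5 * erfc(Q / sqrt(2)).
    The commonly quoted Q = 6 corresponds to a BER of about 1e-9."""
    return 0.5 * math.erfc(q_linear / math.sqrt(2.0))

def q_db(q_linear):
    """Q-factor in dB (20*log10, since Q is an amplitude ratio)."""
    return 20.0 * math.log10(q_linear)

print(ber_from_q(6.0))  # ~1e-9
print(q_db(6.0))        # ~15.56 dB
```

This mapping is why a small drop in Q-factor (from, say, nonlinearities or PQDS payload expansion) translates into a steep rise in BER and a shorter maximum reach.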
23 pages, 313 KB  
Article
Trust, Education, and Artificial Intelligence: Adoption, Explainability, and Epistemic Authority Among Teacher-Education Undergraduates in Greece
by Epameinondas Panagopoulos, Charalampos M. Liapis, Anthi Adamopoulou, Ioannis Kamarianos and Sotiris Kotsiantis
Algorithms 2026, 19(5), 350; https://doi.org/10.3390/a19050350 - 1 May 2026
Abstract
This study investigates how teacher-education undergraduates in Greece use, evaluate, and trust Artificial Intelligence (AI) in higher education, with particular attention to the gap between widespread adoption and limited epistemic trust. The topic is important because generative AI is rapidly entering universities, reshaping learning practices, academic integrity, and the legitimacy of knowledge, while learners often rely on systems whose outputs are not easily verifiable. The study focuses on future teachers because they are both current users of AI in higher education and likely future mediators of its use in school settings. Addressing this problem, the study contributes empirical evidence on how AI adoption relates to epistemic authority and institutional legitimacy within teacher education rather than across university students in general. A mixed-methods design was employed using a structured questionnaire completed by 363 teacher-education undergraduates from the University of Patras and the University of Ioannina in Greece; the sample was predominantly women (86.0%) and first-year students (92.6%). Quantitative responses were analyzed statistically, open-ended answers were examined thematically, and factor analysis was used to identify latent attitudinal dimensions. The findings indicate very high AI use in everyday life (92.6%) and study practices (81.3%), but only moderate trust: 1.4% reported complete trust and 12.1% generally trusted AI-generated answers. Six dimensions explained 61.73% of total variance, pointing to a layered attitudinal structure within this teacher-education population, consistent with an adoption–trust paradox and with the need for transparent, verifiable, human-supervised educational AI. 
The observed verification-based trust calibration may partly reflect an emerging pedagogical orientation toward source checking and responsibility for knowledge mediation, but given the strong concentration of first-year students, this should be interpreted as characteristic of early-stage teacher education rather than of university students more broadly. Full article

26 pages, 7156 KB  
Article
A Hybrid Machine Learning Framework for Mechanistically Interpretable Latent Parameter Inference in a Spatiotemporal CAR-T Therapy Model for Solid Tumours
by Maxim Polyakov
Technologies 2026, 14(5), 276; https://doi.org/10.3390/technologies14050276 - 1 May 2026
Abstract
CAR-T cell therapy remains ineffective in most solid tumours because effector cells infiltrate poorly, undergo exhaustion, and face antigen escape within an immunosuppressive microenvironment. To address this, we developed a hybrid framework that combines a mechanistic spatiotemporal model with machine learning for limited individual-level mechanistic personalisation under data constraints. At its core, we employed a reaction–diffusion–chemotaxis model describing functional and exhausted CAR-T cells, antigen-positive and antigen-negative tumour subpopulations, a chemoattractant, an immunosuppressive factor, and hypoxia. Gradient boosting combined with nested cross-validation was used to recover model-consistent latent-parameter pseudo-labels generated by a limited inverse problem. Within this surrogate-target setting, parameters characterising the tumour microenvironment and CAR-T cell exhaustion were reproduced most robustly, whereas antigen escape and individualised initial conditions were substantially less well constrained. As an auxiliary reference point, we also considered a direct empirical baseline for binary clinical outcomes. This baseline indicated that the observed clinical features contained a more stable signal for disease control than for objective response. A favourable response was associated with high CAR-T cell infiltration and cytotoxic potency, whereas resistance was linked to exhaustion, antigen escape, and a suppressive microenvironment. Overall, the proposed approach should be interpreted as an internally validated, hypothesis-generating proof-of-concept platform for mapping clinical features to mechanistically interpretable surrogate latent targets, rather than as evidence for validated recovery of true patient-specific biological parameters. Full article
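The mechanistic core of the framework above is a reaction–diffusion–chemotaxis system. As a loose illustration of the numerical building block only, not the paper's model, here is one explicit finite-difference step of a 1D reaction-diffusion equation with invented parameters:

```python
def diffuse_react_step(u, D, dt, dx, growth):
    """One explicit finite-difference step of du/dt = D*u_xx + growth*u
    on a 1D grid with zero-flux (reflecting) boundaries.
    Stable only when D*dt/dx**2 <= 0.5."""
    n = len(u)
    new = u[:]
    for i in range(n):
        left = u[i - 1] if i > 0 else u[i]        # reflecting boundary
        right = u[i + 1] if i < n - 1 else u[i]
        lap = (left - 2 * u[i] + right) / dx ** 2  # discrete Laplacian
        new[i] = u[i] + dt * (D * lap + growth * u[i])
    return new

# A point seed of cells spreads outward and grows over repeated steps.
u = [0.0] * 21
u[10] = 1.0
for _ in range(100):
    u = diffuse_react_step(u, D=0.1, dt=0.1, dx=1.0, growth=0.02)
```

The full model couples several such fields (functional and exhausted CAR-T cells, tumour subpopulations, chemoattractant, suppressive factor, hypoxia) with chemotaxis terms; this sketch shows only the diffusion-plus-reaction update for a single field.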
29 pages, 4477 KB  
Article
Modeling Real-World Charging Behavior to Update SAE J2841 PHEV Utility Factors
by Michael Duoba and Jorge Pulpeiro González
World Electr. Veh. J. 2026, 17(5), 242; https://doi.org/10.3390/wevj17050242 - 1 May 2026
Abstract
The SAE J2841 utility factor (UF) estimates the fraction of driving expected to occur in charge-depleting (CD) mode for plug-in hybrid electric vehicles. Emerging in-use data suggest that real-world electric usage is lower than assumed, motivating a reassessment of how charging behavior and related factors should be incorporated into the UF curve. Using trip-level data from approximately 1000 PHEVs observed over one year, we develop a charging model that captures both population-level heterogeneity in charging frequency and characteristic day-to-day temporal patterns in individual charging. The charging-behavior model is then applied to NHTS driving data to generate UF curves spanning 5 to 200 miles (8 to 322 km) of CD range. When key behavioral features are included, the resulting CD driving fractions align closely with industry-provided data. Sensitivity analysis indicates that the assumed share of habitual non-chargers is among the most influential parameters affecting the gap between the original UF and in-use data. Multiple modeling approaches were used to explore the problem and compare results, including machine learning, logistic regression, and parametric methods. Additional factors such as blended CD operation and temperature effects are discussed within a modular framework for refining J2841. These findings inform ongoing discussions on PHEV utility representation in analytical and regulatory contexts. Full article
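The fleet utility factor at the heart of this discussion has a simple form: for a CD range R, it is the fraction of total distance that could be covered in CD mode assuming one full charge per day. The sketch below uses that standard definition with illustrative distances, and adds a hypothetical `share_non_chargers` adjustment only to echo the abstract's sensitivity finding; it is not the paper's model.

```python
# Sketch of the SAE J2841-style fleet utility factor: electric-capable
# miles over total miles, assuming a full charge before each driving day.
# The daily distances and the non-charger share are illustrative values.
import numpy as np

daily_miles = np.array([12, 30, 8, 55, 20, 140, 25, 18, 40, 10], dtype=float)

def utility_factor(distances, cd_range):
    """Fraction of fleet miles drivable in charge-depleting mode."""
    return np.minimum(distances, cd_range).sum() / distances.sum()

for r in (20, 40, 100):
    print(r, round(utility_factor(daily_miles, r), 3))

# Crude adjustment for habitual non-chargers, who drive entirely in
# charge-sustaining mode (hypothetical share, for illustration only):
share_non_chargers = 0.2
adjusted_uf = (1 - share_non_chargers) * utility_factor(daily_miles, 40)
print(round(adjusted_uf, 3))
```

Even this toy adjustment shows why the non-charger share is so influential: it scales the entire UF curve downward, directly widening the gap between the original J2841 assumption and in-use data.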
19 pages, 2109 KB  
Article
Translation and Psychometric Validation of the Teachers’ Beliefs and Intentions Questionnaire (TBIQ) in Chilean Early Childhood Education
by Pamela Soto-Ramirez, Marigen Narea, Maria Francisca Morales and Alejandra Caqueo-Urízar
Educ. Sci. 2026, 16(5), 711; https://doi.org/10.3390/educsci16050711 - 1 May 2026
Abstract
The Teachers’ Beliefs and Intentions Questionnaire (TBIQ) assesses educators’ beliefs and intentions regarding the importance of sensitive interactions with young children. Understanding these beliefs is particularly relevant in contemporary educational contexts where teacher–child interactions are viewed as central to children’s learning and development. Despite its use in several countries, there is no validated Spanish version available. This study aimed to translate, culturally adapt, and psychometrically validate a Spanish version of the TBIQ for early childhood education settings in Chile. Following international guidelines for cross-cultural adaptation, the questionnaire was translated into Spanish and administered to early childhood teachers and assistant teachers working in public early childhood education centers. The original two-factor structure (Beliefs and Intentions) was tested using confirmatory factor analyses with robust estimators for ordinal data. Results supported the two-factor model after removing six items with low factor loadings and indicated excellent model fit. Both scales demonstrated high internal consistency. However, measurement invariance across educator roles could not be established, and cross-group comparisons should be interpreted with caution. Despite this limitation, the Spanish version of the TBIQ demonstrates adequate validity and reliability and offers a brief and accessible instrument for research and for the assessment of educators’ beliefs and intentions regarding interaction quality in early childhood education. Full article
(This article belongs to the Special Issue Pedagogy in Early Years Education)
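The internal-consistency claim in the abstract rests on a standard statistic, Cronbach's alpha, which can be computed directly from an item-response matrix. The sketch below uses synthetic Likert-type responses, not TBIQ data, and a generic four-item scale as a stand-in for the Beliefs or Intentions subscale.

```python
# Illustrative Cronbach's alpha for a Likert-type scale: the ratio of
# item-level variance to total-score variance, rescaled by k/(k-1).
# Responses below are synthetic placeholders, not TBIQ data.
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, n_items) matrix of ordinal responses."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

responses = np.array([
    [4, 5, 4, 5],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
])
print(round(cronbach_alpha(responses), 2))
```

Note that alpha summarises consistency within one group; it does not address the measurement-invariance caveat the abstract raises, which requires comparing factor structures across educator roles.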
10 pages, 479 KB  
Article
The Moore Graph of Diameter 2 and Degree 57 via Cyclic Derangements
by Derek H. Smith and Roberto Montemanni
Axioms 2026, 15(5), 332; https://doi.org/10.3390/axioms15050332 - 1 May 2026
Abstract
The possible existence of a regular Moore graph of diameter 2 and degree 57 with the maximum number 3250 of vertices has been an open question for over 65 years. One approach to a construction focuses on the set of permutations that describe the 1-factors that give the adjacencies between leaf vertices of pairs of branches of a tree. Most of these permutations are derangements, that is, they are permutations with no fixed points. As many products of 2, 3, or 4 of these derangements must also be derangements, it is tempting to use a group of derangements, that is, a group of permutations in which every non-identity element is a derangement. The first case to consider is when the group of derangements is a cyclic group of permutations. In this paper, it is proved that a construction using only a cyclic group of permutations is impossible. This leaves only the possibility of using some other group of derangements, or a set of derangements that do not form a group. The prospects for extending the work to these cases are considered at the end of the paper. Full article
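The cyclic case in this abstract has a concrete characterisation worth illustrating: a permutation generates a cyclic group in which every non-identity element is a derangement exactly when all of its cycles share the same length. The sketch below checks the condition directly by examining every non-identity power; it is an illustration of the definition, not the paper's proof.

```python
# Check whether the cyclic group generated by a permutation p (given as a
# list where p[i] is the image of i) consists of derangements: every
# non-identity power must be fixed-point-free.
def is_derangement(p):
    return all(p[i] != i for i in range(len(p)))

def cyclic_group_of_derangements(p):
    """True if every non-identity power of p has no fixed points."""
    identity = list(range(len(p)))
    q = list(p)
    while q != identity:          # walk p, p^2, ... until the identity
        if not is_derangement(q):
            return False
        q = [p[i] for i in q]     # compose with p once more
    return True

print(cyclic_group_of_derangements([1, 2, 0, 4, 5, 3]))  # two 3-cycles
print(cyclic_group_of_derangements([1, 0, 3, 4, 2]))     # a 2-cycle and a 3-cycle
```

The first example (cycle type 3+3) passes, while the second (cycle type 2+3) fails because its square fixes the two points of the transposition, matching the equal-cycle-length criterion.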
22 pages, 841 KB  
Article
Numerical Investigation of Die Swell Behavior in EPDM Rubber Extrusion: Effects of Compound Formulation and Processing Conditions
by Yancai Sun, Haoran Wang, Jingtao Jiang, Kongshuo Wang, Wenjuan Bai, Dianming Chu, Ranran Jian, Peiwu Hou, Yan He and Wenzhong Deng
Polymers 2026, 18(9), 1122; https://doi.org/10.3390/polym18091122 - 1 May 2026
Abstract
Die swell is the dominant source of dimensional deviation in rubber profile extrusion. Because it is driven by recoverable elastic strain, a purely viscous baseline flow field cannot reproduce its speed dependence; a viscoelastic correction is required. This study presents, to the best of our knowledge, the first controlled comparison of a Carreau–Arrhenius baseline flow field against a fractional-order viscoelastic correction for carbon-black-filled EPDM across an industrial speed window. The viscoelastic correction (PyCFD-FMM) is a post-processing fractional-order viscoelastic swell correction built on the shared non-isothermal Polyflow Carreau–Arrhenius flow field, derived from a six-mode fractional Maxwell model parameterized from dynamic mechanical analysis via the Laun rule and closed through the Tanner recoverable-strain theory. Three carbon-black-filled EPDM compounds (Shore A 60–80) were extruded at four screw speeds (15–30 rpm) under instrumented conditions. Experimentally, swell ratios of 1.12–1.15 increase monotonically with screw speed (Fisher-combined p = 0.007; measurement repeatability CV 0.27% across n = 4 replicates per condition). The purely viscous baseline output gives a decreasing apparent swell–speed trend—opposite to experiment—whereas PyCFD-FMM recovers the correct increasing trend for all compounds. Under single-anchor hold-out evaluation at 20/25/30 rpm, the non-anchor MAPE decreases from 0.99% for the baseline flow-field output to 0.30% (PyCFD-FMM); an anchor-sensitivity check over all four rpm choices keeps the compound-averaged non-anchor MAPE within 0.27–0.39% and preserves the correct slope sign in every case. Swell decomposition into geometric baseline and net correction factor (B_PyCFD = B_geom × f_corr) confirms that the viscous baseline flow field captures flow-geometry effects but carries no elastic memory. Within the tested window, the viscoelastic correction meets a dual-gate criterion—correct slope sign and reduced non-anchor MAPE—which the purely viscous baseline cannot satisfy by construction. Full article
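The dual-gate evaluation in this abstract reduces to two checks on a calibrated prediction curve: the sign of the swell–speed slope and the mean absolute percentage error at the non-anchor speeds. The sketch below illustrates single-anchor hold-out scoring with invented numbers; the swell ratios are not the paper's data.

```python
# Sketch of single-anchor hold-out scoring: scale predictions to match the
# measurement at one anchor speed, then evaluate (i) slope sign and
# (ii) non-anchor MAPE. All swell values here are illustrative.
import numpy as np

rpm       = np.array([15, 20, 25, 30], dtype=float)
measured  = np.array([1.120, 1.132, 1.141, 1.150])  # increases with speed
predicted = np.array([1.095, 1.108, 1.118, 1.128])  # uncalibrated model output

anchor = 0                                   # hold out 15 rpm as the anchor
scale = measured[anchor] / predicted[anchor]
calibrated = scale * predicted

non_anchor = np.arange(len(rpm)) != anchor
mape = np.mean(np.abs(calibrated[non_anchor] - measured[non_anchor])
               / measured[non_anchor]) * 100

slope_ok = bool(np.all(np.diff(calibrated) > 0)) == bool(np.all(np.diff(measured) > 0))
print(round(mape, 3), slope_ok)
```

Anchoring at a single speed keeps the comparison honest: the correction is judged only on speeds it was not fitted to, which is why the abstract reports non-anchor MAPE separately for each anchor choice.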