Search Results (21,141)

Search Parameters:
Keywords = analytic model

22 pages, 2161 KB  
Systematic Review
Prognostic Models for Predicting Coronary Heart Disease Risk in Patients with Type 2 Diabetes Mellitus: A Systematic Review and Meta-Analysis
by Maicol Cortez-Sandoval, César J. Eras Lévano, Joaquín Fernández Álvarez, Jorge López-Leal, Lady Morán Valenzuela, Raul H. Sandoval-Ato, Hady Keita, Martin Gomez-Lujan, Fernando M. Quevedo Candela, Jesús I. Parra Prado, José Luis Muñoz-Carrillo, Oriana Rivera-Lozada and Joshuan J. Barboza
Diagnostics 2026, 16(5), 765; https://doi.org/10.3390/diagnostics16050765 (registering DOI) - 4 Mar 2026
Abstract
Background: Individuals with type 2 diabetes mellitus (T2DM) are at markedly increased risk of developing coronary heart disease (CHD); however, the generalizability and transportability of existing prediction models remain uncertain. Objective: To identify and evaluate multivariable prognostic models developed to predict CHD in adults with T2DM. Methods: We conducted a PRISMA-guided systematic review and meta-analysis of multivariable prognostic models predicting CHD in T2DM populations. Model characteristics and performance metrics were extracted following the CHARMS and TRIPOD-SRMA frameworks, and pooled discrimination was estimated on the logit-transformed AUC scale using a random-effects model (REML, Hartung–Knapp adjustment). Between-study heterogeneity and 95% prediction intervals were quantified, while risk of bias and applicability were assessed using the PROBAST tool. Results: Thirteen studies encompassing clinical, imaging-based, and omics-augmented models met the inclusion criteria. The pooled AUC was 0.69 (95% CI: 0.66–0.71), with high heterogeneity (I2 = 97.4%; τ2 = 0.0979) and a wide 95% prediction interval (0.54–0.81). Classical regression-based models demonstrated modest discrimination, whereas machine learning, imaging, and proteomic approaches achieved higher AUC estimates but were frequently constrained by small sample sizes, internal-only validation, and poor calibration reporting. The analysis domain emerged as the principal source of bias in PROBAST evaluations, and applicability issues were most frequent in models requiring advanced imaging or molecular platforms. Conclusions: Prognostic models for CHD in T2DM demonstrate moderate-to-good discrimination but substantial heterogeneity and frequent miscalibration across studies. Their clinical utility depends on rigorous external validation and local recalibration, particularly when incorporating imaging or molecular predictors. 
Future research should prioritize standardized CHD outcomes, consistent calibration reporting, decision-analytic assessments, and the development of transportable multimodal prediction models across diverse populations. Full article
(This article belongs to the Section Clinical Diagnosis and Prognosis)
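The abstract describes pooling discrimination on the logit-transformed AUC scale with a random-effects model. A minimal self-contained sketch of that idea is below; note it uses the simpler DerSimonian–Laird estimator of between-study variance rather than the REML/Hartung–Knapp approach the review reports, and the AUC values and standard errors are hypothetical, not taken from the reviewed studies.

```python
# Sketch: inverse-variance random-effects pooling of study AUCs on the
# logit scale (DerSimonian-Laird tau^2 as a stand-in for REML).
import math

def logit(p):
    return math.log(p / (1.0 - p))

def expit(x):
    return 1.0 / (1.0 + math.exp(-x))

def pool_auc(aucs, ses):
    """Pool per-study AUCs (in (0,1)); ses are SEs of the *logit* AUCs."""
    y = [logit(a) for a in aucs]
    v = [s * s for s in ses]
    w = [1.0 / vi for vi in v]                      # fixed-effect weights
    sw = sum(w)
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sw
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, y))  # Cochran's Q
    c = sw - sum(wi * wi for wi in w) / sw
    tau2 = max(0.0, (q - (len(y) - 1)) / c)         # DL between-study variance
    wr = [1.0 / (vi + tau2) for vi in v]            # random-effects weights
    mu = sum(wi * yi for wi, yi in zip(wr, y)) / sum(wr)
    return expit(mu), tau2

# Hypothetical study-level inputs for illustration only:
pooled, tau2 = pool_auc([0.66, 0.71, 0.64, 0.74, 0.69],
                        [0.08, 0.10, 0.06, 0.12, 0.09])
```

Back-transforming the pooled logit with `expit` returns the summary AUC to the familiar (0, 1) scale.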

19 pages, 1719 KB  
Article
Deep-Neural-Network-Aided Genetic Association Testing in Samples with Related Individuals
by Xiaowei Wu
Curr. Issues Mol. Biol. 2026, 48(3), 273; https://doi.org/10.3390/cimb48030273 (registering DOI) - 4 Mar 2026
Abstract
Genome-wide association studies (GWAS) have successfully identified thousands of genetic loci associated with complex traits and diseases, providing critical insights into genetic architecture, biological pathways, and disease mechanisms. With the advance of machine learning, the analytical scope of GWAS can be substantially expanded by enabling joint modeling, nonlinear effects, and integrative analysis. However, deep learning approaches remain underutilized in augmenting traditional GWAS frameworks, particularly in the presence of cryptic relatedness among sampled individuals. In this paper, we propose a deep neural network (DNN)-based machine learning method to assist genetic association testing in samples with related individuals. By approximating the phenotype–genotype relationships in classical association tests and combining approximations across multiple tests, the proposed method aims to improve predictive performance in the identification of associated variants. Simulation studies demonstrate that our approach effectively complements conventional statistical methods and generally achieves increased power for detecting genetic associations. We further apply the method to data from the Framingham Heart Study, illustrating how DNN-based machine learning can facilitate the identification of genome-wide SNPs associated with average systolic blood pressure. Full article

26 pages, 407 KB  
Review
Machine Learning and Deep Learning for Dropout Prediction in Higher Education: A Review
by Beatriz Duro, Anabela Gomes, Fernanda Brito Correia, Ana Rosa Borges and Jorge Bernardino
Computers 2026, 15(3), 164; https://doi.org/10.3390/computers15030164 (registering DOI) - 4 Mar 2026
Abstract
Student dropout in Higher Education remains a persistent challenge with significant academic, social and economic consequences. Predictive analytics using traditional Machine Learning and Deep Learning have been increasingly explored to support early identification of students at risk. This article presents a structured literature review of studies published between 2018 and 2025 that apply these techniques to predict dropout in Higher Education. Unlike previous reviews, we pay particular attention to model interpretability, practical deployment and ethical considerations when analysing data types, preprocessing strategies and modelling approaches. Results show that transparent traditional models, including Decision Trees, Logistic Regression, and ensemble methods such as Random Forest and Gradient Boosting remain dominant because they perform strongly on structured data and are easier to explain. Deep Learning approaches, although less prevalent, show promise for sequential and behavioural data but face challenges in data availability, explainability, and implementation complexity. Despite frequently high reported performance, most studies rely on single-institution datasets, limiting generalisability, and only a minority address fairness, bias, or real-world integration. This analysis concludes that we must transition from accuracy-focused evaluations to transparent, accountable and actionable predictive systems that facilitate data-driven and inclusive decision-making in Higher Education. Full article

13 pages, 1558 KB  
Technical Note
Optimized Fiber Element Modeling Strategy for Concrete-Encased Steel Composite Columns: Focusing on Material Nonlinearity and Confinement Effects
by Seongjin Ha
Buildings 2026, 16(5), 999; https://doi.org/10.3390/buildings16050999 (registering DOI) - 4 Mar 2026
Abstract
Reliable numerical simulation of concrete-encased steel (CES) composite columns remains challenging, and practical fiber-element modeling can be sensitive to confinement representation and to discretization and integration choices. Although CES columns offer superior structural performance, accurate simulation is difficult due to the complex interaction between steel and concrete under cyclic loading. Current seismic design codes, such as ASCE/SEI 41-17, often simplify modeling parameters by underestimating composite action, which can lead to uneconomical and overly conservative assessments that do not fully reflect the confining effect of the concrete encasement and the buckling restraint of the steel core. This study proposes a practical guideline for constructing an accurate analytical model for CES columns using nonlinear fiber-element analysis, with a specific focus on material constitutive laws. To validate the proposed strategy, nonlinear analyses were conducted and compared against a comprehensive database of 79 experimental specimens compiled from previous studies. The predicted-to-test peak strength ratio shows a mean of 1.02 (standard deviation of 0.058). Sensitivity studies indicate that responses stabilize beyond ~23 fibers (<1.5% error), reducing computation time by ~40% on average (from 52 to 23 fibers) compared with dense discretization while maintaining reliable hysteretic response prediction. Full article
(This article belongs to the Special Issue Analysis of Structural and Seismic Performance of Building Structures)

17 pages, 319 KB  
Article
Approximate Synchronization of Memristive Hopfield Neural Networks
by Yuncheng You
Axioms 2026, 15(3), 185; https://doi.org/10.3390/axioms15030185 (registering DOI) - 4 Mar 2026
Abstract
Asymptotic synchronization is one of the essential differences between artificial neural networks and biologically inspired neural networks due to mismatches from the dynamical update of weight parameters and heterogeneous activations. In this paper, a new concept of approximate synchronization is proposed and investigated for Hopfield neural networks coupled with nonlinear memristors. It is proved that global solution dynamics are robustly dissipative and a sharp ultimate bound is obtained. Through a priori uniform estimates on the interneuron differencing equations, it is rigorously and analytically shown that approximate synchronization to any prescribed small gap at an exponential convergence rate of the memristive Hopfield neural networks occurs if an explicitly computable threshold condition is satisfied by the interneuron coupling strength parameter. The main result is also extended to Hopfield neural networks with Hebbian learning rules for a broad range of applications in unsupervised learning. This approximate synchronization framework and the analytic methodology developed here advance the exploration of asymptotic dynamics for a broader class of AI-related mathematical models. Full article
21 pages, 1357 KB  
Review
Natural Ingredients to Enhance the Antioxidant Capacity in Different Meat Products: A Review
by Brisa del Mar Torres-Martínez, Armida Sánchez-Escalante, Gastón Ramón Torrescano-Urrutia and Rey David Vargas-Sánchez
Foods 2026, 15(5), 852; https://doi.org/10.3390/foods15050852 (registering DOI) - 3 Mar 2026
Abstract
The oxidative stability of meat products is a crucial factor determining quality, shelf life, and consumer acceptance, as lipid and protein oxidation promote undesirable changes in sensory attributes and nutritional content. Antioxidant capacity (AOC) assays such as total phenolic content (TPC), ferric reducing antioxidant power (FRAP), 2,2′-azino-bis(3-ethylbenzothiazoline-6-sulfonic acid) (ABTS•+), and 2,2-diphenyl-1-picrylhydrazyl (DPPH) are commonly applied in meat systems to assess the AOC associated with both intrinsic muscle components (endogenous) and the protective effects of natural ingredients (exogenous added compounds), i.e., antioxidants. Although differences in analytical methodologies limit direct comparisons among studies, it has been demonstrated that meat products inherently contain compounds that modulate oxidative reactions, with their effectiveness influenced by meat type, processing, and storage conditions. Within this framework, natural ingredients, including plant- and fungal-derived ingredients and their by-products, have gained attention as sources of natural antioxidants, whose capacity depends on the extraction method, the solvent used, and their behavior during gastrointestinal digestion, as evaluated using simulated gastrointestinal digestion (sGD) models. Numerous studies have shown that incorporating natural extracts or powders into meat products enhances AOC during refrigerated storage, with the effect generally depending on the concentration used. Moreover, several natural antioxidant treatments maintain or even enhance their AOC when assessed under sGD conditions. Full article

22 pages, 4261 KB  
Article
An Analytical Modeling Framework for Martian Soil—Sampling Scoop Interaction with Numerical Validation
by Hongtao Cao, Haoran Xie, Dong Pan, Yingchun Qi, Lutz Richter, Yan Shen and Meng Zou
Aerospace 2026, 13(3), 237; https://doi.org/10.3390/aerospace13030237 - 3 Mar 2026
Abstract
Accurate prediction of excavation forces is critical for the design reliability and operational safety of Mars surface sampling systems. This study establishes an analytical modeling framework to describe the excavation mechanics of Martian soil, focusing on the formation mechanism and evolution of resistance. Soil deformation and failure processes are qualitatively identified using particle image velocimetry (PIV) and discrete element method (DEM) simulations. Based on limit equilibrium theory, the passive earth pressure is derived, and the scoop is divided into seven force-bearing regions for three-dimensional force decomposition. The analytical model is validated against multibody dynamics–discrete element method (MBD–DEM) co-simulation. The results indicate that excavation resistance exhibits a distinct single-peak evolution, maximizing near the maximum excavation depth. Notably, the inner bottom surface and cutting edge dominate resistance during penetration, contributing approximately 56% and 30% of the total force, respectively. The resistance mechanism transitions after soil emergence due to the gravitational effect of retained soil. Consequently, this framework provides a physically interpretable and quantitatively validated approach for force prediction, offering theoretical support for sampling scoop design and optimization in future Mars missions. Full article
(This article belongs to the Section Astronautics & Space Science)
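The abstract derives the passive earth pressure from limit equilibrium theory before decomposing the scoop into seven force-bearing regions. As a hedged illustration only, the classical Rankine passive-pressure result is the textbook starting point for such derivations (the paper's actual formulation is more elaborate); for a soil with friction angle φ, cohesion c, unit weight γ, at depth z:

```latex
K_p = \tan^2\!\left(45^\circ + \frac{\varphi}{2}\right), \qquad
\sigma_p = K_p\,\gamma z + 2c\sqrt{K_p}
```

Here \(K_p\) is the passive earth pressure coefficient and \(\sigma_p\) the passive lateral stress resisting the advancing scoop face.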
29 pages, 2466 KB  
Article
Living Labs as Cultural Infrastructures: Performing and Normalising Circular Fashion Practices
by Alessandra Spagnoli and Valeria M. Iannilli
Sustainability 2026, 18(5), 2471; https://doi.org/10.3390/su18052471 - 3 Mar 2026
Abstract
The transition to circular models in fashion and textiles requires changes that go beyond technical innovation. The literature recognises that systemic change depends on the transformation of shared meanings around consumption and production, and that spaces for co-design and collaborative learning are crucial to generating this transformation. This article documents how Living Labs operate in this capacity, analysing the Madeback Circular Fashion Festival (May–November 2025), a pilot project of the Fashion & Textile Living Lab at the Politecnico di Milano. The study employs the Living Lab Integrative Process (LLIP) as both a design framework and an analytical lens. Adopting a qualitative and participatory method, the study documents how the three spaces of the LLIP—Problem Space, Solution Space and Implementation Space—simultaneously structured both design innovation and empirical analysis. The results point to three main contributions: (i) Living Labs can function as cultural infrastructures in which performative and narrative dimensions may contribute to the gradual normalisation of alternative practices; (ii) the Quadruple Helix operates as a living process characterised by distributed intentionality and emerging trust; and (iii) transformative learning appears through the co-production of knowledge in embodied and relational practices. The article identifies contextual factors that enabled the project—from its location in a design university to its multi-year funding—and the related constraints on transferability, concluding that Living Labs are promising infrastructures for sustainable transitions when they consciously integrate performative, cultural and relational dimensions. Full article
(This article belongs to the Section Sustainable Products and Services)

21 pages, 854 KB  
Review
Trypanosoma vivax in Water Buffaloes (Bubalus bubalis): A Host-Centered Synthesis of Pathogenesis, Epidemiology, Diagnosis, and Integrated Control with Implications for Tropical Production Systems
by André de Medeiros Costa Lins, Dryelle Vieira de Oliveira Brandão, Fernanda Monik Silva Martins, Aline Maia Silva, Henrique dos Anjos Bonjardim and Felipe Masiero Salvarani
Pathogens 2026, 15(3), 273; https://doi.org/10.3390/pathogens15030273 - 3 Mar 2026
Abstract
Trypanosoma vivax is a hemoprotozoan parasite of major veterinary importance affecting domestic ungulates in Africa and the Americas. While traditionally addressed within cattle-centered paradigms, accumulating evidence indicates that water buffaloes (Bubalus bubalis) are both clinically susceptible and epidemiologically significant hosts. This structured narrative review provides a host-centered synthesis of global evidence on T. vivax infection in buffaloes, integrating pathogenesis, transmission biology, epidemiology, diagnostics, chemotherapy, and integrated control. The analysis encompasses literature from 2000 to 2025 and incorporates seminal experimental studies published prior to 2000 that established buffalo susceptibility and reservoir competence. Evidence from cyclical (tsetse-mediated) and mechanical transmission systems is comparatively interpreted to clarify host–parasite dynamics. The Amazon biome is discussed as a model system for high-density buffalo production under mechanical vector pressure, offering case-based contextualization without geographic restriction. Particular attention is given to immunopathological mechanisms, chronic low-parasitemia carriage, diagnostic sensitivity in subclinical infections, emerging trypanocide resistance, and ecological constraints on vector control. Controversies and buffalo-specific knowledge gaps are highlighted throughout. By adopting a buffalo-centered analytical framework, this review supports translational diagnostics, targeted surveillance, and sustainable control strategies for trypanosomiasis in tropical livestock systems. Full article
(This article belongs to the Topic Advances in Infectious and Parasitic Diseases of Animals)
27 pages, 7655 KB  
Review
A Bibliometric Analysis of Collaboration in Building Information Modeling: Emerging Dynamics and Future Trends
by Nurdan Kasul and Fahriye Hilal Halicioglu
Buildings 2026, 16(5), 986; https://doi.org/10.3390/buildings16050986 (registering DOI) - 3 Mar 2026
Abstract
Collaboration is a cornerstone of digital transformation in the Architecture, Engineering, and Construction (AEC) sector. Despite widespread adoption of building information modeling (BIM), a critical “human factor gap”—the disconnect between rapid technological advancement and organizational readiness—remains largely unaddressed. This study investigates the intellectual structure of BIM collaboration research. Adopting a mixed-methods review design, it conducts a comprehensive bibliometric analysis of 1595 journal articles indexed in Web of Science and Scopus, followed by a qualitative content analysis of 38 purposefully selected studies. Findings reveal a clear shift from process-oriented collaboration frameworks toward data-centric and cyber–physical systems, including digital twins, Artificial Intelligence, and blockchain-enabled environments. However, this evolution also highlights a notable asymmetry in BIM collaboration, with technology-driven dynamics substantially outweighing human and organizational considerations. This imbalance, conceptualized as a human factor gap in thematic development observed through the bibliometric analysis, points to a potential socio-technical misalignment, in which technological topics receive more scholarly attention than organizational and human factors. Rather than framing collaboration maturity as a purely technical challenge, the study emphasizes the need for advanced digital infrastructure, human-centered governance models, organizational readiness, and socio-technical competencies. It provides analytical insights for researchers and implications for policymakers and industry stakeholders, emphasizing the necessity of integrated approaches that address both the technological and human dimensions of BIM-enabled collaboration. Full article
(This article belongs to the Section Construction Management, and Computers & Digitization)

38 pages, 25286 KB  
Article
A New Multi-Progressive Generalized Type-II Censoring: Theory, Reliability Inference, and Multidisciplinary Applications
by Heba S. Mohammed and Ahmed Elshahhat
Mathematics 2026, 14(5), 862; https://doi.org/10.3390/math14050862 (registering DOI) - 3 Mar 2026
Abstract
Modern reliability experiments frequently face operational constraints that require balancing test duration, precision, and removal strategies, rendering classical censoring schemes inadequate for contemporary multidisciplinary applications. This study introduces a novel multi-progressive generalized Type-II censoring (MP-GC-T2) framework that unifies and extends existing progressive and generalized censoring structures through the integration of staged failure-proportion controls, dual temporal termination thresholds, and adaptive withdrawal of surviving units. The proposed mechanism provides enhanced flexibility in experiment design while retaining analytical tractability for statistical inference. Assuming Weibull lifetimes, we develop a complete inferential framework including maximum likelihood estimation, asymptotic interval construction, and Bayesian estimation via hybrid Metropolis–Hastings–Gibbs sampling with informative gamma priors, together with multiple interval estimation strategies for reliability characteristics. Extensive Monte Carlo investigations assess estimator bias, precision, coverage behaviour, and interval efficiency across diverse censoring configurations, demonstrating robustness and inferential gains relative to conventional schemes. Furthermore, optimal progressive-removal planning criteria are explored to guide practitioners in selecting censoring patterns that maximize inferential accuracy under practical constraints. The versatility and practical relevance of the MP-GC-T2 design are illustrated through applications to heterogeneous real datasets arising from clinical, chemical, geological, physical, and petroleum sciences, confirming its adaptability to distinct reliability structures and data-generation mechanisms. Collectively, the proposed methodology contributes a unified experimental and inferential platform that advances censoring design, reliability estimation, and cross-disciplinary statistical modelling. Full article
(This article belongs to the Special Issue Reliability Estimation and Mathematical Statistics, 2nd Edition)
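The abstract assumes Weibull lifetimes and develops maximum-likelihood estimation under the new censoring scheme. The sketch below shows only the classical building block that MP-GC-T2 generalizes: profile-likelihood MLE for Weibull data under plain Type-I right censoring (the staged failure proportions, dual time thresholds, and progressive removals of the paper are not reproduced, and the simulation settings are illustrative).

```python
# Sketch: Weibull MLE under right censoring via a profile likelihood in
# the shape parameter k. For fixed k the scale MLE is closed-form:
#   lam^k = sum_i t_i^k / r,  with r = number of observed failures.
import math
import random

def weibull_mle_censored(times, observed, k_grid=None):
    """Return (k_hat, lam_hat) maximizing the censored Weibull likelihood."""
    if k_grid is None:
        k_grid = [0.2 + 0.01 * i for i in range(481)]   # k in [0.2, 5.0]
    r = sum(observed)
    best = (-math.inf, None, None)
    for k in k_grid:
        lam = (sum(t ** k for t in times) / r) ** (1.0 / k)
        ll = 0.0
        for t, d in zip(times, observed):
            z = (t / lam) ** k
            if d:    # observed failure: log density
                ll += math.log(k / lam) + (k - 1) * math.log(t / lam) - z
            else:    # right-censored: log survival
                ll += -z
        if ll > best[0]:
            best = (ll, k, lam)
    return best[1], best[2]

# Illustrative simulation: shape 1.5, scale 2.0, Type-I censoring at t = 3.
random.seed(7)
true_k, true_lam, cutoff = 1.5, 2.0, 3.0
raw = [random.weibullvariate(true_lam, true_k) for _ in range(600)]
times = [min(t, cutoff) for t in raw]
observed = [t < cutoff for t in raw]
k_hat, lam_hat = weibull_mle_censored(times, observed)
```

With 600 units the recovered shape and scale should land close to the generating values, which is the sanity check the paper's Monte Carlo study performs at much larger scale.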
23 pages, 2477 KB  
Article
Determinants of Electric Vehicle Adoption Intentions in Turkey: An Explainable Machine Learning Analysis of Economic, Infrastructure, and Behavioral Factors
by İlayda Nur Şişman and Burcu Çarklı Yavuz
Sustainability 2026, 18(5), 2463; https://doi.org/10.3390/su18052463 (registering DOI) - 3 Mar 2026
Abstract
The transportation sector is a major contributor to global greenhouse gas emissions, making electric vehicle (EV) adoption critical for decarbonization. This study investigates EV adoption determinants in Turkey using explainable machine learning, focusing on economic, infrastructure, and attitudinal factors while exploring driver behavior and fuel-efficiency awareness. Data from 304 participants were collected; after excluding undecided responses, the final analytical sample comprised 232 participants. Multiple algorithms (Random Forest, XGBoost, Logistic Regression, and SVM) were evaluated, addressing class imbalance via SMOTETomek. SHAP analysis identified policy-relevant predictors. Results reveal that EV adoption intentions are primarily driven by perceived cost impact, EV knowledge, and charging infrastructure accessibility, showing substantially stronger effects than driver behavior. Exploratory analysis indicates that aggressive driving correlates with lower fuel-efficiency awareness, whereas maintenance and eco-driving support higher awareness. The best-performing Random Forest model achieved 89.36% accuracy and a 0.9348 F1-score. Rather than claiming novelty in ML application, this study contributes an interpretable framework and emerging-market evidence contrasting economic/infrastructure factors against behavioral variables. Findings provide actionable insights for policy, highlighting cost-focused incentives, infrastructure deployment, and targeted awareness campaigns. Full article
(This article belongs to the Section Sustainable Transportation)
23 pages, 3449 KB  
Article
Inverse Kinematics of China Space Station Experimental Module Manipulator
by Yang Liu, Haibo Gao, Yuxiang Zhao, Shuo Zhang, Yuteng Xie, Yifan Yang, Yonglong Zhang, Mengfei Li, Zhiduo Jiang and Zongwu Xie
Machines 2026, 14(3), 284; https://doi.org/10.3390/machines14030284 - 3 Mar 2026
Abstract
SSRMS refers to the Space Station Remote Manipulator System. The robotic arm of the Wentian module can complete tasks such as supporting astronauts’ extravehicular activities, installing and maintaining payloads, and inspecting the space station. The seven-joint SSRMS manipulator is critical for space missions. This study builds its kinematic model via screw theory: the SSRMS is simplified to right-angle rods, and the joint screw axes, twist coordinates, and initial pose matrix are defined. Using the PoE (Product of Exponentials) formula, the 7-DOF forward kinematics equation is derived. Fixed-joint-angle inverse kinematics is then developed, comprising analytical solutions for fixing joints 1/7 and 2/6 and numerical solutions for fixing joints 3/4/5; all joint angles are solved via kinematic decoupling, and special cases are addressed. Experiments with the parameters of China’s space station small arm show that the probability of meeting the 10⁻⁴ accuracy threshold is 99.79%, verifying the model’s effectiveness, while noting singularity-related weak solving areas. This provides a reliable basis for subsequent inverse kinematics optimization. Full article
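The PoE forward-kinematics map named in the abstract chains the matrix exponentials of each joint's twist and post-multiplies the home pose. A minimal sketch follows; the screw axis, home pose, and single-joint example are toy values for illustration, not the actual Wentian-arm parameters (which the abstract does not list).

```python
# Sketch: Product-of-Exponentials forward kinematics for revolute joints.
import numpy as np

def skew(w):
    """3x3 skew-symmetric matrix of a 3-vector."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def exp_twist(omega, v, theta):
    """SE(3) exponential of a unit-rotation twist xi = (omega, v) at angle theta
    (Rodrigues' formula for R, closed-form integral G for the translation)."""
    omega, v = np.asarray(omega, float), np.asarray(v, float)
    W = skew(omega)
    R = np.eye(3) + np.sin(theta) * W + (1 - np.cos(theta)) * (W @ W)
    G = (np.eye(3) * theta + (1 - np.cos(theta)) * W
         + (theta - np.sin(theta)) * (W @ W))
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = G @ v
    return T

def poe_forward(screws, thetas, M):
    """Chain the joint exponentials, then post-multiply the home pose M."""
    T = np.eye(4)
    for (omega, v), th in zip(screws, thetas):
        T = T @ exp_twist(omega, v, th)
    return T @ M

# Toy check: one revolute joint about z at the origin; home pose 1 m along x.
# Rotating 90 degrees should carry the end-effector from (1,0,0) to (0,1,0).
M = np.eye(4)
M[0, 3] = 1.0
T = poe_forward([((0, 0, 1), (0, 0, 0))], [np.pi / 2], M)
```

The same `poe_forward` call extends directly to a 7-joint chain by supplying seven screw axes and angles.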
20 pages, 77395 KB  
Article
Underwater Moving Target Localization Based on High-Density Pressure Array Sensing
by Jiamin Chen, Yilin Li, Ruixin Chen, Wenjun Li, Keqiang Yue and Ruixue Li
J. Mar. Sci. Eng. 2026, 14(5), 484; https://doi.org/10.3390/jmse14050484 - 3 Mar 2026
Abstract
The artificial lateral line sensing principle provides a promising approach for underwater target perception and the navigation of underwater vehicles in complex flow environments. However, the highly nonlinear hydrodynamic mechanisms in complex flow fields make it difficult to establish accurate analytical models, which limits the development of high-precision perception and localization methods for underwater moving targets. In this study, a high-fidelity simulation model is established to characterize the pressure field variations induced by a moving source on an artificial lateral line pressure array. The influences of source velocity and sensing distance on the sensitivity and discretization characteristics of the pressure array are systematically investigated. Simulation results indicate that the sensor density of the pressure array is strongly correlated with the spatial resolution of the acquired pressure data, and a resolution of 50 sensors per meter is selected as the best-performing configuration by balancing sensing accuracy and sensor quantity. Under this configuration, the pressure distribution induced by the moving source exhibits clear and distinguishable spatiotemporal features, making it suitable for deep learning-based modeling. Furthermore, a large-scale temporal pressure dataset is constructed based on high-fidelity simulations under multiple motion directions and velocity conditions, and a spatiotemporal neural network is employed to predict the position of the underwater moving source. Experimental results demonstrate that, for straight-line underwater motion scenarios, the average localization error is within 7 cm, and a classification accuracy of 71% is achieved in practical engineering experiments. These results indicate that the proposed artificial lateral line pressure array design and deep learning-based prediction framework provide a feasible and effective solution for underwater target perception and localization in complex flow environments. Full article
(This article belongs to the Section Ocean Engineering)
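The kind of spatiotemporal input the abstract describes can be pictured as frames of pressure readings from a linear array as a source passes by. The sketch below samples a simple dipole-like pressure signature at the 50-sensors-per-metre density mentioned above; the pressure model and every parameter here are simplifying assumptions for illustration, not the paper's high-fidelity simulation.

```python
# Illustrative sampling of a moving source's pressure signature on a
# 1 m linear array with 50 sensors. Dipole-like model and parameters
# are assumptions, not the paper's CFD setup.
import numpy as np

n_sensors = 50                               # 50 sensors over a 1 m array
xs = np.linspace(0.0, 1.0, n_sensors)
rho, a, U, h = 1000.0, 0.05, 0.5, 0.2        # density, source radius, speed, standoff

def array_pressure(x_src):
    """Dipole-like pressure disturbance at each sensor for a source at x_src."""
    dx = xs - x_src
    r2 = dx**2 + h**2
    # Leading-order moving-sphere (dipole) scaling ~ rho * a^3 * U^2 / r^3
    return rho * a**3 * U**2 * (2 * dx**2 - h**2) / r2**2.5

# Time series of array frames as the source traverses at speed U.
ts = np.linspace(0.0, 2.0, 100)                          # 2 s traverse
frames = np.stack([array_pressure(U * t) for t in ts])   # shape (100, 50)
peak_sensor = np.argmax(np.abs(frames), axis=1)          # strongest sensor per frame
print(frames.shape, peak_sensor[0], peak_sensor[-1])
```

The peak-response sensor index sweeps across the array with the source, which is the spatiotemporal feature a sequence model would learn to map back to position.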
36 pages, 2892 KB  
Article
Bridging Behavioral and Emotional Intelligence: An Interpretable Multimodal Deep Learning Framework for Customer Lifetime Value Estimation in the Hospitality Industry
by Milena Nikolić, Marina Marjanović and Žarko Rađenović
Math. Comput. Appl. 2026, 31(2), 39; https://doi.org/10.3390/mca31020039 - 3 Mar 2026
Abstract
Customer Lifetime Value (CLV) estimation over the observed transactional horizon is a fundamental challenge in hospitality analytics, supporting revenue management, personalization, and long-term customer relationship strategies. However, existing models predominantly rely on structured behavioral data while overlooking the emotional intelligence embedded in guest narratives. This study proposes an interpretable multimodal deep learning (DL) framework that bridges behavioral and emotional intelligence for CLV estimation by integrating structured booking records with unstructured hotel review text. Model interpretability is ensured through SHAP analysis for structured attributes, LIME for local textual explanations, and attention visualization for modality interaction analysis. Experimental evaluation on large-scale hospitality datasets demonstrates that the proposed multimodal framework outperforms traditional machine learning models, unimodal deep learning baselines, and classical ensemble learners, yielding consistent improvements across multiple error metrics and a notable increase in goodness of fit. The results confirm that emotional intelligence extracted from guest reviews significantly enhances CLV estimation and provides actionable insights for hospitality decision-making, supporting the deployment of transparent and explainable artificial intelligence (XAI) systems for strategic customer value management. Full article
(This article belongs to the Special Issue Recent Advances in Algorithms for Neural Networks)
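The core multimodal idea above is fusing structured booking features with vectorized review text before regression. The following is a deliberately simple feature-level-fusion sketch: TF-IDF text features concatenated with numeric features and fed to Ridge regression as a stand-in for the paper's deep network, with invented toy data throughout.

```python
# Toy feature-level fusion of structured and text features for CLV-style
# regression. Ridge stands in for the paper's multimodal deep network.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge

structured = np.array([               # [stays, avg_spend, lead_days] (toy)
    [3, 120.0, 14], [1, 80.0, 3], [7, 210.0, 30], [2, 95.0, 7],
])
reviews = [
    "wonderful stay friendly staff will return",
    "room was noisy and dirty",
    "excellent service beautiful view loved it",
    "average hotel nothing special",
]
clv = np.array([850.0, 120.0, 2100.0, 300.0])   # toy target values

text_vec = TfidfVectorizer().fit_transform(reviews).toarray()
fused = np.hstack([structured, text_vec])       # simple feature-level fusion

model = Ridge(alpha=1.0).fit(fused, clv)
print(model.predict(fused).round(1))
```

In the paper's framework this concatenation is replaced by learned embeddings with attention over modalities, and interpretability comes from SHAP on the structured side and LIME on the text side.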