
Search Results (119)

Search Parameters:
Keywords = linear decision rules

15 pages, 839 KB  
Article
Optimizing a Heavy-Haul Railway Train Formation Plan for Maximized Transport Capacity
by Shichao Han, Yun Bai and Yao Chen
Vehicles 2026, 8(3), 45; https://doi.org/10.3390/vehicles8030045 - 28 Feb 2026
Viewed by 176
Abstract
Heavy-haul railways are important for bulk freight transport, and improving their transport capacity is essential for railway operators to enhance operational efficiency. This study develops an integer linear programming model for train formation planning that maximizes transport capacity, incorporating key practical constraints such as section headway, station capacity, and locomotive matching. This study makes two main contributions: (1) explicit formulation of transport-capacity maximization as the primary objective; and (2) incorporation of specific train formation rules through linear resource-flow coefficients that characterize the combination and decomposition operations. The model is applied to the Shuozhou–Huanghua Railway in a case study. Experimental results show that the optimized train formation plan increases total freight volume from 2810.4 thousand tons to 3080.0 thousand tons, representing a capacity improvement of approximately 9.6%. This result is achieved by adjusting the mix of train tonnage levels, increasing combination operations for medium-capacity trains, and reallocating locomotive types in accordance with traction requirements. The study demonstrates that a capacity-oriented optimization framework can effectively support train-formation plan decisions under practical operational constraints, providing railway operators with a systematic tool to enhance line utilization without expanding infrastructure.
(This article belongs to the Special Issue Models and Algorithms for Railway Line Planning Problems)
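The capacity-maximizing ILP described in the abstract can be illustrated with a deliberately small sketch. Everything here is hypothetical: two train tonnage classes, a single section-headway limit, and a locomotive-fleet limit stand in for the paper's full constraint set.

```python
import numpy as np
from scipy.optimize import LinearConstraint, milp

# Hypothetical data: two train tonnage classes (thousand tons per train path).
tonnage = np.array([10.0, 20.0])   # freight carried by each train class
locos = np.array([1.0, 2.0])       # locomotives required per train

# milp() minimizes, so negate the freight volume to maximize it.
res = milp(
    c=-tonnage,
    constraints=[
        LinearConstraint(np.ones(2), ub=120.0),  # section headway: <= 120 paths/day
        LinearConstraint(locos, ub=180.0),       # locomotive fleet: <= 180 in service
    ],
    integrality=np.ones(2),  # train counts must be integers
)
max_freight = -res.fun  # thousand tons per day
```

With these numbers the locomotive constraint binds and the optimum is 1800 thousand tons; the paper's model adds station capacity and combination/decomposition flow coefficients on top of this skeleton.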

23 pages, 3499 KB  
Article
Integrating Lipschitz Extensions and Probabilistic Modelling for Metric Space Classification
by Roger Arnau, Álvaro González Cortés and Enrique A. Sánchez Pérez
Mathematics 2026, 14(3), 544; https://doi.org/10.3390/math14030544 - 3 Feb 2026
Viewed by 301
Abstract
Lipschitz-based classification provides a flexible framework for general metric spaces, naturally adapting to complex data structures without assuming linearity. However, direct applications of classical extensions often yield decision boundaries equivalent to the 1-Nearest Neighbour classifier, leading to overfitting and sensitivity to noise. Addressing this limitation, this paper introduces a novel binary classification algorithm that integrates probabilistic kernel smoothing with explicit Lipschitz extensions. We approximate the conditional probability of class membership by extending smoothed labels through a family of bounded Lipschitz functions. Theoretically, we prove that while direct extensions of binary labels collapse to nearest-neighbour rules, our probabilistic approach guarantees controlled complexity and stability. Experimentally, evaluations on synthetic and real-world datasets demonstrate that this methodology generates smooth, interpretable decision boundaries resilient to outliers. The results confirm that combining kernel smoothing with adaptive Lipschitz extensions yields performance competitive with state-of-the-art methods while offering superior geometric interpretability.
(This article belongs to the Section E1: Mathematics and Computer Science)
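A minimal sketch of the idea (my own toy construction, not the authors' code): smooth the binary labels into conditional class probabilities with a kernel, then extend them off the sample with a McShane-type Lipschitz extension. The 1-D feature space, bandwidth, and Lipschitz constant are all assumptions.

```python
import numpy as np

def kernel_smooth_labels(X, y, h):
    # Nadaraya-Watson smoothing: binary labels -> conditional class probabilities.
    W = np.exp(-(np.abs(X[:, None] - X[None, :]) / h) ** 2)
    return W @ y / W.sum(axis=1)

def mcshane_extension(X, g, L):
    # McShane extension F(x) = min_i [ g(x_i) + L * d(x, x_i) ] of the smoothed
    # labels g; classify by thresholding F at 1/2.
    return lambda x: np.min(g + L * np.abs(x - X))

X = np.array([0.0, 1.0, 2.0, 5.0, 6.0, 7.0])   # toy 1-D metric space
y = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])   # binary class labels
g = kernel_smooth_labels(X, y, h=1.0)
F = mcshane_extension(X, g, L=0.2)
predict = lambda x: int(F(x) >= 0.5)
```

The small Lipschitz constant is what keeps the boundary smooth; letting L grow recovers the 1-NN-like behaviour the abstract warns about.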

16 pages, 7688 KB  
Article
Vision-Only Localization of Drones with Optimal Window Velocity Fusion
by Seokwon Yeom
Electronics 2026, 15(3), 637; https://doi.org/10.3390/electronics15030637 - 2 Feb 2026
Viewed by 272
Abstract
Drone localization is essential for various purposes such as navigation, autonomous flight, and object tracking. However, this task is challenging when satellite signals are unavailable. This paper addresses database-free vision-only localization of flying drones using optimal window template matching and velocity fusion. Assuming the ground is flat, multiple optimal windows are derived from a piecewise linear segment (regression) model of the image-to-real-world conversion function. The optimal window is used as a fixed region template to estimate the instantaneous velocity of the drone. The multiple velocities obtained from multiple optimal windows are integrated by a hybrid fusion rule: a weighted average for lateral (sideways) velocities, and a winner-take-all decision for longitudinal velocities. In the experiments, a drone performed a total of six medium-range (800 m to 2 km round trip) and high-speed (up to 14 m/s) maneuvering flights in rural and urban areas. The flight maneuvers include forward-backward motion, zigzags, and banked turns. Performance was evaluated by the root mean squared error (RMSE) and drift error between the GNSS-derived ground-truth trajectories and the rigid-body-rotated vision-only trajectories. Four fusion rules (simple average, weighted average, winner-take-all, hybrid fusion) were evaluated, and the hybrid fusion rule performed best. The proposed video-stream-based method achieves flight errors ranging from a few meters to tens of meters, corresponding to a few percent of the flight length.
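The hybrid fusion rule lends itself to a compact sketch. The velocities and confidence weights below are made-up numbers; the paper derives its weights from the optimal windows.

```python
import numpy as np

def hybrid_fuse(v_lat, v_lon, conf):
    # Lateral: confidence-weighted average across windows.
    # Longitudinal: winner-take-all, keeping the most confident window's estimate.
    w = np.asarray(conf, dtype=float)
    lat = np.average(np.asarray(v_lat, dtype=float), weights=w)
    lon = np.asarray(v_lon, dtype=float)[np.argmax(w)]
    return lat, lon

# Three optimal windows, each yielding a velocity estimate and a confidence.
lat, lon = hybrid_fuse(v_lat=[0.9, 1.1, 1.0],
                       v_lon=[13.0, 14.2, 9.0],
                       conf=[0.5, 0.9, 0.2])
```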

23 pages, 744 KB  
Article
Integrating Explainable AI (XAI) and NCA-Validated Clustering for an Interpretable Multi-Layered Recruitment Model
by Marcin Nowak and Marta Pawłowska-Nowak
AI 2026, 7(2), 53; https://doi.org/10.3390/ai7020053 - 2 Feb 2026
Viewed by 504
Abstract
The growing use of AI-supported recruitment systems raises concerns related to model opacity, auditability, and ethically sensitive decision-making, despite their predictive potential. In human resource management, there is a clear need for recruitment solutions that combine analytical effectiveness with transparent and explainable decision support. Existing approaches often lack coherent, multi-layered architectures integrating expert knowledge, machine learning, and interpretability within a single framework. This article proposes an interpretable, multi-layered recruitment model designed to balance predictive performance with decision transparency. The framework integrates an expert rule-based screening layer, an unsupervised clustering layer for structuring candidate profiles and generating pseudo-labels, and a supervised classification layer trained using repeated k-fold cross-validation. Model behavior is explained using SHAP, while Necessary Condition Analysis (NCA) is applied to diagnose minimum competency thresholds required to achieve a target quality level. The approach is demonstrated in a Data Scientist recruitment case study. Results show the predominance of centroid-based clustering and the high stability of linear classifiers, particularly logistic regression. The proposed framework is replicable and supports transparent, auditable recruitment decisions.

13 pages, 1625 KB  
Article
MAGE (Multimodal AI-Enhanced Gastrectomy Evaluation): Comparative Analysis of Machine Learning Models for Postoperative Complications in Central European Gastric Cancer Population
by Wojciech Górski, Marcin Kubiak, Amir Nour Mohammadi, Maksymilian Podleśny, Gian Luca Baiocchi, Manuele Gaioni, S. Vincent Grasso, Andrew Gumbs, Timothy M. Pawlik, Bartłomiej Drop, Albert Chomątowski, Zuzanna Pelc, Katarzyna Sędłak, Michał Woś and Karol Rawicz-Pruszyński
Cancers 2026, 18(3), 443; https://doi.org/10.3390/cancers18030443 - 29 Jan 2026
Viewed by 416
Abstract
Introduction: By leveraging dedicated datasets and predictive modeling, machine-learning (ML) algorithms can estimate the probability of both short- and long-term outcomes after surgery. The aim of this study was to evaluate the ability of ML-based models to predict postoperative complications in patients with gastric cancer (GC) undergoing multimodal therapy. In particular, we aimed to develop a free, publicly accessible online calculator based on preoperative variables. Materials and Methods: Patients with histologically confirmed locally advanced (cT2-4N0-3M0) GC who underwent multimodal treatment with curative intent between 2013 and 2023 were included in the study. An ML model evaluation pipeline with stratified 5-fold cross-validation was used. Results: A total of 368 patients were included in the final analytic cohort. Across the five algorithm classes under 5-fold cross-validation, the area under the receiver operating characteristic curve (ROC AUC) was 0.9719, 0.9652, 0.9796, 0.8339 and 0.7581 for XGBoost, CatBoost, Random Forest, SVM and Logistic Regression, respectively. Macro F1 was 0.8714, 0.5094, 0.8820, 0.8714 and 0.4579 for XGBoost, SVM, Random Forest, CatBoost and Logistic Regression, respectively. Overall accuracy was 0.8897, 0.5980, 0.8885, 0.8750 and 0.5466 for XGBoost, SVM, Random Forest, CatBoost and Logistic Regression, respectively. Conclusions: In this Central and Eastern European cohort of patients with locally advanced GC, ML models using non-linear decision rules, particularly Random Forest and XGBoost, substantially outperformed conventional linear approaches in predicting the severity of postoperative complications. Prospective external validation is needed to clarify the model's clinical utility and its potential role in perioperative decision support.
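The evaluation protocol (stratified 5-fold cross-validation over several classifier families) can be sketched with scikit-learn. The synthetic data, and the restriction to the two scikit-learn models below (the paper also benchmarks XGBoost, CatBoost, and SVM), are stand-ins.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Synthetic stand-in for the clinical cohort (n = 368, imbalanced outcome).
X, y = make_classification(n_samples=368, n_features=20, weights=[0.7, 0.3],
                           random_state=0)

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
mean_auc = {
    name: cross_val_score(clf, X, y, cv=cv, scoring="roc_auc").mean()
    for name, clf in [
        ("RandomForest", RandomForestClassifier(random_state=0)),
        ("LogisticRegression", LogisticRegression(max_iter=1000)),
    ]
}
```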

44 pages, 4146 KB  
Article
Interpretable Binary Classification Under Constraints for Financial Compliance Modeling
by Álex Paz, Broderick Crawford, Eric Monfroy, Eduardo Rodriguez-Tello, José Barrera-García, Felipe Cisternas-Caneo, Benjamín López Cortés, Yoslandy Lazo, Andrés Yáñez, Álvaro Peña Fritz and Ricardo Soto
Mathematics 2026, 14(3), 429; https://doi.org/10.3390/math14030429 - 26 Jan 2026
Viewed by 340
Abstract
This study addresses an interpretable supervised binary classification problem under constrained feature availability and class imbalance. The objective is to evaluate whether reliable predictive performance can be achieved using exclusively pre-event administrative variables while preserving transparency and analytical traceability of model decisions. A comparative framework is developed using linear and ensemble-based classifiers, combined with resampling strategies and exhaustive hyperparameter optimization embedded within cross-validation. Model performance is evaluated using standard classification metrics, with particular emphasis on the Matthews correlation coefficient as a robust measure under imbalance. In addition to predictive accuracy, the analysis incorporates global, structural, and local interpretability mechanisms, including permutation feature importance, explicit decision paths derived from tree-based models, and additive local explanations. Experimental results show that optimized ensemble models achieve consistent performance gains over linear baselines while maintaining a balanced error structure across classes. Importantly, the most influential predictors exhibit stable rankings across models and explanation methods, indicating a concentrated and robust discriminative signal within the constrained feature space. The interpretability analysis demonstrates that complex classifiers can be decomposed into verifiable decision rules and locally coherent feature contributions. Overall, the findings confirm that interpretable supervised classification can be reliably conducted under administrative data constraints, providing a reproducible modeling framework that balances predictive performance, error analysis, and explainability in applied mathematical settings.
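The Matthews correlation coefficient the authors emphasize is easy to compute from a confusion matrix. The sketch below (with made-up counts) shows why it is preferred under imbalance: a majority-class predictor scores high accuracy but zero MCC.

```python
import math

def mcc(tp, fp, fn, tn):
    # MCC = (tp*tn - fp*fn) / sqrt((tp+fp)(tp+fn)(tn+fp)(tn+fn)),
    # conventionally 0 when any marginal of the confusion matrix is empty.
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / den if den else 0.0

# 90:10 imbalance: always predicting the majority class gives 90% accuracy...
majority_mcc = mcc(tp=0, fp=0, fn=10, tn=90)   # ...but an MCC of 0
perfect_mcc = mcc(tp=10, fp=0, fn=0, tn=90)
```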

26 pages, 5330 KB  
Article
Spatial Risk Assessment: A Case of Multivariate Linear Regression
by Dubravka Božić, Biserka Runje, Branko Štrbac, Miloš Ranisavljev and Andrej Razumić
Appl. Syst. Innov. 2026, 9(1), 20; https://doi.org/10.3390/asi9010020 - 9 Jan 2026
Viewed by 698
Abstract
The acceptance or rejection of a measurement is determined based on its associated measurement uncertainty. In this procedure, there is a risk of making incorrect decisions, including the potential rejection of compliant measurements or the acceptance of non-conforming ones. This study introduces a mathematical model for the spatial evaluation of the global producer’s and global consumer’s risk, predicated on Bayes’ theorem and a decision rule that includes a guard band. The proposed model is appropriate for risk assessment within the framework of multivariate linear regression. Its applicability is demonstrated through an example involving the flatness of the workbench table surface of a coordinate measuring machine. The least-risk direction on the workbench was identified, and risks were quantified under varying selections of reference planes and differing measurement uncertainties anticipated in future measurement processes. Model evaluation was performed using confusion matrix-based metrics. The spaces of the commonly used metrics, constrained by the dimensions of the coordinate measuring machine workbench, were constructed. Using the evaluated metrics, the optimal guard band width was specified to ensure the minimum values of both the global producer’s and the global consumer’s risk.
(This article belongs to the Section Applied Mathematics)
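The guard-banded decision rule can be sketched for the univariate normal case, a simplification of the paper's multivariate regression setting; the prior, measurement uncertainty, tolerance, and guard-band multiplier below are illustrative. The global consumer's risk is the probability that a nonconforming item is accepted, and the producer's risk that a conforming item is rejected.

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.stats import norm

def global_risks(mu, s_p, u, TL, TU, g):
    # Prior on the true value: Y ~ N(mu, s_p); measurement X | Y=y ~ N(y, u).
    # Guarded acceptance interval: tolerance [TL, TU] shrunk by g*u per side.
    y = np.linspace(mu - 8 * s_p, mu + 8 * s_p, 4001)
    fy = norm.pdf(y, mu, s_p)
    p_acc = norm.cdf(TU - g * u, y, u) - norm.cdf(TL + g * u, y, u)
    inside = (y >= TL) & (y <= TU)
    Rc = trapezoid(fy * p_acc * ~inside, y)         # accepted though nonconforming
    Rp = trapezoid(fy * (1.0 - p_acc) * inside, y)  # rejected though conforming
    return Rc, Rp

no_guard = global_risks(mu=0.0, s_p=1.0, u=0.2, TL=-2.0, TU=2.0, g=0.0)
guarded = global_risks(mu=0.0, s_p=1.0, u=0.2, TL=-2.0, TU=2.0, g=1.5)
```

Widening the guard band trades consumer's risk for producer's risk, which is exactly the trade-off tuned when selecting an optimal guard-band width.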

26 pages, 1160 KB  
Article
Identifying the Importance of Key Performance Indicators for Enhanced Maritime Decision-Making to Avoid Navigational Accidents
by Antanas Markauskas and Vytautas Paulauskas
J. Mar. Sci. Eng. 2026, 14(1), 105; https://doi.org/10.3390/jmse14010105 - 5 Jan 2026
Viewed by 623
Abstract
Despite ongoing advances in maritime safety research, ship accidents persist, with significant consequences for human life, marine ecosystems, and port operations. Because many accidents occur in or near ports, assessing a vessel’s ability to enter or depart safely remains critical. Although ports apply local navigational rules, safety criteria could be strengthened by adopting more adaptive and data-informed approaches. This study presents a mathematical framework that links Key Performance Indicators (KPIs) to a Ship Risk Profile (SRP) for collision/contact/grounding risk indication. Expert-based KPI importance weights were derived using the Average Rank Transformation into Weight method in linear (ARTIW-L) and nonlinear (ARTIW-N) forms and aggregated into a nominal SRP. Using routinely monitored KPIs largely drawn from the Baltic and International Maritime Council and Port State Control/flag-related measures, the results indicate that critical equipment and systems failures and human/organisational factors—particularly occupational health and safety and human resource management deficiencies—are the most influential contributors to the normalised accident-risk index. The proposed framework provides port authorities and maritime stakeholders with an interpretable basis for more proactive risk-informed decision-making and targeted safety improvements.
(This article belongs to the Special Issue Advancements in Maritime Safety and Risk Assessment)
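The linear variant of the rank-to-weight transformation can be sketched as follows. This is one common formulation of ARTIW-L (average the expert ranks, reverse them, normalize), offered as an assumption rather than the paper's exact equations, and the expert panel is invented.

```python
import numpy as np

def artiw_linear(ranks):
    # ranks: one row per expert, one column per KPI, rank 1 = most important.
    r = np.asarray(ranks, dtype=float).mean(axis=0)  # average rank per KPI
    n = r.shape[0]
    s = (n + 1) - r          # reverse: better (lower) rank -> larger score
    return s / s.sum()       # normalize to weights summing to 1

# Hypothetical panel: 3 experts rank 4 KPIs.
weights = artiw_linear([[1, 2, 3, 4],
                        [2, 1, 3, 4],
                        [1, 3, 2, 4]])
```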

64 pages, 6020 KB  
Article
Logistics Performance and the Three Pillars of ESG: A Detailed Causal and Predictive Investigation
by Nicola Magaletti, Valeria Notarnicola, Mauro Di Molfetta, Stefano Mariani and Angelo Leogrande
Sustainability 2025, 17(24), 11370; https://doi.org/10.3390/su172411370 - 18 Dec 2025
Viewed by 1026
Abstract
This study investigates the complex relationship between the performance of logistics and Environmental, Social, and Governance (ESG) performance, drawing upon the multi-methodological framework of combining econometrics with state-of-the-art machine learning approaches. Employing Instrumental Variable (IV) Panel data regressions, viz., 2SLS and G2SLS, with data from a balanced panel of 163 countries covering the period from 2007 to 2023, the research thoroughly investigates how the performance of the Logistics Performance Index (LPI) is correlated with a variety of ESG indicators. To enrich the analysis, machine learning models—models based upon regression, viz., Random Forest, k-Nearest Neighbors, Support Vector Machines, Boosting Regression, Decision Tree Regression, and Linear Regressions, and clustering, viz., Density-Based, Neighborhood-Based, and Hierarchical clustering, Fuzzy c-Means, Model-Based, and Random Forest—were applied to uncover unknown structures and predict the behavior of LPI. Empirical evidence suggests that higher improvements in the performance of logistics are systematically correlated with nascent developments in all three dimensions of the environment (E), social (S), and governance (G). The evidence from econometrics suggests that higher LPI goes with environmental trade-offs such as higher emissions of greenhouse gases but cleaner air and usage of resources. On the S dimension, better performance in terms of logistics is correlated with better education performance and reducing child labor, but also demonstrates potential problems such as social imbalances. For G, better governance of logistics goes with better governance, voice and public participation, science productivity, and rule of law. Through both regression and cluster methods, each of the respective parts of ESG were analyzed in isolation, allowing us to study in-depth how the infrastructure of logistics is interacting with sustainability research goals. Overall, the study emphasizes that while modernization is facilitated by the performance of the infrastructure of logistics, this must go hand in hand with policy intervention to make it socially inclusive, environmentally friendly, and institutionally robust.

19 pages, 2001 KB  
Article
Modelling the Sustainable Development of the Ground Handling Process Using the PERT-COST Method
by Artur Kierzkowski, Jacek Ryczyński, Tomasz Kisiel, Ewa Mardeusz and Olegas Prentkovskis
Sustainability 2025, 17(24), 11278; https://doi.org/10.3390/su172411278 - 16 Dec 2025
Viewed by 458
Abstract
Aircraft turnaround efficiency is a key determinant of the sustainability of air transport systems. Each stage of ground handling—passenger disembarkation, baggage handling, refuelling, and ancillary services—contributes to the total turnaround time, with direct implications for airport capacity, operating costs, and environmental performance. Using empirical records from ground operations, the study characterizes the duration and variability of individual activities and identifies the main process bottlenecks. Building on this evidence, a comparative PERT-COST protocol with explicit threshold rules (quantized billing steps for selected resources) is developed and applied across predefined scenarios (remote versus gate, day versus night, low versus high fuel uplift, with versus without a second baggage team) under both linear and threshold cost models. The protocol aligns with ITS-enabled decision support by mapping stochastic activity times to cost-of-crashing functions and by providing harmonized performance metrics: final time T, total cost ∑ΔC, and efficiency η (EUR/min). The results show that moderate time reductions are attainable at reasonable cost, whereas aggressive targets that lie below the structural minimum are infeasible under current constraints; gate stands reduce the attainable minimum time but increase the marginal price near the minimum, and night operations raise costs without improving that minimum. These findings delineate the most productive intervention range and inform operational choices consistent with sustainability objectives.
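The PERT ingredients are standard and easy to sketch; the activities, durations, and the quantized billing step below are invented for illustration.

```python
import math

def pert_estimate(a, m, b):
    # Classic PERT (beta) approximation from optimistic a, most likely m,
    # pessimistic b: expected duration and variance.
    return (a + 4 * m + b) / 6, ((b - a) / 6) ** 2

def threshold_cost(minutes, step=15.0, rate=50.0):
    # Threshold (quantized-billing) cost model: every started 15-min block of a
    # resource is billed at a flat rate, unlike a linear cost-per-minute model.
    return math.ceil(minutes / step) * rate

# Hypothetical turnaround activities (minutes): optimistic, most likely, pessimistic.
activities = {"disembark": (5, 8, 14), "refuel": (10, 15, 26), "baggage": (8, 12, 22)}
total_te = sum(pert_estimate(*amb)[0] for amb in activities.values())
total_cost = threshold_cost(total_te)
```

The jump discontinuities of the threshold model are why crashing an activity only pays off when it crosses a billing step, a behaviour a linear cost model cannot represent.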

19 pages, 444 KB  
Article
Enhancing Cascade Object Detection Accuracy Using Correctors Based on High-Dimensional Feature Separation
by Andrey V. Kovalchuk, Andrey A. Lebedev, Olga V. Shemagina, Irina V. Nuidel, Vladimir G. Yakhno and Sergey V. Stasenko
Technologies 2025, 13(12), 593; https://doi.org/10.3390/technologies13120593 - 16 Dec 2025
Cited by 2 | Viewed by 474
Abstract
This study addresses the problem of correcting systematic errors in classical cascade object detectors under severe data scarcity and distribution shift. We focus on the widely used Viola–Jones framework enhanced with a modified Census transform and propose a modular “corrector” architecture that can be attached to an existing detector without retraining it. The key idea is to exploit the blessing of dimensionality: high-dimensional feature vectors constructed from multiple cascade stages are transformed by PCA and whitening into a space where simple linear Fisher discriminants can reliably separate rare error patterns from normal operation using only a few labeled examples. The proposed method introduces several improvements addressing error correction and robustness in data-limited conditions. The approach involves image partitioning through a sliding window of fixed aspect ratio and a modified census transform in which pixel intensity is compared to the mean value within a rectangular neighborhood. Training samples for false negative and false positive correctors are selected using dual Intersection-over-Union (IoU) thresholds and probabilistic sampling of true positive and true negative fragments. Corrector models are trained based on the principles of high-dimensional separability within the paradigm of one- and few-shot learning, utilizing features derived from cascade stages of the detector. Decision boundaries are optimized using Fisher’s rule, with adaptive thresholding to guarantee zero false acceptance. Experimental results indicate that the proposed correction scheme enhances object detection accuracy by effectively compensating for classifier errors, particularly under conditions of scarce training data. On two railway image datasets with only about one thousand images each, the proposed correctors increase Precision from 0.36 to 0.65 on identifier detection while maintaining high Recall (0.98 → 0.94), and improve digit detection Recall from 0.94 to 0.98 with negligible loss in Precision (0.92 → 0.91). These results demonstrate that even under scarce training data, high-dimensional feature separation enables effective one-/few-shot error correction for cascade detectors with minimal computational overhead.
(This article belongs to the Special Issue Image Analysis and Processing)
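The corrector pipeline (PCA, whitening, then a Fisher-style linear discriminant with a zero-false-acceptance threshold) can be sketched in a few lines of NumPy. The synthetic "error" and "normal" feature vectors stand in for the cascade-stage features, and the details below are a reconstruction of the recipe, not the authors' code.

```python
import numpy as np

def fit_corrector(X_err, X_ok, var_keep=0.99):
    # PCA + whitening on the pooled samples.
    X = np.vstack([X_err, X_ok])
    mu = X.mean(axis=0)
    _, S, Vt = np.linalg.svd(X - mu, full_matrices=False)
    k = np.searchsorted(np.cumsum(S**2) / np.sum(S**2), var_keep) + 1
    W = Vt[:k].T / (S[:k] / np.sqrt(len(X) - 1))   # project and whiten
    Z_err, Z_ok = (X_err - mu) @ W, (X_ok - mu) @ W
    # In whitened space the Fisher direction reduces to the difference of means.
    w = Z_err.mean(axis=0) - Z_ok.mean(axis=0)
    # Adaptive threshold: no normal training sample is flagged (zero false acceptance).
    thr = (Z_ok @ w).max()
    return lambda x: float((x - mu) @ W @ w) > thr

rng = np.random.default_rng(0)
X_ok = rng.normal(0.0, 1.0, size=(50, 10))    # "normal operation" fragments
X_err = rng.normal(4.0, 1.0, size=(5, 10))    # few-shot rare error patterns
is_error = fit_corrector(X_err, X_ok)
flagged_ok = sum(is_error(x) for x in X_ok)
flagged_err = sum(is_error(x) for x in X_err)
```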

32 pages, 544 KB  
Article
Explainability, Safety Cues, and Trust in GenAI Advisors: A SEM–ANN Hybrid Study
by Stefanos Balaskas, Ioannis Stamatiou and George Androulakis
Future Internet 2025, 17(12), 566; https://doi.org/10.3390/fi17120566 - 9 Dec 2025
Viewed by 1172
Abstract
“GenAI” assistants are gradually being integrated into daily tasks and learning, but their uptake is no less contingent on perceptions of credibility or safety than on their capabilities per se. The current study hypothesizes and tests a two-route model in which two interface-level constructs, perceived transparency (PT) and perceived safety/guardrails (PSG), influence behavioral intention (BI) both directly and indirectly, via the two socio-cognitive mediators trust in automation (TR) and psychological reactance (RE). We also formulate two evaluative lenses, perceived usefulness (PU) and perceived risk (PR). Employing survey data with a sample of 365 responses and partial least squares structural equation modeling (PLS-SEM) with bootstrap techniques in SMART-PLS 4, we found that PT is the most influential factor in BI, supported by TR, with some contributions from PSG/PU, but none from PR/RE. Mediation testing revealed significant partial mediations, with PT exhibiting an indirect-only mediated relationship via TR, while the reactance-driven paths were nonsignificant. To uncover non-linearity and non-compensation, a Stage 2 multilayer perceptron was implemented, confirming the SEM ranking, complemented by a variable-importance and sensitivity analysis. In practical terms, the study’s findings support the primacy of explanatory clarity and of clear, rigorously enforced rules, with usefulness subordinated to credibility once the latter is achieved. The integration of SEM and ANN improves both explanation and prediction, providing valuable insights for policy, managerial, and educational decision-makers about the implementation of GenAI.

21 pages, 3633 KB  
Article
One System, Two Rules: Asymmetrical Coupling of Speech Production and Reading Comprehension in the Trilingual Brain
by Yuanbo Wang, Yingfang Meng, Qiuyue Yang and Ruiming Wang
Brain Sci. 2025, 15(12), 1288; https://doi.org/10.3390/brainsci15121288 - 29 Nov 2025
Viewed by 566
Abstract
Background/Objectives: The functional architecture connecting speech production and reading comprehension remains unclear in multilinguals. This study investigated the cross-modal interaction between these systems in trilinguals to resolve the debate between Age of Acquisition (AoA) and usage frequency. Methods: We recruited 144 Uyghur (L1)–Chinese (L2)–English (L3) trilinguals, a population uniquely dissociating acquisition order from social dominance. Participants completed a production-to-comprehension priming paradigm, naming pictures in one language before performing a lexical decision task on translated words. Data were analyzed using linear mixed-effects models. Results: Significant cross-language priming confirmed an integrated lexicon, yet a fundamental asymmetry emerged. The top-down influence of production was governed by AoA; earlier-acquired languages (specifically L1) generated more effective priming signals than L2. Conversely, the bottom-up efficiency of recognition was driven by social usage frequency; the socially dominant L2 was the most receptive target, surpassing the heritage L1. Conclusions: The trilingual lexicon operates via “Two Rules”: a history-driven production system (AoA) and an environment-driven recognition system (Social Usage). This asymmetrical baseline challenges simple bilingual extensions and clarifies the dynamics of multilingual language control.
(This article belongs to the Topic Language: From Hearing to Speech and Writing)

40 pages, 3433 KB  
Article
Interpretable Predictive Modeling for Educational Equity: A Workload-Aware Decision Support System for Early Identification of At-Risk Students
by Aigul Shaikhanova, Oleksandr Kuznetsov, Kainizhamal Iklassova, Aizhan Tokkuliyeva and Laura Sugurova
Big Data Cogn. Comput. 2025, 9(11), 297; https://doi.org/10.3390/bdcc9110297 - 20 Nov 2025
Cited by 1 | Viewed by 1544
Abstract
Educational equity and access to quality learning opportunities represent fundamental pillars of sustainable societal development, directly aligned with the United Nations Sustainable Development Goal 4 (Quality Education). Student retention remains a critical challenge in higher education, with early disengagement strongly predicting eventual failure [...] Read more.
Educational equity and access to quality learning opportunities represent fundamental pillars of sustainable societal development, directly aligned with the United Nations Sustainable Development Goal 4 (Quality Education). Student retention remains a critical challenge in higher education, with early disengagement strongly predicting eventual failure and limiting opportunities for social mobility. While machine learning models have demonstrated impressive predictive accuracy for identifying at-risk students, most systems prioritize performance metrics over practical deployment constraints, creating a gap between research demonstrations and real-world impact for social good. We present an accountable and interpretable decision support system that balances three competing objectives essential for responsible AI deployment: ultra-early prediction timing (day 14 of semester), manageable instructor workload (flagging 15% of students), and model transparency (multiple explanation mechanisms). Using the Open University Learning Analytics Dataset (OULAD) containing 22,437 students across seven modules, we develop predictive models from activity patterns, assessment performance, and demographics observable within two weeks. We compare threshold-based rules, logistic regression (interpretable linear modeling), and gradient boosting (ensemble modeling) using temporal validation where early course presentations train models tested on later cohorts. Results show gradient boosting achieves AUC (Area Under the ROC Curve, measuring discrimination ability) of 0.789 and average precision of 0.722, with logistic regression performing nearly identically (AUC 0.783, AP 0.713), revealing that linear modeling captures most predictive signal and makes interpretability essentially free. 
At our recommended threshold of 0.607, the predictive model flags 15% of students with 84% precision and 35% recall, creating actionable alert lists that instructors can manage within normal teaching duties while maintaining accountability for false positives. Calibration analysis confirms that predicted probabilities match observed failure rates, ensuring trustworthy risk estimates. Feature importance analysis reveals that assessment completion and activity patterns dominate demographic factors, providing transparent evidence that behavioral engagement matters more than student background. We implement a complete decision support system generating instructor reports, explainable natural-language justifications for each alert, and personalized intervention templates. Our contribution advances responsible AI for social good by demonstrating that interpretable predictive modeling can support equitable educational outcomes when designed with explicit attention to timing, workload, and transparency: core principles of accountable artificial intelligence. Full article
(This article belongs to the Special Issue Applied Data Science for Social Good: 2nd Edition)
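The abstract's setup (logistic regression vs. gradient boosting under temporal validation, with a workload cap of flagging the top 15% of students) can be sketched on synthetic data. Everything here is illustrative: the two "day-14" features, the cohort generator, and the effect sizes are invented stand-ins, not the OULAD features or the paper's pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score, precision_score, recall_score

rng = np.random.default_rng(42)

def make_cohort(n):
    # two hypothetical early-semester features:
    # standardized activity count and assessments completed
    X = rng.normal(size=(n, 2))
    logits = -1.0 + 1.5 * X[:, 0] + 1.0 * X[:, 1]
    y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)  # 1 = at risk
    return X, y

# temporal validation: train on an early presentation,
# evaluate on a later cohort never seen in training
X_train, y_train = make_cohort(2000)
X_test, y_test = make_cohort(1000)

for name, clf in [("logistic", LogisticRegression()),
                  ("gboost", GradientBoostingClassifier())]:
    clf.fit(X_train, y_train)
    proba = clf.predict_proba(X_test)[:, 1]
    auc = roc_auc_score(y_test, proba)
    # workload cap: flag only the top 15% of students by predicted risk
    threshold = np.quantile(proba, 0.85)
    flagged = proba >= threshold
    print(name,
          "AUC", round(auc, 3),
          "precision", round(precision_score(y_test, flagged), 3),
          "recall", round(recall_score(y_test, flagged), 3))
```

The key design choice mirrored here is that the operating threshold is chosen from the alert-list budget (15% of students), not from a generic 0.5 cutoff, so precision/recall at that budget is what matters to instructors.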

24 pages, 2475 KB  
Article
Adaptive Belief Rule Base Modeling of Complex Industrial Systems Based on Sigmoid Functions
by Haolan Huang, Shucheng Feng, Jingying Li, Tianshu Guan and Hailong Zhu
Entropy 2025, 27(11), 1157; https://doi.org/10.3390/e27111157 - 14 Nov 2025
Viewed by 595
Abstract
In response to the challenges posed by multifactorial nonlinear relationships and uncertainties, and to address the limitations of the existing Belief Rule Base (BRB) in nonlinear fitting, uncertainty representation, and parameter optimization, this paper presents an improved reliable modeling method using a nonlinear [...] Read more.
In response to the challenges posed by multifactorial nonlinear relationships and uncertainties, and to address the limitations of the existing Belief Rule Base (BRB) in nonlinear fitting, uncertainty representation, and parameter optimization, this paper presents an improved reliable modeling method using a nonlinear belief rule base (R-NBRB). First, the linear inference mechanism is replaced by a smooth sigmoid (S-shaped) function, which better adapts to the nonlinear dynamics of complex industrial systems. Second, attribute reliability is quantified through a reliability assessment method; data, reliability, and expert knowledge are then integrated using the Evidential Reasoning (ER) algorithm, with uncertainty expressed in the form of belief degrees. Finally, the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) is applied to optimize the inference parameters, reducing decision bias caused by insufficient expert knowledge. Experiments on a petroleum pipeline leak detection task show that the mean squared error (MSE) of the R-NBRB model is only 0.2569, a 28.24% reduction compared with the baseline BRB model, confirming the proposed method's effectiveness and adaptability in complex industrial settings. Full article
(This article belongs to the Section Complexity)
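The core idea above, replacing a piecewise-linear matching degree with a smooth sigmoid and tuning its parameters by an evolutionary search, can be sketched as follows. This is a toy illustration, not the paper's R-NBRB: the matching functions are simplified, the data are synthetic, and a crude random search stands in for CMA-ES (which in practice would come from a library such as the `cma` package).

```python
import numpy as np

def linear_match(x, lo, hi):
    # piecewise-linear (triangular-style) matching degree,
    # as in a classic belief rule base
    return np.clip(1 - np.abs(x - (lo + hi) / 2) / ((hi - lo) / 2), 0, 1)

def sigmoid_match(x, center, steepness):
    # smooth S-function matching degree, the nonlinear replacement
    return 1 / (1 + np.exp(-steepness * (x - center)))

rng = np.random.default_rng(1)
x = np.linspace(-3, 3, 200)
# synthetic nonlinear system response with small observation noise
target = 1 / (1 + np.exp(-2.0 * (x - 0.5))) + rng.normal(0, 0.02, x.size)

# evolutionary-style parameter search (stand-in for CMA-ES):
# sample candidate (center, steepness) pairs and keep the best by MSE
best, best_mse = None, np.inf
for _ in range(2000):
    center, steep = rng.uniform(-1, 1), rng.uniform(0.5, 4)
    mse = np.mean((sigmoid_match(x, center, steep) - target) ** 2)
    if mse < best_mse:
        best, best_mse = (center, steep), mse

print("linear match at x=0:", linear_match(0.0, -1.0, 1.0))
print("best (center, steepness):", best, "MSE:", best_mse)
```

The contrast to notice is that `linear_match` has a kink at its interval midpoint, while `sigmoid_match` is differentiable everywhere, which is what makes it friendlier to the smooth nonlinear dynamics the paper targets.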
