Search Results (114)

Search Parameters:
Keywords = linear decision rules

44 pages, 4146 KB  
Article
Interpretable Binary Classification Under Constraints for Financial Compliance Modeling
by Álex Paz, Broderick Crawford, Eric Monfroy, Eduardo Rodriguez-Tello, José Barrera-García, Felipe Cisternas-Caneo, Benjamín López Cortés, Yoslandy Lazo, Andrés Yáñez, Álvaro Peña Fritz and Ricardo Soto
Mathematics 2026, 14(3), 429; https://doi.org/10.3390/math14030429 - 26 Jan 2026
Abstract
This study addresses an interpretable supervised binary classification problem under constrained feature availability and class imbalance. The objective is to evaluate whether reliable predictive performance can be achieved using exclusively pre-event administrative variables while preserving transparency and analytical traceability of model decisions. A comparative framework is developed using linear and ensemble-based classifiers, combined with resampling strategies and exhaustive hyperparameter optimization embedded within cross-validation. Model performance is evaluated using standard classification metrics, with particular emphasis on the Matthews correlation coefficient as a robust measure under imbalance. In addition to predictive accuracy, the analysis incorporates global, structural, and local interpretability mechanisms, including permutation feature importance, explicit decision paths derived from tree-based models, and additive local explanations. Experimental results show that optimized ensemble models achieve consistent performance gains over linear baselines while maintaining a balanced error structure across classes. Importantly, the most influential predictors exhibit stable rankings across models and explanation methods, indicating a concentrated and robust discriminative signal within the constrained feature space. The interpretability analysis demonstrates that complex classifiers can be decomposed into verifiable decision rules and locally coherent feature contributions. Overall, the findings confirm that interpretable supervised classification can be reliably conducted under administrative data constraints, providing a reproducible modeling framework that balances predictive performance, error analysis, and explainability in applied mathematical settings. Full article
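
As an illustration of the workflow this abstract describes (MCC-scored model selection under class imbalance, with hyperparameter search nested inside cross-validation), here is a minimal sketch; the synthetic dataset, the gradient-boosting estimator, and the parameter grid are placeholders, not the authors' pipeline.

```python
# Minimal sketch: Matthews-correlation-scored hyperparameter search nested in
# cross-validation for an imbalanced binary problem (placeholder data and grid).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import matthews_corrcoef, make_scorer
from sklearn.model_selection import GridSearchCV, cross_val_score

# Imbalanced stand-in for the pre-event administrative variables.
X, y = make_classification(n_samples=2000, n_features=10, weights=[0.9, 0.1], random_state=0)

mcc = make_scorer(matthews_corrcoef)
search = GridSearchCV(
    GradientBoostingClassifier(random_state=0),
    param_grid={"n_estimators": [100, 300], "max_depth": [2, 3]},
    scoring=mcc,
    cv=5,
)
# Outer folds estimate generalization of the tuned model; inner folds tune it.
outer_scores = cross_val_score(search, X, y, scoring=mcc, cv=5)
print("MCC across outer folds:", np.round(outer_scores, 3), "mean:", round(outer_scores.mean(), 3))
```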
26 pages, 5330 KB  
Article
Spatial Risk Assessment: A Case of Multivariate Linear Regression
by Dubravka Božić, Biserka Runje, Branko Štrbac, Miloš Ranisavljev and Andrej Razumić
Appl. Syst. Innov. 2026, 9(1), 20; https://doi.org/10.3390/asi9010020 - 9 Jan 2026
Viewed by 364
Abstract
The acceptance or rejection of a measurement is determined based on its associated measurement uncertainty. In this procedure, there is a risk of making incorrect decisions, including the potential rejection of compliant measurements or the acceptance of non-conforming ones. This study introduces a mathematical model for the spatial evaluation of the global producer’s and global consumer’s risk, predicated on Bayes’ theorem and a decision rule that includes a guard band. The proposed model is appropriate for risk assessment within the framework of multivariate linear regression. Its applicability is demonstrated through an example involving the flatness of the workbench table surface of a coordinate measuring machine. The least-risk direction on the workbench was identified, and risks were quantified under varying selections of reference planes and differing measurement uncertainties anticipated in future measurement processes. Model evaluation was performed using confusion matrix-based metrics. The spaces of the commonly used metrics, constrained by the dimensions of the coordinate measuring machine workbench, were constructed. Using the evaluated metrics, the optimal guard band width was specified to ensure the minimum values of both the global producer’s and the global consumer’s risk. Full article
(This article belongs to the Section Applied Mathematics)
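
To make the guard-banded decision rule concrete, a small Monte Carlo sketch of the global consumer's and producer's risks follows; the tolerance, measurement uncertainty, guard-band multiplier, and prior for the measurand are illustrative values, not taken from the article.

```python
# Sketch: global consumer's and producer's risk under a guard-banded acceptance
# rule, estimated by Monte Carlo simulation (all numeric values are placeholders).
import numpy as np

rng = np.random.default_rng(0)
T = 0.05            # tolerance limit on the flatness deviation (placeholder units)
u = 0.01            # standard measurement uncertainty
g = 1.0 * u         # guard band width, so the acceptance limit is T - g

true = np.abs(rng.normal(0.0, 0.02, size=1_000_000))   # assumed prior for the true deviation
measured = true + rng.normal(0.0, u, size=true.size)   # measured value = true value + error

accepted = measured <= T - g
conforming = true <= T

consumer_risk = np.mean(accepted & ~conforming)   # accepting a non-conforming item
producer_risk = np.mean(~accepted & conforming)   # rejecting a conforming item
print(f"global consumer's risk = {consumer_risk:.4f}, global producer's risk = {producer_risk:.4f}")
```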

26 pages, 1160 KB  
Article
Identifying the Importance of Key Performance Indicators for Enhanced Maritime Decision-Making to Avoid Navigational Accidents
by Antanas Markauskas and Vytautas Paulauskas
J. Mar. Sci. Eng. 2026, 14(1), 105; https://doi.org/10.3390/jmse14010105 - 5 Jan 2026
Viewed by 392
Abstract
Despite ongoing advances in maritime safety research, ship accidents persist, with significant consequences for human life, marine ecosystems, and port operations. Because many accidents occur in or near ports, assessing a vessel’s ability to enter or depart safely remains critical. Although ports apply local navigational rules, safety criteria could be strengthened by adopting more adaptive and data-informed approaches. This study presents a mathematical framework that links Key Performance Indicators (KPIs) to a Ship Risk Profile (SRP) for collision/contact/grounding risk indication. Expert-based KPI importance weights were derived using the Average Rank Transformation into Weight method in linear (ARTIW-L) and nonlinear (ARTIW-N) forms and aggregated into a nominal SRP. Using routinely monitored KPIs largely drawn from the Baltic and International Maritime Council and Port State Control/flag-related measures, the results indicate that critical equipment and systems failures and human/organisational factors—particularly occupational health and safety and human resource management deficiencies—are the most influential contributors to the normalised accident-risk index. The proposed framework provides port authorities and maritime stakeholders with an interpretable basis for more proactive risk-informed decision-making and targeted safety improvements. Full article
(This article belongs to the Special Issue Advancements in Maritime Safety and Risk Assessment)
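
As a rough illustration of expert-rank-based weighting in the spirit of ARTIW-L, the sketch below assumes the common linear rank-to-weight transformation w_i = (m + 1 - mean rank_i) / sum_j (m + 1 - mean rank_j); the KPI names and expert rankings are invented, and the exact formula used in the paper may differ.

```python
# Sketch (assumed form of a linear rank-to-weight transformation, ARTIW-L style);
# expert ranks and KPI labels are placeholders.
import numpy as np

# rows = experts, columns = KPIs; rank 1 = most important
ranks = np.array([
    [1, 3, 2, 4],
    [2, 3, 1, 4],
    [1, 4, 2, 3],
], dtype=float)

m = ranks.shape[1]
mean_rank = ranks.mean(axis=0)
raw = m + 1 - mean_rank          # linear transformation of the average ranks
weights = raw / raw.sum()        # normalized importance weights

kpi_labels = ["equipment failures", "OHS deficiencies", "HR management", "PSC findings"]
for label, w in zip(kpi_labels, weights):
    print(f"{label}: {w:.3f}")
```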

64 pages, 6020 KB  
Article
Logistics Performance and the Three Pillars of ESG: A Detailed Causal and Predictive Investigation
by Nicola Magaletti, Valeria Notarnicola, Mauro Di Molfetta, Stefano Mariani and Angelo Leogrande
Sustainability 2025, 17(24), 11370; https://doi.org/10.3390/su172411370 - 18 Dec 2025
Viewed by 717
Abstract
This study investigates the complex relationship between the performance of logistics and Environmental, Social, and Governance (ESG) performance, drawing upon the multi-methodological framework of combining econometrics with state-of-the-art machine learning approaches. Employing Instrumental Variable (IV) Panel data regressions, viz., 2SLS and G2SLS, with data from a balanced panel of 163 countries covering the period from 2007 to 2023, the research thoroughly investigates how the performance of the Logistics Performance Index (LPI) is correlated with a variety of ESG indicators. To enrich the analysis, machine learning models—models based upon regression, viz., Random Forest, k-Nearest Neighbors, Support Vector Machines, Boosting Regression, Decision Tree Regression, and Linear Regressions, and clustering, viz., Density-Based, Neighborhood-Based, and Hierarchical clustering, Fuzzy c-Means, Model-Based, and Random Forest—were applied to uncover unknown structures and predict the behavior of LPI. Empirical evidence suggests that higher improvements in the performance of logistics are systematically correlated with nascent developments in all three dimensions of the environment (E), social (S), and governance (G). The evidence from econometrics suggests that higher LPI goes with environmental trade-offs such as higher emissions of greenhouse gases but cleaner air and usage of resources. On the S dimension, better performance in terms of logistics is correlated with better education performance and reducing child labor, but also demonstrates potential problems such as social imbalances. For G, better governance of logistics goes with better governance, voice and public participation, science productivity, and rule of law. Through both regression and cluster methods, each of the respective parts of ESG were analyzed in isolation, allowing us to study in-depth how the infrastructure of logistics is interacting with sustainability research goals. Overall, the study emphasizes that while modernization is facilitated by the performance of the infrastructure of logistics, this must go hand in hand with policy intervention to make it socially inclusive, environmentally friendly, and institutionally robust. Full article
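
For readers unfamiliar with the instrumental-variable estimator named above, a bare-bones two-stage least squares (2SLS) sketch on synthetic data follows; the variable names (an LPI-like regressor, an ESG-like outcome, an instrument z) are placeholders and do not reproduce the study's panel specification.

```python
# Sketch of 2SLS on synthetic cross-sectional data (not the study's panel setup).
import numpy as np

rng = np.random.default_rng(0)
n = 1000
z = rng.normal(size=n)                      # instrument
lpi = 0.8 * z + rng.normal(size=n)          # endogenous regressor (logistics performance)
esg = 0.5 * lpi + rng.normal(size=n)        # outcome (an ESG indicator)

def ols(y, x):
    X = np.column_stack([np.ones(len(y)), x])
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Stage 1: project the endogenous regressor on the instrument.
a0, a1 = ols(lpi, z)
lpi_hat = a0 + a1 * z
# Stage 2: regress the outcome on the fitted values from stage 1.
b0, b1 = ols(esg, lpi_hat)
print(f"2SLS estimate of the LPI effect: {b1:.3f}")
```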

19 pages, 2001 KB  
Article
Modelling the Sustainable Development of the Ground Handling Process Using the PERT-COST Method
by Artur Kierzkowski, Jacek Ryczyński, Tomasz Kisiel, Ewa Mardeusz and Olegas Prentkovskis
Sustainability 2025, 17(24), 11278; https://doi.org/10.3390/su172411278 - 16 Dec 2025
Viewed by 345
Abstract
Aircraft turnaround efficiency is a key determinant of the sustainability of air transport systems. Each stage of ground handling—passenger disembarkation, baggage handling, refuelling, and ancillary services—contributes to the total turnaround time, with direct implications for airport capacity, operating costs, and environmental performance. Using empirical records from ground operations, the study characterizes the duration and variability of individual activities and identifies the main process bottlenecks. Building on this evidence, a comparative PERT-COST protocol with explicit threshold rules (quantized billing steps for selected resources) is developed and applied across predefined scenarios (remote versus gate, day versus night, low versus high fuel uplift, with versus without a second baggage team) under both linear and threshold cost models. The protocol aligns with ITS-enabled decision support by mapping stochastic activity times to cost-of-crashing functions and by providing harmonized performance metrics: final time T, total cost ∑ΔC, and efficiency η (EUR/min). The results show that moderate time reductions are attainable at reasonable cost, whereas aggressive targets that lie below the structural minimum are infeasible under current constraints; gate stands reduce the attainable minimum time but increase the marginal price near the minimum, and night operations raise costs without improving that minimum. These findings delineate the most productive intervention range and inform operational choices consistent with sustainability objectives. Full article
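
The PERT side of the protocol can be illustrated with the standard three-point formulas (expected time (a + 4m + b)/6 and variance ((b - a)/6)^2) plus a linear cost-of-crashing rate; the activity names, durations, and EUR/min rates below are placeholders, and the article's threshold (quantized) cost model is not reproduced here.

```python
# Sketch: PERT three-point estimates with a linear cost-of-crashing rate
# (all activity data are placeholders; the threshold cost model is omitted).
activities = {
    # name: (optimistic a, most likely m, pessimistic b, crash cost in EUR per minute saved)
    "disembarkation": (8, 10, 14, 12.0),
    "refuelling":     (15, 20, 30, 25.0),
    "baggage":        (12, 18, 26, 18.0),
}

for name, (a, m, b, rate) in activities.items():
    te = (a + 4 * m + b) / 6       # PERT expected duration
    var = ((b - a) / 6) ** 2       # PERT variance
    max_saving = te - a            # cannot crash below the optimistic time
    print(f"{name}: te = {te:.1f} min, var = {var:.2f}, "
          f"crash up to {max_saving:.1f} min at {rate} EUR/min")
```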

19 pages, 444 KB  
Article
Enhancing Cascade Object Detection Accuracy Using Correctors Based on High-Dimensional Feature Separation
by Andrey V. Kovalchuk, Andrey A. Lebedev, Olga V. Shemagina, Irina V. Nuidel, Vladimir G. Yakhno and Sergey V. Stasenko
Technologies 2025, 13(12), 593; https://doi.org/10.3390/technologies13120593 - 16 Dec 2025
Cited by 1 | Viewed by 371
Abstract
This study addresses the problem of correcting systematic errors in classical cascade object detectors under severe data scarcity and distribution shift. We focus on the widely used Viola–Jones framework enhanced with a modified Census transform and propose a modular “corrector” architecture that can be attached to an existing detector without retraining it. The key idea is to exploit the blessing of dimensionality: high-dimensional feature vectors constructed from multiple cascade stages are transformed by PCA and whitening into a space where simple linear Fisher discriminants can reliably separate rare error patterns from normal operation using only a few labeled examples. This study presents a novel algorithm designed to correct the outputs of object detectors constructed using the Viola–Jones framework enhanced with a modified census transform. The proposed method introduces several improvements addressing error correction and robustness in data-limited conditions. The approach involves image partitioning through a sliding window of fixed aspect ratio and a modified census transform in which pixel intensity is compared to the mean value within a rectangular neighborhood. Training samples for false negative and false positive correctors are selected using dual Intersection-over-Union (IoU) thresholds and probabilistic sampling of true positive and true negative fragments. Corrector models are trained based on the principles of high-dimensional separability within the paradigm of one- and few-shot learning, utilizing features derived from cascade stages of the detector. Decision boundaries are optimized using Fisher’s rule, with adaptive thresholding to guarantee zero false acceptance. Experimental results indicate that the proposed correction scheme enhances object detection accuracy by effectively compensating for classifier errors, particularly under conditions of scarce training data. On two railway image datasets with only about one thousand images each, the proposed correctors increase Precision from 0.36 to 0.65 on identifier detection while maintaining high Recall (0.98 → 0.94), and improve digit detection Recall from 0.94 to 0.98 with negligible loss in Precision (0.92 → 0.91). These results demonstrate that even under scarce training data, high-dimensional feature separation enables effective one-/few-shot error correction for cascade detectors with minimal computational overhead. Full article
(This article belongs to the Special Issue Image Analysis and Processing)
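
The corrector idea (whitened high-dimensional features plus a Fisher discriminant trained on a handful of error examples) can be sketched as below; the synthetic features, the whitening dimensionality, and the way the "zero false acceptance" threshold is set are illustrative assumptions, not the authors' implementation.

```python
# Sketch: PCA whitening + Fisher linear discriminant as a few-shot error corrector
# (synthetic features; threshold set so that no "normal" training sample fires).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(500, 256))   # cascade-stage features of correct detections
errors = rng.normal(0.8, 1.0, size=(5, 256))     # a few labeled error patterns (few-shot)

X = np.vstack([normal, errors])
y = np.array([0] * len(normal) + [1] * len(errors))

pca = PCA(n_components=50, whiten=True).fit(normal)   # whitening estimated on normal data
Xw = pca.transform(X)

fisher = LinearDiscriminantAnalysis().fit(Xw, y)      # Fisher's rule in the whitened space
threshold = fisher.decision_function(pca.transform(normal)).max()  # no normal sample may fire

fires = fisher.decision_function(Xw) > threshold
print(f"corrector fires on {fires.mean():.1%} of samples, "
      f"catching {fires[y == 1].mean():.0%} of the error examples")
```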

32 pages, 544 KB  
Article
Explainability, Safety Cues, and Trust in GenAI Advisors: A SEM–ANN Hybrid Study
by Stefanos Balaskas, Ioannis Stamatiou and George Androulakis
Future Internet 2025, 17(12), 566; https://doi.org/10.3390/fi17120566 - 9 Dec 2025
Viewed by 897
Abstract
“GenAI” assistants are gradually being integrated into daily tasks and learning, but their uptake is no less contingent on perceptions of credibility or safety than on their capabilities per se. The current study hypothesizes and tests its proposed two-road construct consisting of two interface-level constructs, namely perceived transparency (PT) and perceived safety/guardrails (PSG), influencing “behavioral intention” (BI) both directly and indirectly, via the two socio-cognitive mediators trust in automation (TR) and psychological reactance (RE). Furthermore, we also provide formulations for the evaluative lenses, namely perceived usefulness (PU) and “perceived risk” (PR). Employing survey data with a sample of 365 responses and partial least squares structural equation modeling (PLS-SEM) with bootstrap techniques in SMART-PLS 4, we discovered that PT is the most influential factor in BI, supported by TR, with some contributions from PSG/PU, but none from PR/RE. Mediation testing revealed significant partial mediations, with PT only exhibiting indirect-only mediated relationships via TR, while the other variables are nonsignificant via reactance-driven paths. To uncover non-linearity and non-compensation, a Stage 2 multilayer perceptron was implemented, confirming the SEM ranking, complimented by an importance of variables and sensitivity analysis. In practical terms, the study’s findings support the primacy of explanatory clarity and the importance of clear rules that are rigorously obligatory, with usefulness subordinated to credibility once the latter is achieved. The integration of SEM and ANN improves explanation and prediction, providing valuable insights for policy, managerial, or educational decision-makers about the implementation of GenAI. Full article
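
A minimal stage-2 sketch of the ANN side is given below: an MLP fit on placeholder construct scores, with permutation importance standing in for the sensitivity analysis. The construct names follow the abstract (PT, PSG, TR, RE, PU, PR predicting BI), but the data, network size, and importance measure are assumptions rather than the study's SMART-PLS/ANN setup.

```python
# Sketch: stage-2 neural network on construct scores with permutation importance
# (synthetic data; construct labels follow the abstract).
import numpy as np
import pandas as pd
from sklearn.neural_network import MLPRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 365
X = pd.DataFrame(rng.normal(size=(n, 6)), columns=["PT", "PSG", "TR", "RE", "PU", "PR"])
bi = 0.5 * X["PT"] + 0.3 * X["TR"] + 0.15 * X["PSG"] + rng.normal(scale=0.5, size=n)

mlp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, bi)
imp = permutation_importance(mlp, X, bi, n_repeats=20, random_state=0)
for name, score in sorted(zip(X.columns, imp.importances_mean), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```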

21 pages, 3633 KB  
Article
One System, Two Rules: Asymmetrical Coupling of Speech Production and Reading Comprehension in the Trilingual Brain
by Yuanbo Wang, Yingfang Meng, Qiuyue Yang and Ruiming Wang
Brain Sci. 2025, 15(12), 1288; https://doi.org/10.3390/brainsci15121288 - 29 Nov 2025
Viewed by 470
Abstract
Background/Objectives: The functional architecture connecting speech production and reading comprehension remains unclear in multilinguals. This study investigated the cross-modal interaction between these systems in trilinguals to resolve the debate between Age of Acquisition (AoA) and usage frequency. Methods: We recruited 144 Uyghur (L1)–Chinese (L2)–English (L3) trilinguals, a population uniquely dissociating acquisition order from social dominance. Participants completed a production-to-comprehension priming paradigm, naming pictures in one language before performing a lexical decision task on translated words. Data were analyzed using linear mixed-effects models. Results: Significant cross-language priming confirmed an integrated lexicon, yet a fundamental asymmetry emerged. The top-down influence of production was governed by AoA; earlier-acquired languages (specifically L1) generated more effective priming signals than L2. Conversely, the bottom-up efficiency of recognition was driven by social usage frequency; the socially dominant L2 was the most receptive target, surpassing the heritage L1. Conclusions: The trilingual lexicon operates via “Two Rules”: a history-driven production system (AoA) and an environment-driven recognition system (Social Usage). This asymmetrical baseline challenges simple bilingual extensions and clarifies the dynamics of multilingual language control. Full article
(This article belongs to the Topic Language: From Hearing to Speech and Writing)
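
The analysis strategy named above (linear mixed-effects models for primed lexical decision) can be sketched in a few lines with statsmodels; the column names, effect sizes, and random-intercept structure below are invented for illustration and do not mirror the study's design.

```python
# Sketch: reaction time ~ prime language with a random intercept per participant
# (synthetic data and placeholder effect sizes).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_subj, n_trials = 30, 40
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subj), n_trials),
    "prime_lang": np.tile(["L1", "L2"], n_subj * n_trials // 2),
})
subj_effect = rng.normal(0, 40, n_subj)[df["subject"]]
df["rt"] = 650 + np.where(df["prime_lang"] == "L1", -25, 0) + subj_effect + rng.normal(0, 60, len(df))

model = smf.mixedlm("rt ~ prime_lang", df, groups=df["subject"]).fit()
print(model.summary())
```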

40 pages, 3433 KB  
Article
Interpretable Predictive Modeling for Educational Equity: A Workload-Aware Decision Support System for Early Identification of At-Risk Students
by Aigul Shaikhanova, Oleksandr Kuznetsov, Kainizhamal Iklassova, Aizhan Tokkuliyeva and Laura Sugurova
Big Data Cogn. Comput. 2025, 9(11), 297; https://doi.org/10.3390/bdcc9110297 - 20 Nov 2025
Viewed by 1176
Abstract
Educational equity and access to quality learning opportunities represent fundamental pillars of sustainable societal development, directly aligned with the United Nations Sustainable Development Goal 4 (Quality Education). Student retention remains a critical challenge in higher education, with early disengagement strongly predicting eventual failure and limiting opportunities for social mobility. While machine learning models have demonstrated impressive predictive accuracy for identifying at-risk students, most systems prioritize performance metrics over practical deployment constraints, creating a gap between research demonstrations and real-world impact for social good. We present an accountable and interpretable decision support system that balances three competing objectives essential for responsible AI deployment: ultra-early prediction timing (day 14 of semester), manageable instructor workload (flagging 15% of students), and model transparency (multiple explanation mechanisms). Using the Open University Learning Analytics Dataset (OULAD) containing 22,437 students across seven modules, we develop predictive models from activity patterns, assessment performance, and demographics observable within two weeks. We compare threshold-based rules, logistic regression (interpretable linear modeling), and gradient boosting (ensemble modeling) using temporal validation where early course presentations train models tested on later cohorts. Results show gradient boosting achieves AUC (Area Under the ROC Curve, measuring discrimination ability) of 0.789 and average precision of 0.722, with logistic regression performing nearly identically (AUC 0.783, AP 0.713), revealing that linear modeling captures most predictive signal and makes interpretability essentially free. At our recommended threshold of 0.607, the predictive model flags 15% of students with 84% precision and 35% recall, creating actionable alert lists instructors can manage within normal teaching duties while maintaining accountability for false positives. Calibration analysis confirms that predicted probabilities match observed failure rates, ensuring trustworthy risk estimates. Feature importance modeling reveals that assessment completion and activity patterns dominate demographic factors, providing transparent evidence that behavioral engagement matters more than student background. We implement a complete decision support system generating instructor reports, explainable natural language justifications for each alert, and personalized intervention templates. Our contribution advances responsible AI for social good by demonstrating that interpretable predictive modeling can support equitable educational outcomes when designed with explicit attention to timing, workload, and transparency—core principles of accountable artificial intelligence. Full article
(This article belongs to the Special Issue Applied Data Science for Social Good: 2nd Edition)
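
The workload-aware flagging logic (cut an interpretable risk score so that roughly a fixed share of students is alerted) can be illustrated as follows; the synthetic data, the 15% flag rate, and the resulting metrics are placeholders rather than the OULAD results quoted above.

```python
# Sketch: logistic-regression risk scores cut at a quantile so that about 15% of
# students are flagged (placeholder data, not the OULAD pipeline).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, precision_score, recall_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=12, weights=[0.7, 0.3], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
proba = clf.predict_proba(X_te)[:, 1]

threshold = np.quantile(proba, 0.85)      # flag roughly the top 15% highest-risk students
flagged = proba >= threshold
print("AUC:", round(roc_auc_score(y_te, proba), 3),
      "| precision:", round(precision_score(y_te, flagged), 3),
      "| recall:", round(recall_score(y_te, flagged), 3),
      "| flag rate:", round(float(flagged.mean()), 3))
```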

24 pages, 2475 KB  
Article
Adaptive Belief Rule Base Modeling of Complex Industrial Systems Based on Sigmoid Functions
by Haolan Huang, Shucheng Feng, Jingying Li, Tianshu Guan and Hailong Zhu
Entropy 2025, 27(11), 1157; https://doi.org/10.3390/e27111157 - 14 Nov 2025
Viewed by 483
Abstract
In response to the challenges posed by multifactorial nonlinear relationships and uncertainties, and to address the limitations of the existing Belief Rule Base (BRB) in nonlinear fitting, uncertainty representation, and parameter optimization, this paper presents an improved reliable modeling method using a nonlinear belief rule base (R-NBRB). First, the linear inference mechanism is replaced by a smooth nonlinear S-function. This replacement better adapts to nonlinear dynamics in complex industrial systems. Second, attribute reliability is quantified through a reliability assessment method. Data, reliability, and expert knowledge are integrated using the Evidential Reasoning (ER) algorithm. Uncertainty is expressed in the form of belief degrees. Finally, the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) algorithm is applied to optimize the inference parameters. Decision bias caused by insufficient expert knowledge is thereby reduced. Experiments were conducted on a task involving the detection of a petroleum pipeline leak. The mean squared error (MSE) of the R-NBRB model is only 0.2569. This represents a 28.24% reduction compared with the BRB model. The proposed method’s effectiveness and adaptability in complex industrial situations are confirmed. Full article
(This article belongs to the Section Complexity)
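
To give a feel for what replacing piecewise-linear rule matching with a smooth S-function means, here is a toy comparison; the sigmoid form, its steepness, and the referential values are purely illustrative and are not the R-NBRB formulation from the paper.

```python
# Toy sketch: piecewise-linear vs. sigmoid-shaped matching degrees for one
# antecedent attribute (placeholder reference values; not the paper's S-function).
import numpy as np

refs = np.array([0.0, 0.5, 1.0])   # referential values of the attribute

def linear_matching(x):
    d = np.clip(1.0 - np.abs(x - refs) / 0.5, 0.0, 1.0)
    return d / d.sum()

def sigmoid_matching(x, k=10.0):
    d = 1.0 / (1.0 + np.exp(k * (np.abs(x - refs) - 0.25)))
    return d / d.sum()

x = 0.37
print("piecewise-linear:", np.round(linear_matching(x), 3))
print("sigmoid         :", np.round(sigmoid_matching(x), 3))
```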

18 pages, 1846 KB  
Article
Modeling Informal Driver Interaction and Priority Behavior in Smart-City Traffic Systems
by Alica Kalašová, Peter Fabian, Ľubomír Černický and Kristián Čulík
Smart Cities 2025, 8(6), 193; https://doi.org/10.3390/smartcities8060193 - 13 Nov 2025
Viewed by 2418
Abstract
Accurate traffic modeling is essential for effective urban mobility planning within Smart Cities. Conventional capacity assessment methods assume rule-based driver behavior and therefore neglect psychological priority, an informal interaction in which drivers negotiate right-of-way contrary to traffic regulations. This study investigates how the absence of this behavioral factor affects the accuracy of delay and capacity evaluation at unsignalized intersections. A 12 h field observation was conducted at an intersection in Prešov, Slovakia, and 28 driver interactions were analyzed using linear regression modeling. The derived model (R2 = 0.83, p < 0.05) demonstrates that incorporating psychological priority significantly improves the agreement between calculated and observed waiting times. Unrealistic results occurring under oversaturated conditions in standard methodologies were eliminated. The findings confirm that behavioral variability has a measurable impact on traffic performance and should be reflected in analytical and simulation models. Integrating these behavioral parameters into Smart City traffic modeling contributes to more realistic and human-centered decision-making in intersection design and capacity management, supporting the development of safer and more efficient urban mobility systems. Full article
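
A bare-bones version of the regression modelling described above might look like the following; the predictors (conflicting flow and a count of informal yielding events) and all coefficients are invented placeholders, not the derived model with R2 = 0.83.

```python
# Sketch: ordinary least squares on observed waiting times (synthetic data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 28                                          # same order as the 28 analyzed interactions
conflicting_flow = rng.uniform(100, 600, n)     # priority-stream volume (placeholder, veh/h)
informal_yields = rng.integers(0, 5, n)         # observed "psychological priority" events
waiting = 4 + 0.03 * conflicting_flow - 1.2 * informal_yields + rng.normal(0, 2, n)

X = sm.add_constant(np.column_stack([conflicting_flow, informal_yields]))
fit = sm.OLS(waiting, X).fit()
print(fit.params, "R^2 =", round(fit.rsquared, 2))
```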

18 pages, 1635 KB  
Article
Agent-Based Simulation of Digital Interoperability Thresholds in Fragmented Air Cargo Systems: Evidence from a Developing Country
by Siska Amonalisa Silalahi, I Nyoman Pujawan and Moses Laksono Singgih
Logistics 2025, 9(4), 160; https://doi.org/10.3390/logistics9040160 - 13 Nov 2025
Viewed by 953
Abstract
Background: This study investigates how varying levels of digital interoperability affect coordination and performance in Indonesia’s decentralized air cargo system, reflecting the inefficiencies typical of fragmented digital infrastructures in developing economies. Methods: An Agent-Based Model (ABM) was developed to simulate interactions among shippers, freight forwarders, airlines, ground handlers, and customs agents along the CGK–SIN/HKG export corridor. Six simulation scenarios combined varying levels of digital adoption, operational friction, and behavioral adaptivity to capture emergent coordination patterns and threshold dynamics. Results: The simulation identified a distinct interoperability threshold at approximately 60%, beyond which performance improvements became non-linear. Once this threshold was surpassed, clearance times decreased by more than 40%, and capacity utilization exceeded 85%, particularly when adaptive decision rules were implemented among agents. Conclusions: Digital transformation in fragmented logistics systems requires both technological connectivity and behavioral adaptivity. The proposed hybrid framework—integrating Autonomous Supply Chains (ASC), Graph-Based Digital Twins (GBDT), and interoperability thresholds—provides a simulation-based decision-support tool to determine when digitalization yields system-wide benefits. The study contributes theoretically by linking behavioral adaptivity and digital interoperability within a unified modeling approach, and practically by offering a quantitative benchmark for policymakers and practitioners seeking to develop efficient and resilient logistics ecosystems. Full article
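
Why an interoperability threshold produces non-linear gains can be illustrated with a toy simulation (not the paper's ABM): a shipment crosses several handoffs, and a handoff is fast only when both actors exchange data digitally, so system-wide benefits grow faster than the adoption level itself. The number of handoffs and the handling times below are placeholders.

```python
# Toy simulation of chained handoffs: gains appear non-linearly as digital adoption rises.
import numpy as np

rng = np.random.default_rng(0)
handoffs = 4                  # forwarder -> airline -> ground handler -> customs (placeholder)
fast, slow = 1.0, 6.0         # hours per handoff with / without digital data exchange

for adoption in [0.2, 0.4, 0.6, 0.8, 1.0]:
    digital = rng.random((10_000, handoffs + 1)) < adoption     # each actor digital or not
    both_digital = digital[:, :-1] & digital[:, 1:]             # a handoff needs both sides
    clearance = np.where(both_digital, fast, slow).sum(axis=1)
    print(f"adoption {adoption:.0%}: mean clearance time = {clearance.mean():.1f} h")
```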

24 pages, 2934 KB  
Article
Selected Methods for Designing Monetary and Fiscal Targeting Rules Within the Policy Mix Framework
by Agnieszka Przybylska-Mazur
Entropy 2025, 27(10), 1082; https://doi.org/10.3390/e27101082 - 19 Oct 2025
Viewed by 526
Abstract
In the existing literature, targeting rules are typically determined separately for monetary and fiscal policy. This article proposes a framework for determining targeting rules that account for the policy mix of both monetary and fiscal policy. The aim of this study is to compare selected optimization methods used to derive targeting rules as solutions to a constrained minimization problem. The constraints are defined by a model that incorporates a monetary and fiscal policy mix. The optimization methods applied include the linear–quadratic regulator, Bellman dynamic programming, and Euler’s calculus of variations. The resulting targeting rules are solutions to a discrete-time optimization problem with a finite horizon and without discounting. In this article, we define targeting rules that take into account the monetary and fiscal policy mix. The derived rules allow for the calculation of optimal values for the interest rate and the balance-to-GDP ratio, which ensure price stability, a stable debt-to-GDP ratio, and the desired GDP growth dynamics. It can be noted that all the optimization methods used yield the same optimal vector of decision variables, and the specific method applied does not affect the form of the targeting rules. Full article
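
Of the three optimization methods compared, the linear-quadratic regulator is the easiest to sketch: a backward Riccati recursion over a finite horizon without discounting yields time-varying linear feedback rules u_t = -K_t x_t, which is the general shape of the targeting rules discussed above. The matrices A, B, Q, R below are placeholders, not the article's policy-mix model.

```python
# Sketch: finite-horizon, undiscounted LQR via backward Riccati recursion
# (placeholder state-space matrices; not the article's monetary-fiscal model).
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 0.95]])   # state transition (e.g., inflation gap, debt-ratio gap)
B = np.array([[0.2, 0.0], [0.0, 0.3]])    # instruments (e.g., interest rate, balance-to-GDP ratio)
Q = np.eye(2)                              # penalty on state deviations from target
R = 0.1 * np.eye(2)                        # penalty on instrument use
T = 20                                     # finite horizon, no discounting

P = Q.copy()
gains = []
for _ in range(T):                         # backward induction from the terminal period
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)
    gains.append(K)
gains.reverse()                            # gains[t] is the feedback rule for period t

x = np.array([1.0, 0.5])                   # initial deviations from target
for t in range(T):
    u = -gains[t] @ x                      # targeting rule: u_t = -K_t x_t
    x = A @ x + B @ u
print("terminal state deviation:", np.round(x, 4))
```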

14 pages, 1307 KB  
Article
Diagnostic Value of Machine Learning Models in Inflammation of Unknown Origin
by Selma Özlem Çelikdelen, Onur Inan, Sema Servi and Reyhan Bilici
J. Clin. Med. 2025, 14(19), 7116; https://doi.org/10.3390/jcm14197116 - 9 Oct 2025
Viewed by 1176
Abstract
Background: Inflammation of unknown origin (IUO) represents a persistent clinical challenge, often requiring extensive diagnostic efforts despite nonspecific inflammatory findings such as elevated C-reactive protein (CRP) and erythrocyte sedimentation rate (ESR). The complexity and heterogeneity of its etiologies—including infections, malignancies, and rheumatologic diseases—make timely and accurate diagnosis essential to avoid unnecessary interventions or treatment delays. Objective: This study aimed to evaluate the potential of machine learning (ML)-based models in distinguishing the major etiologic subgroups of IUO and to explore their value as clinical decision support tools. Methods: We retrospectively analyzed 300 IUO patients hospitalized between January 2023 and December 2024. Four binary one-vs-rest Linear Discriminant Analysis (LDA) models were first developed to independently classify infection, malignancy, rheumatologic disease, and undiagnosed cases using clinical and laboratory parameters. In addition, a multiclass LDA framework was constructed to simultaneously differentiate all four diagnostic groups. Each model was evaluated across 10 independent runs using standard performance metrics, including accuracy, sensitivity, specificity, precision, F1 score, and negative predictive value (NPV). Results: The malignancy model achieved the highest performance, with an accuracy of 91.7% and specificity of 0.96. The infection model demonstrated high specificity (0.88) and NPV (0.86), supporting its role in ruling out infection despite lower sensitivity (0.71). The rheumatologic model showed high sensitivity (0.81) but lower specificity (0.73), reflecting the clinical heterogeneity of autoimmune conditions. The undiagnosed model achieved very high accuracy (96.7%) and specificity (0.98) but limited precision and recall (0.50 each). The multiclass LDA framework reached an overall accuracy of 73.3% (mean 66%) with robust specificity (0.90) and NPV (0.89). Conclusions: ML-based LDA models demonstrated strong potential to support the diagnostic evaluation of IUO. While malignancy and infection could be predicted with high accuracy, rheumatologic diseases required integration of additional serological and clinical data. These models should be viewed not as stand-alone diagnostic tools but as complementary decision-support systems. Prospective multicenter studies are warranted to externally validate and refine these approaches for broader clinical application. Full article
(This article belongs to the Section Immunology & Rheumatology)
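
The one-vs-rest LDA setup described in the Methods can be sketched in a few lines; the synthetic features stand in for the clinical and laboratory parameters, the class labels follow the abstract, and no real patient data or tuned thresholds are involved.

```python
# Sketch: one-vs-rest Linear Discriminant Analysis for a four-class diagnostic
# problem (synthetic features; labels follow the abstract).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.multiclass import OneVsRestClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import classification_report

X, y = make_classification(n_samples=300, n_features=15, n_informative=8,
                           n_classes=4, n_clusters_per_class=1, random_state=0)
labels = np.array(["infection", "malignancy", "rheumatologic", "undiagnosed"])[y]

ovr_lda = OneVsRestClassifier(LinearDiscriminantAnalysis())
pred = cross_val_predict(ovr_lda, X, labels, cv=10)
print(classification_report(labels, pred))
```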

27 pages, 3413 KB  
Article
DermaMamba: A Dual-Branch Vision Mamba Architecture with Linear Complexity for Efficient Skin Lesion Classification
by Zhongyu Yao, Yuxuan Yan, Zhe Liu, Tianhang Chen, Ling Cho, Yat-Wah Leung, Tianchi Lu, Wenjin Niu, Zhenyu Qiu, Yuchen Wang, Xingcheng Zhu and Ka-Chun Wong
Bioengineering 2025, 12(10), 1030; https://doi.org/10.3390/bioengineering12101030 - 26 Sep 2025
Viewed by 1215
Abstract
Accurate skin lesion classification is crucial for the early detection of malignant lesions, including melanoma, as well as improved patient outcomes. While convolutional neural networks (CNNs) excel at capturing local morphological features, they struggle with global context modeling essential for comprehensive lesion assessment. Vision transformers address this limitation but suffer from quadratic computational complexity O(n2), hindering deployment in resource-constrained clinical environments. We propose DermaMamba, a novel dual-branch fusion architecture that integrates CNN-based local feature extraction with Vision Mamba (VMamba) for efficient global context modeling with linear complexity O(n). Our approach introduces a state space fusion mechanism with adaptive weighting that dynamically balances local and global features based on lesion characteristics. We incorporate medical domain knowledge through multi-directional scanning strategies and ABCDE (Asymmetry, Border irregularity, Color variation, Diameter, Evolution) rule feature integration. Extensive experiments on the ISIC dataset show that DermaMamba achieves 92.1% accuracy, 91.7% precision, 91.3% recall, and 91.5% mac-F1 score, which outperforms the best baseline by 2.0% accuracy with 2.3× inference speedup and 40% memory reduction. The improvements are statistically significant based on a significance test (p < 0.001, Cohen’s d > 0.8), with greater than 79% confidence also preserved on challenging boundary cases. These results establish DermaMamba as an effective solution bridging diagnostic accuracy and computational efficiency for clinical deployment. Full article
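
As a generic stand-in for the adaptive dual-branch fusion idea (not DermaMamba's state space fusion module, and with a plain placeholder instead of a Vision Mamba branch), a learned gate that balances local and global features per sample could look like this:

```python
# Sketch: per-sample gated fusion of a "local" (CNN-style) and a "global" feature
# vector, followed by a linear classifier head (all shapes are placeholders).
import torch
import torch.nn as nn

class AdaptiveFusion(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.ReLU(),
            nn.Linear(dim, 1), nn.Sigmoid(),
        )

    def forward(self, local_feat: torch.Tensor, global_feat: torch.Tensor) -> torch.Tensor:
        w = self.gate(torch.cat([local_feat, global_feat], dim=-1))   # weight in (0, 1)
        return w * local_feat + (1 - w) * global_feat

local_feat = torch.randn(8, 256)     # e.g., pooled CNN features for 8 lesion images
global_feat = torch.randn(8, 256)    # e.g., pooled global-context features
fused = AdaptiveFusion(256)(local_feat, global_feat)
logits = nn.Linear(256, 7)(fused)    # e.g., seven ISIC lesion classes
print(logits.shape)                  # torch.Size([8, 7])
```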