Search Results (5,020)

Search Parameters:
Keywords = decision analysis framework

20 pages, 1408 KB  
Article
An RL-Enhanced Multi-Agent Framework for Scalable and Intelligent Business Intelligence Systems
by Khamza Eshankulov, Kudratjon Zohirov, Ilkhom Bakaev, Shafiyev Tursun, Nazarov Shakhzod, Zavqiddin Temirov and Rashid Nasimov
Information 2026, 17(3), 252; https://doi.org/10.3390/info17030252 (registering DOI) - 3 Mar 2026
Abstract
In many organizations, business intelligence systems support analytical reporting and operational decision making. As data volumes grow and analytical tasks become more complex, architectures based on centralized processing pipelines increasingly face limitations related to scalability and timely response. These challenges motivate the development of alternative architectural approaches capable of operating efficiently in data-intensive environments. This study presents a modular multi-agent business intelligence framework that distributes analytical tasks across autonomous agents and applies lightweight reinforcement learning at the decision-making stage. The analytical workflow is decomposed into agents responsible for data collection, preprocessing, analytical modeling, and decision execution. Decision adaptation relies on localized policy updates driven by operational feedback, which avoids complex learning coordination and helps preserve system stability and interpretability. The proposed framework is evaluated using real-world transactional data from an electronic commerce setting. Experimental results show that the approach consistently outperforms centralized analytical pipelines and non-agent machine learning baselines in terms of processing efficiency, classification accuracy, and balanced classification performance. Threshold-independent evaluation further confirms stronger discriminative behavior across varying decision thresholds. In addition, stability analysis across repeated experimental runs indicates reduced performance variance and more predictable system behavior. These findings suggest that the proposed multi-agent business intelligence framework provides a practical and scalable alternative to centralized analytical architectures for data-intensive decision-support environments, while maintaining the robustness and transparency required in enterprise systems. The evaluation is limited to a single dataset and a classification task, and results should be interpreted within this scope. Experiments on the Online Retail dataset (UCI Machine Learning Repository) show an average accuracy of 0.89 ± 0.012 (baseline: 0.74 ± 0.029) and decision latency of 94 ± 9 ms (baseline: 137 ± 16 ms) across 10 independent runs, indicating stable behavior under repeated execution. Full article
(This article belongs to the Section Information Systems)
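The abstract describes lightweight reinforcement learning at the decision stage with localized, feedback-driven policy updates, but does not spell out the update rule. As an illustrative sketch only (not the authors' implementation), a per-agent tabular Q-update with epsilon-greedy selection captures the idea; the class name, actions, and reward signal below are hypothetical.

```python
import random
from collections import defaultdict

class DecisionAgent:
    """Hypothetical decision-stage agent: tabular Q-learning with
    epsilon-greedy selection, updated only from its own local feedback."""

    def __init__(self, actions, alpha=0.1, gamma=0.0, epsilon=0.1):
        self.actions = actions          # e.g. ["approve", "flag", "defer"]
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.q = defaultdict(float)     # (state, action) -> estimated value

    def act(self, state):
        if random.random() < self.epsilon:            # explore occasionally
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])  # exploit

    def update(self, state, action, reward, next_state=None):
        # Localized policy update: only this agent's own table changes.
        best_next = max((self.q[(next_state, a)] for a in self.actions),
                        default=0.0) if next_state is not None else 0.0
        td_target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])

# Usage: the reward comes from downstream operational feedback on the decision.
agent = DecisionAgent(actions=["approve", "flag", "defer"])
state = ("high_value", "new_customer")
chosen = agent.act(state)
agent.update(state, chosen, reward=1.0)
```
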
28 pages, 4565 KB  
Article
A Hybrid Improved Atom Search Optimization Algorithm Optimizes BiGRU for Bus Travel Speed Prediction
by Qingling He, Yifan Feng, Yongsheng Qian, Xiaojuan Lu, Junwei Zeng, Xu Wei, Kaiyang Li and Yao Peng
Mathematics 2026, 14(5), 856; https://doi.org/10.3390/math14050856 (registering DOI) - 3 Mar 2026
Abstract
This paper focuses on enhancing the accuracy and efficiency of bus travel speed prediction by improving the optimization process for deep learning model parameters. Existing intelligent optimization algorithms often suffer from slow convergence and substantial errors when tuning parameters for such predictive tasks. To mitigate these shortcomings, this study presents a new predictive framework that synergizes an Improved Atom Search Optimization (IASO) algorithm with a Bidirectional Gated Recurrent Unit (BiGRU) network. The IASO algorithm is developed through three principal modifications: (1) population initialization using a Logistic-Tent composite chaotic map to enhance diversity and initial quality; (2) incorporation of a hybrid operator merging refraction opposition-based learning and Cauchy mutation to broaden the search around promising solutions and alleviate issues of local stagnation and early convergence; and (3) implementation of an adaptive variable spiral search to recalibrate the position update rule, thereby improving the trade-off between extensive exploration and intensive exploitation. Based on the analysis of bus travel speed determinants, the IASO algorithm is applied to optimize the hyperparameters of the BiGRU network, culminating in the proposed IASO-BiGRU predictive model. Validation tests indicate that the devised IASO algorithm shows improved performance in certain aspects compared to several contemporary intelligent optimization techniques in terms of solution accuracy and convergence efficiency. Under the specific experimental conditions of this study, the IASO-BiGRU model achieves MAE, RMSE, and MAPE values of 1.62, 1.80, and 6.70%, respectively, corresponding to an improvement of 1.91–7.56% compared to the baseline models tested. These findings offer valuable data support and a decision-making foundation for bus operation scheduling and passenger travel planning. Full article
(This article belongs to the Special Issue Applications of Optimization Algorithms and Evolutionary Computation)
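The first IASO modification is population initialization with a Logistic-Tent composite chaotic map. The exact composite form used in the paper is not given in the abstract; the sketch below uses one commonly cited Logistic-Tent formulation and scales the chaotic sequence into assumed BiGRU hyperparameter bounds, purely as an illustration.

```python
import numpy as np

def logistic_tent(x, r=3.9):
    """One common Logistic-Tent composite map on (0, 1); the paper's exact
    formulation is not specified in the abstract (assumption)."""
    if x < 0.5:
        y = r * x * (1.0 - x) + (4.0 - r) * x / 2.0
    else:
        y = r * x * (1.0 - x) + (4.0 - r) * (1.0 - x) / 2.0
    return y % 1.0

def chaotic_init(pop_size, dim, lower, upper, r=3.9, seed=0.37):
    """Initialize a population by iterating the map, then scaling each value
    into the hyperparameter bounds [lower, upper] per dimension."""
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    pop = np.empty((pop_size, dim))
    x = seed
    for i in range(pop_size):
        for j in range(dim):
            x = logistic_tent(x, r)
            pop[i, j] = lower[j] + x * (upper[j] - lower[j])
    return pop

# Illustrative bounds for hidden units, learning rate, and batch size of a BiGRU.
population = chaotic_init(pop_size=30, dim=3,
                          lower=[16, 1e-4, 16], upper=[256, 1e-1, 256])
```
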
17 pages, 1684 KB  
Article
Patient-Level Modeling of Ménière’s Disease vs. Vestibular Migraine: Performance of Speech Discrimination and Caloric-vHIT Dissociation
by Nicolás Pérez-Fernández and Lorea Arbizu
J. Clin. Med. 2026, 15(5), 1908; https://doi.org/10.3390/jcm15051908 (registering DOI) - 3 Mar 2026
Abstract
Background: Differentiating Ménière’s disease (MD) from vestibular migraine (VM) remains difficult because current diagnostic frameworks are predominantly clinical and incorporate pure-tone thresholds, risking incorporation bias. We asked whether speech discrimination scores (SDS) alone can separate MD from VM at the patient level and whether adding a prespecified vestibular marker, the caloric–vHIT dissociation, pattern A (abnormal calorics with normal horizontal vHIT), improves performance. Methods: In a retrospective cohort (2015–2018) including definite MD (n = 60) and definite VM (n = 40) by Bárány/ICHD criteria, we trained patient-level logistic regression models with 5-fold out-of-fold validation and in-fold preprocessing. To avoid incorporation bias, PTA was excluded from all models. Predefined feature sets were as follows: (1) SDS-only (bilateral SDS), (2) CalHiT-A-only (Yes/No; canal paresis ≥22% with horizontal-canal vHIT gain ≥0.80 in either ear), and (3) SDS+CalHiT-A. Discrimination was assessed by ROC–AUC with bootstrap 95% CIs; calibration and decision-curve analysis (DCA) are reported. An exploratory model encoded SDS as “affected/healthy.” Results: The SDS-only model achieved AUC 0.866 (95% CI 0.787–0.937). CalHiT-A-only yielded AUC 0.674 (0.561–0.778). Adding CalHiT-A to SDS did not improve discrimination (SDS+CalHiT-A AUC 0.844 [0.760–0.913]). The exploratory “affected/healthy” SDS encoding underperformed (AUC 0.801 [0.706–0.882]). CalHiT-A was significantly more prevalent in MD than in VM (56.7% [34/60] vs. 17.5% [7/40]; Fisher’s exact p = 1.49 × 10⁻⁴). Calibration favored SDS-only, and DCA showed the highest net benefit for SDS-only across thresholds p = 0.05–0.40. Conclusions: Bilateral SDS alone provides robust, well-calibrated discrimination between MD and VM and outperforms CalHiT-A and the affected/healthy SDS encoding. In this cohort, vestibular test dissociation did not add diagnostic value beyond SDS at the patient level, supporting SDS-centered diagnostic workflows while reserving CalHiT-A for adjudication and phenotyping rather than primary classification. Full article
(This article belongs to the Section Otolaryngology)
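For readers wanting the evaluation protocol in code (patient-level logistic regression, 5-fold out-of-fold predictions, in-fold preprocessing, ROC-AUC), a minimal sketch on synthetic stand-in data is shown below; the feature layout and labels are placeholders, not the study's data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Placeholder stand-in for bilateral SDS (optionally extended with a CalHiT-A flag).
X = rng.normal(size=(100, 2))                 # e.g. [SDS_right, SDS_left]
y = rng.integers(0, 2, size=100)              # 1 = MD, 0 = VM (synthetic labels)

# In-fold preprocessing: the scaler is fit inside each training fold only.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
oof_prob = cross_val_predict(model, X, y, cv=cv, method="predict_proba")[:, 1]
print("out-of-fold ROC-AUC:", roc_auc_score(y, oof_prob))
```
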
29 pages, 2585 KB  
Article
Characterizing the Spatiotemporal Complexity of Power Outages in the U.S. Power Grid: A Reliability Assessment Perspective
by Qun Yu, Zhiyi Zhou, Tongshuai Jin, Weimin Sun and Jiongcheng Yan
Energies 2026, 19(5), 1252; https://doi.org/10.3390/en19051252 - 2 Mar 2026
Abstract
With the intensification of climate change, deepening energy transition, and increasing social vulnerability, extreme power outage events pose escalating challenges to the governance capacity of modern power systems. Existing evaluation frameworks primarily focus on engineering reliability and economic loss estimation, lacking systematic quantification of the governance complexity arising from multidimensional interacting pressures behind outage events. This creates a blind spot in both theoretical research and governance practice, hindering differentiated resilience decision-making. To address this gap, this study develops a four-dimensional evaluation framework of power outage governance complexity encompassing event attributes, external environment, internal system, and social impacts. Based on county-level outage data and multi-source auxiliary data in the United States from 2015 to 2024 and employing the XGBoost–SHAP interpretable machine learning approach, we construct the Power Outage Complexity Index (POCI) for all U.S. counties and systematically analyze its spatiotemporal evolution and core driving factors. The results show that outage governance complexity in the U.S. power grid exhibits a significant upward trend during 2015–2024, with an average annual growth rate of 1.84%. Spatially, significant positive autocorrelation is observed, and 146 high-complexity hotspot counties are identified, mainly clustered along the East and West Coasts, the Gulf Coast, and the Southwest. Driver analysis reveals that social impact and event attribute dimensions together account for nearly 90% of the variance in complexity, with cumulative outage exposure burden, outage frequency, and large-scale event ratio being the most critical drivers. Theoretically, this study extends power resilience research from an engineering-physical paradigm to a socio-technical governance paradigm and provides a reproducible methodological framework for assessing governance complexity in critical infrastructure systems. Practically, the POCI can serve as a governance diagnostic tool for the power industry and regulators, supporting resilience investment prioritization, emergency resource optimization, and differentiated governance strategy formulation. It also provides empirical evidence for safeguarding energy security in highly vulnerable communities and promoting energy resilience equity. Full article
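The XGBoost–SHAP attribution step can be illustrated with a short sketch; the feature names and synthetic target below are placeholders and do not reproduce the POCI construction.

```python
import numpy as np
import shap
import xgboost as xgb

rng = np.random.default_rng(1)
# Placeholder features standing in for the four complexity dimensions.
features = ["outage_frequency", "large_event_ratio", "exposure_burden", "grid_age"]
X = rng.normal(size=(500, len(features)))
y = 0.5 * X[:, 0] + 0.3 * X[:, 1] + 0.2 * X[:, 2] + rng.normal(scale=0.1, size=500)

model = xgb.XGBRegressor(n_estimators=200, max_depth=4, learning_rate=0.05)
model.fit(X, y)

# SHAP values give per-county, per-feature contributions to the predicted index.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
mean_abs = np.abs(shap_values).mean(axis=0)
for name, imp in sorted(zip(features, mean_abs), key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")
```
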
38 pages, 10201 KB  
Article
Synthesis of a Moth and Flame Algorithm for Incorporation into the Architecture of Deceptive Systems with Baits and Traps
by Oleg Savenko, Bohdan Rusyn, Sergii Lysenko, Tomasz Ciszewski, Bohdan Savenko, Andrii Drozd, Andrii Nicheporuk and Anatoliy Sachenko
Appl. Sci. 2026, 16(5), 2415; https://doi.org/10.3390/app16052415 - 2 Mar 2026
Abstract
This paper proposes a novel method for synthesizing a discrete optimization algorithm based on the moth–flame paradigm for application to the architecture of deceptive systems incorporating decoys and traps. Unlike existing approaches that primarily rely on continuous search spaces or static deception strategies, the proposed method enables the formation of a discrete search space with a coordinate-based representation of deception objects and system states. A spiral search trajectory is synthesized by modeling the dynamic interaction between moths and flames, which allows the algorithm to balance exploration and exploitation effectively and to mitigate premature convergence to local optima. The problem of selecting subsequent operational steps of a deceptive system, which includes the control and reconfiguration of decoys and traps in response to detected events, is formulated as a discrete optimization problem. The objective of this optimization is to increase the effectiveness of cyberattack and malware detection in corporate network environments. The decision variables include the sequence of deception actions, process models, and architectural characteristics of the system, while the constraints are defined by the operational conditions, resource limitations, and structural features of corporate networks. The proposed method supports the identification of an optimal sequence of deception actions under dynamically changing conditions and provides mechanisms for operational adaptation to attacker behavior in real time. This adaptability enables the creation of deceptive systems capable of long-term autonomous operation without continuous administrative intervention, while simultaneously increasing their resistance to adversarial reconnaissance and reverse engineering of their operational principles. The experimental results confirm the feasibility and effectiveness of the proposed approach and demonstrate the potential of integrating population-based optimization algorithms into deceptive system architectures. Comparative analysis shows that the proposed method outperforms its closest competitor, the genetic algorithm, achieving an improvement of 4.82% in terms of the objective function value. Future research directions include deeper integration of population-based optimization methods into decoy-and-trap architectures and the development of a comprehensive framework for organizing their operation in accordance with the proposed conceptual model. Overall, the results contribute to enhancing the cyber-resilience of corporate networks through intelligent, adaptive, and autonomous systems for countering modern cyberattacks and malware. Full article
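The abstract does not detail the discrete coordinate encoding; as a rough sketch under stated assumptions, the standard moth–flame spiral update can be mapped to a binary decoy/trap activation vector with a sigmoid transfer function (one common discretization, not necessarily the authors'). The objective function below is a placeholder.

```python
import numpy as np

def spiral_step(moth, flame, b=1.0, rng=None):
    """Moth-flame spiral update mapped to binary coordinates via a sigmoid
    transfer function (an assumed discretization)."""
    rng = rng or np.random.default_rng()
    d = np.abs(flame - moth)                      # distance to the flame
    t = rng.uniform(-1.0, 1.0, size=moth.shape)   # spiral parameter
    cont = d * np.exp(b * t) * np.cos(2 * np.pi * t) + flame
    prob = 1.0 / (1.0 + np.exp(-cont))            # transfer to bit probabilities
    return (rng.random(moth.shape) < prob).astype(int)

def mfo_binary(fitness, dim, n_moths=20, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    moths = rng.integers(0, 2, size=(n_moths, dim))
    for _ in range(iters):
        order = np.argsort([-fitness(m) for m in moths])     # maximize fitness
        flames = moths[order]                                 # best solutions as flames
        moths = np.array([spiral_step(m, flames[min(i, len(flames) - 1)], rng=rng)
                          for i, m in enumerate(moths)])
    best = max(moths, key=fitness)
    return best, fitness(best)

# Placeholder objective: reward activating decoys on "high-value" segments at low cost.
value = np.array([3, 1, 4, 1, 5, 9, 2, 6])
best, score = mfo_binary(lambda m: float(m @ value) - 2.0 * m.sum(), dim=8)
print(best, score)
```
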
24 pages, 2956 KB  
Article
Enhancing Energy Performance in Hot Climates: A Multi-Criteria Approach Towards Nearly Zero-Energy Buildings
by Micheal A. William, María José Suárez-López, Silvia Soutullo, Ahmed A. Hanafy and Mona F. Moussa
Sustainability 2026, 18(5), 2424; https://doi.org/10.3390/su18052424 - 2 Mar 2026
Abstract
Accelerating decarbonization in hot-climate buildings requires integrated retrofit strategies that address energy performance, environmental impact, thermal comfort, and economic feasibility within a unified decision framework. This study develops and validates a simulation-driven multi-criteria approach to evaluate retrofit packages across three representative ASHRAE hot sub-climates (1B, 2B, 2A). An academic building was modeled using DesignBuilder (Stroud, UK) and validated in accordance with ASHRAE Guidelines. The retrofit analysis integrates envelope enhancements (insulation and reflective coatings), glazing-integrated photovoltaics (GIPV), rooftop photovoltaics (RTPV), and a Dedicated Outdoor Air System (DOAS). The performance evaluation incorporates dynamically simulated energy consumption, operational CO2 emissions, thermal comfort indicators (PMV and DCH), and techno-economic metrics (IRR, ROI, PBP). Weighting factors were derived from a structured stakeholder consultation to reflect context-sensitive sustainability priorities. The results indicate energy reductions of approximately 51–57% and carbon emission reductions of 40–53% across the examined zones, while discomfort hours decreased by roughly 42–46%. This demonstrates significant improvements in thermal comfort under integrated retrofit strategies, particularly with DOAS integration, highlighting the importance of ventilation-driven comfort enhancement. Economic feasibility was climate-dependent; envelope-focused solutions yielded high returns, while integrated strategies delivered balanced environmental and economic performance. The proposed framework enables systematic, climate-specific prioritization of retrofit alternatives and supports scalable, economically viable NZEB transitions in rapidly expanding hot-climate educational infrastructure. Full article
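The aggregation scheme behind the multi-criteria ranking is not given in the abstract beyond stakeholder-derived weights; a minimal weighted-sum sketch over normalized criteria, with placeholder retrofit packages, scores, and weights, illustrates the idea.

```python
import numpy as np

# Placeholder performance matrix: rows = retrofit packages, columns = criteria.
# Criteria: energy saving %, CO2 reduction %, discomfort-hour reduction %, payback years.
alternatives = ["Envelope", "Envelope+PV", "Envelope+PV+DOAS"]
scores = np.array([
    [35.0, 28.0, 15.0, 4.0],
    [51.0, 45.0, 30.0, 7.0],
    [57.0, 53.0, 46.0, 9.0],
])
benefit = np.array([True, True, True, False])        # payback is a cost criterion
weights = np.array([0.35, 0.25, 0.25, 0.15])         # stakeholder-derived (placeholder)

# Min-max normalization, inverting cost criteria so higher is always better.
lo, hi = scores.min(axis=0), scores.max(axis=0)
norm = (scores - lo) / (hi - lo)
norm[:, ~benefit] = 1.0 - norm[:, ~benefit]

ranking = norm @ weights
for name, s in sorted(zip(alternatives, ranking), key=lambda t: -t[1]):
    print(f"{name}: {s:.3f}")
```
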
23 pages, 1736 KB  
Article
Enhancing Sustainable Traffic Safety Through Machine Learning: A Risk Assessment and Feature Selection Framework Using NGSIM Data
by Meltem Aslantas and Fatma Kutlu Gündoğdu
Sustainability 2026, 18(5), 2423; https://doi.org/10.3390/su18052423 - 2 Mar 2026
Abstract
Precisely assessing driving danger is essential for various applications, including the advancement of autonomous driving systems and traffic engineering decisions. This study presents a driving risk analysis framework based on the Next-Generation Simulation (NGSIM) dataset. First, vehicles were classified into four risk classes using the Fuzzy C-Means (FCM) algorithm based on five key risk indicators. Subsequently, comprehensive driving behavior features representing vehicle movements were extracted and evaluated for both risk class prediction and driving behavior feature selection. A new driving risk score was developed using Spearman’s rho coefficient weights, which reflect the relationship of each risk indicator to risk levels. This score was observed to exhibit an increasing trend consistent with the sequential structure of the FCM clustering-based risk labels, thus confirming that it accurately reflects the labeling process. Furthermore, the findings show that the 26 selected key driving behavior features can predict the developed driving risk score with over 85% accuracy using the XGBoost algorithm. Moreover, feature importance analysis reveals that following distances and inter-vehicle distance variability are particularly effective in determining driving risk. The study discusses the limitations of driving risk assessment based solely on vehicle dynamics and highlights the importance of developing enriched datasets that include multidimensional data sources such as environmental conditions, infrastructure features, traffic density, and autonomous vehicles in future risk prediction studies. Ultimately, this framework contributes to the development of safer and more efficient transportation systems, supporting environmental sustainability by reducing accident-related congestion and promoting resource-efficient traffic management. Full article
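A small sketch of a Spearman-weighted risk score of the kind described above, using synthetic indicators and ordinal risk labels; the normalization and label construction are assumptions, not the NGSIM pipeline.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n = 300
# Placeholder risk indicators (e.g., inverse TTC, deceleration, headway variability).
indicators = rng.normal(size=(n, 5))
latent = indicators[:, 0] + 0.5 * indicators[:, 1] + rng.normal(scale=0.5, size=n)
risk_label = np.digitize(latent, np.quantile(latent, [0.25, 0.5, 0.75]))  # 4 ordinal classes

# Spearman's rho of each indicator against the ordinal risk label -> weights.
rhos = np.array([spearmanr(indicators[:, j], risk_label)[0] for j in range(5)])
weights = np.abs(rhos) / np.abs(rhos).sum()

# Weighted risk score on min-max normalized indicators.
lo, hi = indicators.min(axis=0), indicators.max(axis=0)
norm = (indicators - lo) / (hi - lo)
risk_score = norm @ weights
print("weights:", np.round(weights, 3))
print("mean score per class:", [round(risk_score[risk_label == k].mean(), 3)
                                for k in np.unique(risk_label)])
```
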
41 pages, 17913 KB  
Article
Vision-Based Dual-Mode Collision Risk-Warning for Aircraft Apron Monitoring
by Emre Can Bingol, Hamed Al-Raweshidy and Konstantinos Banitsas
Drones 2026, 10(3), 173; https://doi.org/10.3390/drones10030173 - 2 Mar 2026
Abstract
Ground incidents on airport aprons can cause substantial operational disruption and economic loss, while conventional surveillance (e.g., Surface Movement Radar (SMR), Closed-Circuit Television (CCTV)) often lacks the resolution and proactive decision support required for close-proximity operations. This study proposes a UAV-deployable, camera-agnostic Computer Vision (CV) framework for collision-risk warning from elevated viewpoints. An optimised YOLOv8-Seg backbone performs multi-class aircraft segmentation (airplane, wing, nose, tail, and fuselage) and is integrated with four MOT algorithms under identical evaluation settings. For quantitative tracker benchmarking, DeepSORT provides the strongest overall performance on the airplane-only MOTChallenge-format ground truth (MOTA 92.77%, recall 93.27%). To mitigate the scarcity of annotated apron-incident data, a labelled 997-frame MOT dataset is created via an MSFS simulation-based reenactment inspired by the 2018 Asiana–Turkish Airlines wing-to-tail event at Istanbul Ataturk Airport. The framework further introduces a dual-module warning mechanism that can operate independently: (i) a reactive module using image-plane proximity derived from segmentation masks, and (ii) a proactive module that predicts short-horizon conflicts via trajectory extrapolation and IoU-based future overlap analysis. The approach is evaluated on multiple simulated incident scenarios and assessed on a real apron video from Hong Kong International Airport; additionally, laboratory-scale UAV experiments using diecast aircraft models provide end-to-end feasibility evidence on unmanned-platform imagery. Overall, the results indicate timely warnings and practical feasibility for low-overhead UAV-enabled apron monitoring. Full article
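The proactive module (trajectory extrapolation plus IoU-based future-overlap analysis) can be sketched compactly: extrapolate two tracked boxes under a constant-velocity assumption and flag a conflict when their predicted IoU crosses a threshold. The box values, velocities, and threshold below are illustrative, not the paper's parameters.

```python
def iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2) in pixels."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def extrapolate(box, velocity, horizon):
    """Constant-velocity extrapolation of a box by `horizon` frames."""
    dx, dy = velocity[0] * horizon, velocity[1] * horizon
    return (box[0] + dx, box[1] + dy, box[2] + dx, box[3] + dy)

def proactive_warning(box_a, vel_a, box_b, vel_b, horizon=30, iou_threshold=0.05):
    """Predict a short-horizon conflict between two tracked aircraft parts."""
    future_iou = iou(extrapolate(box_a, vel_a, horizon), extrapolate(box_b, vel_b, horizon))
    return future_iou >= iou_threshold, future_iou

# Example: a wing-tip track converging on a parked aircraft's tail track.
warn, score = proactive_warning((100, 200, 180, 260), (4.0, 0.0),
                                (300, 205, 360, 270), (-1.0, 0.0))
print("conflict predicted:", warn, "future IoU:", round(score, 3))
```
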
44 pages, 2922 KB  
Article
A Novel Hybrid Opcode Feature Selection Framework for Efficient and Effective IoT Malware Detection
by Bakhan Tofiq Ahmed, Noor Ghazi M. Jameel and Bakhtiar Ibrahim Saeed
IoT 2026, 7(1), 24; https://doi.org/10.3390/iot7010024 - 2 Mar 2026
Abstract
Malware’s proliferation in the Internet of Things (IoT) ecosystem requires precise, efficient detection systems capable of operating on IoT devices. Existing static analysis approaches often fail due to computational inefficiency stemming from high feature dimensionality inherent in raw opcode features. This research addresses this limitation by proposing a novel machine-learning (ML)-driven Intelligent Hybrid Feature Selection (IHFS) framework with two distinct architectures. IHFS1 combines a filter method (variance threshold) with an embedded method (LGBM feature importance). Conversely, IHFS2 integrates variance thresholding with a wrapper method (Recursive Feature Elimination with Cross-Validation using LGBM) for optimal selection. This framework is specifically designed to select an optimally stable and minimal feature subset from the initial 1183 opcode frequency vector extracted from ARM binaries. Applying this framework to a multi-family IoT malware dataset, the IHFS architectures yielded distinct and highly efficient feature subsets: IHFS1 achieved a 95.77% reduction (to 50 features), while IHFS2 attained a 98.06% reduction (to 23 features). Evaluation across eight ML models confirmed that the Random Forest (with IHFS1 subset) and Decision Tree (with IHFS2 subset) classifiers were the best performing, achieving robust classification metrics that outperform current state-of-the-art solutions. The Decision Tree model demonstrated exceptional detection capabilities, with an accuracy of 99.87%, a precision of 99.82%, a recall of 99.88%, and an F1-score of 99.85%. It achieved an average inference time of 0.058 ms per sample. Experimental results attained on a native ARM64 environment validate the deployment feasibility of the proposed system for resource-constrained IoT devices, such as the Raspberry Pi. The proposed system achieves a high-throughput, low-overhead security posture while maintaining host operational stability, processing a single ELF binary in just 3.431 ms. Full article
(This article belongs to the Special Issue Cybersecurity in the Age of the Internet of Things)
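A minimal sketch of an IHFS2-style pipeline (variance filter followed by RFECV with an LGBM estimator) on synthetic opcode-frequency data; the variance threshold, RFE step size, labels, and dimensions are assumptions rather than the paper's settings.

```python
import numpy as np
from lightgbm import LGBMClassifier
from sklearn.feature_selection import RFECV, VarianceThreshold
from sklearn.model_selection import StratifiedKFold

rng = np.random.default_rng(0)
X = rng.poisson(lam=2.0, size=(400, 1183)).astype(float)   # synthetic opcode frequencies
y = (X[:, :10].sum(axis=1) + rng.normal(scale=2, size=400) > 22).astype(int)  # synthetic labels

# Stage 1 (filter): drop near-constant opcode counts.
vt = VarianceThreshold(threshold=0.1)
X_vt = vt.fit_transform(X)

# Stage 2 (wrapper): recursive feature elimination with cross-validation.
rfecv = RFECV(
    estimator=LGBMClassifier(n_estimators=100, verbose=-1),
    step=50,                                     # features removed per iteration
    cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=0),
    scoring="f1",
)
rfecv.fit(X_vt, y)
print("features kept after filter:", X_vt.shape[1])
print("features kept after RFECV:", rfecv.n_features_)
```
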
20 pages, 1623 KB  
Article
Deep Contextual Bandits with Multivariate Outcomes: Empirical Copula Normalization, Temporal Feature Learning, and Doubly Robust Policy Evaluation
by Jong-Min Kim
Mathematics 2026, 14(5), 846; https://doi.org/10.3390/math14050846 (registering DOI) - 2 Mar 2026
Abstract
We develop and evaluate a deep contextual bandit framework for multivariate off-policy evaluation within a controlled simulation-based validation setting. Using real covariate distributions from the Adult, Boston Housing, and Wine Quality datasets, we construct synthetic treatment assignments and multivariate potential outcomes to enable rigorous benchmarking under known data-generating processes. We compare CNN-LSTM, LSTM, and Feed-forward Neural Network (FNN) architectures as nonlinear action-value estimators. To examine representation learning under structured dependence, an AR(1) feature augmentation scheme is employed, while multivariate outcomes are standardized using empirical copula transformations to preserve cross-dimensional dependence. Policy values are estimated using Stabilized Importance Sampling (SIPS) and doubly robust (DR) estimators with bootstrap inference. Although the decision problem is strictly one-step, empirical results indicate that CNN-LSTM architectures provide competitive action-value calibration under temporal augmentation. Across all datasets, the DR estimator demonstrates substantially lower variance and greater stability than SIPS, consistent with its theoretical variance-reduction properties. Diagnostic analyses—including propensity overlap assessment, cumulative oracle regret (with oracle values known by construction), calibration evaluation, and sensitivity analysis—support the reliability of the proposed evaluation framework. Overall, the results demonstrate that combining copula-normalized multivariate outcomes with doubly robust off-policy evaluation yields a statistically principled and variance-efficient approach for offline policy learning in high-dimensional simulated environments. Full article
(This article belongs to the Special Issue Advances in Statistical AI and Causal Inference)
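The doubly robust estimator the abstract compares against SIPS can be written down directly for a scalar reward; the sketch below uses placeholder logged data and a slightly misspecified outcome model, and omits the multivariate-outcome and copula-normalization steps described in the paper.

```python
import numpy as np

def doubly_robust_value(actions, rewards, propensities, q_hat, target_policy):
    """Standard one-step doubly robust estimate of a target policy's value.

    actions[i]       : action taken by the logging policy for context i
    rewards[i]       : observed reward
    propensities[i]  : logging probability of the taken action, p(a_i | x_i)
    q_hat[i, a]      : fitted outcome model Q(x_i, a)
    target_policy[i] : (deterministic) action the evaluated policy would take
    """
    n = len(rewards)
    idx = np.arange(n)
    direct = q_hat[idx, target_policy]                        # model-based term
    match = (actions == target_policy).astype(float)
    correction = match / propensities * (rewards - q_hat[idx, actions])
    return float(np.mean(direct + correction))

# Placeholder logged data with 3 actions and a uniform logging policy.
rng = np.random.default_rng(0)
n, k = 1000, 3
actions = rng.integers(0, k, size=n)
propensities = np.full(n, 1.0 / k)
q_true = rng.normal(size=(n, k))
rewards = q_true[np.arange(n), actions] + rng.normal(scale=0.1, size=n)
q_hat = q_true + rng.normal(scale=0.05, size=(n, k))          # slightly misspecified model
target_policy = q_hat.argmax(axis=1)                          # greedy evaluated policy
print("DR value estimate:", doubly_robust_value(actions, rewards, propensities, q_hat, target_policy))
```
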
32 pages, 2974 KB  
Review
Integrating Remote Sensing and Crop Simulation Models for Rice Yield Estimation: A Comprehensive Review
by Chilakamari Lokesh, Murali Krishna Gumma, R. Susheela, Swarna Ronanki, M. Shankaraiah and Pranay Panjala
AgriEngineering 2026, 8(3), 88; https://doi.org/10.3390/agriengineering8030088 (registering DOI) - 2 Mar 2026
Abstract
Reliable estimation of rice yield is essential for food security planning, climate-resilient agriculture, and informed policy decisions. This review synthesizes recent research on the integration of remote sensing and crop simulation models for rice yield estimation. The analysis shows that optical and Synthetic Aperture Radar (SAR) data are the most commonly used remote sensing sources, with SAR proving especially valuable in monsoon-affected regions due to its ability to provide consistent observations under cloud cover. Among crop simulation models, DSSAT, APSIM, ORYZA, and WOFOST are most frequently applied, either independently or in combination with satellite-derived information. Across the reviewed studies, integrated approaches, particularly those using data assimilation and hybrid modeling, consistently achieve higher accuracy and better spatial representation of yield compared to standalone remote sensing or crop model methods. Despite these advances, limitations related to data availability, model calibration, scale mismatches, and climate-induced uncertainty remain significant. Based on the reviewed evidence, future efforts should focus on developing practical hybrid frameworks, improving multi-sensor data fusion, and designing scalable systems suited to data-limited regions. Overall, integrating remote sensing with crop simulation models offers a robust pathway for improving rice yield forecasting and supporting climate-adaptive agricultural management. Full article
34 pages, 979 KB  
Article
A Systems-Based Multi-Criteria Framework for Evaluating Organizational Competitiveness in Complex Organizations: Evidence from Elite Professional Football
by Labros Sdrolias, Panagiotis Serdaris, Konstantinos Spinthiropoulos, Stavros Kalogiannidis and Alkinoos Psarras
Systems 2026, 14(3), 265; https://doi.org/10.3390/systems14030265 - 2 Mar 2026
Abstract
This paper examines the organizational competitiveness and strategic transformation of an elite professional football entity in the Greek Super League during the period 2018–2020, using Panathinaikos as a case study within a comparative framework including Olympiacos, AEK, and PAOK. This period marked a phase of enforced reorientation for Panathinaikos due to UEFA sanctions for overdue debts and the club’s exclusion from European competitions, which resulted in extensive squad renewal and increased reliance on academy-developed players. The aim of the study is to identify the factors shaping Panathinaikos’ strategic position, diagnose the causes of its lagging performance, and suggest directions for strategic repositioning. To this end, a multi-criteria framework based on the Analytic Hierarchy Process (AHP) is employed, integrating qualitative assessments, expert judgements, and quantitative performance indicators through pairwise comparisons, weight calculations, and consistency checks. The analysis is based on a conceptually original model that defines the Football Organization as an integrated system composed of two interdependent subsystems: the Football Club and the Football Team (competitive subsystem). This approach highlights that league standings do not always reflect overall performance dynamics, as they are influenced by both organizational and on-field factors. The findings indicate that Panathinaikos is lagging behind in key areas and that a structural discontinuity between the Club and the Team limits strategic coherence and the ability to create a sustainable competitive advantage. The study concludes with proposals for restructuring and strategic repositioning, while the proposed model functions as a transferable decision-support tool for assessing organizational competitiveness, with broader applicability to complex organizational systems beyond professional football. Full article
(This article belongs to the Section Complex Systems and Cybernetics)
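The AHP machinery named in the abstract (pairwise comparisons, weight calculation, consistency check) follows the standard principal-eigenvector procedure; the sketch below uses a placeholder 3x3 comparison matrix, not the study's expert judgements.

```python
import numpy as np

# Saaty's random consistency index for matrix sizes 1..10.
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45, 10: 1.49}

def ahp_weights(pairwise):
    """Principal-eigenvector priority weights and consistency ratio (CR)."""
    A = np.asarray(pairwise, dtype=float)
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()
    lam_max = eigvals[k].real
    ci = (lam_max - n) / (n - 1)
    cr = ci / RI[n]
    return w, cr

# Placeholder 1-9 scale comparisons for three illustrative criteria, e.g.
# sporting performance vs. financial health vs. organizational structure.
A = [[1, 3, 5],
     [1/3, 1, 2],
     [1/5, 1/2, 1]]
w, cr = ahp_weights(A)
print("weights:", np.round(w, 3), "CR:", round(cr, 3), "(acceptable if CR < 0.10)")
```
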
23 pages, 1320 KB  
Article
Personalized Hearing Loss Care Using SNOMED CT-Aligned Ontology and Random Forest Machine Learning: A Hybrid Decision-Support Framework
by Darine Kebsi, Chamseddine Barki, Ismail Dergaa, Riadh Gouider, Halil İbrahim Ceylan, Amina Maddouri, Abderrazak Jemai, Mourad Elloumi, Nicola Luigi Bragazzi and Hanene Boussi Rahmouni
Audiol. Res. 2026, 16(2), 37; https://doi.org/10.3390/audiolres16020037 - 2 Mar 2026
Abstract
Background: Hearing loss affects over 466 million individuals globally and is recognized as a major risk factor for Alzheimer’s disease, yet treatment personalization remains limited due to the complexity and diversity of underlying causes. Current diagnostic and therapeutic approaches lack standardized methods to accurately predict the most appropriate intervention for individual patients. The integration of medical ontologies with machine learning offers a promising solution for enhancing diagnostic accuracy and treatment personalization. Aim: Our study aimed to (i) develop a Systematized Nomenclature of Medicine—Clinical Terms (SNOMED CT)-aligned clinical ontology for hearing loss using Semantic Web Rule Language for automated reasoning; (ii) implement a Random Forest classifier trained on ontology-enriched patient data to classify hearing loss types (conductive, sensorineural, mixed, or normal); and (iii) predict optimal personalized treatments based on laterality, severity, audiometric thresholds, and medical history using real-world patient data. Methods: We developed a task ontology using Protégé 5.6.3 with Web Ontology Language (OWL), integrated SNOMED CT terminology alignment, and implemented Semantic Web Rule Language rules executed by the Pellet 2.2.0 reasoner. The framework was trained and evaluated on 3723 adult patients from the 2015–2016 National Health and Nutrition Examination Survey (NHANES) dataset with complete audiometric and clinical data. Random Forest models were developed using an 80–20 train-test split with stratified sampling and five-fold cross-validation. Performance was compared between K-Means clustering-based labeling and ontology-based semantic inference using accuracy, precision, recall, F1-score, and log loss metrics. Results: The ontology successfully generated semantic labels for all 3723 patients, enabling precise classification of hearing loss types, severity levels, and laterality. The Random Forest model with K-Means clustering achieved a test accuracy of 90.2% with a log loss of 0.2766 and a cross-validation mean accuracy of 91.22% (standard deviation 1.2%). Integration of ontology-based semantic enrichment significantly improved performance, achieving a test accuracy of 92.48% with a cross-validation mean accuracy of 92.80% (standard deviation 0.9%). F1-scores improved across all classes, with mixed hearing loss showing a notable increase from 0.86 to 0.92. Feature importance analysis identified audiometric thresholds, ontology-derived severity labels, and medical history as top predictors, enhancing clinical interpretability. Conclusions: This study demonstrates that combining SNOMED CT-aligned ontology with Random Forest classification achieves superior diagnostic accuracy and enables personalized treatment recommendations for hearing loss. The hybrid framework provides clinically interpretable decision support while ensuring semantic interoperability with electronic health records. Multi-institutional validation studies are necessary to assess generalizability across diverse populations before clinical deployment. Full article
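The evaluation protocol (stratified 80-20 split, five-fold cross-validation, accuracy and log loss) can be sketched on synthetic stand-in features; the NHANES variables and ontology-derived labels are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, log_loss
from sklearn.model_selection import cross_val_score, train_test_split

rng = np.random.default_rng(0)
# Stand-ins for audiometric thresholds, ontology-derived severity, and history flags.
X = rng.normal(size=(3723, 12))
y = rng.integers(0, 4, size=3723)   # 0=normal, 1=conductive, 2=sensorineural, 3=mixed (synthetic)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=300, random_state=0)
clf.fit(X_tr, y_tr)

print("test accuracy:", accuracy_score(y_te, clf.predict(X_te)))
print("test log loss:", log_loss(y_te, clf.predict_proba(X_te)))
print("5-fold CV accuracy:", cross_val_score(clf, X_tr, y_tr, cv=5).mean())
```
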
22 pages, 1675 KB  
Article
HybridNER: A Multi-Model Ensemble Framework for Robust Named Entity Recognition—From General Domains to Adversarial GNSS Scenarios
by Yixuan Liu, Jing Zhang, Ruipeng Luan and Xuewen Yu
Sensors 2026, 26(5), 1553; https://doi.org/10.3390/s26051553 - 2 Mar 2026
Abstract
Named entity recognition (NER), a core task in natural language processing (NLP), remains constrained by heavy reliance on annotated data, limited cross-domain generalization, and difficulty in recognizing out-of-vocabulary entities. In specialized domains such as the analysis of Global Navigation Satellite System (GNSS) countermeasures, including anti-jamming and anti-spoofing, where datasets are small and domain knowledge is scarce, existing models exhibit marked performance degradation. To address these challenges, we propose HybridNER, a framework that integrates locally trained span-based models with large language models (LLMs). The approach employs a span prediction metasystem that first fuses outputs from multiple base learners by computing span-to-label compatibility scores and assigns an uncertainty estimate to each candidate entity. Entities with uncertainty above a preset threshold are then routed to an LLM for second-stage classification, and the final decision integrates both sources to realize complementary strengths. Experiments on multiple general-purpose and domain-specific datasets show that HybridNER achieves higher precision, recall, and F1 than traditional ensemble methods such as majority voting and weighted voting, with especially pronounced gains in specialized domains, thereby improving the robustness and generalization of NER. Full article
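The routing logic described above (fuse base-learner scores, estimate uncertainty, send high-uncertainty spans to an LLM) can be sketched independently of any specific models; the entropy-based uncertainty, label set, and LLM stub below are assumptions, not the paper's scoring function.

```python
import math

def ensemble_label(span_scores, threshold=0.8, llm_classify=None):
    """Fuse base-learner scores for one candidate span and route uncertain
    spans to a second-stage classifier (LLM stand-in).

    span_scores : list of dicts, one per base learner, mapping label -> score
    """
    labels = span_scores[0].keys()
    fused = {lb: sum(s[lb] for s in span_scores) / len(span_scores) for lb in labels}
    total = sum(fused.values())
    probs = {lb: v / total for lb, v in fused.items()}
    # Normalized entropy in [0, 1] as the uncertainty estimate (an assumption).
    entropy = -sum(p * math.log(p + 1e-12) for p in probs.values()) / math.log(len(probs))
    if entropy > threshold and llm_classify is not None:
        return llm_classify(probs), entropy          # second-stage decision
    return max(probs, key=probs.get), entropy        # keep the ensemble decision

# Stub standing in for an LLM second-stage call.
fake_llm = lambda probs: "JAMMER_TYPE"
scores = [{"JAMMER_TYPE": 0.40, "SIGNAL_BAND": 0.35, "O": 0.25},
          {"JAMMER_TYPE": 0.34, "SIGNAL_BAND": 0.33, "O": 0.33}]
print(ensemble_label(scores, threshold=0.8, llm_classify=fake_llm))
```
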
19 pages, 1861 KB  
Article
Bibliometric Analysis of Earnings Response Coefficient: A Measure of Market Reaction to a Company’s Earnings Announcements and Key Drivers of Investor
by Syarifuddin Rasyid, Darmawati Darmawati and Haryanto Haryanto
J. Risk Financial Manag. 2026, 19(3), 177; https://doi.org/10.3390/jrfm19030177 - 2 Mar 2026
Abstract
The Earnings Response Coefficient (ERC) has emerged as a pivotal topic in academic literature and financial practice, elucidating the critical relationship between corporate earnings information and market response, which directly impacts corporate performance evaluation and investment decision-making. This study aims to identify the most frequently researched topics in the Earnings Response Coefficient domain, explore the basic concepts and theoretical frameworks underlying ERC research, and propose potential future research directions in the field, all within finance and investment management. This research employs bibliometric analysis of data from Google Scholar and Scopus, accessed through Publish or Perish (PoP), to evaluate the literature’s performance, explore related topics, and identify research trends, thereby deepening the understanding of ERC studies. The findings reveal that income smoothing and intellectual capital disclosure have a significant impact but low connectedness, indicating a need for deeper exploration to heighten their relevance in ERC studies. Research on corporate social responsibility exhibits a high degree of interconnectedness and substantial impact. Underexplored topics such as economic uncertainty and analysts’ influence require greater attention to understand their contributions fully. This study identifies publication trends and citation networks related to ERC, provides insights into researcher collaborations, and offers guidance for academics, practitioners, and policymakers to enrich their understanding, develop more effective earnings management strategies, and design regulations that bolster market transparency and efficiency in the realm of finance and investment management. This research is particularly beneficial for practitioners, as it helps evaluate more effective earnings management strategies and understand the market’s response to earnings information, ultimately enhancing firm value. For policymakers, this study provides a framework for designing regulations and policies that support financial information transparency and market efficiency to enhance economic stability and investor confidence. Full article
(This article belongs to the Special Issue Accounting Information and Capital Markets)