Search Results (346)

Search Parameters:
Keywords = Decision Tree & Rules

28 pages, 960 KB  
Article
EDR-FJ48: An Empirical Distribution Ranking-Based Fuzzy J48 Classifier for Multiclass Intrusion Detection in IoMT Networks
by Jisi Chandroth, Laura Tileutay, Ahyoung Choi and Young-Bae Ko
Mathematics 2026, 14(1), 157; https://doi.org/10.3390/math14010157 - 31 Dec 2025
Abstract
The Internet of Medical Things (IoMT) interconnects medical devices, software applications, and healthcare services through the internet to enable the transmission and analysis of health data. IoMT facilitates seamless patient care and supports real-time clinical decision-making. The IoMT faces substantial security threats due to limited device resources, high device interconnectivity, and a lack of standardization. In this paper, we present an Intrusion Detection System (IDS), the Empirical Distribution Ranking-Based Fuzzy J48 Classifier for Multiclass Intrusion Detection in IoMT Networks (EDR-FJ48), to distinguish between regular traffic and multiple types of security threats. The proposed IDS is built upon the J48 decision tree algorithm and is designed to detect a wide range of attacks. To ensure the protection of medical devices and patient data, the system incorporates a fuzzy IF-THEN rule inference module. In our approach, fuzzy rules are formulated from the fuzzified values of selected features, which capture the statistical behavior of the input observations. These rules enable interpretable and transparent decision-making and are applied before the final classification step. We thoroughly evaluated our methodology through extensive simulations on three publicly available datasets: WUSTL-EHMS-2020, CICIoMT2024, and ECU-IoHT. The results show accuracy rates of 99.68%, 98.71%, and 99.43%, respectively. A comparative analysis against state-of-the-art models in the existing literature, based on metrics including accuracy, precision, recall, F1-score, and time complexity, reveals that our proposed method achieves superior results. This evidence suggests that our method constitutes a robust solution for mitigating security threats in IoMT networks. Full article
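
A minimal sketch of the fuzzify-then-classify idea described above, on synthetic traffic features. Scikit-learn's CART tree stands in for WEKA's J48 (C4.5), the memberships are driven by empirical percentiles, and the single IF-THEN rule, feature counts, and class labels are invented for illustration rather than taken from EDR-FJ48.

```python
# Schematic sketch only: scikit-learn's CART tree stands in for WEKA's J48
# (C4.5), and the fuzzy IF-THEN stage is reduced to one toy rule driven by
# percentile-based membership degrees. Features and labels are synthetic.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 6))              # placeholder traffic features
y = rng.integers(0, 4, size=1000)           # placeholder multiclass labels

# Fuzzify each feature: the degree of being "high" rises from the 25th to the
# 75th empirical percentile, mimicking rules tied to the statistical behaviour
# of the input observations.
q25, q75 = np.percentile(X, [25, 75], axis=0)
membership_high = np.clip((X - q25) / (q75 - q25 + 1e-9), 0.0, 1.0)

# Toy IF-THEN rule: IF most features are "high" THEN flag the flow as suspicious.
rule_fired = (membership_high.mean(axis=1) > 0.5).astype(float)

# Final classification by a decision tree on the original features plus the
# fired-rule signal (the fuzzy rule stage runs before the final classification).
X_aug = np.column_stack([X, rule_fired])
clf = DecisionTreeClassifier(max_depth=8, random_state=0).fit(X_aug, y)
print("training accuracy:", round(clf.score(X_aug, y), 3))
```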

22 pages, 1380 KB  
Article
Machine Learning Classification of Return on Equity from Sustainability Reporting and Corporate Governance Metrics: A SHAP-Based Explanation
by Mustafa Terzioğlu, Aslıhan Ersoy Bozcuk, Güler Ferhan Ünal Uyar, Neylan Kaya, Burçin Tutcu and Günay Deniz Dursun
Sustainability 2026, 18(1), 194; https://doi.org/10.3390/su18010194 - 24 Dec 2025
Abstract
The aim of this study was to develop a model that classifies companies into high or low categories based on their return on equity (RoE), the most important indicator of financial performance, using sustainability and governance-related committee reports and reports shared with the public. As a sample, the RoE, sustainability, and governance variables of all 427 companies traded on the Istanbul Stock Exchange in 2024 were used. Using a 70:30 stratified split between the training and test sets, three tree-based models (XGBoost, LightGBM, and Random Forest) were used to perform a binary classification task. The findings show that the tree-based models perform only slightly better than the naive majority-class rule and therefore have limited overall classification power. A noteworthy finding is that the SHAP-based explainability analysis ranks the Corporate Governance Report (IMNG), the Integrated Report (IREP), and the existence of a Sustainability Committee (ICOM) highly in terms of SHAP-based global importance in the High RoE classification model, although their average contributions are small and, in the case of IMNG, predominantly negative for the probability of belonging to the High RoE class. Methodologically, the article moves away from traditional econometric methods based on ESG scores, instead combining a predictive classification structure with TreeSHAP-based explanations. These findings indicate a need for reporting practices that offer deeper content, clearer evidence of governance quality, and stronger data integrity to better support investors’ decision-making processes through sustainability and governance. Full article
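
For readers who want to reproduce the general workflow, the following sketch trains one of the three tree models named above (Random Forest) on a 70:30 stratified split and derives SHAP-based global importance with TreeSHAP. It uses synthetic data, the third-party shap package, and column names borrowed from the abstract (IMNG, IREP, ICOM) purely as placeholders.

```python
# Illustrative sketch (synthetic data): a Random Forest with a 70:30 stratified
# split and TreeSHAP-based global importance. Requires the `shap` package.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
cols = ["IMNG", "IREP", "ICOM", "board_size", "esg_disclosures"]  # illustrative
X = pd.DataFrame(rng.normal(size=(427, len(cols))), columns=cols)
y = (rng.random(427) > 0.5).astype(int)        # 1 = High RoE (placeholder)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42)

model = RandomForestClassifier(n_estimators=300, random_state=42).fit(X_tr, y_tr)

explainer = shap.TreeExplainer(model)
sv = explainer.shap_values(X_te)
if isinstance(sv, list):        # older shap versions: one array per class
    sv = sv[1]
elif np.ndim(sv) == 3:          # newer shap versions: (samples, features, classes)
    sv = sv[:, :, 1]

# Global importance = mean absolute SHAP value per feature for the High class.
global_importance = pd.Series(np.abs(sv).mean(axis=0), index=cols)
print(global_importance.sort_values(ascending=False))
```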

21 pages, 1745 KB  
Article
An Integrated Artificial Intelligence Tool for Predicting and Managing Project Risks
by Andreea Geamanu, Maria-Iuliana Dascalu, Ana-Maria Neagu and Raluca Ioana Guica
Mach. Learn. Knowl. Extr. 2026, 8(1), 1; https://doi.org/10.3390/make8010001 - 20 Dec 2025
Abstract
Artificial Intelligence (AI) is increasingly used to enhance project management practices, especially in risk analysis, where traditional tools often lack predictive capabilities. This study introduces an AI-based tool that supports project teams in identifying and interpreting risks through machine learning and integrated documentation features. A synthetic dataset of 5000 project instances was generated using deterministic rules across 27 input variables, enabling the training of multi-output Decision Tree and Random Forest models to predict risk type, impact, probability, and response strategy. Due to the rule-based structure of the dataset, both models achieved near-perfect classification performance, with Random Forest showing slightly better regression accuracy. These results validate the modelling pipeline but should not be interpreted as real-world predictive accuracy. The trained models were deployed within a web platform offering prediction visualization, automated PDF reporting, result storage, and access to a structured risk management plan template. Survey feedback highlights strong user interest in AI-assisted mitigation suggestions, dashboards, notifications, and mobile access. The findings demonstrate the potential of AI to improve proactive risk assessment and decision-making in project environments. Full article
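
A short sketch of the multi-output modelling step: scikit-learn decision trees and random forests accept a target matrix with several columns, which is one straightforward way to predict risk type, impact, probability band, and response strategy jointly. The 27 input variables and the deterministic toy rules below are placeholders, not the paper's dataset generator.

```python
# Sketch with synthetic, rule-generated data: multi-output classification with
# scikit-learn trees (y has several columns). All names and rules are invented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
X = rng.random(size=(5000, 27))                       # 27 project input variables

# Deterministic toy rules producing four categorical outputs.
risk_type = (X[:, 0] > 0.5).astype(int)               # 0 = schedule, 1 = budget
impact    = np.digitize(X[:, 1], [0.33, 0.66])        # low / medium / high
prob_band = np.digitize(X[:, 2], [0.5])               # unlikely / likely
response  = ((X[:, 0] > 0.5) & (X[:, 1] > 0.5)).astype(int)  # mitigate vs accept
Y = np.column_stack([risk_type, impact, prob_band, response])

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=1)
for model in (DecisionTreeClassifier(random_state=1),
              RandomForestClassifier(n_estimators=100, random_state=1)):
    model.fit(X_tr, Y_tr)
    acc = (model.predict(X_te) == Y_te).mean(axis=0)  # per-output accuracy
    print(type(model).__name__, np.round(acc, 3))
```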

64 pages, 6020 KB  
Article
Logistics Performance and the Three Pillars of ESG: A Detailed Causal and Predictive Investigation
by Nicola Magaletti, Valeria Notarnicola, Mauro Di Molfetta, Stefano Mariani and Angelo Leogrande
Sustainability 2025, 17(24), 11370; https://doi.org/10.3390/su172411370 - 18 Dec 2025
Abstract
This study investigates the complex relationship between logistics performance and Environmental, Social, and Governance (ESG) performance, drawing on a multi-methodological framework that combines econometrics with state-of-the-art machine learning approaches. Employing Instrumental Variable (IV) panel data regressions, viz., 2SLS and G2SLS, with data from a balanced panel of 163 countries covering the period from 2007 to 2023, the research thoroughly investigates how the Logistics Performance Index (LPI) is correlated with a variety of ESG indicators. To enrich the analysis, machine learning models were applied to uncover hidden structures and predict the behavior of the LPI: regression-based models (Random Forest, k-Nearest Neighbors, Support Vector Machines, Boosting Regression, Decision Tree Regression, and Linear Regression) and clustering methods (Density-Based, Neighborhood-Based, and Hierarchical clustering, Fuzzy c-Means, Model-Based, and Random Forest clustering). The empirical evidence suggests that improvements in logistics performance are systematically correlated with developments across all three dimensions: environmental (E), social (S), and governance (G). The econometric evidence suggests that a higher LPI comes with environmental trade-offs, such as higher greenhouse gas emissions alongside cleaner air and more efficient resource use. On the S dimension, better logistics performance is correlated with better educational outcomes and reduced child labor, but it also points to potential problems such as social imbalances. For G, better logistics performance goes hand in hand with better governance, voice and public participation, scientific productivity, and the rule of law. Through both regression and clustering methods, each ESG pillar was analyzed in isolation, allowing an in-depth study of how logistics infrastructure interacts with sustainability goals. Overall, the study emphasizes that while modernization is facilitated by well-performing logistics infrastructure, it must go hand in hand with policy interventions to make it socially inclusive, environmentally friendly, and institutionally robust. Full article
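
To make the IV logic concrete, here is a schematic two-stage least squares (2SLS) on synthetic country-year data; it is a textbook illustration, not the authors' panel G2SLS specification, and all coefficients and variable names are invented.

```python
# Schematic two-stage least squares (2SLS) on synthetic data, illustrating the
# instrumental-variable logic only; not the authors' panel estimator.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(7)
n = 163 * 17                                    # countries x years, illustrative
z = rng.normal(size=(n, 2))                     # instruments
u = rng.normal(size=n)                          # unobserved confounder
lpi = z @ np.array([0.8, -0.4]) + 0.9 * u + rng.normal(scale=0.5, size=n)
esg = 0.6 * lpi - 1.2 * u + rng.normal(scale=0.5, size=n)   # true effect: 0.6

# Stage 1: project the endogenous regressor (LPI) onto the instruments.
lpi_hat = LinearRegression().fit(z, lpi).predict(z)

# Stage 2: regress the ESG outcome on the fitted values.
stage2 = LinearRegression().fit(lpi_hat.reshape(-1, 1), esg)
ols    = LinearRegression().fit(lpi.reshape(-1, 1), esg)
print("naive OLS slope:", round(ols.coef_[0], 3))     # biased by the confounder
print("2SLS slope     :", round(stage2.coef_[0], 3))  # close to 0.6
```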

35 pages, 18467 KB  
Article
Monitoring Rubber Plantation Distribution and Biomass with Sentinel-2 Using Deep Learning and Machine Learning Algorithm (2019–2024)
by Yingtan Chen, Jialong Duanmu, Zhongke Feng, Jun Qian, Zhikuan Liu, Huiqing Pei, Pietro Grimaldi and Zixuan Qiu
Remote Sens. 2025, 17(24), 4042; https://doi.org/10.3390/rs17244042 - 16 Dec 2025
Abstract
The number of rubber plantations has increased significantly since 2000, especially in Southeast Asia and China, and their ecological impacts are becoming more evident. A robust rubber supply monitoring system is currently required at both the production and ecological levels. This study used Sentinel-2 multi-rule remote sensing images and a deep learning method to construct a deep learning model that could generate a distribution map of rubber plantations in Danzhou City, Hainan Province, from 2019 to 2024. For biomass modeling, 52 sample plots (27 of which were historical plots) were integrated, and the canopy structure was extracted as an auxiliary variable from the point cloud data generated by an unmanned aerial vehicle survey. Five algorithms, namely Random Forest (RF), Gradient Boosting Decision Tree, Convolutional Neural Network, Back Propagation Neural Network, and Extreme Gradient Boosting, were used to characterize the spatiotemporal changes in rubber plantation biomass and analyze the driving mechanisms. The developed deep learning model was exceptional at identifying rubber plantations (overall accuracy = 91.63%, Kappa = 0.83). The RF model performed the best in terms of biomass prediction (R² = 0.72, RRMSE = 21.48 Mg/ha). Research shows that canopy height as a characteristic factor enhances the explanatory power and stability of the biomass model. However, due to limitations such as sample plot size, image differences, canopy closure degree, and point cloud density, uncertainties in its generalization across years and regions remain. In summary, the proposed framework effectively captures the spatial and temporal dynamics of rubber plantations and estimates their biomass with high accuracy. This study provides a crucial reference for the refined management and ongoing monitoring of rubber plantations. Full article
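
A compact sketch of the biomass-regression step on synthetic plot data: a Random Forest regressor with UAV-derived canopy height among the predictors, and RRMSE reported relative to mean observed biomass. Plot counts and values are placeholders.

```python
# Illustrative sketch (synthetic plot data): Random Forest regression of biomass
# on spectral predictors plus UAV-derived canopy height, with RRMSE reported as
# RMSE relative to the mean observed biomass. Values are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(3)
n_plots = 52
bands = rng.random(size=(n_plots, 4))               # e.g. spectral indices
canopy_height = 10 + 8 * rng.random(n_plots)        # m, from UAV point clouds
X = np.column_stack([bands, canopy_height])
biomass = 5 * canopy_height + 20 * bands[:, 0] + rng.normal(scale=8, size=n_plots)

rf = RandomForestRegressor(n_estimators=500, random_state=3)
pred = cross_val_predict(rf, X, biomass, cv=5)      # out-of-sample predictions

rmse = np.sqrt(np.mean((pred - biomass) ** 2))
rrmse = rmse / biomass.mean() * 100                 # % of mean observed biomass
print(f"RMSE = {rmse:.2f} Mg/ha, RRMSE = {rrmse:.1f}%")
```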

30 pages, 4602 KB  
Article
Explaining Urban Transformation in Heritage Areas: A Comparative Analysis of Predictive and Interpretive Machine Learning Models for Land-Use Change
by Pablo González-Albornoz, Clemente Rubio-Manzano and Maria Isabel López
Mathematics 2025, 13(24), 3971; https://doi.org/10.3390/math13243971 - 12 Dec 2025
Abstract
In line with UNESCO’s Historic Urban Landscape approach, this study highlights the need for integrative tools that connect heritage conservation with broader urban development dynamics, balancing preservation and growth. While several machine-learning models have been applied to analyse the drivers of urban change, there remains a need for comparative analyses that assess their strengths, limitations, and potential for combined applications tailored to specific contexts. This study aims to compare the predictive accuracy of three land-use change models (Random Forest, Logistic Regression, and Recursive Partitioning Regression Trees) in estimating the probability of land-use transitions, as well as their interpretative capacity to identify the main factors driving these changes. Using data from the Bellavista neighborhood in Tomé, Chile, the models were assessed through prediction and performance metrics, probability maps, and an analysis of key driving factors. The results underscore the potential of integrating predictive (Random Forest) and interpretative (Logistic Regression and Recursive Partitioning Regression Trees) approaches to support heritage planning. Specifically, the research demonstrates how these models can be effectively combined by leveraging their respective strengths: employing Random Forest for spatial simulations, Logistic Regression for identifying associative factors, and Recursive Partitioning Regression Trees for generating intuitive decision rules. Overall, the study shows that land-use change models constitute valuable tools for managing urban transformation in heritage urban areas of intermediate cities. Full article
(This article belongs to the Special Issue Innovations and Applications of Machine Learning Techniques)
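
The suggested division of labour can be sketched as follows on synthetic parcels: Random Forest supplies transition probabilities for mapping, Logistic Regression coefficients indicate associative factors, and a shallow scikit-learn tree (standing in for Recursive Partitioning Regression Trees / rpart) yields readable decision rules. Variable names and the data-generating rule are assumptions.

```python
# Sketch on synthetic parcels of the combination suggested above: Random Forest
# for transition probabilities, Logistic Regression for associative factors, and
# a shallow decision tree for intuitive decision rules.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(5)
names = ["dist_to_center", "slope", "heritage_protection", "road_access"]
X = rng.random(size=(800, 4))
# Placeholder rule: change is more likely near the centre and off protected land.
p = 1 / (1 + np.exp(-(2 - 4 * X[:, 0] - 2 * X[:, 2] + X[:, 3])))
y = (rng.random(800) < p).astype(int)                # 1 = land-use change

rf = RandomForestClassifier(n_estimators=300, random_state=5).fit(X, y)
prob_change = rf.predict_proba(X)[:, 1]              # values for a probability map

logit = LogisticRegression(max_iter=1000).fit(X, y)
print(dict(zip(names, np.round(logit.coef_[0], 2)))) # direction/strength of drivers

tree = DecisionTreeClassifier(max_depth=3, random_state=5).fit(X, y)
print(export_text(tree, feature_names=names))        # readable IF-THEN style rules
```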

21 pages, 1603 KB  
Article
Behavior-Rule Inference Based on Hyponymy–Hypernymy Knowledge Tree
by Huanlai Zhou, Jianyu Guo, Haitao Jia, Kaishi Wang, Lei Guo and Long Qi
Electronics 2025, 14(24), 4789; https://doi.org/10.3390/electronics14244789 - 5 Dec 2025
Abstract
Behavior-rule reasoning aims to infer the corresponding applicable rules from specific behaviors and is a type of inductive reasoning that goes from special cases to general ones. This paper proposes a behavior-rule inference model based on hyponymy–hypernymy knowledge trees, which maps behaviors to corresponding rules through deep learning by processing textual behavior sequences. The primary contributions of this work are threefold: we present a systematic framework that adapts K-BERT to the legal domain by integrating domain-specific hyponymy–hypernymy knowledge trees, addressing the unique challenges of legal text understanding; we conduct comprehensive optimization of key components, including context length, loss function, and base model selection, providing empirical guidelines for applying pre-trained models to legal reasoning tasks; and we propose a practical evaluation metric (tolerance) that mimics real-world legal decision-making processes, providing extensive analysis on the effectiveness of different knowledge types in legal inference. Full article
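
The following toy sketch only hints at the knowledge-injection idea: a hyponymy–hypernymy tree stored as a dictionary enriches a behaviour description with hypernyms before rule inference, and a tolerance-style metric accepts predictions within k hypernym steps of the gold label. It does not reproduce the paper's K-BERT adaptation, and every term, rule, and definition below is hypothetical.

```python
# Minimal, heavily simplified sketch of enriching a behaviour text with
# hypernyms from a hyponymy-hypernymy knowledge tree before rule inference.
# The real system adapts K-BERT; this toy version only shows the tree lookup
# and a "tolerance"-style metric. All terms and mappings are hypothetical.
hypernyms = {                       # child -> parent (hyponymy-hypernymy tree)
    "electric scooter": "vehicle",
    "vehicle": "property",
    "shoplifting": "theft",
    "theft": "property offence",
}

def expand(text: str) -> str:
    """Append hypernyms of any known term (knowledge-injection step)."""
    extra = []
    for term, parent in hypernyms.items():
        if term in text:
            while parent is not None:
                extra.append(parent)
                parent = hypernyms.get(parent)
    return text + " [" + ", ".join(extra) + "]" if extra else text

def tolerance_correct(predicted: str, gold: str, k: int = 1) -> bool:
    """Count a prediction as correct if it is within k hypernym steps of gold."""
    node, steps = gold, 0
    while node is not None and steps <= k:
        if node == predicted:
            return True
        node, steps = hypernyms.get(node), steps + 1
    return False

print(expand("riding an electric scooter on the sidewalk"))
print(tolerance_correct("theft", "shoplifting", k=1))   # True
```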

19 pages, 365 KB  
Article
From Exponential to Efficient: A Novel Matrix-Based Framework for Scalable Medical Diagnosis
by Mohammed Addou, El Bekkaye Mermri and Mohammed Gabli
BioMedInformatics 2025, 5(4), 68; https://doi.org/10.3390/biomedinformatics5040068 - 2 Dec 2025
Abstract
Modern diagnostic systems face computational challenges when processing exponential disease-symptom combinations, with traditional approaches requiring up to 2^n evaluations for n symptoms. This paper presents MARS (Matrix-Accelerated Reasoning System), a diagnostic framework combining Case-Based Reasoning with matrix representations and intelligent filtering to address these limitations. The approach encodes disease-symptom relationships as matrices enabling parallel processing, implements adaptive rule-based filtering to prioritize relevant cases, and features automatic rule generation with continuous learning through a dynamically updated Pertinence Matrix. MARS was evaluated on four diverse medical datasets (41 to 721 diseases) and compared against Decision Tree, Random Forest, k-Nearest Neighbors, Support Vector Classifier, Bayesian classifiers, and Neural Networks. On the most challenging dataset (721 diseases, 49,365 test cases), MARS achieved the highest accuracy (87.34%) with substantially reduced processing time. When considering differential diagnosis, accuracy reached 98.33% for top-5 suggestions. These results demonstrate that MARS effectively balances diagnostic accuracy, computational efficiency, and interpretability, three requirements critical for clinical deployment. The framework’s ability to provide ranked differential diagnoses and update incrementally positions it as a practical solution for diverse clinical settings. Full article
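
A toy illustration of the matrix encoding: disease-symptom relationships as a 0/1 matrix, all diseases scored against a symptom vector in a single matrix product, and a ranked top-k differential returned. The entries are made up, and MARS's rule-based filtering and Pertinence Matrix updates are omitted.

```python
# Toy sketch of the matrix idea only: a 0/1 disease-symptom matrix scored
# against an observed-symptom vector in one matrix product. Names and entries
# are invented; rule-based filtering and continuous learning are omitted.
import numpy as np

diseases = ["flu", "migraine", "food poisoning"]
symptoms = ["fever", "headache", "nausea", "cough"]
M = np.array([[1, 1, 0, 1],        # flu
              [0, 1, 1, 0],        # migraine
              [0, 0, 1, 0]])       # food poisoning

def top_k_diagnoses(present, k=2):
    """Rank diseases by overlap with the observed symptoms (one matrix product)."""
    x = np.array([1 if s in present else 0 for s in symptoms])
    scores = M @ x                          # matched symptoms per disease
    order = np.argsort(scores)[::-1][:k]
    return [(diseases[i], int(scores[i])) for i in order]

print(top_k_diagnoses({"headache", "nausea"}))   # [('migraine', 2), ('food poisoning', 1)]
```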

21 pages, 1144 KB  
Article
Distributed Data Classification with Coalition-Based Decision Trees and Decision Template Fusion
by Katarzyna Kusztal and Małgorzata Przybyła-Kasperek
Entropy 2025, 27(12), 1205; https://doi.org/10.3390/e27121205 - 27 Nov 2025
Abstract
In distributed data environments, classification tasks are challenged by inconsistencies across independently maintained sources. These environments are inherently characterized by high informational uncertainty. Our framework addresses this challenge through a structured process designed for the reduction of entropy in the overall decision-making process. This paper proposes a novel framework that integrates conflict analysis, coalition formation, decision tree induction, and decision template fusion to address these challenges. The method begins by identifying compatible data sources using Pawlak’s conflict model, forming coalitions that aggregate complementary information. Each coalition trains a decision tree classifier, and the final decision is derived through decision templates that fuse probabilistic outputs from all models. The proposed approach is compared with a variant that does not use coalitions, where each local source is modeled independently. Additionally, the framework extends previous work based on decision rules by introducing decision trees, which offer greater modeling flexibility while preserving interpretability. Experimental results on benchmark datasets from the UCI repository demonstrate that the proposed method consistently outperforms both the non-coalition variant and the rule-based version, particularly under moderate data dispersion. The key contributions of this work include the integration of coalition-based modeling with decision trees, the use of decision templates for interpretable fusion, and the demonstration of improved classification performance across diverse scenarios. Full article
(This article belongs to the Special Issue Entropy Method for Decision Making with Uncertainty)
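
A condensed sketch of decision-template fusion: two decision trees trained on different feature subsets stand in for coalition models, a decision template is the class-wise mean of their stacked probability outputs, and a test object receives the class whose template lies closest to its decision profile. The dataset and coalition structure are placeholders, not the paper's dispersed-data setup.

```python
# Sketch of decision-template fusion over two "coalition" decision trees trained
# on different feature subsets (a stand-in for coalitions of local data sources).
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

subsets = [[0, 1], [2, 3]]                         # illustrative coalitions
models = [DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr[:, s], y_tr)
          for s in subsets]

def profile(X_):
    """Decision profile: concatenated class-probability outputs of all models."""
    return np.hstack([m.predict_proba(X_[:, s]) for m, s in zip(models, subsets)])

# Decision template per class: mean decision profile over that class's training objects.
DP_tr = profile(X_tr)
templates = np.vstack([DP_tr[y_tr == c].mean(axis=0) for c in np.unique(y_tr)])

# Fusion: assign each test object to the class with the nearest template.
DP_te = profile(X_te)
dists = np.linalg.norm(DP_te[:, None, :] - templates[None, :, :], axis=2)
y_pred = dists.argmin(axis=1)
print("fusion accuracy:", round(float((y_pred == y_te).mean()), 3))
```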

28 pages, 2016 KB  
Article
A Fuzzy Rule-Based Decision Support in Process Mining: Turning Diagnostics into Prescriptions
by Onur Dogan and Hunaıda Avvad
Appl. Sci. 2025, 15(23), 12402; https://doi.org/10.3390/app152312402 - 22 Nov 2025
Abstract
In this study, a fuzzy rule-based framework is developed that expands the diagnostic analyses traditionally offered by process mining into a decision-support structure that provides recommendations to managers. While traditional process mining methods are widely used to identify bottlenecks and inefficiencies, they often produce results that merely describe the current situation and fail to provide managers with applicable solutions. Therefore, this paper designs a hybrid method combining statistical data preprocessing, process mining, and fuzzy inference mechanisms. First, statistical analysis was carried out to determine which activities are most influential in terms of process lead time. Subsequently, a process mining approach was used to locate structural bottlenecks and delay patterns, which were rated using a Bottleneck Severity Index. To translate diagnostic insights into managerially actionable recommendations, the study constructed a fuzzy decision tree-based inference model. The model is easily understood and implemented, and it presents its results as explicit IF-THEN rules. The approach was applied to a real IT service process with 1500 cases and 46,618 events. The fuzzy rule-based system generated tangible improvements: cycle time was reduced by 26.4%, bottleneck events decreased by 55.1%, and operational cost savings of 17.7% were achieved. Full article
(This article belongs to the Special Issue Process Mining: Theory and Applications)
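
A toy version of the diagnostics-to-prescriptions step: triangular memberships fuzzify a normalized bottleneck-severity value and simple IF-THEN rules fire with the resulting degree. The labels, thresholds, and recommended actions are illustrative assumptions, not the paper's Bottleneck Severity Index or rule base.

```python
# Toy sketch of turning a diagnostic measure into an IF-THEN recommendation.
# Thresholds, labels, and actions below are illustrative placeholders.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership with support [a, c] and peak at b."""
    return float(np.clip(min((x - a) / (b - a + 1e-9),
                             (c - x) / (c - b + 1e-9)), 0.0, 1.0))

def recommend(severity):
    """Map a normalized bottleneck severity (0..1) to weighted recommendations."""
    memberships = {
        "low":    tri(severity, -0.01, 0.0, 0.4),
        "medium": tri(severity, 0.2, 0.5, 0.8),
        "high":   tri(severity, 0.6, 1.0, 1.01),
    }
    rules = {   # IF severity is <label> THEN <action>
        "low":    "monitor only",
        "medium": "rebalance workload between resources",
        "high":   "add capacity / automate the activity",
    }
    return sorted(((memberships[k], rules[k]) for k in rules), reverse=True)

for s in (0.15, 0.55, 0.9):
    degree, action = recommend(s)[0]
    print(f"severity={s:.2f} -> {action} (firing degree {degree:.2f})")
```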

31 pages, 3054 KB  
Article
Outlier Detection in EEG Signals Using Ensemble Classifiers
by Agnieszka Duraj, Natalia Łukasik and Piotr S. Szczepaniak
Appl. Sci. 2025, 15(22), 12343; https://doi.org/10.3390/app152212343 - 20 Nov 2025
Abstract
Epilepsy is one of the most prevalent neurological disorders, affecting over 50 million people worldwide. Accurate detection and characterization of epileptic activity are clinically critical, as seizures are associated with substantial morbidity, mortality, and impaired quality of life. Electroencephalography (EEG) remains the gold standard for epilepsy assessment; however, its manual interpretation is time-consuming, subjective, and prone to inter-rater variability, emphasizing the need for automated analytical approaches. This study proposes an automated ensemble classification framework for outlier detection in EEG signals. Three interpretable baseline models—Support Vector Machine (SVM), k-Nearest Neighbors (k-NN), and decision tree (DT-CART)—were screened. Ensembles were formed only from base models that satisfied a pre-registered meta-selection rule (F1 on the outlier class > 0.60). Under this criterion, DT-CART did not qualify and was excluded from all ensembles; final ensembles combined SVM and k-NN. The framework was evaluated on two publicly available datasets with distinct acquisition conditions. The Bonn EEG dataset comprises 500 artifact-free single-channel recordings from healthy subjects and epilepsy patients under controlled laboratory settings. In contrast, the Guinea-Bissau and Nigeria Epilepsy (GBNE) dataset contains multi-channel EEG recordings from 97 participants acquired in field conditions using low-cost equipment, reflecting real-world diagnostic challenges such as motion artifacts and signal variability. The ensemble framework substantially improved outlier detection performance, with stacking achieving up to a 95.0% F1-score (accuracy 95.0%) on the Bonn dataset and 85.5% F1-score (accuracy 85.5%) on the GBNE dataset. These findings demonstrate that the proposed approach provides a robust, interpretable, and generalizable solution for EEG analysis, with strong potential to enhance reliable, efficient, and scalable epilepsy detection in both laboratory and resource-limited clinical environments. Full article
(This article belongs to the Special Issue EEG Signal Processing in Medical Diagnosis Applications)
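
The ensemble step can be sketched as follows on synthetic features: candidate base models are screened by F1 on the outlier class against the 0.60 threshold quoted above, and only the survivors are stacked. The data, feature set, and meta-learner are placeholders.

```python
# Sketch of the screen-then-stack ensemble step on synthetic features.
# Data, features, and the logistic-regression meta-learner are placeholders.
import numpy as np
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import cross_val_predict, train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(9)
X = rng.normal(size=(500, 8))                     # placeholder EEG features
y = (rng.random(500) < 0.2).astype(int)           # 1 = outlier segment
X[y == 1] += 1.5                                  # make outliers separable-ish

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=9)

candidates = {"svm": SVC(probability=True, random_state=9),
              "knn": KNeighborsClassifier(n_neighbors=5),
              "cart": DecisionTreeClassifier(random_state=9)}

selected = []
for name, model in candidates.items():            # meta-selection rule
    pred = cross_val_predict(model, X_tr, y_tr, cv=5)
    if f1_score(y_tr, pred, pos_label=1) > 0.60:
        selected.append((name, model))

stack = StackingClassifier(estimators=selected,
                           final_estimator=LogisticRegression(max_iter=1000))
stack.fit(X_tr, y_tr)
print("selected:", [n for n, _ in selected])
print("test F1 :", round(f1_score(y_te, stack.predict(X_te)), 3))
```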

10 pages, 240 KB  
Article
Efficient Modeling of Deterministic Decision Trees for Recognition of Realizable Decision Rules: Bounds on Weighted Depth
by Kerven Durdymyradov and Mikhail Moshkov
Axioms 2025, 14(11), 794; https://doi.org/10.3390/axioms14110794 - 28 Oct 2025
Abstract
In this paper, an efficient algorithm for modeling the operation of a DDT (Deterministic Decision Tree) solving the problem of realizability of DRs (Decision Rules) is proposed and analyzed. For this problem, it is assumed that a DRS (Decision Rule System) is given; for an arbitrary tuple of feature values, it is required to recognize whether there is a DR realizable on this tuple, i.e., a DR for which the left-hand side is true on the tuple. It is shown that the weighted depth of the modeled DDT does not exceed the square of the minimum weighted depth of the NDT (Nondeterministic Decision Tree) solving the realizability problem. Full article
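
For orientation, the realizability problem itself is easy to state in code (this is not the authors' modeling algorithm): a decision rule system is a set of rules, each a conjunction of feature = value conditions, and a rule is realizable on a tuple when its whole left-hand side holds there. The paper's result bounds the weighted depth of the modeled deterministic tree by the square of the minimum weighted depth of a nondeterministic tree solving this problem.

```python
# Minimal statement of the realizability problem (not the modeling algorithm):
# each rule is a dict of feature -> required value, and a rule is realizable on
# a tuple when its whole left-hand side is satisfied there.
def realizable(rule, tuple_values):
    """True if the rule's left-hand side holds on the tuple."""
    return all(tuple_values[f] == v for f, v in rule.items())

def any_rule_realizable(rule_system, tuple_values):
    return any(realizable(r, tuple_values) for r in rule_system)

rules = [{0: 1, 2: 0},          # f0 = 1 AND f2 = 0 -> ...
         {1: 1}]                # f1 = 1 -> ...
print(any_rule_realizable(rules, {0: 1, 1: 0, 2: 0}))   # True (first rule fires)
print(any_rule_realizable(rules, {0: 0, 1: 0, 2: 1}))   # False
```
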
26 pages, 2949 KB  
Article
Passenger Switch Behavior and Decision Mechanisms in Multimodal Public Transportation Systems
by Zhe Zhang, Wenxie Lin, Tongyu Hu, Qi Cao, Jianhua Song, Gang Ren and Changjian Wu
Systems 2025, 13(11), 951; https://doi.org/10.3390/systems13110951 - 26 Oct 2025
Abstract
Efficient public transportation systems are fundamental to achieving sustainable urban development. As the backbone of urban mobility, the coordinated development of rail transit and bus systems is crucial. The opening of a new rail transit line inevitably reshapes urban travel patterns, posing significant challenges to the existing bus network. Understanding passenger switch behavior is key to optimizing the competition and cooperation between these two modes. However, existing methods for modeling the switch behavior of bus passengers along newly opened rail transit lines cannot balance predictive accuracy and model interpretability. To bridge this gap, we propose a CART (classification and regression tree) decision tree-based switch behavior model that incorporates both predictive and interpretive abilities. This paper uses massive passenger card-swiping data collected before and after the opening of the rail transit line to construct a switch dataset of bus passengers. Subsequently, a data-driven predictive model of passenger switch behavior was established based on a CART decision tree. The experimental findings demonstrate the superiority of the proposed method, with the CART model achieving an overall prediction accuracy of 85%, outperforming traditional logit and other machine learning benchmarks. Moreover, the analysis of factor significance reveals that ‘Transfer times needed after switch’ is the dominant feature (importance: 0.52), and the extracted decision rules provide clear insights into the decision-making mechanisms of bus passengers. Full article
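
A brief sketch of the CART step on synthetic trips: a depth-limited tree predicts whether a bus passenger switches to rail, and feature importances plus exported rules expose the decision mechanism. Feature names echo the abstract, but the data-generating rule is invented.

```python
# Sketch on synthetic trips: a CART tree predicting switching to the new rail
# line, with feature importances and exported decision rules. The data-generating
# rule is invented; only the feature names echo the abstract.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(11)
names = ["transfer_times_after_switch", "in_vehicle_time_saving",
         "access_distance_to_station", "trip_frequency"]
X = np.column_stack([rng.integers(0, 3, 4000),           # transfers needed
                     rng.normal(10, 5, 4000),            # minutes saved
                     rng.uniform(0, 2, 4000),            # km to station
                     rng.integers(1, 20, 4000)])         # trips per week
switch = ((X[:, 0] == 0) & (X[:, 1] > 5) & (X[:, 2] < 1.0)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, switch, random_state=11)
cart = DecisionTreeClassifier(criterion="gini", max_depth=4, random_state=11)
cart.fit(X_tr, y_tr)

print("accuracy:", round(cart.score(X_te, y_te), 3))
print(dict(zip(names, np.round(cart.feature_importances_, 2))))
print(export_text(cart, feature_names=names))            # readable decision rules
```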

37 pages, 12943 KB  
Article
Natural Disaster Information System (NDIS) for RPAS Mission Planning
by Robiah Al Wardah and Alexander Braun
Drones 2025, 9(11), 734; https://doi.org/10.3390/drones9110734 - 23 Oct 2025
Abstract
The rapidly increasing number and performance of Remotely Piloted Aircraft Systems (RPASs) and sensors allow for innovative approaches to monitoring, mitigating, and responding to natural disasters and risks. At present, there are hundreds of different RPAS platforms, and payload sensors are becoming smaller and more affordable. As natural disasters pose ever increasing risks to society and the environment, it is imperative that these RPASs are utilized effectively. In order to exploit these advances, this study presents the development and validation of a Natural Disaster Information System (NDIS), a geospatial decision-support framework for RPAS-based natural hazard missions. The system integrates a global geohazard database with specifications of geophysical sensors and RPAS platforms to automate mission planning in a generalized form. NDIS v1.0 uses decision tree algorithms to select suitable sensors and platforms based on hazard type, distance to infrastructure, and survey feasibility. NDIS v2.0 introduces a Random Forest method and a Critical Path Method (CPM) to further optimize task sequencing and mission timing. The latest version, NDIS v3.8.3, implements a staggered decision workflow that sequentially maps hazard type and disaster stage to appropriate survey methods, sensor payloads, and compatible RPAS using rule-based and threshold-based filtering. RPAS selection considers payload capacity and range thresholds, adjusted dynamically by proximity, and ranks candidate platforms using hazard- and sensor-specific endurance criteria. The system is implemented using ArcGIS Pro 3.4.0, ArcGIS Experience Builder (2025 cloud release), and Azure Web App Services (Python 3.10 runtime). NDIS supports both batch processing and interactive real-time queries through a web-based user interface. Additional features include a statistical overview dashboard to help users interpret dataset distribution, and a crowdsourced input module that enables community-contributed hazard data via ArcGIS Survey123. NDIS is presented and validated through example applications related to volcanic hazards in Indonesia. These capabilities make NDIS a scalable, adaptable, and operationally meaningful tool for multi-hazard monitoring and remote sensing mission planning. Full article
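
The staggered, rule- and threshold-based filtering can be pictured with a small sketch: hazard type and disaster stage select a survey method and sensor payload, candidate RPAS are filtered by payload capacity and out-and-back range, and survivors are ranked by endurance. All table entries and thresholds below are invented placeholders, not NDIS data.

```python
# Schematic version of a staggered, rule/threshold-based mission-planning step.
# Every table entry and threshold here is an invented placeholder, not NDIS data.
HAZARD_TO_SENSOR = {
    ("volcano", "response"): ("thermal mapping", {"sensor": "thermal camera",
                                                  "mass_kg": 0.8}),
    ("flood", "monitoring"): ("photogrammetry", {"sensor": "RGB camera",
                                                 "mass_kg": 0.4}),
}
RPAS = [  # name, payload capacity (kg), range (km), endurance (min)
    ("QuadA", 1.0, 8, 35),
    ("FixedWingB", 2.5, 60, 90),
    ("MicroC", 0.3, 3, 20),
]

def plan(hazard, stage, distance_km):
    """Map hazard/stage to a method and sensor, then filter and rank RPAS."""
    method, payload = HAZARD_TO_SENSOR[(hazard, stage)]
    candidates = [r for r in RPAS
                  if r[1] >= payload["mass_kg"]       # payload threshold
                  and r[2] >= 2 * distance_km]        # out-and-back range
    ranked = sorted(candidates, key=lambda r: r[3], reverse=True)  # by endurance
    return method, payload["sensor"], [r[0] for r in ranked]

print(plan("volcano", "response", distance_km=10))
```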

10 pages, 473 KB  
Article
Framework for In Silico Toxicity Screening of Novel Odorants
by Isaac Mohar, Brad C. Hansen, Destiny M. Hollowed and Joel D. Mainland
Toxics 2025, 13(10), 902; https://doi.org/10.3390/toxics13100902 - 21 Oct 2025
Abstract
Toxicological risk assessment of chemicals without experimental toxicity data often relies on in silico predictions. However, models designed to predict inhalation toxicity associated with exposure to volatile chemicals in solution are unavailable. The aim of this research was to develop an approach to estimate toxicology-based maximum solution concentrations for novel odorants using in silico structure-based predictions. The decision trees were adapted from established open-source models for assessing mutagenicity (rule-based, ISS in vitro mutagenicity decision tree) and systemic toxicity (revised Cramer decision tree). These were implemented using Toxtree (v3.1.0), a freely available program. Thresholds of toxicologic concern (TTC) were then assigned based on the predicted hazard classification. We then used predicted vapor pressure derived from MPBPWIN™ using US EPA EPI Suite to calculate a solution concentration where inhalation exposure to a defined headspace volume would not exceed the TTC. The approach was evaluated using a published dataset of 143 chemicals with repeat exposure inhalation toxicity data, yielding health-protective predictions for 98.6% of the test set. This demonstrates that the proposed in silico approach enables the estimation of safe toxicology-based maximum solution concentrations for chemicals using open-source models and software. Full article
(This article belongs to the Collection Predictive Toxicology)
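
Only the final exposure arithmetic is sketched here; Toxtree's decision trees and the paper's exact exposure model are not reproduced. The saturated headspace concentration follows from the predicted vapour pressure via the ideal gas law, a Raoult's-law-style proportionality to mole fraction is assumed, and the allowed concentration is the one whose inhaled dose stays below the assigned TTC. Every numeric parameter below, including the Cramer Class III-like TTC of 90 µg/day, is a placeholder assumption.

```python
# Back-of-the-envelope sketch of the final exposure arithmetic only; Toxtree's
# decision trees and the paper's exact exposure model are not reproduced, and
# every parameter value below is a placeholder assumption.
R = 8.314          # J/(mol*K)
T = 298.15         # K

def max_solution_fraction(ttc_ug_per_day, vapour_pressure_pa, mol_weight_g,
                          headspace_volume_l=1.0, breaths_of_headspace=10):
    """Largest mole fraction whose inhaled headspace dose stays under the TTC."""
    # Saturated vapour concentration of the pure odorant (ug per litre of air),
    # via the ideal gas law: P*MW/(R*T) in g/m^3, converted to ug/L.
    sat_conc_ug_per_l = vapour_pressure_pa * mol_weight_g / (R * T) * 1e3
    # Dose inhaled per day from the defined headspace at saturation.
    sat_dose_ug = sat_conc_ug_per_l * headspace_volume_l * breaths_of_headspace
    # Raoult's-law-style scaling: dose assumed proportional to mole fraction.
    return min(1.0, ttc_ug_per_day / sat_dose_ug)

# Example: a Cramer Class III-like TTC of 90 ug/day (commonly cited value,
# treated here as an assumption) for an odorant with P_vap = 100 Pa, MW = 150.
print(f"max mole fraction ~ {max_solution_fraction(90, 100, 150):.4f}")
```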
