Search Results (335)

Search Parameters:
Keywords = Decision Tree & Rules

10 pages, 240 KB  
Article
Efficient Modeling of Deterministic Decision Trees for Recognition of Realizable Decision Rules: Bounds on Weighted Depth
by Kerven Durdymyradov and Mikhail Moshkov
Axioms 2025, 14(11), 794; https://doi.org/10.3390/axioms14110794 - 28 Oct 2025
Viewed by 115
Abstract
In this paper, an efficient algorithm for modeling the operation of a DDT (Deterministic Decision Tree) solving the problem of realizability of DRs (Decision Rules) is proposed and analyzed. For this problem, it is assumed that a DRS (Decision Rule System) is given; for an arbitrary tuple of feature values, it is required to recognize whether there is a DR realizable on this tuple, i.e., a DR for which the left-hand side is true on the tuple. It is shown that the weighted depth of the modeled DDT does not exceed the square of the minimum weighted depth of the NDT (Nondeterministic Decision Tree) solving the realizability problem.
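
The realizability check at the heart of this problem can be stated very compactly. The following minimal Python sketch (illustrative only, not the authors' tree-modeling algorithm) tests whether some rule in a hypothetical decision rule system fires on a given tuple of feature values.

# Minimal sketch of the realizability problem described above: given a decision
# rule system (DRS) and a tuple of feature values, decide whether some rule's
# left-hand side is satisfied ("realizable") on that tuple. This illustrates the
# problem statement only, not the paper's DDT-modeling algorithm.

from typing import Dict, List, Tuple

# A rule is (conditions, decision), where conditions maps feature index -> required value.
Rule = Tuple[Dict[int, int], str]

def is_realizable(rules: List[Rule], values: Dict[int, int]) -> bool:
    """Return True if at least one rule's left-hand side is true on `values`."""
    return any(all(values.get(f) == v for f, v in cond.items()) for cond, _ in rules)

drs = [({0: 1, 2: 0}, "yes"), ({1: 1}, "no")]     # hypothetical rule system
print(is_realizable(drs, {0: 1, 1: 0, 2: 0}))     # True: the first rule fires
print(is_realizable(drs, {0: 0, 1: 0, 2: 1}))     # False: no rule fires
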
26 pages, 2949 KB  
Article
Passenger Switch Behavior and Decision Mechanisms in Multimodal Public Transportation Systems
by Zhe Zhang, Wenxie Lin, Tongyu Hu, Qi Cao, Jianhua Song, Gang Ren and Changjian Wu
Systems 2025, 13(11), 951; https://doi.org/10.3390/systems13110951 - 26 Oct 2025
Viewed by 287
Abstract
Efficient public transportation systems are fundamental to achieving sustainable urban development. As the backbone of urban mobility, the coordinated development of rail transit and bus systems is crucial. The opening of a new rail transit line inevitably reshapes urban travel patterns, posing significant challenges to the existing bus network. Understanding passenger switch behavior is key to optimizing the competition and cooperation between these two modes. However, existing methods for modeling the switch behavior of bus passengers along a newly opened rail transit line cannot balance predictive accuracy and model interpretability. To bridge this gap, we propose a CART (classification and regression tree)-based switch behavior model that offers both predictive and interpretive ability. This paper uses massive passenger card-swiping data collected before and after the opening of the rail transit line to construct a switch dataset of bus passengers. Subsequently, a data-driven predictive model of passenger switch behavior is established based on a CART decision tree. The experimental findings demonstrate the superiority of the proposed method, with the CART model achieving an overall prediction accuracy of 85%, outperforming traditional logit and other machine learning benchmarks. Moreover, the analysis of factor significance reveals that ‘Transfer times needed after switch’ is the dominant feature (importance: 0.52), and the extracted decision rules provide clear insights into the decision-making mechanisms of bus passengers.
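
As a rough illustration of the CART workflow summarized above (not the authors' code), the sketch below fits a classification tree on synthetic switch/no-switch data, reports feature importances, and exports if-then rules; the feature names and data are hypothetical placeholders.

# Illustrative CART sketch: fit a classification tree on synthetic trip data,
# inspect feature importances, and print human-readable decision rules.

import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
features = ["transfer_times_after_switch", "travel_time_saving_min", "walk_distance_m"]
X = rng.random((500, 3)) * [3, 30, 1200]            # synthetic trip attributes
y = (X[:, 0] < 1) & (X[:, 1] > 10)                  # synthetic "switch" label

cart = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(dict(zip(features, cart.feature_importances_.round(2))))
print(export_text(cart, feature_names=features))    # if-then rules analogous to those reported
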
37 pages, 12943 KB  
Article
Natural Disaster Information System (NDIS) for RPAS Mission Planning
by Robiah Al Wardah and Alexander Braun
Drones 2025, 9(11), 734; https://doi.org/10.3390/drones9110734 - 23 Oct 2025
Viewed by 399
Abstract
Today’s rapidly increasing number and performance of Remotely Piloted Aircraft Systems (RPASs) and sensors allow for an innovative approach to monitoring, mitigating, and responding to natural disasters and risks. At present, there are hundreds of different RPAS platforms and smaller and more affordable payload sensors. As natural disasters pose ever-increasing risks to society and the environment, it is imperative that these RPASs are utilized effectively. In order to exploit these advances, this study presents the development and validation of a Natural Disaster Information System (NDIS), a geospatial decision-support framework for RPAS-based natural hazard missions. The system integrates a global geohazard database with specifications of geophysical sensors and RPAS platforms to automate mission planning in a generalized form. NDIS v1.0 uses decision tree algorithms to select suitable sensors and platforms based on hazard type, distance to infrastructure, and survey feasibility. NDIS v2.0 introduces a Random Forest method and a Critical Path Method (CPM) to further optimize task sequencing and mission timing. The latest version, NDIS v3.8.3, implements a staggered decision workflow that sequentially maps hazard type and disaster stage to appropriate survey methods, sensor payloads, and compatible RPAS using rule-based and threshold-based filtering. RPAS selection considers payload capacity and range thresholds, adjusted dynamically by proximity, and ranks candidate platforms using hazard- and sensor-specific endurance criteria. The system is implemented using ArcGIS Pro 3.4.0, ArcGIS Experience Builder (2025 cloud release), and Azure Web App Services (Python 3.10 runtime). NDIS supports both batch processing and interactive real-time queries through a web-based user interface. Additional features include a statistical overview dashboard to help users interpret dataset distribution, and a crowdsourced input module that enables community-contributed hazard data via ArcGIS Survey123. NDIS is presented and validated in example applications related to volcanic hazards in Indonesia. These capabilities make NDIS a scalable, adaptable, and operationally meaningful tool for multi-hazard monitoring and remote sensing mission planning.
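
The rule- and threshold-based filtering described for sensor and platform selection can be pictured with a toy Python sketch; the sensor and RPAS entries below are hypothetical placeholders, not NDIS data.

# Toy rule/threshold filter: map a hazard type to candidate sensors, then keep
# RPAS platforms whose payload capacity and range clear the sensor's requirements.

SENSORS = {"volcanic": [{"name": "thermal_cam", "mass_kg": 1.2, "min_range_km": 15}]}
PLATFORMS = [
    {"name": "fixed_wing_A", "payload_kg": 2.0, "range_km": 60},
    {"name": "quadcopter_B", "payload_kg": 0.8, "range_km": 8},
]

def select(hazard: str, distance_km: float):
    """Return (sensor, platform) pairs that satisfy payload and range thresholds."""
    picks = []
    for sensor in SENSORS.get(hazard, []):
        for rpas in PLATFORMS:
            if (rpas["payload_kg"] >= sensor["mass_kg"]
                    and rpas["range_km"] >= max(distance_km, sensor["min_range_km"])):
                picks.append((sensor["name"], rpas["name"]))
    return picks

print(select("volcanic", distance_km=25))   # [('thermal_cam', 'fixed_wing_A')]
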
10 pages, 473 KB  
Article
Framework for In Silico Toxicity Screening of Novel Odorants
by Isaac Mohar, Brad C. Hansen, Destiny M. Hollowed and Joel D. Mainland
Toxics 2025, 13(10), 902; https://doi.org/10.3390/toxics13100902 - 21 Oct 2025
Viewed by 324
Abstract
Toxicological risk assessment of chemicals without experimental toxicity data often relies on in silico predictions. However, models designed to predict inhalation toxicity associated with exposure to volatile chemicals in solution are unavailable. The aim of this research was to develop an approach to estimate toxicology-based maximum solution concentrations for novel odorants using in silico structure-based predictions. The decision trees were adapted from established open-source models for assessing mutagenicity (rule-based, ISS in vitro mutagenicity decision tree) and systemic toxicity (revised Cramer decision tree). These were implemented using Toxtree (v3.1.0), a freely available program. Thresholds of toxicologic concern (TTC) were then assigned based on the predicted hazard classification. We then used predicted vapor pressure derived from MPBPWIN™ using US EPA EPI Suite to calculate a solution concentration where inhalation exposure to a defined headspace volume would not exceed the TTC. The approach was evaluated using a published dataset of 143 chemicals with repeat exposure inhalation toxicity data, yielding health-protective predictions for 98.6% of the test set. This demonstrates that the proposed in silico approach enables the estimation of safe toxicology-based maximum solution concentrations for chemicals using open-source models and software.
(This article belongs to the Collection Predictive Toxicology)
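
The headspace calculation described in the abstract can be sketched in a back-of-envelope form. The snippet below assumes ideal-gas behaviour and linear (Raoult's-law-like) scaling of headspace concentration with mole fraction; the TTC, molecular weight, and exposure volume are placeholders rather than values from the paper.

# Back-of-envelope sketch: from a predicted vapor pressure and an assigned TTC,
# estimate the largest dilution whose headspace exposure stays below the TTC.

R = 8.314          # J/(mol*K)
T = 298.15         # K
MW = 150.0         # g/mol (placeholder odorant)
P_vap = 10.0       # Pa, pure-compound vapor pressure (e.g., predicted by MPBPWIN)
TTC_ug = 90.0      # ug/day, placeholder threshold of toxicological concern
V_inhaled = 0.02   # m^3 of headspace air inhaled per day (placeholder)

# Saturated headspace concentration of the pure compound (mg/m^3), ideal gas.
c_sat = P_vap * MW / (R * T) * 1000.0

# Mole fraction (dilution) at which the daily inhaled mass equals the TTC,
# assuming headspace concentration scales linearly with mole fraction.
x_max = (TTC_ug / 1000.0) / (c_sat * V_inhaled)
print(f"saturated headspace: {c_sat:.2f} mg/m^3; max mole fraction: {x_max:.4f}")
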
17 pages, 414 KB  
Article
DQMAF—Data Quality Modeling and Assessment Framework
by Razan Al-Toq and Abdulaziz Almaslukh
Information 2025, 16(10), 911; https://doi.org/10.3390/info16100911 - 17 Oct 2025
Viewed by 471
Abstract
In today’s digital ecosystem, where millions of users interact with diverse online services and generate vast amounts of textual, transactional, and behavioral data, ensuring the trustworthiness of this information has become a critical challenge. Low-quality data—manifesting as incompleteness, inconsistency, duplication, or noise—not only undermines analytics and machine learning models but also exposes unsuspecting users to unreliable services, compromised authentication mechanisms, and biased decision-making processes. Traditional data quality assessment methods, largely based on manual inspection or rigid rule-based validation, cannot cope with the scale, heterogeneity, and velocity of modern data streams. To address this gap, we propose DQMAF (Data Quality Modeling and Assessment Framework), a generalized machine learning–driven approach that systematically profiles, evaluates, and classifies data quality to protect end-users and enhance the reliability of Internet services. DQMAF introduces an automated profiling mechanism that measures multiple dimensions of data quality—completeness, consistency, accuracy, and structural conformity—and aggregates them into interpretable quality scores. Records are then categorized into high, medium, and low quality, enabling downstream systems to filter or adapt their behavior accordingly. A distinctive strength of DQMAF lies in integrating profiling with supervised machine learning models, producing scalable and reusable quality assessments applicable across domains such as social media, healthcare, IoT, and e-commerce. The framework incorporates modular preprocessing, feature engineering, and classification components using Decision Trees, Random Forest, XGBoost, AdaBoost, and CatBoost to balance performance and interpretability. We validate DQMAF on a publicly available Airbnb dataset, showing its effectiveness in detecting and classifying data issues with high accuracy. The results highlight its scalability and adaptability for real-world big data pipelines, supporting user protection, document and text-based classification, and proactive data governance while improving trust in analytics and AI-driven applications.
(This article belongs to the Special Issue Machine Learning and Data Mining for User Classification)
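
A minimal sketch of the profiling-and-binning idea described above (not the DQMAF implementation) might look as follows; the columns, dimension weights, and thresholds are hypothetical.

# Score records on a few quality dimensions and bin them into high/medium/low.

import pandas as pd

df = pd.DataFrame({
    "price": [120, None, 95, 95, 300],
    "city":  ["Paris", "Paris", "Lyon", "Lyon", ""],
})

completeness = df.notna().mean(axis=1)                   # share of non-missing fields per record
consistency  = (df["city"].str.len() > 0).astype(float)  # trivial conformity check on one field
duplication  = (~df.duplicated()).astype(float)          # 0 for exact duplicate rows

score = 0.5 * completeness + 0.3 * consistency + 0.2 * duplication
df["quality"] = pd.cut(score, bins=[0, 0.6, 0.85, 1.0], labels=["low", "medium", "high"])
print(df)
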
17 pages, 1106 KB  
Article
Calibrated Global Logit Fusion (CGLF) for Fetal Health Classification Using Cardiotocographic Data
by Mehret Ephrem Abraha and Juntae Kim
Electronics 2025, 14(20), 4013; https://doi.org/10.3390/electronics14204013 - 13 Oct 2025
Viewed by 270
Abstract
Accurate detection of fetal distress from cardiotocography (CTG) is clinically critical but remains subjective and error-prone. In this research, we present a leakage-safe Calibrated Global Logit Fusion (CGLF) framework that couples TabNet’s sparse, attention-based feature selection with XGBoost’s gradient-boosted rules and fuses their class probabilities through global logit blending followed by per-class vector temperature calibration. Class imbalance is addressed with SMOTE–Tomek for TabNet and one XGBoost stream (XGB–A), and class-weighted training for a second stream (XGB–B). To prevent information leakage, all preprocessing, resampling, and weighting are fitted only on the training split within each outer fold. Out-of-fold (OOF) predictions from the outer-train split are then used to optimize blend weights and fit calibration parameters, which are subsequently applied once to the corresponding held-out outer-test fold. Our CGLF framework matches top-tier discrimination on the public Fetal Health dataset while producing more reliable probability estimates than strong standalone baselines. Under nested cross-validation, CGLF delivers comparable AUROC and overall accuracy to the best tree-based model, with visibly improved calibration and slightly lower balanced accuracy in some splits. We also provide interpretability and overfitting checks via TabNet sparsity, feature stability analysis, and sufficiency (k95) curves. Finally, threshold tuning under a balanced-accuracy floor preserves sensitivity to pathological cases, aligning operating points with risk-aware obstetric decision support. Overall, CGLF is a calibration-centric, leakage-controlled CTG pipeline that is interpretable and suited to threshold-based clinical deployment.
(This article belongs to the Special Issue Advances in Algorithm Optimization and Computational Intelligence)
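
The fusion step described above can be sketched compactly: blend two models' class probabilities in logit space with a global weight, then apply per-class (vector) temperature scaling before a softmax. The weight and temperatures below are placeholders that, in the paper's setup, would be fitted on out-of-fold predictions.

# Minimal sketch of global logit blending followed by per-class temperature scaling.

import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

p_tabnet = np.array([[0.70, 0.20, 0.10]])    # hypothetical class probabilities, model 1
p_xgb    = np.array([[0.50, 0.30, 0.20]])    # hypothetical class probabilities, model 2

w = 0.6                                       # global blend weight (fit on OOF predictions)
logits = w * np.log(p_tabnet) + (1 - w) * np.log(p_xgb)

temps = np.array([1.2, 0.9, 1.0])             # per-class temperatures (fit on OOF predictions)
p_calibrated = softmax(logits / temps)
print(p_calibrated.round(3))
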
22 pages, 2695 KB  
Article
Modeling Total Alkalinity in Aquatic Ecosystems by Decision Trees: Anticipation of pH Stability and Identification of Main Contributors
by Hichem Tahraoui, Rachida Bouallouche, Kamilia Madi, Oumnia Rayane Benkouachi, Reguia Boudraa, Hadjar Belkacemi, Sabrina Lekmine, Hamza Moussa, Nabil Touzout, Mohammad Shamsul Ola, Zakaria Triki, Meriem Zamouche, Mohammed Kebir, Noureddine Nasrallah, Amine Aymen Assadi, Yacine Benguerba, Jie Zhang and Abdeltif Amrane
Water 2025, 17(20), 2939; https://doi.org/10.3390/w17202939 - 12 Oct 2025
Viewed by 527
Abstract
Total alkalinity (TAC) plays a pivotal role in buffering acid–base fluctuations and maintaining pH stability in aquatic ecosystems. This study presents a data-driven approach to model TAC using decision tree regression, applied to a comprehensive dataset of 454 water samples collected in diverse aquatic environments of the Médéa region, Algeria. Twenty physicochemical parameters, including concentrations of bicarbonates, hardness, major ions, and trace elements, were analyzed as input features. The decision tree algorithm was optimized using the Dragonfly metaheuristic algorithm coupled with 5-fold cross-validation. The optimized model (DT_DA) demonstrated exceptional predictive performance, with a correlation coefficient R of 0.9999, and low prediction errors (RMSE = 0.3957, MAE = 0.3572, and MAPE = 0.4531). External validation on an independent dataset of 68 samples confirmed the model’s robustness (R = 0.9999; RMSE = 0.4223; MAE = 0.3871, and MAPE = 0.4931). The tree structure revealed that total hardness (threshold: 78.5 °F) and bicarbonate concentration (threshold: 421.68 mg/L) were the most influential variables in TAC determination. The model offers not only accurate predictions but also interpretable decision rules, allowing the identification of critical physicochemical thresholds that govern alkalinity. These findings provide a valuable tool for anticipating pH instability and guiding water quality management and protection strategies in freshwater ecosystems.
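
A minimal sketch of the modeling setup, with an ordinary grid search standing in for the Dragonfly metaheuristic used by the authors and synthetic data in place of the Médéa measurements:

# Decision tree regression with 5-fold cross-validated hyperparameter search.

import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.random((454, 20))                                    # 20 physicochemical inputs (placeholder)
y = 200 * X[:, 0] + 50 * X[:, 1] + rng.normal(0, 2, 454)     # synthetic TAC-like target

search = GridSearchCV(
    DecisionTreeRegressor(random_state=0),
    param_grid={"max_depth": [4, 6, 8, None], "min_samples_leaf": [1, 3, 5]},
    cv=5, scoring="neg_root_mean_squared_error",
)
search.fit(X, y)
print(search.best_params_, round(-search.best_score_, 3))    # best settings and CV RMSE
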
21 pages, 6390 KB  
Article
Machine Learning-Based Characterization of Bacillus anthracis Phenotypes from pXO1 Plasmid Proteins
by William Harrigan, Thi Hai Au La, Prashant Dahal, Mahdi Belcaid and Michael H. Norris
Pathogens 2025, 14(10), 1019; https://doi.org/10.3390/pathogens14101019 - 8 Oct 2025
Viewed by 465
Abstract
The Bacillus anthracis pXO1 plasmid, encoding ~143 proteins, presents a compact model for exploring protein function and evolutionary patterns using protein language models. Due to the organism’s slow evolutionary rate, its limited amino acid variation enhances detection of physiologically relevant patterns in plasmid protein composition. In this study, we applied embedding-based analyses and machine learning methods to characterize pXO1 protein modules across diverse B. anthracis lineages. We generated protein sequence embeddings, constructed phylogenies, and compared plasmid content with whole genome variation. While whole genome and plasmid-based phylogenies diverge, the composition of proteins encoded along the pXO1 plasmid revealed lineage-specific structure. Association rule mining combined with decision tree classification produced plasmid-encoded targets for assessing anthrax sublineage, which yielded functionally redundant protein modules that reflected geographic and phylogenetic patterns. A conserved DNA replication module exhibited both shared and B. anthracis lineage-specific features. These results show that pXO1 plasmid protein modules contain biologically meaningful and evolutionarily informative signatures, exemplifying their value in phylogeographic characterizations of bacterial pathogens. This framework can be extended to study additional virulence plasmids across Bacillus and other environmental pathogens using scalable protein language model tools.
(This article belongs to the Section Bacterial Pathogens)
21 pages, 2625 KB  
Article
Interpretable Self-Supervised Learning for Fault Identification in Printed Circuit Board Assembly Testing
by Md Rakibul Islam, Shahina Begum and Mobyen Uddin Ahmed
Appl. Sci. 2025, 15(18), 10080; https://doi.org/10.3390/app151810080 - 15 Sep 2025
Viewed by 486
Abstract
Fault identification in Printed Circuit Board Assembly (PCBA) testing is essential for assuring product quality; nevertheless, conventional methods still have difficulties due to the lack of labeled faulty data and the “black box” nature of advanced models. This study introduces a label-free, interpretable self-supervised framework that uses two pretext tasks: (i) an autoencoder (reconstruction error and two latent features) and (ii) an isolation forest (fault score) to form a four-dimensional representation of each test sequence. A two-component Gaussian Mixture Model then clusters the samples into normal and fault groups. The decision is explained with cluster mean differences, SHAP (LinearSHAP or LinearExplainer on a logistic-regression surrogate), and a shallow decision tree that generates if–then rules. On real PCBA data, internal indices showed compact and well-separated clusters (Silhouette 0.85, Calinski–Harabasz 50,344.19, Davies–Bouldin 0.39), external metrics were high (ARI 0.72; NMI 0.59; Fowlkes–Mallows 0.98), and the cluster assignments, used as a fault predictor, reached 0.98 accuracy, 0.98 precision, and 0.99 recall. Explanations show that the IForest score and reconstruction error drive most decisions, yielding simple thresholds that can guide inspection. An ablation without the self-supervised tasks results in degraded clustering quality. The proposed approach offers accurate, label-free fault prediction with transparent reasoning and is suitable for deployment in industrial test lines.
(This article belongs to the Special Issue AI-Based Machinery Health Monitoring)
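
The cluster-then-explain idea described above can be sketched as follows, with synthetic data, a distance-based proxy in place of the autoencoder's reconstruction error, and no claim to reproduce the authors' pipeline.

# Cluster a low-dimensional representation (IsolationForest score plus a proxy
# "reconstruction error") with a two-component Gaussian Mixture, then explain
# the clusters with a shallow decision tree surrogate.

import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.mixture import GaussianMixture
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
normal = rng.normal(0, 1, size=(950, 4))
faulty = rng.normal(4, 1, size=(50, 4))
X_raw = np.vstack([normal, faulty])

iforest_score = -IsolationForest(random_state=1).fit(X_raw).score_samples(X_raw)
recon_error = np.linalg.norm(X_raw - X_raw.mean(axis=0), axis=1)   # stand-in for an autoencoder
X = np.column_stack([iforest_score, recon_error])

labels = GaussianMixture(n_components=2, random_state=1).fit_predict(X)

surrogate = DecisionTreeClassifier(max_depth=2).fit(X, labels)
print(export_text(surrogate, feature_names=["iforest_score", "recon_error"]))  # if-then thresholds
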
25 pages, 5281 KB  
Article
Detection and Mitigation in IoT Ecosystems Using oneM2M Architecture and Edge-Based Machine Learning
by Yu-Yong Luo, Yu-Hsun Chiu and Chia-Hsin Cheng
Future Internet 2025, 17(9), 411; https://doi.org/10.3390/fi17090411 - 8 Sep 2025
Viewed by 466
Abstract
Distributed denial-of-service (DDoS) attacks are a prevalent threat to resource-constrained IoT deployments. We present an edge-based detection and mitigation system integrated with the oneM2M architecture. Using a Raspberry Pi 4 client and five Raspberry Pi 3 attack nodes in a smart-home testbed, we collected 200,000 packets with 19 features across four traffic states (normal, SYN/UDP/ICMP floods), trained Decision Tree, 2D-CNN, and LSTM models, and deployed the best model on an edge computer for real-time inference. The edge node classifies traffic and triggers per-attack defenses on the device (SYN cookies, UDP/ICMP iptables rules). On a held-out test set, the 2D-CNN achieved 98.45% accuracy, outperforming the LSTM (96.14%) and Decision Tree (93.77%). In end-to-end trials, the system sustained service during SYN floods (time to capture 200 packets increased from 5.05 s to 5.51 s after enabling SYN cookies), mitigated ICMP floods via rate limiting, and flagged UDP floods for administrator intervention due to residual performance degradation. These results show that lightweight, edge-deployed learning with targeted controls can harden oneM2M-based IoT systems against common DDoS vectors.
(This article belongs to the Special Issue DDoS Attack Detection for Cyber–Physical Systems)
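
The per-attack response mapping can be sketched as a simple lookup on the classifier's output; the commands below are standard Linux examples shown as strings only, not the authors' deployment scripts, and the mapping itself is a hypothetical illustration.

# Map a classified traffic state to the mitigation that would be triggered.

MITIGATIONS = {
    "syn_flood":  "sysctl -w net.ipv4.tcp_syncookies=1",                 # enable SYN cookies
    "icmp_flood": "iptables -A INPUT -p icmp --icmp-type echo-request "
                  "-m limit --limit 1/s -j ACCEPT",                       # rate-limit accept; excess dropped by a following rule (not shown)
    "udp_flood":  "notify-administrator",                                 # flagged for manual intervention, as in the paper
    "normal":     None,
}

def respond(predicted_state: str) -> None:
    """Print the mitigation that would be applied for a classified traffic window."""
    action = MITIGATIONS.get(predicted_state)
    if action:
        print(f"[edge] {predicted_state}: would run -> {action}")
    else:
        print(f"[edge] {predicted_state}: no action")

respond("syn_flood")
respond("normal")
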
10 pages, 258 KB  
Article
Three Problems for Decision Rule Systems from Closed Classes
by Kerven Durdymyradov and Mikhail Moshkov
Axioms 2025, 14(8), 648; https://doi.org/10.3390/axioms14080648 - 21 Aug 2025
Viewed by 306
Abstract
The study of the relationships between DRSs (Decision Rule Systems) and DTs (Decision Trees) is of considerable interest in computer science. In this paper, we consider classes of DRSs that are closed under specific operations. First, we examine classes that are closed under the operation of the removal of features and analyze the functions characterizing the worst-case dependence of the minimum depth of DDTs (Deterministic Decision Trees) and NDTs (Nondeterministic Decision Trees), solving the task of finding all true DRs in a DRS on the number of different features in the system. Second, we extend our analysis to classes that are closed under the removal of features and rules, studying the worst-case behavior of the minimum DT depth for the task of finding at least one true DR. Third, we investigate classes closed under the removal of features and rules in the context of finding all right-hand sides of true DRs. We prove that, in all three cases, the corresponding functions characterizing the worst-case minimum depth of DTs are either bounded from above by a constant or grow linearly.
13 pages, 1357 KB  
Article
Decision Tree Modeling to Predict Myopia Progression in Children Treated with Atropine: Toward Precision Ophthalmology
by Jun-Wei Chen, Chi-Jie Lu, Chieh-Han Yu, Tzu-Chi Liu and Tzu-En Wu
Diagnostics 2025, 15(16), 2096; https://doi.org/10.3390/diagnostics15162096 - 20 Aug 2025
Viewed by 879
Abstract
Background/Objectives: Myopia is a growing global health concern, especially among school-aged children in East Asia. Topical atropine is a key treatment for pediatric myopia control, but individual responses vary, with some children showing rapid progression despite higher doses. This retrospective observational study aims to develop an interpretable machine learning model to predict individualized treatment responses and support personalized clinical decisions, based on data collected over a 3-year period without a control group. Methods: A total of 1545 pediatric eyes treated with topical atropine for myopia control at a single tertiary medical center are analyzed. A classification and regression tree (CART) is constructed to predict changes in spherical equivalent (SE) and identify influencing risk factors. These factors mainly describe the myopia treatments received, including atropine dosage records, treatment duration, and ophthalmic examinations. Furthermore, decision rules that closely resemble the clinical diagnosis process are provided to assist clinicians with more interpretable insights into personalized treatment decisions. The performance of CART is evaluated against a least absolute shrinkage and selection operator regression (Lasso) benchmark to confirm its practicality. Results: Both the CART and Lasso models demonstrated comparable predictive performance. The CART model identified baseline SE as the primary determinant of myopia progression. Children with a baseline SE more negative than −3.125 D exhibited greater myopic progression, particularly those with prolonged treatment duration and higher cumulative atropine dosage. Conclusions: Baseline SE is identified as the key factor affecting the change in SE. The generated decision rules from CART demonstrate the use of explainable machine learning in precision myopia management.
28 pages, 8921 KB  
Article
LUNTIAN: An Agent-Based Model of an Industrial Tree Plantation for Promoting Sustainable Harvesting in the Philippines
by Zenith Arnejo, Benoit Gaudou, Mehdi Saqalli and Nathaniel Bantayan
Forests 2025, 16(8), 1293; https://doi.org/10.3390/f16081293 - 8 Aug 2025
Viewed by 937
Abstract
Industrial tree plantations (ITPs) are increasingly recognized as a sustainable response to deforestation and the decline in native wood resources in the Philippines. This study presents LUNTIAN (Labor, UNiversity, Timber Investment, and Agent-based Nexus), an agent-based model that simulates an experimental ITP operation within a mountain forest managed by the University of the Philippines Los Baños. The model integrates biophysical processes—such as tree growth, hydrology, and stand dynamics—with socio-economic components such as investment decision making based on risk preferences, employment allocation influenced by local labor availability, and informal harvesting behavior driven by job scarcity. These are complemented by institutional enforcement mechanisms such as forest patrolling, reflecting the complex interplay between financial incentives and rule compliance. To assess the model’s validity, its outputs were compared to those of the 3PG forest growth model, with results demonstrating alignment in growth trends and spatial distributions, thereby supporting LUNTIAN’s potential to represent key ecological dynamics. Sensitivity analysis identified investor earnings share and community member count as significant factors influencing net earnings and management costs. Parameter calibration using the Non-dominated Sorting Genetic Algorithm yielded an optimal configuration that ensured profitability for resource managers, investors, and community-hired laborers while minimizing unauthorized independent harvesting. Notably, even with continuous harvesting during a 17-year rotation, the final tree population increased by 55%. These findings illustrate the potential of LUNTIAN to support the exploration of sustainable ITP management strategies in the Philippines by offering a robust framework for analyzing complex social–ecological interactions.
(This article belongs to the Section Forest Operations and Engineering)
16 pages, 2222 KB  
Article
Integration of Data Analytics and Data Mining for Machine Failure Mitigation and Decision Support in Metal–Mechanical Industry
by Sidnei Alves de Araujo, Silas Luiz Bomfim, Dimitria T. Boukouvalas, Sergio Ricardo Lourenço, Ugo Ibusuki and Geraldo Cardoso de Oliveira Neto
Logistics 2025, 9(3), 109; https://doi.org/10.3390/logistics9030109 - 7 Aug 2025
Cited by 1 | Viewed by 797
Abstract
Background: The growing complexity of production processes in the metal–mechanical industry demands ever more effective strategies for managing machine and equipment maintenance, as unexpected failures can incur high operational costs and compromise productivity by interrupting workflows and delaying deliveries. However, few studies have combined end-to-end data analytics and data mining methods to proactively predict and mitigate such failures. This study aims to develop and validate a comprehensive framework combining data analytics and data mining to prevent machine failures and support decision-making in a metal–mechanical manufacturing environment. Methods: First, exploratory data analytics were performed on the sensor and logistics data to identify significant relationships and trends between variables. Next, a preprocessing pipeline including data cleaning, data transformation, feature selection, and resampling was applied. Finally, a decision tree model was trained to identify conditions prone to failures, enabling not only predictions but also the explicit representation of knowledge in the form of decision rules. Results: The decision tree, trained on the preprocessed data and guided by the insights produced by data analytics, achieved strong performance (82.1% accuracy and a Kappa index of 78.5%), demonstrating its ability to generate reliable failure-prediction rules that support decision-making. The implementation of the proposed framework enables the optimization of predictive maintenance strategies, effectively reducing unplanned downtimes and enhancing the reliability of production processes in the metal–mechanical industry.
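
A small sketch of the kind of evaluation reported above (accuracy plus Cohen's kappa for a decision tree classifier), using synthetic stand-ins for the preprocessed sensor and logistics features:

# Train a decision tree on synthetic failure data and report accuracy and kappa.

import numpy as np
from sklearn.metrics import accuracy_score, cohen_kappa_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(42)
X = rng.random((1000, 6))                                                     # placeholder features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.1, 1000) > 0.9).astype(int)    # 1 = failure-prone

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42, stratify=y)
clf = DecisionTreeClassifier(max_depth=4, random_state=42).fit(X_tr, y_tr)
pred = clf.predict(X_te)
print("accuracy:", round(accuracy_score(y_te, pred), 3),
      "kappa:", round(cohen_kappa_score(y_te, pred), 3))
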
17 pages, 2649 KB  
Article
Four-Dimensional Hyperspectral Imaging for Fruit and Vegetable Grading
by Laraib Haider Naqvi, Badrinath Balasubramaniam, Jiaqiong Li, Lingling Liu and Beiwen Li
Agriculture 2025, 15(15), 1702; https://doi.org/10.3390/agriculture15151702 - 6 Aug 2025
Viewed by 1188
Abstract
Reliable, non-destructive grading of fresh fruit requires simultaneous assessment of external morphology and hidden internal defects. Camera-based grading of fresh fruit using colorimetric (RGB) and near-infrared (NIR) imaging often misses subsurface bruising and cannot capture the fruit’s true shape, leading to inconsistent quality assessment and increased waste. To address this, we developed a 4D-grading pipeline that fuses visible and near-infrared (VNIR) and short-wave infrared (SWIR) hyperspectral imaging with structured-light 3D scanning to non-destructively evaluate both internal defects and external form. Our contributions are (1) flagging the defects in fruits based on the reflectance information, (2) accurate shape and defect measurement based on the 3D data of fruits, and (3) an interpretable, decision-tree framework that assigns USDA-style quality (Premium, Grade 1/2, Reject) and size (Small–Extra Large) labels. We demonstrate this approach through preliminary results, suggesting that 4D hyperspectral imaging may offer advantages over single-modality methods by providing clear, interpretable decision rules and the potential for adaptation to other produce types.