Search Results (2,868)

Search Parameters:
Keywords = intelligent decision-making model

34 pages, 5833 KB  
Article
High-Level Synthesis-Based FPGA Hardware Accelerator for Generalized Hebbian Learning Algorithm for Neuromorphic Computing
by Shivani Sharma and Darshika G. Perera
Electronics 2026, 15(8), 1725; https://doi.org/10.3390/electronics15081725 (registering DOI) - 18 Apr 2026
Abstract
With the advent of AI and the smart systems era, neuromorphic computing will be imperative to support next-generation AI-related applications. Existing intelligent systems (such as smart cities and robotics) face many challenges and requirements, including high performance, adaptability, scalability, dynamic decision-making, and low power. Neuromorphic computing is emerging as a complementary solution to address these challenges and requirements of next-gen intelligent systems. Neuromorphic computing offers many traits, such as adaptive, low-power, scalable, and parallel computing, that satisfy the requirements of future intelligent systems. Innovative solutions (in terms of models, architectures, and techniques) are needed to overcome several challenges hindering the advancement of neuromorphic computing in support of next-gen intelligent systems. In this research work, we introduce a novel and efficient FPGA-HLS-based hardware accelerator for the Generalized Hebbian learning algorithm (GHA) for neuromorphic computing applications. We decided to focus on GHA since it has been demonstrated that GHA enables online and incremental learning and provides a hardware-efficient unsupervised learning framework that aligns closely with the principles of biological adaptation, traits that are vital for neuromorphic computing applications. In addition, our previous work showed that FPGAs have many features, such as low power, customized circuits, parallel computing capabilities, low latency, and especially an adaptive nature, which make FPGAs suitable for neuromorphic computing applications. We propose two different versions of the FPGA-HLS-based GHA hardware accelerator: one memory-mapped interface-based and the other streaming interface-based. Our streaming-interface FPGA-HLS-based GHA hardware IP achieves up to a 51.13× speedup compared to its embedded-software counterpart, while meeting the small-area and low-power requirements of neuromorphic computing applications. Our experimental results show great potential in utilizing FPGA-based architectures to support neuromorphic computing applications. Full article
19 pages, 2476 KB  
Article
Machine Learning and Geographic Information Systems for Aircraft Route Analysis in Large-Scale Airport Transportation Networks
by Saadi Turied Kurdi, Luttfi A. Al-Haddad and Zeashan Hameed Khan
Computers 2026, 15(4), 255; https://doi.org/10.3390/computers15040255 (registering DOI) - 18 Apr 2026
Abstract
This study proposes a scalable, AI-driven, and Geographic Information System (GIS)-integrated framework for intelligent route-level classification in large-scale airport transportation networks to support airport operations, logistics planning, and network-level decision-making. The framework addresses the need for practical artificial intelligence applications that combine spatial network analysis with supervised machine learning to improve route assessment and resource allocation in complex air transport systems. A structured dataset was developed using operational and traffic-related attributes, including route distance, aircraft capacity, weekly frequency, annual passenger volume, demand variability, and route performance indicators, with additional normalized features to improve data representation. A Gradient Boosting ensemble classifier was trained to categorize routes into high-, medium-, and low-priority classes. The model achieved strong predictive performance, with a testing area under the ROC curve of 0.961, accuracy of 0.922, F1-score of 0.915, precision of 0.918, and a recall of 0.922. Feature importance analysis identified demand variability and route-density indicators as the main drivers of classification, enhancing interpretability and practical trust. The proposed framework demonstrates the real-world potential of AI for scalable, explainable, and efficient decision support in airport logistics and transportation network management. Full article
(This article belongs to the Special Issue AI in Action: Innovations and Breakthroughs)
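The route-priority classification described in this abstract can be sketched with a Gradient Boosting ensemble; a minimal illustration, assuming synthetic features and labels (the feature names echo the abstract, but the data, label rule, and model settings are illustrative, not the authors' dataset):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 600
# Hypothetical route-level features: distance, aircraft capacity, weekly
# frequency, annual passengers, demand variability (columns 0..4).
X = rng.normal(size=(n, 5))
# Synthetic priority label (0=low, 1=medium, 2=high) driven mainly by
# demand variability, mirroring the paper's feature-importance finding.
y = np.digitize(X[:, 4] + 0.3 * X[:, 2], bins=[-0.5, 0.5])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
```

With default settings the classifier recovers the synthetic priority rule well above chance; the real framework would replace the synthetic matrix with GIS-derived route attributes.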
20 pages, 783 KB  
Article
A Machine Learning Framework for Prognostic Modeling in Stage III Colon Cancer
by Rümeysa Sungur, Selin Aktürk Esen, Hilal Arslan, Sevil Uygun İlikhan, Hatice Rüveyda Akça, Efnan Algın, Öznur Bal, Şebnem Yaman and Doğan Uncu
J. Clin. Med. 2026, 15(8), 3091; https://doi.org/10.3390/jcm15083091 - 17 Apr 2026
Abstract
Objective: To evaluate overall survival and to identify clinical, pathological, and demographic factors associated with survival in patients with stage III colon cancer. Methods: This retrospective cross-sectional study included 452 patients with stage III colon cancer who were followed at Ankara Bilkent City Hospital between 2005 and 2025. Patient data, including age, sex, ECOG performance status, comorbidities, tumor characteristics, treatment-related toxicities, and recurrence, were analyzed using PASW Statistics 18.0 (SPSS Inc., Chicago, IL, USA). Kaplan–Meier and log-rank tests were used for survival analysis. Prognostic factors, survival, mortality, and recurrence predictions were evaluated using machine learning algorithms, including coarse tree, bagged trees, support vector machines, and k-nearest neighbors. Furthermore, an explainable artificial intelligence framework was incorporated to improve model transparency and reveal clinically meaningful feature contributions. Model performance was assessed using accuracy, sensitivity, specificity, and F-score. Results: According to statistical analyses, older age, ECOG performance score ≥ 2, stage IIIC disease, N2-level lymph node metastasis, and the presence of comorbidities—particularly diabetes mellitus—were significantly associated with worse survival (p < 0.05). Machine learning analyses identified key prognostic factors, including positive surgical margins, rash, mucositis, thrombocytopenia, number of chemotherapy cycles, pathological tumor subtype, diarrhea, age at diagnosis, and anemia. SHAP analysis further demonstrated that treatment-related variables, particularly surgical margin positivity and chemotherapy-associated toxicities, were among the most influential predictors of survival. Several machine learning models outperformed traditional statistical methods in predicting mortality and recurrence, with the highest accuracy observed in ensemble methods such as coarse tree (87%) and bagged trees. 
Conclusions: This study identifies key prognostic factors influencing survival in stage III colon cancer and demonstrates that machine learning-based approaches can complement conventional statistical methods. The integration of clinical and treatment-related variables may improve individualized risk stratification and support clinical decision-making. These findings may also guide future large-scale, multicenter, and prospective studies. Full article
(This article belongs to the Section Oncology)
24 pages, 11332 KB  
Article
Intelligent Optimization Methods for Cloud–Edge Collaborative Vehicular Networks via the Integration of Bayesian Decision-Making and Reinforcement Learning
by Youjian Yu, Zhaowei Song, Sifeng Zhu and Qinghua Zhang
Future Internet 2026, 18(4), 215; https://doi.org/10.3390/fi18040215 - 17 Apr 2026
Abstract
To improve vehicle user service quality and address data privacy and security issues in intelligent transportation vehicle networking systems, a three-tier communication architecture with cloud-edge-end collaboration was designed in this paper. A Bayesian decision criterion was utilized to divide user data segments into fine-grained slices based on their privacy levels, and differential privacy techniques were applied to protect the offloaded data. To achieve multi-objective optimization between user service quality and data privacy and security, the problem was formulated as a constrained Markov decision process. A communication model, a caching model, a latency model, an energy consumption model, and a data-fragment privacy protection model were designed. Additionally, a deep reinforcement learning algorithm based on the actor–critic approach was proposed for the collaborative and centralized training of multiple intelligent agents (CTMA-AC), enabling multi-objective optimization decision-making for the protection of offloaded private user data. Simulation experiments demonstrate that the proposed multi-agent collaborative privacy data offloading protection strategy can effectively safeguard private user data while ensuring high service quality. Full article
(This article belongs to the Section Network Virtualization and Edge/Fog Computing)
22 pages, 6370 KB  
Article
Interpretable Data-Driven Prediction, Optimization, and Decision-Making for Coking Coal Flotation
by Ying Wang and Deqian Cui
Processes 2026, 14(8), 1289; https://doi.org/10.3390/pr14081289 - 17 Apr 2026
Abstract
Coking coal flotation is a typical nonlinear, multi-variable, and multi-objective process in which concentrate quality and combustible matter recovery must be balanced under fluctuating feed and operating conditions. To improve both predictive reliability and decision support, this study proposes an integrated data-driven framework that combines particle swarm optimization-back propagation (PSO-BP) prediction, SHapley Additive exPlanations (SHAP) based interpretation, Non-dominated Sorting Genetic Algorithm II (NSGA-II) optimization, and entropy-weighted Technique for Order Preference by Similarity to Ideal Solution (Entropy-TOPSIS) decision-making. After three-sigma outlier screening, 2000 valid distributed control system (DCS) samples were retained for model development and temporal holdout evaluation, and an additional 200 later-period industrial samples were used for independent validation. The data were partitioned chronologically, with months 1–4, month 5, and month 6 used for training, validation, and temporal holdout testing, respectively, while the months 7–8 dataset was reserved for later-period validation. The results show that PSO-BP consistently outperformed conventional BP under both temporal holdout and later-period validation. SHAP analysis identified raw coal ash and collector dosage as the dominant factors for product-quality prediction, while collector dosage and frother dosage contributed most strongly to tailing heat of combustion. NSGA-II further revealed the trade-off among clean coal ash, clean coal sulfur, and tailing heat of combustion, and Entropy-TOPSIS converted the Pareto-optimal candidate set into a practically balanced operating recommendation. Sensitivity and robustness analyses indicated acceptable stability of both the optimization process and the final decision result. 
Overall, the proposed framework provides an interpretable prediction–optimization–decision workflow for coking coal flotation and offers a practical basis for future DCS-assisted intelligent regulation. Full article
(This article belongs to the Special Issue Mineral Processing Equipments and Cross-Disciplinary Approaches)
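The Entropy-TOPSIS decision step named in this abstract can be sketched in a few lines; a minimal numpy illustration, assuming a made-up three-candidate Pareto set (rows = solutions; columns = clean coal ash, clean coal sulfur, tailing heat of combustion, all treated as cost-type criteria to be minimized):

```python
import numpy as np

def entropy_topsis(M, cost=None):
    """Rank alternatives (rows) by entropy-weighted TOPSIS closeness."""
    M = np.asarray(M, dtype=float)
    m, k = M.shape
    cost = np.zeros(k, bool) if cost is None else np.asarray(cost, bool)
    # Entropy weights from the column-normalized matrix.
    P = M / M.sum(axis=0)
    E = -(P * np.log(P + 1e-12)).sum(axis=0) / np.log(m)
    w = (1 - E) / (1 - E).sum()
    # Vector-normalized, weighted decision matrix.
    V = w * M / np.sqrt((M ** 2).sum(axis=0))
    ideal = np.where(cost, V.min(axis=0), V.max(axis=0))
    anti = np.where(cost, V.max(axis=0), V.min(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)  # closeness in [0, 1]; higher is better

# Three hypothetical Pareto candidates (values are illustrative only).
candidates = [[10.2, 0.62, 3.1],
              [ 9.8, 0.70, 3.4],
              [10.6, 0.58, 2.9]]
closeness = entropy_topsis(candidates, cost=[True, True, True])
best = int(np.argmax(closeness))
```

Entropy weighting favors criteria that vary more across the candidates; the closeness score then ranks each candidate by its distance to the ideal and anti-ideal points, converting a Pareto set into a single operating recommendation.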
22 pages, 2585 KB  
Article
Enhancing Supply Chain Resilience in Textile SMEs: A Human-Centric Customer-to-Manufacturer Framework Using Public E-Commerce Data
by Chien-Chih Wang, Yu-Teng Hsu and Hsuan-Yu Kuo
J. Theor. Appl. Electron. Commer. Res. 2026, 21(4), 123; https://doi.org/10.3390/jtaer21040123 - 17 Apr 2026
Abstract
Upstream textile small and medium-sized enterprises (SMEs) frequently exhibit constrained supply chain resilience owing to persistent information latency and structural dependence on downstream orders. To address these challenges, this study develops and validates a customer-to-manufacturer (C2M) intelligence framework that enables data-driven production planning using publicly available e-commerce data. The framework incorporates ethically compliant acquisition of consumer demand signals, semantic translation of unstructured market data into textile engineering attributes, machine-learning-based demand forecasting, and human-centric decision support. Utilizing 3.87 million consumer comments from 127,846 product listings, a Neural Boosted Tree model with entity embeddings for textile attributes was constructed. This model achieved a mean R2 of 0.921 in cross-validation, surpassing benchmark methods. Consumer comment volume was validated as a proxy for sales activity, facilitating demand estimation. Forecasts were translated into production guidance using Monte Carlo simulation and a decision dashboard. In a 12-month field study at a Taiwanese dyeing SME, implementation resulted in a 28% reduction in inventory value, a 31% decrease in dye lot changeovers, and a 16% increase in capacity utilization. This research extends the C2M paradigm from downstream retail contexts to upstream textile SMEs, proposes an integrated and operationally feasible intelligence framework for resource-constrained manufacturers, and demonstrates how digital intelligence can enhance supply chain resilience while supporting, rather than replacing, human decision-making. The results indicate that upstream textile SMEs can leverage publicly visible e-commerce signals to enhance production planning responsiveness, minimize inventory exposure and dye-lot disruptions, and strengthen resilience to demand uncertainty through planner-centered digital decision support. Full article
(This article belongs to the Section Data Science, AI, and e-Commerce Analytics)
32 pages, 5970 KB  
Systematic Review
Reframing BIM and Digital Twins for Intelligent Built Environments
by Abdullahi Abdulrahman Muhudin, Md Shafiullah, Baqer Al-Ramadan, Mohammad Sharif Zami, Mohammad Tahir Zamani and Lazhari Herzallah
Smart Cities 2026, 9(4), 71; https://doi.org/10.3390/smartcities9040071 - 17 Apr 2026
Abstract
The integration of Building Information Modeling (BIM) and Digital Twins (DT) has emerged as a central driver of digital transformation in the architecture, engineering, and construction sector. Yet, its systemic impact remains constrained by conceptual fragmentation and uneven institutional adoption. This study synthesizes contemporary BIM–DT research to identify dominant technological and application dimensions, examine the governance conditions shaping scalability, and develop an analytical framework that advances understanding beyond technology-centered syntheses. A two-stage analytical design was employed, combining bibliometric keyword co-occurrence analysis of 1295 Scopus-indexed records with systematic qualitative synthesis of 56 peer-reviewed journal articles published between 2020 and 2025, following PRISMA guidelines. Six interrelated analytical dimensions characterize the current BIM–DT research landscape: BIM–DT integration advancements and applications; interoperability and visualization; safety enhancement; energy efficiency; data-driven decision making; and stakeholder collaboration. Across these dimensions, a persistent misalignment emerges between technological capability and organizational readiness, with deficiencies in standards, governance, and sociotechnical coordination constituting the principal barriers to large-scale deployment. The findings reframe BIM–DT convergence not as a discrete technological upgrade but as the emergence of a coordinated socio-technical information ecosystem spanning the full building lifecycle. By foregrounding governance conditions, data stewardship, and institutional coordination, this study extends understanding of how digital twins expand BIM from design coordination to operational governance and establishes a foundation for more systematic implementation of intelligent, resilient, and sustainable built-environment systems. Full article
(This article belongs to the Section Buildings in Smart Cities)
24 pages, 1136 KB  
Review
Explainable Deep Learning for Research on the Synergistic Mechanisms of Multiple Pollutants: A Critical Review
by Chang Liu, Anfei He, Jie Gu, Mulan Ji, Jie Hu, Shufeng Qiao, Fenghe Wang, Jing Hua and Jian Wang
Toxics 2026, 14(4), 335; https://doi.org/10.3390/toxics14040335 - 16 Apr 2026
Abstract
The synergistic control of multiple pollutants is critically challenged by complex nonlinear interactions, strong spatiotemporal heterogeneity, and the difficulty of tracing causal drivers. Deep learning offers high predictive power but suffers from the “black-box” problem, limiting its acceptance in environmental decision-making. Explainable Deep Learning (XDL) integrates physical mechanisms with interpretable algorithms, achieving both prediction accuracy and explanatory transparency. This review systematically evaluates the effectiveness and limitations of XDL in analyzing multi-pollutant interactions, with a comparative focus on atmospheric and aquatic environments. Key techniques, including SHAP, attention mechanisms, and physics-informed neural networks, are examined for their roles in synergistic monitoring, source apportionment, and regulatory optimization. The main findings reveal that: (1) XDL, particularly the “tree model + SHAP” paradigm, has become a dominant tool for quantifying driving factors, yet most attributions remain correlational rather than causal; (2) physics-informed fusion (soft vs. hard constraints) improves physical consistency but faces unresolved conflicts between data and physical laws, with current models lacking a conflict detection mechanism; (3) cross-media comparison shows a unified technical logic of “physical mechanism guidance + post hoc feature attribution”, but atmospheric applications lead in embedding advection–diffusion constraints, while aquatic research excels in spatial topology modeling via graph neural networks; (4) critical bottlenecks include the lack of causal inference, uncertainty-unaware interpretations, and data scarcity. Future directions demand a shift from correlation-only to causal-aware attribution, from blind fusion to conflict-detecting systems, and from no evaluation standards to domain-specific validation benchmarks. 
XDL is poised to transform multi-pollutant governance from experience-driven to intelligence-driven approaches, provided that verifiable interpretability and physical consistency become core design principles. Full article
20 pages, 2493 KB  
Article
Non-Destructive Determination of Moisture Content in White Tea During Withering Using VNIR Spectroscopy and Ensemble Modeling
by Qinghai He, Hongkai Shen, Zhiyuan Liu, Benxue Ma, Yong He, Zhi Lin, Weihong Liu, Pei Wang, Xiaoli Li and Peng Qi
Horticulturae 2026, 12(4), 488; https://doi.org/10.3390/horticulturae12040488 - 16 Apr 2026
Abstract
White tea is one of the six major traditional tea types in China, and its quality formation is primarily influenced by the withering process. However, traditional methods for monitoring withering fail to achieve precise and stable control of moisture content. To address this issue, a total of 650 samples were collected at 13 withering time points (0–36 h), and the dataset was split into training and test sets at a 7:3 ratio. This study proposes a PRXBoost ensemble model for quantitative detection of the moisture content of withered white tea, which integrates data augmentation and intelligent algorithms. The ensemble model uses a Bagging-based weighted integration technique to combine Partial Least Squares Regression (PLSR), Ridge, and Extreme Gradient Boosting (XGBoost) models, and it conducts an in-depth analysis of the decision-making process within the PRXBoost model. First, the effectiveness of the data augmentation strategy and the superiority of the gradient descent algorithm are verified through pre-modeling based on the PLSR model and hyperparameter pre-search using the XGBoost model, respectively. Additionally, Bayesian optimization is employed to optimize the weights of the sub-models, further enhancing the overall predictive performance. The results show that the PRXBoost model achieved the best performance among the compared models on the test set, with R2 = 0.854 and RMSE = 0.080, exceeding the highest R2 of a single model by 6%. These results indicate that PRXBoost provided improved predictive performance for moisture estimation within the current dataset. Finally, the SHapley Additive exPlanations (SHAP) algorithm is used to analyze the influence of each input feature on the prediction results, successfully identifying the 1916 nm and 1453 nm spectral bands as significant influencers of the prediction outcomes. 
These results suggest that the proposed model can support rapid, non-destructive monitoring of moisture evolution and provide actionable information for withering endpoint decision control. Full article
26 pages, 956 KB  
Article
Environment-Guided Multimodal Pest Detection and Risk Assessment in Fruit and Vegetable Production Systems
by Jiapeng Sun, Yucheng Peng, Zhimeng Zhang, Wenrui Xu, Boyuan Xi, Yuanying Zhang and Yihong Song
Horticulturae 2026, 12(4), 486; https://doi.org/10.3390/horticulturae12040486 - 16 Apr 2026
Abstract
To address the practical challenge that pest occurrence in fruit and vegetable horticultural production exhibits strong environmental dependency, pronounced stage characteristics, and high sensitivity to control decision-making, a multimodal method that jointly models pest recognition and occurrence risk is proposed, overcoming the limitation that conventional intelligent plant protection systems focus primarily on pest identification while lacking risk discrimination capability. Within a unified network framework, pest visual information and environmental temporal data are integrated through the construction of an environment-guided representation learning mechanism, a recognition–risk joint optimization strategy, and a risk-aware decision representation modeling structure. In this manner, pest category recognition and occurrence risk evaluation are conducted simultaneously, thereby providing direct decision support for precision prevention and control in fruit and vegetable production. Systematic experimental evaluation is conducted based on multi-crop and multi-year field data collected from Wuyuan County, Bayannur City, Inner Mongolia. Overall comparative results demonstrate that an identification accuracy of 0.947, a precision of 0.936, and a recall of 0.924 are achieved on the test set, all of which significantly outperform mainstream visual detection models such as YOLOv8, DETR, and Mask R-CNN. In terms of detection performance, mAP@50 and mAP@75 reach 0.962 and 0.821, respectively, indicating stable localization and discrimination capability under complex backgrounds and dense small-target conditions. For the occurrence risk discrimination task, a risk accuracy of 0.887 is obtained, representing an improvement of approximately 4.5 percentage points compared with the simple multimodal feature concatenation method. 
Cross-crop, cross-site, and cross-year generalization experiments further show that risk accuracy remains above 0.84 with stable recognition performance under significant distribution shifts. Ablation studies verify the synergistic contributions of the proposed core modules to overall performance improvement. The results indicate that the proposed framework enables the transition from single recognition to risk-driven plant protection decision-making, providing a technically viable pathway for pest diagnosis and control strategy optimization in fruit and vegetable horticulture. Full article
25 pages, 1271 KB  
Review
Recent Advances for Generative AI-Enabled Unmanned Aerial Vehicle Systems and Applicable Technologies
by Hyunbum Kim
Drones 2026, 10(4), 292; https://doi.org/10.3390/drones10040292 - 16 Apr 2026
Abstract
Unmanned Aerial Vehicles (UAVs) have become key platforms for sensing, analytics, and automation across intelligent transportation, construction, smart agriculture, logistics, and defense. Generative AI (GenAI) accelerates the intelligence of UAVs by creating synthetic data, simulating environments, and improving learning under restricted data conditions. When integrated with digital twin and AI frameworks, GenAI enables advanced design, modeling, adaptation, and decision-making. In this paper, we survey recent advances in generative AI-enabled UAV systems and applicable scenarios. We then categorize four applicable research branches for generative AI-enabled UAVs: intelligent transportation systems; digital twins and smart infrastructure; smart agriculture; and last-mile logistics and delivery. Full article
(This article belongs to the Section Artificial Intelligence in Drones (AID))
34 pages, 1052 KB  
Review
Artificial Intelligence and Machine Learning in Remote Sensing for Tropical Forest Monitoring: Applications, Challenges, and Emerging Solutions
by Belachew Gizachew
Remote Sens. 2026, 18(8), 1193; https://doi.org/10.3390/rs18081193 - 16 Apr 2026
Abstract
Tropical forests, despite their critical environmental and socio-economic roles, remain highly vulnerable to deforestation, forest degradation, and climate-related disturbances. There is a growing demand for robust and transparent forest monitoring systems, particularly under REDD+, the Paris Agreement’s Enhanced Transparency Framework (ETF), and emerging climate-finance mechanisms. Conventional approaches based on field inventories and traditional remote sensing are often constrained by limited or uneven field data, persistent cloud cover, complex forest conditions, and limited institutional and technical capacity. This review examines how artificial intelligence (AI) and machine learning (ML) are being integrated into remote sensing–based tropical forest monitoring to address these structural constraints. Using a semi-systematic synthesis of peer-reviewed studies, complemented by operational platforms and grey literature, the review assesses AI/ML approaches, remote sensing datasets, and applications relevant to national and large-scale monitoring. Evidence is synthesized across five analytical dimensions: AI/ML model families and workflows, multi-sensor datasets and training resources, operational monitoring platforms, application domains (including deforestation, degradation, and biomass/carbon estimation), and cross-cutting technical, institutional, and governance barriers. The review finds that AI/ML-enabled remote sensing, particularly approaches combining optical, radar, and LiDAR time series within cloud-based platforms, has substantially improved the automation, scalability, and speed of tropical forest monitoring. However, effective and equitable adoption remains constrained by limitations in training and validation data, dependence on proprietary platforms and data, uneven technical capacity, and unresolved governance and ethical challenges. 
Emerging solutions, including open and representative training datasets, platform-agnostic processing infrastructures, long-term capacity building, and inclusive data-governance frameworks, are identified as critical enablers of credible and nationally owned AI/ML-enabled forest-monitoring systems. The review highlights that AI/ML can play a transformative role in supporting climate mitigation, biodiversity conservation, and informed decision-making. This potential, however, depends on transparent data governance arrangements, long-term capacity building, and platform-agnostic infrastructures that support national ownership. Full article

27 pages, 3452 KB  
Review
Current Status and Outlook of Neutron Logging-While-Drilling Technology
by Dong Jiang, Wei Yuan, Bo Qi, Huawei Yu and Li Zhang
Processes 2026, 14(8), 1269; https://doi.org/10.3390/pr14081269 - 16 Apr 2026
Viewed by 117
Abstract
Neutron logging-while-drilling is a nuclear logging technique within the logging-while-drilling (LWD) system, characterized by high sensitivity to hydrogen in the formation. With the increasing complexity of well trajectories and the development of unconventional oil and gas, it has evolved from a traditional porosity measurement tool into a critical source of real-time information for geosteering and engineering decision-making. From a systems engineering perspective, this paper reviews the physical basis, tool system configuration, data processing methods, and typical engineering applications of LWD neutron logging, and discusses key technical bottlenecks and development trends. The results indicate that multiple interacting factors, including the neutron source, detector configuration, measurement geometry, environmental suppression capability, and interpretation strategy, constrain its performance. The transition from chemical neutron sources to pulsed neutron generators (PNGs) represents a critical turning point, improving measurement safety and expanding the range of measurable parameters while simultaneously introducing new engineering challenges such as target material lifetime and long-term stability. Field practice further demonstrates that the main value of LWD neutron logging lies in providing real-time porosity and related information that overcomes physical limitations during drilling, supporting geosteering and real-time reservoir evaluation decisions. Based on current progress, future work will focus on enhancing the reliability of PNG-based neutron sources and developing data processing and intelligent interpretation workflows that integrate physical models with data-driven methods.

37 pages, 1793 KB  
Systematic Review
The Role of Artificial Intelligence in Prognosis, Recurrence Prediction, and Treatment Outcomes in Laryngeal Cancer: A Systematic Review
by Hadi Afandi Al-Hakami, Ismail A. Abdullah, Nora S. Almutairi, Rimaz R. Aldawsari, Ghadah Ali Alluqmani, Halah Ahmed Fallatah, Yara Saud Alsulami, Elyas Mohammed Alasiri, Rahaf D. Alsufyani, Raghad Ayman Alorabi and Reffal Mohammad Aldainiy
Cancers 2026, 18(8), 1257; https://doi.org/10.3390/cancers18081257 - 16 Apr 2026
Viewed by 81
Abstract
Background: Laryngeal cancer (LC), a common subtype of head and neck cancers (HNC), is most frequently represented by laryngeal squamous cell carcinoma (LSCC). Prognosis largely depends on early detection; however, traditional prognostic tools, including tumor-node-metastasis (TNM) staging, often show limited predictive accuracy. Artificial intelligence (AI), including machine learning (ML), natural language processing, and deep learning (DL), has emerged as a promising approach to improving cancer diagnosis, prognosis, and treatment planning by analyzing clinical data and medical imaging. Objective: This systematic review assesses the role of AI in prognosis, recurrence prediction, and treatment outcomes in LC. Methods: PubMed, MEDLINE, Scopus, Web of Science, IEEE Xplore, and ScienceDirect were searched up to January 2025. A total of 1062 records were identified; after title/abstract screening and full-text assessment, 29 studies were included. Eligible studies involved adult patients with LC and applied AI to diagnose, prognose, predict recurrence, or assess treatment outcomes using human datasets. Study quality and risk of bias were evaluated using the QUADAS-2 and QUIPS tools. Results: The 29 included studies were mostly retrospective, with sample sizes ranging from 10 to 63,000 patients. Most focused on LSCC, with a higher prevalence in males. The studies utilized various AI techniques, including deep learning models such as convolutional neural networks (CNNs) and DeepSurv, as well as ML algorithms like random survival forest, gradient boosting machines, random forest, k-nearest neighbors, naïve Bayes, and decision trees. AI models demonstrated strong prognostic performance, surpassing Cox regression and TNM staging in predicting survival and recurrence. Several studies reported outcomes related to treatment, such as chemotherapy response, occult lymph node metastasis, and the need for salvage surgery. Methodological quality varied, with biases related to patient selection and confounding factors. Conclusions: AI has the potential to improve prognosis estimation, recurrence prediction, and treatment outcome assessment in LC. However, although AI can be a helpful addition to clinical decision-making, more prospective studies, external validation, and standardized evaluation are necessary before these technologies can be confidently adopted in everyday clinical practice.
(This article belongs to the Topic Machine Learning and Deep Learning in Medical Imaging)

25 pages, 4972 KB  
Article
LLM-Assisted Plan Execution for Robots in Dynamic Environments
by Juan Diego Peña-Narvaez, Rodrigo Pérez-Rodríguez, Juan Carlos Manzanares, Francisco Miguel Moreno and Francisco Martín-Rico
Robotics 2026, 15(4), 80; https://doi.org/10.3390/robotics15040080 - 15 Apr 2026
Viewed by 89
Abstract
In recent years, planning frameworks have enabled robots to create and execute plans using classical planning approaches based on the Planning Domain Definition Language (PDDL). The dynamic nature of the environments in which these robots operate requires that execution plans adapt to new conditions, either to improve efficiency or because the plans are no longer valid. Determining the appropriate moment to initiate such repairs is the focus of our research. This paper presents a novel approach to this problem that uses Large Language Models (LLMs) to make informed plan-repair decisions during robot operation. Our approach introduces an LLM-based semantic evaluation heuristic that goes beyond the traditional heuristic methods employed in symbolic planning frameworks, while avoiding the hallucinations commonly associated with task planning that relies solely on generative artificial intelligence. The heuristic uses the semantic evaluation capabilities of LLMs to track environmental features and forecast hazards, allowing the system to proactively identify dangerous situations and adapt plans more efficiently. We experimentally demonstrate the validity of our approach using real robots in environments where both the environmental conditions and the goals to be achieved change dynamically.
(This article belongs to the Special Issue AI-Powered Robotic Systems: Learning, Perception and Decision-Making)