Search Results (1,503)

Search Parameters:
Keywords = structural artificial neural network model

20 pages, 26383 KB  
Article
Mineral Prospectivity Mapping Based on a Lightweight Two-Dimensional Fully Convolutional Neural Network: A Case Study of the Gold Deposits in the Xiong’ershan Area, Henan Province, China
by Mingjing Fan, Keyan Xiao, Li Sun, Yang Xu and Shuai Zhang
Minerals 2026, 16(5), 450; https://doi.org/10.3390/min16050450 (registering DOI) - 26 Apr 2026
Abstract
With the development of geological data analysis and big data technology, intelligent mineral prospectivity mapping (MPM) has become a key direction in the integration of geoscience and artificial intelligence, showing promising applications in the identification and evaluation of strategic mineral resources such as gold. To address the limitations of conventional methods—including insufficient training samples, complex model structures, and weak capability in recognizing anomalous zones—this study proposes an improved convolutional neural network (CNN) approach for mineral prediction. A lightweight, modular CNN structure with repeatable stacking is designed to reduce computational cost while enhancing model robustness and generalization. In addition, a dynamic learning rate scheduling strategy is adopted to optimize the training process, significantly improving convergence speed and training stability. Furthermore, high-probability prediction samples and low-probability background samples are combined to form a new training dataset for regional prospectivity evaluation, yielding a high area under the curve (AUC) score. The method is applied and validated in the Xiong’ershan region, and the predicted high-potential zones account for 30% of the study area and contain 81.4% of the known gold deposits. These results demonstrate the method’s effectiveness in mineral information extraction and blind-area targeting, offering a new approach for mineral prospectivity mapping. Full article
(This article belongs to the Section Mineral Exploration Methods and Applications)
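The abstract above reports a dynamic learning-rate scheduling strategy but does not publish the exact schedule. As a minimal sketch, assuming a linear warmup followed by cosine decay (a common choice for stabilizing CNN training; all hyperparameters below are illustrative, not the authors'):

```python
import math

def dynamic_lr(epoch, total_epochs=100, base_lr=1e-3, min_lr=1e-5, warmup=5):
    """Linear warmup followed by cosine decay.

    Illustrative assumption: the paper reports dynamic learning-rate
    scheduling but does not publish the schedule or its hyperparameters.
    """
    if epoch < warmup:
        return base_lr * (epoch + 1) / warmup        # ramp up over `warmup` epochs
    t = (epoch - warmup) / max(1, total_epochs - warmup - 1)
    return min_lr + 0.5 * (base_lr - min_lr) * (1.0 + math.cos(math.pi * t))
```

Plugged into a training loop, a schedule of this shape gives fast early progress without the instability a large constant rate can cause, in line with the convergence-speed and stability gains the abstract reports.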
15 pages, 640 KB  
Article
Training an Artificial Neural Network Based on Results of the Experiment on Machining of Aluminum Alloys 2196, 2043 and 2099 Used in the Aeronautical Industry
by Nicolae Ioan Pasca, Mihai Banica and Vasile Nasui
Coatings 2026, 16(5), 519; https://doi.org/10.3390/coatings16050519 (registering DOI) - 26 Apr 2026
Abstract
The paper presents a study of the tool-life of uncoated and DLC-coated cutting inserts used for machining aluminum–lithium components in the structure of the Airbus A350 aircraft. The experiment was conducted in an industrial environment producing aircraft parts, using industrial equipment under serial processing conditions over 5874 machining hours, yielding 1440 samples. The experimental results served as input data for predictive tool-life models built with supervised learning in MATLAB 2025b using four machine-learning algorithms: trainlm and trainbr (artificial neural networks), fitrtree (decision trees), and fitrensemble (ensemble methods). The models were evaluated and compared in terms of their performance to determine the best option, and a sensitivity analysis of the five predictors was performed. The four learning algorithms were validated on a separate set of experimental data not used in training. The comparison between the experimental results and the model predictions confirmed the robustness of the models and identified the best one. Full article
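The study above runs its comparison in MATLAB (trainlm, trainbr, fitrtree, fitrensemble); an analogous cross-validated comparison of an ANN, a decision tree, and an ensemble can be sketched in Python with scikit-learn. The data here are synthetic stand-ins with five predictors, since the experimental dataset is not public:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeRegressor

# Synthetic stand-in: five predictors and a noisy tool-life response.
rng = np.random.default_rng(0)
X = rng.uniform(size=(300, 5))
y = 40.0 * X[:, 0] - 25.0 * X[:, 1] ** 2 + 5.0 * rng.normal(size=300)

models = {
    "ann":  make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(16,),
                                       max_iter=2000, random_state=0)),
    "tree": DecisionTreeRegressor(random_state=0),
    "ens":  RandomForestRegressor(n_estimators=100, random_state=0),
}
# 5-fold cross-validated R^2, mirroring "evaluated and compared in terms
# of their performance" in the abstract.
scores = {name: cross_val_score(m, X, y, cv=5, scoring="r2").mean()
          for name, m in models.items()}
best = max(scores, key=scores.get)
```

Validation on held-out folds, as here, is the same principle as the study's separate validation set: the winning model is chosen on data it never trained on.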
66 pages, 1148 KB  
Review
Explainability and Trust in Deep Learning for Cancer Imaging: Systematic Barriers, Clinical Misalignment, and a Translational Roadmap
by Surekha Borra, Nilanjan Dey, Simon Fong, R. Simon Sherratt and Fuqian Shi
Cancers 2026, 18(9), 1361; https://doi.org/10.3390/cancers18091361 - 24 Apr 2026
Abstract
Deep learning (DL) has transformed cancer imaging by enabling automated tumour detection, classification, and risk prediction. Despite impressive diagnostic performance, limited explainability and poor uncertainty calibration continue to restrict clinical integration. This review is guided by five research questions that examine the challenges, impact, and translational implications of explainable artificial intelligence (XAI) in oncology imaging. We identify key barriers to trust, including dataset bias, shortcut learning, opacity of convolutional neural networks, and workflow misalignment. Evidence suggests that explainable models can increase clinician confidence, reduce false positives, and improve collaborative decision-making when explanations are faithful, semantically meaningful, and uncertainty-aware. We evaluate architectural strategies that embed interpretability, such as concept-bottleneck models, prototype-based learning, and attention regularization, along with post hoc techniques. Beyond performance metrics, we examine how interpretable AI aligns with clinical reasoning processes and analyse regulatory, ethical, and medico-legal considerations influencing deployment. The findings indicate that explainability alone is insufficient; durable trust requires epistemic alignment, prospective validation, lifecycle governance, and equity-focused evaluation. By reframing explainability as a structural design principle rather than a supplementary feature, this review outlines a pathway toward accountable and clinically dependable AI systems in oncology. Full article
(This article belongs to the Section Cancer Informatics and Big Data)
20 pages, 1554 KB  
Article
Smart Sensor Network Architecture with Machine Learning-Based Predictive Monitoring for High-Complexity Computed Tomography Systems
by Arbnor Pajaziti and Blerta Statovci
Sensors 2026, 26(9), 2619; https://doi.org/10.3390/s26092619 - 23 Apr 2026
Abstract
This study addresses the need for intelligent condition monitoring in high-complexity medical imaging systems by proposing a smart sensing architecture for the Revolution EVO Computed Tomography (CT) scanner. Ensuring operational reliability and minimizing unexpected downtime remain critical challenges in advanced CT platforms, motivating the integration of distributed sensing and data-driven analytics. System logs spanning August 2024 to October 2025 were processed into 10-min intervals and converted into a structured dataset comprising 76 features derived from operational events, scanning parameters, and temporal dynamics. Two supervised learning models, the Support Vector Machine (SVM) and Artificial Neural Network (ANN), were trained to identify abnormal operating conditions. Both models delivered excellent classification performance, achieving an accuracy of 0.973. The SVM demonstrated balanced precision, recall, and F1-score metrics of 0.973, while the ANN outperformed in ranking and sensitivity to anomalies with an AUROC of 0.993 and an AUPRC of 0.976. This framework highlights the potential of sensor-driven machine learning in enabling early detection of system anomalies and optimizing maintenance planning within clinical CT environments. Full article
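The SVM-versus-ANN comparison described above can be sketched with scikit-learn on a synthetic imbalanced dataset with 76 features (the real log-derived dataset is not public), computing accuracy, AUROC, and AUPRC as the study does:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score, average_precision_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Synthetic stand-in for the 76-feature, imbalanced abnormal-condition dataset.
X, y = make_classification(n_samples=2000, n_features=76, n_informative=10,
                           weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

results = {}
for name, clf in [("svm", SVC(probability=True, random_state=0)),
                  ("ann", MLPClassifier(hidden_layer_sizes=(32,),
                                        max_iter=1000, random_state=0))]:
    p = clf.fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
    results[name] = {
        "accuracy": accuracy_score(y_te, p > 0.5),
        "auroc": roc_auc_score(y_te, p),            # ranking quality
        "auprc": average_precision_score(y_te, p),  # precision under imbalance
    }
```

AUPRC is the metric to watch under class imbalance, which is why the abstract reports it alongside AUROC for anomaly detection.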
28 pages, 2328 KB  
Article
Predictive Neural Network Modeling of Nanoporous Anodic Alumina for Controlled Drug Release Implants: An Integrated Machine Learning Approach
by Ao Wang, Wan Fahmin Faiz Wan Ali, Muhamad Azizi Mat Yajid and Jianjun Gu
Materials 2026, 19(9), 1705; https://doi.org/10.3390/ma19091705 - 23 Apr 2026
Abstract
Background: Nanoporous anodic alumina (NAA) has emerged as a promising platform for localized drug delivery in biomedical implants owing to its tunable nanoscale pore structure and biocompatibility. However, achieving the desired pore characteristics currently relies on time-consuming trial-and-error adjustments of anodization parameters. Methods: We developed a comprehensive data-driven machine learning framework using a feed-forward artificial neural network (ANN) with three hidden layers (64-32-16 neurons) trained on 77 samples from a compiled dataset of 99 anodization experiments spanning 1995–2025. The model predicts the NAA pore diameter based on anodization conditions (electrolyte type, concentration, voltage, temperature, and time). Results: The ANN achieved R2 = 0.803, root mean square error (RMSE) = 25.83 nm, and mean absolute error (MAE) = 17.05 nm on training data; however, 5-fold cross-validation revealed moderate generalization (CV R2 = 0.471 ± 0.078). Multiple linear regression showed comparable training performance (R2 = 0.804) but superior cross-validation (CV R2 = 0.729 ± 0.083). Feature importance analysis identified anodization voltage (29.15% ANN importance) and electrolyte type (30.23%) as the most influential factors. Coupling ANN-predicted pore dimensions with Higuchi diffusion modeling demonstrated that increasing the pore diameter from 50 to 100 nm nearly doubled the initial release rates (8 to 11 h−1) and reduced the time to 50% release from 39.1 to 20.7 h. Conclusions: This data-driven approach offers a powerful tool to reduce experimental iteration and accelerate the development of advanced drug-delivery implants by enabling the rational design of NAA pore structures for optimized drug loading and release kinetics. Full article
(This article belongs to the Special Issue Fabrication of Advanced Materials)
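The Higuchi diffusion model referenced above has a simple closed form, Q(t) = k·sqrt(t), so the reported 50%-release times can be converted directly into rate constants. A short sketch (the rate constants below are back-calculated from the reported t50 values, not taken from the paper):

```python
import numpy as np

def higuchi_fraction(t_h, k):
    """Cumulative fraction released under the Higuchi model, Q(t) = k*sqrt(t), capped at 1."""
    return np.minimum(k * np.sqrt(t_h), 1.0)

def t50(k):
    """Time (h) to 50% release implied by a Higuchi rate constant k (h^-0.5)."""
    return (0.5 / k) ** 2

# Rate constants back-calculated from the reported 50%-release times
# (t50 = 39.1 h at 50 nm pores, 20.7 h at 100 nm pores).
k_50nm = 0.5 / np.sqrt(39.1)
k_100nm = 0.5 / np.sqrt(20.7)
```

Evaluating `higuchi_fraction` over a time grid for each constant reproduces the faster release profile the abstract attributes to the larger pore diameter.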
70 pages, 5036 KB  
Review
A Review of Mathematical Reduced-Order Modeling of PCM-Based Latent Heat Storage Systems
by John Nico Omlang and Aldrin Calderon
Energies 2026, 19(9), 2017; https://doi.org/10.3390/en19092017 - 22 Apr 2026
Abstract
Phase change material (PCM)-based latent heat storage (LHS) systems help address the mismatch between renewable energy supply and thermal demand. However, their practical implementation is constrained by the strongly nonlinear and multiphysics nature of phase change, which makes high-fidelity simulations and real-time applications computationally expensive. This review examines mathematical reduced-order modeling (ROM) as an effective strategy to overcome this limitation by combining physics-based simplifications, projection methods, interpolation techniques, and data-driven models for PCM-based LHS systems. While physical simplifications (such as dimensional reduction and effective property approximations) represent an important first layer of model reduction, the primary focus of this work is on the mathematical ROM methodologies that operate on the governing equations after such physical simplifications have been applied. The review covers approaches including two-temperature non-equilibrium and analytical thermal-resistance models, Proper Orthogonal Decomposition (POD), CFD-derived look-up tables, kriging and ε-NTU grey/black-box metamodels, and machine-learning methods such as artificial neural networks and gradient-boosted regressors trained from CFD data. These ROM techniques have been applied to packed beds, PCM-integrated heat exchangers, finned enclosures, triplex-tube systems, and solar thermal components, achieving speed-ups ranging from tens of times to more than 80,000 times relative to full CFD simulations while maintaining prediction errors typically below 5% or within sub-Kelvin temperature deviations. A critical comparative analysis exposes the fundamental trade-off between interpretability, data dependence, and computational efficiency, leading to a practical decision-making framework that guides method selection for specific applications such as design optimization, real-time control, and system-level simulation. Remaining challenges—including accurate representation of phase change nonlinearity, moving phase boundaries, multi-timescale dynamics, generalization across geometries, experimental validation, and integration into industrial workflows—motivate a structured roadmap for future hybrid physics–machine learning developments, standardized validation protocols, and pathways toward industrial deployment. Full article
(This article belongs to the Section D: Energy Storage and Application)
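Of the ROM families surveyed above, Proper Orthogonal Decomposition is the most compact to illustrate: the reduced basis is read off the SVD of a snapshot matrix. A minimal sketch on a synthetic traveling temperature profile (not data from any reviewed study):

```python
import numpy as np

# Snapshot matrix: each column is a 1-D "temperature field" at one time
# instant (a traveling Gaussian front stands in for CFD output).
x = np.linspace(0.0, 1.0, 200)
snapshots = np.column_stack(
    [np.exp(-((x - 0.3 - 0.004 * i) ** 2) / 0.01) for i in range(50)])

# POD basis = left singular vectors of the snapshot matrix.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s ** 2) / np.sum(s ** 2)
r = int(np.searchsorted(energy, 0.999)) + 1  # modes capturing 99.9% of the energy

# Project onto the r leading modes, then reconstruct.
basis = U[:, :r]
recon = basis @ (basis.T @ snapshots)
rel_err = np.linalg.norm(recon - snapshots) / np.linalg.norm(snapshots)
```

Keeping r modes instead of 200 grid values is the source of the speed-ups the review reports; the retained-energy threshold controls the accuracy trade-off.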
42 pages, 966 KB  
Article
Garbage In, Garbage Out? The Impact of Data Quality on the Performance of Financial Distress Prediction Models
by Veronika Labosova, Lucia Duricova, Katarina Kramarova and Marek Durica
Forecasting 2026, 8(3), 35; https://doi.org/10.3390/forecast8030035 - 22 Apr 2026
Abstract
Financial distress prediction remains a central topic in corporate finance and risk management, with extensive research devoted to improving classification accuracy through increasingly sophisticated statistical and machine learning techniques. Nevertheless, the influence of data preparation on predictive performance has received comparatively less systematic attention. This study examines how an economically grounded data-preparation process affects the predictive performance of selected statistical and machine-learning models dedicated to predicting corporate financial distress. Using the chosen financial ratios, generally accepted indicators of corporate financial stability and economic performance, financial distress models are estimated on both raw, unprocessed input data and pre-processed data involving the exclusion of economically implausible accounting values, treatment of missing observations, and class balancing. In light of the above, the study adopts a structured methodological approach to assess the predictive performance of selected classification models, namely decision tree algorithms (CART, CHAID, and C5.0), artificial neural networks (ANNs), logistic regression (LR), and linear discriminant analysis (DA), using confusion-matrix–based evaluation and a comprehensive set of evaluation measures. The results suggest that the process of input data preparation is a critical factor, significantly improving the predictive performance of financial distress prediction models across most modelling techniques employed. The most pronounced gains are observed in decision tree models. ANNs also demonstrate marked improvement after input data preparation, whereas LR benefits more moderately, and linear DA remains limited despite preprocessing. 
The average gain in accuracy across all six modelling techniques, calculated as the difference between pre-processed and raw performance for each method and averaged across methods, was approximately 15.6 percentage points, with specificity improving by approximately 26.9 percentage points on average, amounting to roughly half the performance variation attributable to algorithm choice, which underscores that data preparation is a primary determinant of model reliability alongside algorithm selection. A step-level detailed analysis further shows that missing value imputation is the dominant driver of improvement for tree-based models, while class balancing contributes most for ANNs and logistic regression. The findings highlight that reliable financial distress prediction depends not only on technique selection but also on the consistency and economic plausibility of the input data, underscoring the central role of structured data preparation in developing robust early-warning models. Full article
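The preparation steps the study examines (missing-value treatment and class balancing) can be sketched as two scikit-learn pipelines evaluated on synthetic financial-ratio data; the specific imputer and class-weighting choices below are illustrative assumptions, not the authors' exact procedure:

```python
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic financial-ratio stand-ins: ~15% distressed firms, 15% missing values.
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 8))
y = (X[:, 0] + X[:, 1] + rng.normal(size=600) < -1.8).astype(int)
X[rng.uniform(size=X.shape) < 0.15] = np.nan

# "Raw": naive zero-fill only, no scaling, no class balancing.
raw = make_pipeline(SimpleImputer(strategy="constant", fill_value=0.0),
                    LogisticRegression(max_iter=1000))
# "Prepared": median imputation, scaling, and class balancing.
prepared = make_pipeline(SimpleImputer(strategy="median"), StandardScaler(),
                         LogisticRegression(class_weight="balanced", max_iter=1000))

score_raw = cross_val_score(raw, X, y, cv=5, scoring="balanced_accuracy").mean()
score_prep = cross_val_score(prepared, X, y, cv=5, scoring="balanced_accuracy").mean()
```

Balanced accuracy is the natural comparison metric here, since the study's headline specificity gains come precisely from treating the minority (distressed) class properly.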
15 pages, 1454 KB  
Proceeding Paper
Physics-Regularized Neural Networks for Photovoltaic Power Prediction Under Limited Experimental Data
by Aswin Karkadakattil
Eng. Proc. 2026, 138(1), 1; https://doi.org/10.3390/engproc2026138001 - 20 Apr 2026
Abstract
Accurate photovoltaic (PV) power prediction under limited experimental data remains a significant challenge, particularly when purely data-driven models generate predictions that violate fundamental physical constraints. This study proposes a physics-regularized neural network framework for data-efficient PV power modeling using only 45 real experimental measurements of irradiance and temperature. To address data sparsity while preserving physical realism, a physics-guided synthetic augmentation strategy is introduced to generate additional training samples strictly within experimentally validated operating bounds. The proposed Physics-Informed Neural Network (PINN) incorporates two complementary physical constraints directly into the training objective: (i) enforcement of the Shockley–Queisser thermodynamic efficiency limit to maintain compliance with theoretical conversion bounds and (ii) monotonicity regularization to ensure non-negative power gradients with respect to irradiance. Unlike conventional post-processing correction methods, these physical constraints are embedded during model training, enabling simultaneous improvement in predictive accuracy and physical consistency. When benchmarked against a structurally identical unconstrained Artificial Neural Network (ANN), the proposed framework achieves strong predictive performance (R2 = 0.9947, RMSE = 5.21 W) while reducing monotonicity violations by approximately 82%. Robustness evaluations under extrapolated irradiance conditions and elevated temperature scenarios further demonstrate stable and physically admissible behavior beyond the training domain. Overall, the results demonstrate that integrating limited experimental measurements with embedded physical priors enables reliable and physically consistent PV power prediction in sparse-data environments, highlighting the potential of physics-regularized learning for renewable energy modeling applications. Full article
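The two physical constraints described above can be expressed as soft penalties added to a training loss. A minimal framework-free sketch using finite differences (the 33.7% Shockley-Queisser ceiling, the step size, and the toy model are illustrative assumptions; the paper embeds its constraints during training, whereas only the penalty terms are shown here):

```python
import numpy as np

SQ_LIMIT = 0.337  # approximate Shockley-Queisser ceiling for a single junction

def physics_penalties(predict, irradiance, temperature, area_m2=1.0, dg=1.0):
    """Soft physics penalties evaluated by finite differences.

    `predict(G, T)` is any callable returning power in watts; the
    autograd-based monotonicity term a PINN would use is replaced by a
    forward difference in irradiance so the sketch works with any model.
    """
    p = predict(irradiance, temperature)
    p_up = predict(irradiance + dg, temperature)
    # (i) efficiency must stay below the Shockley-Queisser limit: P <= eta * G * A
    sq_violation = np.maximum(p - SQ_LIMIT * irradiance * area_m2, 0.0)
    # (ii) power must be non-decreasing in irradiance
    mono_violation = np.maximum(-(p_up - p), 0.0)
    return sq_violation.mean() + mono_violation.mean()

# A physically sensible toy model (18% efficiency, mild temperature derating)
# incurs zero penalty on a sweep of operating points.
good = lambda G, T: 0.18 * G * (1.0 - 0.004 * (T - 25.0))
G = np.linspace(100.0, 1000.0, 50)
T = np.full_like(G, 35.0)
```

During training, a weighted sum of these penalties and the data loss steers the network toward physically admissible predictions, which is what drives the reported reduction in monotonicity violations.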
21 pages, 1864 KB  
Article
Rapid Electrochemical Profiling of Fecal Short-Chain Fatty Acids Using Esterification/Dissociation Fingerprints and Artificial Neural Networks
by Bing-Chen Gu, Guan-Ying Jiang, Ching-Hung Tseng, Yi-Ju Chen, Chun-Ying Wu, Zhi-Xuan Lin, Zhung-Wen Yeh and Chia-Che Wu
Biosensors 2026, 16(4), 223; https://doi.org/10.3390/bios16040223 - 17 Apr 2026
Abstract
Short-chain fatty acids (SCFAs) are key biomarkers of gut microbiota activity; however, routine quantification in fecal samples relies largely on chromatographic techniques, which are instrument-intensive and throughput-limited. Herein, we present a rapid machine-learning-assisted electroanalysis platform for SCFAs profiling that integrates a disposable three-electrode planar gold chip with voltammetric fingerprinting and artificial neural network (ANN)-based signal decoupling. To generate orthogonal chemical information and improve the discrimination of structurally similar species, a dual pretreatment strategy combining acid-catalyzed esterification and alkaline dissociation was employed prior to electrochemical analyses. Differential pulse voltammetry (DPV) and cyclic voltammetry (CV) were employed to acquire high-dimensional fingerprints, from which current-, potential-, and area-based descriptors were extracted using a cross-information feature strategy. A hierarchical modeling framework improved total SCFAs prediction by incorporating ANN-predicted propionate and butyrate concentrations as auxiliary inputs. While linear calibration was achievable in standard mixtures, direct linear models performed poorly in real fecal matrices due to strong sample-dependent matrix interference. In contrast, the ANN captured nonlinear relationships among multifeature inputs and suppressed matrix effects. Validation against gas chromatography–mass spectrometry in an independent fecal test cohort (n = 30) demonstrated excellent agreement and low prediction errors, with mean absolute error/root mean square error values of 0.063/0.072 mM (propionic acid), 0.029/0.034 mM (butyric acid), and 0.135/0.202 mM (total SCFAs). The DPV/CV acquisition requires only minutes per sample, whereas pretreatment takes 1–3 h depending on the target route but can be performed in parallel for batch processing; thus, overall throughput is determined mainly by batch pretreatment rather than per-sample instrument time. This electrochemical–ANN workflow provides a portable, high-throughput alternative to chromatography for fecal SCFAs profiling in clinical screening and microbiome research. Full article
34 pages, 1891 KB  
Review
Deep Learning and Cardiovascular Diseases: An Updated Narrative Review
by Angelika Myśliwiec, Dorota Bartusik-Aebisher, Marvin Xavierselvan, Avijit Paul and David Aebisher
J. Clin. Med. 2026, 15(8), 3053; https://doi.org/10.3390/jcm15083053 - 16 Apr 2026
Abstract
Background: Artificial intelligence (AI) and deep learning (DL) are rapidly changing the field of diagnostics and imaging in cardiology, offering tools for automatic segmentation, quantification of changes, and risk stratification. These technologies have the potential to increase diagnostic accuracy, work efficiency, and individualization of patient care. Methods: This structured narrative review critically evaluates clinically validated applications of artificial intelligence (AI) and deep learning (DL) in cardiovascular medicine, focusing on imaging (echocardiography, coronary CT angiography, cardiac MRI, and ECG), risk stratification, and biomarker integration. A systematic literature search was conducted in PubMed for studies published between January 2015 and December 2026, supplemented by references from key articles. Original English-language studies reporting quantitative clinical outcomes were included, with 78 studies ultimately analyzed. Results: AI and DL models, including convolutional neural networks and transformers, achieved performance comparable to experts in cardiac imaging, myocardial perfusion assessment, valve defect detection, and coronary event prediction. Multimodal approaches improved diagnostic accuracy and reproducibility, while explainable AI enhanced transparency and clinical confidence. Deep learning also enabled faster image acquisition and processing without compromising precision. Conclusions: AI and DL have transformative potential in cardiology, offering fast, accurate, and scalable diagnostic tools. The integration of multimodal data, the validation of algorithms in prospective studies, and ensuring the transparency of models are key. Future research should focus on prospective, multicenter validations and the ethical and safe implementation of AI in everyday clinical practice. Full article
30 pages, 4101 KB  
Article
Influence of Data Structure on Prediction Error in Machine Learning-Based Concrete Compressive Strength Models
by Yelan Mo, Bixiong Li, Chengcheng Yan and Xiangxin Hu
Buildings 2026, 16(8), 1537; https://doi.org/10.3390/buildings16081537 - 14 Apr 2026
Abstract
Machine learning has been widely used for concrete compressive strength prediction, yet previous studies have focused mainly on algorithm comparison and isolated feature-processing strategies. The coupled influence of dataset characteristics on prediction error has received less systematic attention. This study investigates concrete strength prediction from a data structure perspective by examining three structural variables, namely, sample size, feature size, and compressive strength range. A unified experimental framework was constructed using 15 concrete datasets. Correlation, partial correlation, information entropy, and relief were employed to reorganize feature subsets, and the resulting error trends were evaluated using artificial neural network (ANN), support vector regression (SVR), and random forest (RF) models. The results show that prediction error generally decreases first and then becomes stable as feature size increases, although the location of the low-error region depends on the dataset and the filtering method. Larger sample size is associated with improved prediction stability, whereas wider strength range tends to increase prediction difficulty. Based on these observations, an empirical relationship was established to describe the joint effect of sample size, feature size, and strength range on prediction error. The findings indicate that the attainable error level in concrete strength prediction is controlled not only by model form but also by dataset organization and feature configuration. Within the present framework, the study provides a practical basis for designing feature systems and interpreting model performance across datasets with different structural characteristics. Full article
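The error-versus-feature-size trend described above can be reproduced in miniature: rank features with one filter and track cross-validated RMSE as the subset grows. Mutual information stands in here for the filter family (correlation, partial correlation, entropy, Relief) compared in the study, and the data are synthetic:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import mutual_info_regression
from sklearn.model_selection import cross_val_score

# Synthetic mixture-design stand-ins: 12 features, only 3 of which drive "strength".
rng = np.random.default_rng(0)
n, d = 400, 12
X = rng.uniform(size=(n, d))
y = 30.0 * X[:, 0] + 20.0 * X[:, 1] - 15.0 * X[:, 2] + 2.0 * rng.normal(size=n)

# Rank features once with a mutual-information filter, strongest first.
order = np.argsort(mutual_info_regression(X, y, random_state=0))[::-1]

# Cross-validated RMSE as the feature subset grows.
errors = []
for k in range(1, d + 1):
    rmse = -cross_val_score(
        RandomForestRegressor(n_estimators=50, random_state=0),
        X[:, order[:k]], y, cv=5,
        scoring="neg_root_mean_squared_error").mean()
    errors.append(rmse)
```

Plotting `errors` against k should reproduce the decrease-then-stabilize pattern the study reports, with the plateau beginning once the informative features are included.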
23 pages, 2546 KB  
Article
Data-Driven Predictive Modeling of Passenger-Accepted Vehicle Occupancy in Transport Systems
by Katarina Trifunović, Tijana Ivanišević, Aleksandar Trifunović, Svetlana Čičević, Draženko Glavić, Gabriel Fedorko and Vieroslav Molnar
Mathematics 2026, 14(8), 1274; https://doi.org/10.3390/math14081274 - 11 Apr 2026
Abstract
Mathematical modeling plays a key role in understanding and optimizing transport system operations under uncertain and dynamic conditions. This study proposes a data-driven predictive framework for estimating passenger-accepted vehicle occupancy, addressing a critical gap in transport system planning under public health-related constraints. Using data from a structured survey conducted across seven Southeast European countries (N = 476), the study integrates statistical analysis and machine learning approaches to model acceptable occupancy levels across multiple transport modes, including passenger cars, taxis, tourist buses, and public buses. The problem is formulated as a predictive mapping between multidimensional input variables and occupancy acceptance levels, modeled using both probabilistic and nonlinear function approximation methods. The results highlight that age, gender, and area of residence are the most significant determinants of occupancy acceptance, while education level has limited predictive relevance. Furthermore, a multi-layer feedforward artificial neural network is developed to capture nonlinear relationships between variables, achieving strong predictive performance (minimum MSE = 0.0089). The main contribution of this research lies in linking behavioral data with predictive modeling to quantify acceptable occupancy thresholds and support realistic simulation of passenger responses in crisis conditions. The proposed modeling framework contributes to transport system planning, enabling data-driven capacity management, enhanced safety strategies, and improved resilience of passenger transport operations. Full article
(This article belongs to the Special Issue Modeling of Processes in Transport Systems)
33 pages, 2787 KB  
Article
Energy-Aware Adaptive Communication Topology with Edge-AI Navigation for UAV Swarms in GNSS-Denied Environments
by Alizhan Tulembayev, Alexandr Dolya, Ainur Kuttybayeva, Timur Jussupbekov and Kalmukhamed Tazhen
Drones 2026, 10(4), 273; https://doi.org/10.3390/drones10040273 - 9 Apr 2026
Viewed by 359
Abstract
Energy-efficient and resilient decentralized unmanned aerial vehicle (UAV) swarm operation in global navigation satellite system (GNSS)-denied environments remains challenging because propulsion demand, communication load, and onboard inference are tightly coupled at the mission level. Although prior studies have examined some of these components separately, their joint evaluation within adaptive decentralized swarms remains limited under degraded navigation conditions. This study proposes an energy-aware adaptive communication-topology framework integrated with lightweight edge artificial intelligence (AI)-assisted navigation for decentralized UAV swarms operating without reliable GNSS support. The approach combines a unified mission-level energy-accounting structure for propulsion, communication, and onboard inference; a residual-energy-aware topology adaptation mechanism for preserving swarm connectivity; and a convolutional neural network–long short-term memory (CNN–LSTM)-based edge-AI navigation module for improving localization robustness. The framework was evaluated in 1200 s Robot Operating System 2 (ROS2)–Gazebo–PX4 simulation scenarios against fixed-topology and extended Kalman filter (EKF)-based baselines. Under the adopted simulation assumptions, the proposed configuration achieved a 22.7% reduction in total energy consumption, with the largest decrease observed in the communication-energy component, while preserving positive algebraic connectivity across all evaluated runs. The edge-AI module yielded a 4.8% root mean square error (RMSE) reduction relative to the EKF baseline, a modest but meaningful improvement in localization performance. These results support the feasibility of integrated energy-aware swarm coordination in GNSS-denied environments; however, they should be interpreted as simulation-based evidence under the adopted modeling assumptions, and further high-fidelity propagation modeling, broader learning validation, and hardware-in-the-loop studies remain necessary. Full article
(This article belongs to the Section Artificial Intelligence in Drones (AID))
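The two coordination ideas in the abstract, mission-level energy accounting and residual-energy-aware topology adaptation, can be sketched as below. All names, thresholds, and costs are assumptions for illustration, and a plain BFS connectivity check stands in for the paper's algebraic-connectivity criterion.

```python
# Illustrative sketch (not the paper's implementation): per-node energy
# accounting over propulsion, communication, and inference, plus a
# link-pruning step that sheds costly links at low-residual nodes while
# never disconnecting the swarm.
from dataclasses import dataclass

@dataclass
class NodeEnergy:
    propulsion_j: float   # propulsion energy spent
    comm_j: float         # radio energy spent
    inference_j: float    # onboard edge-AI energy spent
    budget_j: float       # initial battery budget

    @property
    def residual_j(self):
        return self.budget_j - (self.propulsion_j + self.comm_j + self.inference_j)

def connected(nodes, links):
    """BFS connectivity check (stand-in for algebraic connectivity > 0)."""
    if not nodes:
        return True
    adj = {n: set() for n in nodes}
    for a, b in links:
        adj[a].add(b); adj[b].add(a)
    seen, stack = {nodes[0]}, [nodes[0]]
    while stack:
        for nb in adj[stack.pop()]:
            if nb not in seen:
                seen.add(nb); stack.append(nb)
    return len(seen) == len(nodes)

def adapt_topology(nodes, links, energy, link_cost):
    """Drop the costliest links touching low-residual nodes, keeping the
    swarm connected (20% residual threshold is an assumed parameter)."""
    kept = set(links)
    low = {n for n in nodes if energy[n].residual_j < 0.2 * energy[n].budget_j}
    for link in sorted(links, key=lambda l: -link_cost[l]):
        if low & set(link) and connected(nodes, kept - {link}):
            kept.discard(link)
    return kept

nodes = ["uav1", "uav2", "uav3"]
links = [("uav1", "uav2"), ("uav2", "uav3"), ("uav1", "uav3")]
energy = {
    "uav1": NodeEnergy(45.0, 30.0, 10.0, 100.0),  # residual 15 J -> low
    "uav2": NodeEnergy(10.0, 5.0, 2.0, 100.0),
    "uav3": NodeEnergy(12.0, 6.0, 2.0, 100.0),
}
cost = {links[0]: 3.0, links[1]: 1.0, links[2]: 5.0}
kept = adapt_topology(nodes, links, energy, cost)
```

Here the depleted node `uav1` sheds its most expensive link but keeps one cheaper link so connectivity survives, which is the qualitative behavior the abstract attributes to its adaptation mechanism.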
20 pages, 1083 KB  
Article
FGeo-ISRL: A MCTS-Enhanced Deep Reinforcement Learning System for Plane Geometry Problem-Solving via Inverse Search
by Yang Li, Xiaokai Zhang, Cheng Qin, Zhengyu Hu and Tuo Leng
Symmetry 2026, 18(4), 628; https://doi.org/10.3390/sym18040628 - 9 Apr 2026
Viewed by 257
Abstract
Geometric problem-solving has always been a great challenge in the field of deductive reasoning and artificial intelligence. Symmetry is a defining characteristic of geometric shapes and properties. Consequently, the application of symmetry principles to geometric reasoning arises as a natural choice. To address the efficiency degradation and limited generalization of existing approaches, we propose FGeo-ISRL, a neural-symbolic inverse search framework whose core is the synergistic integration of a task-fine-tuned large language model and Monte Carlo Tree Search. Under the formal framework of FormalGeo, geometric theorems are iteratively applied starting from the given conditions and the target conclusion, in order to infer the necessary supporting premises. The large language model simultaneously serves as a policy network and a value network, guiding theorem application decisions and evaluating intermediate proof states, whereas Monte Carlo Tree Search performs structured exploration over the state space, during both training (for policy refinement) and inference (for online search). The reinforcement learning agent is trained with a hybrid reward scheme, combining immediate feedback from the value difference and a sparse success reward. Experiments demonstrate the effectiveness and correctness of FGeo-ISRL. It not only achieves a Single-Step Theorem Accuracy of 90.2% and a Geometric Problem-Solving Accuracy of 83.14%, but also ensures that every step of the proof process remains readable, verifiable, and traceable. Full article
(This article belongs to the Section Computer)
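The way a policy prior can steer tree search, the role the abstract assigns to the fine-tuned LLM inside MCTS, is captured by the standard PUCT selection rule, sketched below. Theorem names, priors, statistics, and the reward weighting are illustrative assumptions, not FGeo-ISRL's actual values.

```python
# Sketch of PUCT-style child selection, where a policy network's priors
# (here, the LLM's theorem suggestions) bias exploration, plus a hybrid
# reward in the spirit of the abstract. All numbers are made up.
import math

def puct_select(children, priors, c_puct=1.5):
    """Return the action maximizing Q(s, a) + U(s, a), where U is the
    prior-weighted exploration bonus."""
    total_n = sum(ch["visits"] for ch in children.values())
    sqrt_n = math.sqrt(max(total_n, 1))
    def score(a):
        ch = children[a]
        q = ch["value_sum"] / ch["visits"] if ch["visits"] else 0.0
        u = c_puct * priors[a] * sqrt_n / (1 + ch["visits"])
        return q + u
    return max(children, key=score)

def hybrid_reward(value_before, value_after, solved):
    """Dense value-difference feedback plus a sparse terminal success
    bonus (the unit bonus weight is an assumption)."""
    return (value_after - value_before) + (1.0 if solved else 0.0)

children = {
    "thm_midsegment": {"visits": 8, "value_sum": 5.6},   # Q = 0.70
    "thm_similarity": {"visits": 2, "value_sum": 0.4},   # Q = 0.20
    "thm_pythagoras": {"visits": 0, "value_sum": 0.0},   # unvisited
}
priors = {"thm_midsegment": 0.2, "thm_similarity": 0.1, "thm_pythagoras": 0.7}
choice = puct_select(children, priors)  # high prior pulls in the unvisited theorem
```

The exploration term decays as a child accumulates visits, so the search initially trusts the LLM's prior and gradually shifts toward empirically high-value theorem applications.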
27 pages, 1493 KB  
Article
Emergency Alert and Warning Systems and Their Impact on Sustainable Disaster Preparedness and Awareness in the Philippines: A SEM–ANN Analysis
by Charmine Sheena R. Saflor and Kyla Kudhal
Sustainability 2026, 18(7), 3590; https://doi.org/10.3390/su18073590 - 6 Apr 2026
Viewed by 869
Abstract
Emergency Alert and Warning Systems (EAWSs) are essential components of sustainable disaster risk reduction, providing communities with timely information to prepare for and respond to impending hazards. In the Philippines, one of the world’s most disaster-prone countries, earthquakes, typhoons, and other natural hazards occur frequently. However, national statistics from 2018 indicated that only 40% of Filipinos considered themselves well prepared for disasters, while 31% reported being slightly prepared or not prepared at all. This study investigates the perceived effectiveness of EAWSs in enhancing disaster awareness and preparedness among Filipino residents. Guided by the Theory of Planned Behavior (TPB), the research develops an integrated framework to examine behavioral, technical, and perceptual factors influencing preparedness intentions. Data were collected from 200 respondents through a structured survey. Structural Equation Modeling (SEM) was employed to identify significant linear relationships among the constructs, while an Artificial Neural Network (ANN) analysis was subsequently applied to capture nonlinear patterns and rank the relative importance of key predictors. Unlike previous studies that rely solely on SEM or descriptive approaches, the combined SEM–ANN framework enables a more comprehensive understanding of both causal relationships and complex behavioral dynamics influencing disaster preparedness. The findings reveal that behavioral intention, system reliability, message clarity, and trust in EAWS substantially affect individuals’ preparedness behavior and risk mitigation actions. These results underscore the importance of strengthening EAWS design and communication strategies to support long-term disaster resilience. The study provides practical insights for national agencies, local governments, and policymakers on refining emergency communication systems and developing sustainable, evidence-based disaster preparedness initiatives. Full article
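One common way the ANN stage of a SEM–ANN study ranks predictor importance is Garson's algorithm over the trained connection weights; whether this exact measure was used here is an assumption, and the weights below are fabricated purely for illustration.

```python
# Sketch of Garson's algorithm: relative importance of each input feature
# derived from the absolute input-hidden and hidden-output weights of a
# trained single-hidden-layer network. Toy weights, not the study's.
import numpy as np

def garson_importance(W_in, w_out):
    """W_in: (n_inputs, n_hidden) weights; w_out: (n_hidden,) weights.
    Returns importances that sum to 1."""
    contrib = np.abs(W_in) * np.abs(w_out)   # broadcast over hidden units
    contrib = contrib / contrib.sum(axis=0)  # share of each hidden unit
    imp = contrib.sum(axis=1)
    return imp / imp.sum()

# Hypothetical predictors in the abstract's order of reported influence:
# behavioral intention, system reliability, message clarity, trust in EAWS
W_in = np.array([[2.0, 1.5],
                 [1.0, 1.2],
                 [0.8, 0.6],
                 [0.2, 0.3]])
w_out = np.array([1.0, 0.5])
imp = garson_importance(W_in, w_out)
```

Because the measure uses only absolute weight magnitudes, it ranks influence strength without indicating direction, which is why SEM path coefficients and ANN importances are typically reported side by side in this kind of hybrid analysis.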