Search Results (887)

Search Parameters:
Keywords = deterministic limit

39 pages, 2550 KB  
Article
An Enhanced Projection-Iterative-Methods-Based Optimizer for Complex Constrained Engineering Design Problems
by Xuemei Zhu, Han Peng, Haoyu Cai, Yu Liu, Shirong Li and Wei Peng
Computation 2026, 14(2), 45; https://doi.org/10.3390/computation14020045 - 6 Feb 2026
Abstract
This paper proposes an Enhanced Projection-Iterative-Methods-based Optimizer (EPIMO) to overcome the limitations of its predecessor, the Projection-Iterative-Methods-based Optimizer (PIMO), including deterministic parameter decay, insufficient diversity maintenance, and static exploration–exploitation balance. The enhancements incorporate three core strategies: (1) an adaptive decay strategy that introduces stochastic perturbations into the step-size evolution; (2) a mirror opposition-based learning strategy to actively inject structured population diversity; and (3) an adaptive adjustment mechanism for the Lévy flight parameter β to enable phase-sensitive optimization behavior. The effectiveness of EPIMO is validated through a multi-stage experimental framework. Systematic evaluations on the CEC 2017 and CEC 2022 benchmark suites, alongside four classical engineering optimization problems (Himmelblau function, step-cone pulley design, hydrostatic thrust bearing design, and three-bar truss design), demonstrate its comprehensive superiority. The Wilcoxon rank-sum test confirms statistically significant performance improvements over its predecessor (PIMO) and a range of state-of-the-art and classical algorithms. EPIMO exhibits exceptional performance in convergence accuracy, stability, robustness, and constraint-handling capability, establishing it as a highly reliable and efficient metaheuristic optimizer. This research contributes a systematic, adaptive enhancement framework for projection-based metaheuristics, which can be generalized to improve other swarm intelligence systems when facing complex, constrained, and high-dimensional engineering optimization tasks. Full article
(This article belongs to the Section Computational Engineering)
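The mirror opposition-based learning strategy described in the abstract can be illustrated with a minimal sketch. This is a generic opposition-based learning move under our assumptions (reflection across the domain midpoint, greedy acceptance), not EPIMO's exact update rule:

```python
import random

def mirror_opposition_step(pop, fitness, lb, ub, objective):
    # For each candidate x, evaluate the mirror-opposite point
    # x' = lb + ub - x (reflection across the domain midpoint) and
    # keep whichever scores better under minimization.
    new_pop, new_fit = [], []
    for x, f in zip(pop, fitness):
        opp = [lb + ub - xi for xi in x]
        f_opp = objective(opp)
        if f_opp < f:
            new_pop.append(opp)
            new_fit.append(f_opp)
        else:
            new_pop.append(x)
            new_fit.append(f)
    return new_pop, new_fit

# Toy usage: sphere function on [-5, 5]^2.
random.seed(0)
sphere = lambda x: sum(xi * xi for xi in x)
pop = [[random.uniform(-5.0, 5.0) for _ in range(2)] for _ in range(6)]
fit = [sphere(x) for x in pop]
pop2, fit2 = mirror_opposition_step(pop, fit, -5.0, 5.0, sphere)
```

Because the step is greedy, it can only inject diversity without ever worsening a candidate's fitness.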
19 pages, 10669 KB  
Article
NutriRadar: A Mobile Application for the Digital Automation of Childhood Nutritional Classification Based on WHO Standards in the Peruvian Amazon
by Jaime Cesar Prieto-Luna, Luis Alberto Holgado-Apaza, David Ccolque-Quispe, Nestor Antonio Gallegos Ramos, Denys Alberto Jaramillo-Peralta, Roxana Madueño-Portilla, José Alfredo Herrera Quispe, Aldo Alarcon-Sucasaca, Frank Arpita-Salcedo and Danger David Castellon-Apaza
Sustainability 2026, 18(3), 1639; https://doi.org/10.3390/su18031639 - 5 Feb 2026
Abstract
Acute malnutrition affects 3.1% of children under five years of age in Amazonian communities in Peru, where limited access to health services constrains timely nutritional assessment. In this context, this study aimed to develop, implement, and evaluate NutriRadar, a mobile application for automated childhood nutritional classification based on the anthropometric standards of the World Health Organization (WHO). The application was developed using a waterfall software development methodology and implements the calculation of the Weight-for-Height Z-score (WHZ) from basic anthropometric variables (weight, height, age, and sex). NutriRadar was designed with offline functionality, deferred data synchronization, and compatibility with low-end mobile devices to support operational use in Amazonian settings. Field validation was conducted in two early childhood education institutions in Puerto Maldonado, Peru, and included anthropometric assessments of 75 children aged 3–4 years. The application demonstrated stable offline operation, response times suitable for clinical practice, and nutritional classification results equivalent to the WHO Anthro reference tool. NutriRadar represents a viable and reproducible digital automation solution for the operational application of a deterministic WHO anthropometric protocol, contributing to the reduction of operational errors and strengthening standardized nutritional assessment in resource-limited Amazonian contexts. Full article
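The WHZ computation behind such a tool follows the WHO LMS transformation. The sketch below shows the formula and the standard WHZ cutoffs; the L, M, S coefficients are illustrative placeholders, not actual WHO table entries or NutriRadar's code:

```python
import math

def lms_zscore(x, L, M, S):
    # WHO LMS method: Z = ((x/M)**L - 1) / (L*S) when L != 0,
    # Z = ln(x/M) / S when L == 0.
    if L != 0:
        return ((x / M) ** L - 1.0) / (L * S)
    return math.log(x / M) / S

def classify_whz(z):
    # Standard WHO cutoffs on the weight-for-height Z-score.
    if z < -3:
        return "severe acute malnutrition"
    if z < -2:
        return "moderate acute malnutrition"
    if z > 2:
        return "overweight"
    return "normal"

# A child exactly at the reference median scores Z = 0
# (L, M, S here are placeholders, not WHO table values).
z = lms_zscore(15.0, L=-0.35, M=15.0, S=0.082)
```

Because the protocol is deterministic, the same anthropometric inputs always yield the same classification, which is what makes offline automation feasible.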
9 pages, 1037 KB  
Proceeding Paper
Hybrid Dictionary–Retrieval-Augmented Generation–Large Language Model for Low-Resource Translation
by Reen-Cheng Wang, Cheng-Kai Yang, Tun-Chieh Yang and Yi-Xuan Tseng
Eng. Proc. 2025, 120(1), 52; https://doi.org/10.3390/engproc2025120052 - 5 Feb 2026
Abstract
The rapid decline of linguistic diversity, driven by globalization and technological standardization, presents significant challenges for the preservation of endangered languages, many of which lack sufficient parallel corpora for effective machine translation. Conventional neural translation models perform poorly in such contexts, often failing to capture semantic precision, grammatical complexity, and culturally specific nuances. This study addresses these limitations by proposing a hybrid translation framework that combines dictionary-based pre-translation, retrieval-augmented generation, and large language model post-editing. The system is designed to improve translation quality for extremely low-resource languages, with a particular focus on the endangered Paiwan language in Taiwan. In the proposed approach, a handcrafted bilingual dictionary is first used to establish deterministic lexical alignments to generate a symbolically precise intermediate representation. When gaps occur due to missing vocabulary or sparse training data, a retrieval module enriches contextual understanding by dynamically sourcing semantically relevant examples from a vector database. These enriched words are then processed by an instruction-tuned large language model that reorders syntactic structures, inflects verbs appropriately, and resolves lexical ambiguities to produce fluent and culturally coherent translations. The evaluation is conducted on a 250-sentence Paiwan–Mandarin dataset, and the results demonstrate substantial performance gains across key metrics, with cosine similarity increasing from 0.210–0.236 to 0.810–0.846, BLEU scores rising from 1.7–4.4 to 40.8–51.9, and ROUGE-L F1 scores improving from 0.135–0.177 to 0.548–0.632. 
Beyond technical performance, the framework contributes to broader efforts in language revitalization and cultural preservation by supporting the transmission of Indigenous knowledge through accurate, contextually grounded, and accessible translations. This research demonstrates that integrating symbolic linguistic resources with retrieval-augmented large language models offers a scalable and efficient solution for endangered language translation and provides a foundation for sustainable digital heritage preservation in multilingual societies. Full article
(This article belongs to the Proceedings of 8th International Conference on Knowledge Innovation and Invention)
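The first, deterministic stage of such a pipeline can be sketched in toy form: substitute each token from the bilingual dictionary and mark unresolved tokens as gaps for the retrieval stage. The lexicon entries below are hypothetical stand-ins, not real Paiwan vocabulary:

```python
def dictionary_pretranslate(sentence, lexicon):
    # Stage 1 of a hybrid pipeline, simplified to token level:
    # replace each source token with its dictionary gloss and
    # mark gaps for the retrieval stage to fill.
    out, gaps = [], []
    for tok in sentence.split():
        if tok in lexicon:
            out.append(lexicon[tok])
        else:
            out.append("<GAP:" + tok + ">")
            gaps.append(tok)
    return " ".join(out), gaps

# Hypothetical entries, for illustration only.
lexicon = {"kai": "word", "nia": "our"}
draft, gaps = dictionary_pretranslate("nia kai tjara", lexicon)
```

Real alignment is of course many-to-many and morphology-aware; the point is that the symbolic stage yields a deterministic draft whose gaps are explicit, so the LLM post-editor only has to repair what the dictionary could not cover.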
21 pages, 3795 KB  
Article
Assessing Seepage Behavior and Hydraulic Gradient Conditions in the Lam Phra Phloeng Earth Fill Dam, Thailand
by Pinit Tanachaichoksirikun, Uma Seeboonruang, Uba Sirikaew and Witthawin Horpeancharoen
Water 2026, 18(3), 406; https://doi.org/10.3390/w18030406 - 4 Feb 2026
Abstract
This study evaluates seepage behavior and hydraulic gradient conditions at the Lam Phra Phloeng Earthfill Dam in Nakhon Ratchasima, Thailand, by integrating long-term instrumentation records, updated geotechnical data, and deterministic numerical modeling. Piezometer and observation-well data collected between 2007 and 2023 were screened for reliability, revealing that several sensors exhibited abnormal or non-responsive behavior, limiting direct interpretation of phreatic surface variations in critical zones. Reliable datasets were incorporated into SEEP/W seepage simulations using representative dam cross-sections and soil parameters derived from recent drilling and laboratory testing. The results indicate that under normal reservoir operation, the phreatic surface remains within the core–drainage system and hydraulic gradients are well below estimated critical thresholds for the clayey foundation. Elevated reservoir levels lead to increased pore-water pressures and higher hydraulic gradients, particularly near the downstream zones and the deep central section of the dam. Rapid drawdown produces the most unfavorable hydraulic condition, generating steep transient pore-pressure gradients that approach critical values and reduce hydraulic safety margins. Although no immediate evidence of piping or uncontrolled seepage was identified, malfunctioning instrumentation creates monitoring blind spots that increase uncertainty in real-time seepage assessment. This study demonstrates that hydraulic gradient-based interpretation of deterministic seepage modeling provides a practical screening tool for dam safety evaluation under data-limited conditions. The findings emphasize the importance of enhanced monitoring redundancy and conservative operational control to support risk-informed management of aging earthfill dams under increasing hydrological variability. Full article
(This article belongs to the Section Soil and Water)
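The hydraulic gradient screening idea can be illustrated with the standard Terzaghi relation for upward seepage. This is a textbook check under assumed soil parameters, not the study's SEEP/W model:

```python
def critical_gradient(Gs, e):
    # Terzaghi's critical hydraulic gradient: i_c = (Gs - 1) / (1 + e),
    # where Gs is the specific gravity of solids and e the void ratio.
    return (Gs - 1.0) / (1.0 + e)

def gradient_safety_factor(i_c, i_exit):
    # Ratio of critical to computed exit gradient; values near or
    # below 1 signal piping risk.
    return i_c / i_exit

# Assumed values for illustration: Gs = 2.65, e = 0.65 gives i_c = 1.0.
ic = critical_gradient(Gs=2.65, e=0.65)
fs = gradient_safety_factor(ic, i_exit=0.25)
```

Comparing computed exit gradients against i_c in this way is exactly the kind of screening the abstract describes: gradients "well below estimated critical thresholds" correspond to a large safety factor.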
16 pages, 615 KB  
Article
Multimodal Large Language Model for Fracture Detection in Emergency Orthopedic Trauma: A Diagnostic Accuracy Study
by Sadık Emre Erginoğlu, Nuri Koray Ülgen, Nihat Yiğit, Ali Said Nazlıgül and Mehmet Orçun Akkurt
Diagnostics 2026, 16(3), 476; https://doi.org/10.3390/diagnostics16030476 - 3 Feb 2026
Abstract
Background: Rapid and accurate fracture detection is critical in emergency departments (EDs), where high patient volume and time pressure increase the risk of diagnostic error, particularly in radiographic interpretation. Multimodal large language models (LLMs) with image-recognition capability have recently emerged as general-purpose tools for clinical decision support, but their diagnostic performance within routine emergency department imaging workflows in orthopedic trauma remains unclear. Methods: In this retrospective diagnostic accuracy study, we included 1136 consecutive patients referred from the ED to orthopedics between 1 January and 1 June 2025 at a single tertiary center. Given the single-center, retrospective design, the findings should be interpreted as hypothesis-generating and may not be fully generalizable to other institutions. Emergency radiographs and clinical data were processed by a multimodal LLM (2025 version) via an official API using a standardized, deterministic prompt. The model’s outputs (“Fracture present”, “No fracture”, or “Uncertain”) were compared with final diagnoses established by blinded orthopedic specialists, which served as the reference standard. Diagnostic agreement was analyzed using Cohen’s kappa (κ), sensitivity, specificity, accuracy, and 95% confidence intervals (CIs). False-negative (FN) cases were defined as instances where the LLM reported “no acute fracture” but the specialist identified a fracture. The evaluated system is a general-purpose multimodal LLM and was not trained specifically on orthopedic radiographs. Results: Overall, the LLM showed good diagnostic agreement with orthopedic specialists, with concordant results in 808 of 1136 patients (71.1%; κ = 0.634; 95% CI: 68.4–73.7). The model achieved balanced performance with sensitivity of 76.9% and specificity of 66.8%. The highest agreement was observed in knee trauma (91.7%), followed by wrist (78.8%) and hand (69.6%). 
False-negative cases accounted for 184 patients (16.2% of the total cohort), representing 32.4% of all LLM-negative assessments. Most FN fractures were non-displaced (82.6%), and 17.4% of FN cases required surgical treatment. Ankle and foot regions showed the highest FN rates (30.4% and 17.4%, respectively), reflecting the anatomical and radiographic complexity of these areas. Positive predictive value (PPV) and negative predictive value (NPV) were 69.4% and 74.5%, respectively, with likelihood ratios indicating moderate shifts in post-test probability. Conclusions: In an emergency department-to-orthopedics consultation cohort reflecting routine clinical workflow, a multimodal LLM demonstrated moderate-to-good diagnostic agreement with orthopedic specialists, broadly within the range reported in prior fracture-detection AI studies; however, these comparisons are indirect because model architectures, training strategies, datasets, and endpoints differ across studies. Nevertheless, its limited ability to detect non-displaced fractures—especially in anatomically complex regions like the ankle and foot—carries direct patient safety implications and confirms that specialist review remains indispensable. At present, such models may be explored as hypothesis-generating triage or decision-support tools, with mandatory specialist confirmation, rather than as standalone diagnostic systems. Prospective, multi-center studies using high-resolution imaging and anatomically optimized algorithms are needed before routine clinical adoption in emergency care. Full article
(This article belongs to the Special Issue Applications of Artificial Intelligence in Orthopedics)
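The agreement statistics reported above all follow from a 2x2 confusion table. A compact sketch (the counts below are made up for illustration, not the study's 1136-patient data):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    # Sensitivity, specificity, accuracy, and Cohen's kappa from a
    # 2x2 confusion table (reference standard vs. model output).
    n = tp + fp + fn + tn
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    acc = (tp + tn) / n
    # Chance agreement from the marginals, then
    # kappa = (p_observed - p_expected) / (1 - p_expected).
    pe = ((tp + fn) * (tp + fp) + (fp + tn) * (fn + tn)) / (n * n)
    kappa = (acc - pe) / (1.0 - pe)
    return sens, spec, acc, kappa

# Hypothetical counts: 100 patients, balanced errors.
sens, spec, acc, kappa = diagnostic_metrics(tp=40, fp=10, fn=10, tn=40)
```

Note how kappa discounts agreement expected by chance: here observed agreement is 0.8, chance agreement is 0.5, so kappa lands at 0.6, which is why kappa is typically lower than raw percent agreement.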
31 pages, 925 KB  
Article
RAISE: Robust and Adversarially Informed Safe Explanations for Reinforcement Learning
by SeongIn Kim and Takeshi Shibuya
Electronics 2026, 15(3), 666; https://doi.org/10.3390/electronics15030666 - 3 Feb 2026
Abstract
Deep Reinforcement Learning (DRL) policies often exhibit fragility in unseen environments, limiting their deployment in safety-critical applications. While Robust Markov Decision Processes (R-MDPs) enhance control performance by optimizing against worst-case disturbances, the resulting conservative behaviors are difficult to interpret using standard Explainable RL (XRL) methods, which typically ignore adversarial disturbances. To bridge this gap, this paper proposes RAISE (Robust and Adversarially Informed Safe Explanations), a novel framework designed for the Noisy Action Robust MDP (NR-MDP) setting. We first introduce the Decomposed Reward NR-MDP (DRNR-MDP) and the DRNR-Deep Deterministic Policy Gradient (DRNR-DDPG) algorithm to learn robust policies and a vector-valued value function. RAISE utilizes this vectorized value function to generate contrastive explanations (“Why action a instead of b?”), explicitly highlighting the reward components such as safety or energy efficiency prioritized under worst-case attacks. Experiments on a continuous Cliffworld benchmark and the MuJoCo Hopper task demonstrate that the proposed method preserves robust performance under dynamics variations and produces meaningful, component-level explanations that align with intuitive safety and performance trade-offs. Ablation results further show that ignoring worst-case disturbances can substantially alter or invalidate explanations, underscoring the importance of adversarial awareness for reliable interpretability in robust RL. Full article
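The contrastive "Why action a instead of b?" pattern rests on a vector-valued (decomposed) Q-function: comparing per-component Q-values for two actions reveals which reward components favor each. A toy sketch, with hypothetical component names and values rather than anything from the paper:

```python
def contrastive_explanation(q_vec, a, b):
    # q_vec[action] maps reward components to Q-value contributions.
    # Report per-component differences and the components favoring a.
    deltas = {k: q_vec[a][k] - q_vec[b][k] for k in q_vec[a]}
    favors_a = sorted(k for k, d in deltas.items() if d > 0)
    return deltas, favors_a

# Hypothetical decomposed Q-values for two candidate actions.
q_vec = {
    "brake":      {"safety": 0.9, "energy": -0.2, "progress": 0.1},
    "accelerate": {"safety": 0.2, "energy": -0.1, "progress": 0.6},
}
deltas, favors = contrastive_explanation(q_vec, "brake", "accelerate")
```

In this toy case the explanation would read: "brake is preferred over accelerate because of safety, at a cost in energy and progress". The paper's point is that these component trade-offs shift when worst-case disturbances are modeled, so adversarial awareness changes the explanation itself.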
18 pages, 4409 KB  
Article
CAE-RBNN: An Uncertainty-Aware Model of Island NDVI Prediction
by Zheng Xiang, Cunjin Xue, Ziyue Ma, Qingrui Liu and Zhi Li
ISPRS Int. J. Geo-Inf. 2026, 15(2), 65; https://doi.org/10.3390/ijgi15020065 - 3 Feb 2026
Abstract
The unique geographical isolation and climate sensitivity of island ecosystems make them valuable for ecological research. The Normalized Difference Vegetation Index (NDVI) is an important indicator when monitoring and evaluating these systems, and its prediction has become a key research focus. However, island NDVI prediction remains uncertain due to a limited understanding of vegetation growth and insufficient high-quality data. Deterministic models fail to capture or quantify such uncertainty, often leading to overfitting. To address this issue, this study proposes an uncertainty prediction model for the island NDVI within a coding–prediction–decoding framework, referred to as a Convolutional Autoencoder–Regularized Bayesian Neural Network (CAE-RBNN). The model integrates a convolutional autoencoder with feature regularization to extract latent NDVI features, aiming to reconcile spatial scale disparities with environmental data, while a Bayesian Neural Network (BNN) quantifies uncertainty arising from limited samples and an incomplete understanding of the process. Finally, Monte Carlo sampling and SHAP analysis evaluate model performance, quantify predictive uncertainty, and enhance interpretability. Experiments on six islands in the Xisha archipelago demonstrate that CAE-RBNN outperforms the Convolutional Neural Network–Recurrent Neural Network (CNN-RNN), the Convolutional Recurrent Neural Network (ConvRNN), Convolutional Long Short-Term Memory (ConvLSTM), and Random Forest (RF). Among them, CAE-RBNN reduces the MAE and MSE of the single-time-step prediction task by 8.40% and 10.69%, respectively, compared with the suboptimal model and decreases them by 16.31% and 22.57%, respectively, in the continuous prediction task. More importantly, it effectively quantifies the uncertainty of different driving forces, thereby improving the reliability of island NDVI predictions influenced by the environment. Full article
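The Monte Carlo uncertainty quantification step the abstract mentions reduces to a simple recipe: run the stochastic forward pass many times and summarize. A minimal sketch with a toy stochastic "model" standing in for a BNN (the noise model and values are our assumptions):

```python
import random
import statistics

def mc_predict(stochastic_forward, x, n_samples=200):
    # Run the stochastic forward pass repeatedly; the sample mean is
    # the prediction and the sample standard deviation its uncertainty.
    draws = [stochastic_forward(x) for _ in range(n_samples)]
    return statistics.mean(draws), statistics.stdev(draws)

# Toy stand-in for a BNN: a linear response plus Gaussian weight noise.
random.seed(42)
mean, sd = mc_predict(lambda x: 0.6 * x + random.gauss(0.0, 0.05), x=1.0)
```

A deterministic model would return a single number here; the Bayesian treatment returns a distribution, so low-confidence NDVI predictions (large sd) can be flagged rather than silently trusted.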
23 pages, 871 KB  
Article
TLOA: A Power-Adaptive Algorithm Based on Air–Ground Cooperative Jamming
by Wenpeng Wu, Zhenhua Wei, Haiyang You, Zhaoguang Zhang, Chenxi Li, Jianwei Zhan and Shan Zhao
Future Internet 2026, 18(2), 81; https://doi.org/10.3390/fi18020081 - 2 Feb 2026
Abstract
Air–ground joint jamming enables three-dimensional, distributed jamming configurations, making it effective against air–ground communication networks with complex, dynamically adjustable links. Once the jamming layout is fixed, dynamic jamming power scheduling becomes essential to conserve energy and prolong jamming duration. However, existing methods suffer from poor applicability in such scenarios, primarily due to their sparse deployment and adversarial nature. To address this limitation, this paper develops a set of mathematical models and a dedicated algorithm for air–ground communication countermeasures. Specifically, we (1) randomly select communication nodes to determine the jammer operation sequence; (2) schedule the number of active jammers by sorting transmission path losses in ascending order; and (3) estimate jamming effects using electromagnetic wave propagation characteristics to adjust jamming power dynamically. This approach formally converts the original dynamic, stochastic jamming resource scheduling problem into a static, deterministic one via cognitive certainty of dynamic parameters and deterministic modeling of stochastic factors—enabling rapid adaptation to unknown, dynamic communication power strategies and resolving the coordination challenge in air–ground joint jamming. Experimental results demonstrate that the proposed Transmission Loss Ordering Algorithm (TLOA) extends the system operating duration by up to 41.6% compared to benchmark methods (e.g., genetic algorithm). Full article
(This article belongs to the Special Issue Adversarial Attacks and Cyber Security)
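The "ascending transmission-loss" ordering in step (2) can be sketched with the standard free-space path loss formula. This is a generic illustration of the ordering idea, not the paper's full TLOA, and the frequencies and distances are assumed:

```python
import math

def fspl_db(d_km, f_mhz):
    # Free-space path loss:
    # FSPL(dB) = 20*log10(d_km) + 20*log10(f_mhz) + 32.44.
    return 20.0 * math.log10(d_km) + 20.0 * math.log10(f_mhz) + 32.44

def schedule_jammers(distances_km, f_mhz, k):
    # Activate the k jammers with the lowest path loss to the target
    # link, i.e. ascending transmission-loss order.
    order = sorted(range(len(distances_km)),
                   key=lambda i: fspl_db(distances_km[i], f_mhz))
    return order[:k]

# Assumed geometry: four candidate jammers at these distances, 400 MHz.
active = schedule_jammers([1.0, 0.1, 10.0, 0.5], f_mhz=400.0, k=2)
```

Sorting by path loss means the best-coupled jammers are spent first, which is what lets the scheduler conserve total power while maintaining the required jamming effect.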
19 pages, 26642 KB  
Article
The Road to Decarbonization—The Case of the Polish Passenger Car Market
by Sebastian Wójcik and Małgorzata Rataj
Energies 2026, 19(3), 785; https://doi.org/10.3390/en19030785 - 2 Feb 2026
Abstract
Road transport is a significant contributor to greenhouse gas emissions within the European Union, with Poland showing one of the most pronounced increases since 1990. Motivated by gaps in national inventories (absence of vehicle-level mileage, limited fuel/age/spatial breakdowns, and scarce real-world hybrid performance data), we develop a novel vehicle-level integration method that links administrative vehicle records with scraped online car-sale listings via deterministic and probabilistic record linkage, imputes missing mileage, and applies age- and fuel type-adjusted emission multipliers to estimate per-vehicle emissions. The approach produces high-resolution breakdowns by fuel type, vehicle age and spatial units while explicitly accounting for hybrid vehicle behavior. This study introduces a novel methodology and analytical product that can be used to estimate emissions from the entire Polish passenger car fleet and monitor decarbonization progress. Applying this method to the Polish passenger-car fleet yields total 2024 passenger-car CO2 emissions of ≈41.4 million tonnes (our estimate aligns closely with independent national figures), with diesel, gasoline and LPG/CNG accounting for roughly 38%, 47% and 12% of CO2, respectively. We find that hybrids and EVs currently cut fleet emissions by ~2.6% (CO2), but their higher utilization magnifies their effect, while vehicle ageing increases total emissions by ~1.2% per year. These results demonstrate that integrating microdata substantially improves the monitoring of decarbonization progress. Full article
(This article belongs to the Special Issue Energy Economics and Management, Energy Efficiency, Renewable Energy)
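The deterministic-then-probabilistic record linkage the abstract describes can be sketched in two passes: an exact match on a strong key, then a similarity score over weaker fields. Field names, the VIN key, and the 2/3 threshold are our assumptions for illustration, not the paper's linkage model:

```python
def link_records(admin, listings):
    # Pass 1 (deterministic): exact match on a shared strong key.
    by_vin = {r["vin"]: r for r in listings if r.get("vin")}
    matched = {}
    for a in admin:
        if a.get("vin") in by_vin:
            matched[a["id"]] = by_vin[a["vin"]]
    # Pass 2 (probabilistic, simplified): score agreement on weaker
    # fields and accept the best candidate above a threshold.
    for a in admin:
        if a["id"] in matched:
            continue
        best, best_score = None, 0.0
        for l in listings:
            score = sum(a.get(f) == l.get(f)
                        for f in ("make", "model", "year")) / 3.0
            if score > best_score:
                best, best_score = l, score
        if best_score >= 2.0 / 3.0:
            matched[a["id"]] = best
    return matched

# Toy data: record 1 links deterministically, record 2 probabilistically.
admin = [{"id": 1, "vin": "V1", "make": "A", "model": "X", "year": 2015},
         {"id": 2, "vin": None, "make": "B", "model": "Y", "year": 2018}]
listings = [{"vin": "V1", "make": "A", "model": "X", "year": 2015},
            {"vin": None, "make": "B", "model": "Y", "year": 2019}]
links = link_records(admin, listings)
```

Real probabilistic linkage (e.g. Fellegi-Sunter) weights fields by their discriminating power instead of scoring them equally; the two-pass structure is the part this sketch shares with the method described.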
34 pages, 12750 KB  
Article
Nexus: A Modular Open-Source Multichannel Data Logger—Architecture and Proof of Concept
by Marcio Luis Munhoz Amorim, Oswaldo Hideo Ando Junior, Mario Gazziro and João Paulo Pereira do Carmo
Automation 2026, 7(1), 25; https://doi.org/10.3390/automation7010025 - 2 Feb 2026
Abstract
This paper presents Nexus, a proof-of-concept low-cost, modular, and reprogrammable multichannel data logger aimed at validating the architectural feasibility of an open and scalable acquisition platform for scientific instrumentation. The system was conceived to address common limitations of commercial data loggers, such as high cost, restricted configurability, and limited autonomy, by relying exclusively on widely available components and open hardware/software resources, thereby facilitating reproducibility and adoption in resource-constrained academic and industrial environments. The proposed architecture supports up to six interchangeable acquisition modules, enabling the integration of up to 20 analog channels with heterogeneous resolutions (24-bit, 12-bit, and 10-bit ADCs), as well as digital acquisition through multiple communication interfaces, including I2C (two independent buses), SPI (two buses), and UART (three interfaces). Quantitative validation was performed using representative acquisition configurations, including a 24-bit ADS1256 stage operating at sampling rates of up to 30 kSPS, 12-bit microcontroller-based stages operating at approximately 1 kSPS, and 10-bit operating at 100 SPS, consistent with stable real-time acquisition and visualization under proof-of-concept constraints. SPI communication was configured with an effective clock frequency of 2 MHz, ensuring deterministic data transfer across the tested acquisition modules. A hybrid data management strategy is implemented, combining high-capacity local storage via USB 3.0 solid-state drives, optional cloud synchronization, and a 7-inch touchscreen human–machine interface based on Raspberry Pi OS for system control and visualization. Power continuity is addressed through an integrated smart uninterruptible power supply, which provides telemetry, automatic source switching, and limited backup operation during power interruptions. 
As a proof of concept, the system was functionally validated through architectural and interface-level tests, demonstrating stable communication across all supported protocols and reliable acquisition of synthetic and biosignal-like waveforms. The results confirm the feasibility of the proposed modular architecture and its ability to integrate heterogeneous acquisition, storage, and interface subsystems within a unified open-source platform. While not intended as a finalized commercial product, Nexus establishes a validated foundation for future developments in modular data logging, embedded intelligence, and application-specific instrumentation. Full article
(This article belongs to the Section Automation in Energy Systems)
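The claim that a 2 MHz SPI clock supports deterministic transfer at 30 kSPS can be checked with a back-of-envelope bus budget. This is our arithmetic under an assumed one-byte-per-sample overhead, not a figure from the paper:

```python
def spi_headroom(sps, bits_per_sample, spi_hz, overhead_bits=8):
    # Back-of-envelope bus budget: bits per second the ADC stream needs
    # versus the SPI clock, assuming one overhead byte per sample.
    needed = sps * (bits_per_sample + overhead_bits)
    return needed, needed <= spi_hz

# 24-bit ADS1256 stage at 30 kSPS over a 2 MHz SPI clock.
needed, ok = spi_headroom(sps=30_000, bits_per_sample=24, spi_hz=2_000_000)
```

At 30 kSPS x 32 bits the stream needs 0.96 Mbit/s, roughly half the 2 MHz clock, which is consistent with the stable real-time acquisition reported.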
24 pages, 6948 KB  
Article
Industrial Process Control Based on Reinforcement Learning: Taking Tin Smelting Parameter Optimization as an Example
by Yingli Liu, Zheng Xiong, Haibin Yuan, Hang Yan and Ling Yang
Appl. Sci. 2026, 16(3), 1429; https://doi.org/10.3390/app16031429 - 30 Jan 2026
Abstract
To address the reliance of parameter setting on human experience and the limitations of traditional model-driven control methods in handling complex nonlinear dynamics in the tin smelting industrial process, this paper proposes a data-driven control approach based on improved deep reinforcement learning (RL). Aiming to reduce the tin entrainment rate in smelting slag and CO emissions in exhaust gas, we construct a data-driven environment model with an 8-dimensional state space (including furnace temperature, pressure, gas composition, etc.) and an 8-dimensional action space (including lance parameters such as material flow, oxygen content, backpressure, etc.). We innovatively design a Dual-Action Discriminative Deep Deterministic Policy Gradient (DADDPG) algorithm. This method employs an online Actor network to simultaneously generate deterministic and exploratory random actions, with the Critic network selecting high-value actions for execution, thereby enhancing policy exploration efficiency. Combined with a composite reward function (integrating real-time Sn/CO content, their variations, and continuous penalty mechanisms for safety constraints), the approach achieves multi-objective dynamic optimization. Experiments based on real tin smelting production line data validate the environment model, with results demonstrating that the tin content in slag is reduced to between 3.5% and 4%, and CO content in exhaust gas is decreased to between 2000 and 2700 ppm. Full article
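The dual-action idea, as we read it from the abstract, can be sketched in a toy 1-D form: the actor emits a deterministic action plus a noise-perturbed exploratory one, and the critic executes whichever it values more. Real DADDPG uses neural networks for both; the functions below are illustrative stand-ins:

```python
import random

def dual_action_select(actor, critic, state, sigma=0.1):
    # The actor proposes a deterministic action and a noise-perturbed
    # exploratory variant; the critic picks the higher-value one.
    a_det = actor(state)
    a_exp = a_det + random.gauss(0.0, sigma)
    if critic(state, a_det) >= critic(state, a_exp):
        return a_det
    return a_exp

# Toy 1-D case: the critic prefers actions near 0.5.
random.seed(7)
critic = lambda s, a: -(a - 0.5) ** 2
chosen = dual_action_select(lambda s: 0.3, critic, state=None)
```

The selected action is never worse (under the critic's estimate) than the deterministic proposal, so exploration comes with a value-based filter rather than raw noise injection.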
20 pages, 1858 KB  
Article
Comparative Analysis of AutoML Platforms for Forecasting Raw Material Requirements
by Damian Grajewski, Anna Dudkowiak, Ewa Dostatni and Jakub Cichocki
Appl. Sci. 2026, 16(3), 1389; https://doi.org/10.3390/app16031389 - 29 Jan 2026
Abstract
Automated machine learning (AutoML) platforms are increasingly adopted in manufacturing to support data-driven decision-making. However, systematic and reproducible evaluations of their practical applicability remain limited. This study presents a controlled benchmarking framework for comparing three selected cloud-based AutoML platforms: Google Vertex AI, Microsoft Azure ML and IBM Watsonx, in the context of raw material demand forecasting for mold manufacturing. A synthetic dataset was generated to reflect essential operational characteristics of industrial production, including batch-based manufacturing, inventory-triggered replenishment and delivery lead times. While the underlying bill of materials logic is deterministic, the interaction of production variability and inventory dynamics introduces nonlinear and time-dependent behavior. All platforms were evaluated using identical data splits, chronological cross-validation and consistent performance metrics to ensure fair comparison and prevent information leakage. Results indicate moderate predictive performance, which is attributed to embedded operational complexity. Performance differences between platforms are marginal, highlighting that practical considerations such as feature handling, deployment readiness and computational effort may be more influential than raw accuracy. Although synthetic data limit external validity, the proposed framework provides a reproducible and transparent basis for applied evaluation of AutoML platforms. Future work will incorporate real industrial data and robustness testing under non-stationary and disrupted production conditions. Full article
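The chronological cross-validation used to prevent information leakage can be sketched as an expanding-window split (one common variant; the paper does not specify its exact scheme):

```python
def chrono_splits(n_obs, n_folds):
    # Expanding-window chronological CV: fold k trains on everything
    # before its cutoff and tests on the next block, so no future
    # observation ever leaks into training.
    block = n_obs // (n_folds + 1)
    splits = []
    for k in range(1, n_folds + 1):
        train = list(range(0, k * block))
        test = list(range(k * block, min((k + 1) * block, n_obs)))
        splits.append((train, test))
    return splits

splits = chrono_splits(n_obs=100, n_folds=4)
```

Unlike shuffled k-fold CV, every test index here is strictly later than every training index in its fold, which is the property that makes the platform comparison fair for time-dependent demand data.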

21 pages, 6750 KB  
Article
Machine Learning-Based Energy Consumption and Carbon Footprint Forecasting in Urban Rail Transit Systems
by Sertaç Savaş and Kamber Külahcı
Appl. Sci. 2026, 16(3), 1369; https://doi.org/10.3390/app16031369 - 29 Jan 2026
Abstract
In the fight against global climate change, the transportation sector is of critical importance because it is one of the major causes of total greenhouse gas emissions worldwide. Although urban rail transit systems offer a lower carbon footprint compared to road transportation, accurately forecasting the energy consumption of these systems is vital for sustainable urban planning, energy supply management, and the development of carbon balancing strategies. In this study, forecasting models are designed using five different machine learning (ML) algorithms, and their performances in predicting the energy consumption and carbon footprint of urban rail transit systems are comprehensively compared. For five distribution-center substations, 10 years of monthly energy consumption data and the total carbon footprint data of these substations are used. Support Vector Regression (SVR), Extreme Gradient Boosting (XGBoost), Long Short-Term Memory (LSTM), Adaptive Neuro-Fuzzy Inference System (ANFIS), and Nonlinear Autoregressive Neural Network (NAR-NN) models are developed to forecast these data. Model hyperparameters are optimized using a 20-iteration Random Search algorithm, and the stochastic models are run 10 times with the optimized parameters. Results reveal that the SVR model consistently exhibits the highest forecasting performance across all datasets. For carbon footprint forecasting, the SVR model yields the best results, with an R² of 0.942 and a MAPE of 3.51%. The ensemble method XGBoost demonstrates the second-best performance (R² = 0.648). Accordingly, while deterministic traditional ML models exhibit superior performance, neural network-based stochastic models such as LSTM, ANFIS, and NAR-NN show insufficient generalization capability under limited data conditions. These findings indicate that, in small- and medium-scale time-series forecasting problems, traditional machine learning methods are more effective than neural network-based methods that require large datasets. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
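The R² and MAPE metrics the abstract reports can be computed as follows. This is a minimal sketch of the standard definitions; the sample values are invented for illustration and are not the paper's data.

```python
# Standard definitions of R^2 and MAPE; the sample data below are
# invented for illustration, NOT the study's measurements.
def r2_score(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_y = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent."""
    return 100.0 * sum(abs((t - p) / t)
                       for t, p in zip(y_true, y_pred)) / len(y_true)

# Hypothetical monthly consumption values and forecasts.
y_true = [100.0, 110.0, 120.0, 130.0]
y_pred = [98.0, 112.0, 119.0, 133.0]
r2 = r2_score(y_true, y_pred)
err = mape(y_true, y_pred)
```

Because MAPE divides by the true values, it should only be used when the target (here, energy consumption) is strictly positive, which holds in this setting.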

17 pages, 17966 KB  
Article
Sealing Performance of Phenyl-Silicone Rubber Based on Constitutive Model Under Thermo-Oxidative Aging
by Haiqiang Shi, Jian Wu, Zhihao Chen, Pengtao Cao, Tianxiao Zhou, Benlong Su and Youshan Wang
Polymers 2026, 18(3), 350; https://doi.org/10.3390/polym18030350 - 28 Jan 2026
Abstract
Phenyl-silicone rubber is the elastomer of choice for cryogenic and high-temperature static seals, yet quantitative links between thermo-oxidative aging and sealing reliability are still lacking. Here, sub-ambient (−70 °C to 25 °C) and room-temperature mechanical tests, compression set aging, SEM, FT-IR, and finite-element simulations are integrated to trace how aging translates into contact-pressure decay of an Omega-profile gasket. Compression set rises monotonically with time and temperature; an Arrhenius model derived from 80 to 140 °C data predicts a storage life at 25 °C of 34 d (10% set) and 286 d (45% set). SEM reveals a progressive shift from ductile dimple fracture to brittle, honeycomb porosity, while FT-IR confirms limited surface oxidation without bulk chain scission. Finite element analyses show that contact pressure always peaks at the two lateral necks; short-term aging increases the shear modulus C10 from 1.87 MPa to 2.27 MPa, raising CPRESS by 8–21%, yet this benefit is ultimately offset by displacement loss from compression set (8.0 mm to 6.1 mm), yielding a net pressure reduction of 0.006 MPa. Critically, even under the most severe coupled condition (56 days aging with compression set), the predicted CPRESS remains above the 0.1 MPa leak-tightness criterion across the entire cryogenic service envelope. This framework provides deterministic boundaries for temperature, aging duration, and allowable preload relaxation, enabling risk-informed maintenance and replacement scheduling for safety-critical phenyl-silicone seals. Full article
(This article belongs to the Special Issue Constitutive Modeling of Polymer Matrix Composites)
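The Arrhenius extrapolation idea behind the storage-life prediction can be sketched as a linear fit of log time-to-threshold against inverse absolute temperature, then evaluated at the service temperature. The accelerated-aging data points below are invented for illustration; they are not the paper's measurements and will not reproduce its 34 d / 286 d figures.

```python
# Hedged sketch of Arrhenius life extrapolation: fit ln(time-to-threshold)
# vs 1/T from accelerated-aging data, extrapolate to service temperature.
# The data points are hypothetical, NOT the paper's measurements.
import math

def fit_arrhenius(temps_c, times_days):
    """Least-squares fit of ln(t) = intercept + slope / T (T in kelvin)."""
    xs = [1.0 / (t + 273.15) for t in temps_c]
    ys = [math.log(d) for d in times_days]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return slope, intercept

def predict_days(slope, intercept, temp_c):
    """Extrapolated time-to-threshold at a given temperature."""
    return math.exp(intercept + slope / (temp_c + 273.15))

# Hypothetical times to reach a fixed compression-set threshold
# at the accelerated-aging temperatures 80/100/120/140 degC.
slope, intercept = fit_arrhenius([80, 100, 120, 140], [20.0, 8.0, 3.5, 1.6])
life_25c = predict_days(slope, intercept, 25.0)
```

A positive slope (longer life at lower temperature) is the physical sanity check on the fit; extrapolating far below the test range, as here, is the usual caveat of accelerated aging studies.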

19 pages, 4190 KB  
Article
A Novel DOA Estimation Method for a Far-Field Narrow-Band Point Source via the Conventional Beamformer
by Xuejie Dai and Shuai Yao
J. Mar. Sci. Eng. 2026, 14(3), 271; https://doi.org/10.3390/jmse14030271 - 28 Jan 2026
Abstract
Far-field narrow-band Direction-of-Arrival (DOA) estimation is a practical challenge in passive and active sonar applications. While the Conventional Beamformer (CBF) is a robust Maximum Likelihood Estimator (MLE), its precision is inherently constrained by the discrete scanning interval. To overcome this limitation, this paper proposes a novel Model Solution Algorithm (MSA) estimator that leverages the exact theoretical beam pattern of the array to resolve the DOA. Unlike the classical Parabolic Interpolation Algorithm (PIA) estimator, which exhibits significant estimation bias due to polynomial approximation errors, the proposed MSA estimator numerically solves the deterministic beam pattern equation to eliminate such model mismatch. Quantitative simulation results demonstrate that the MSA estimator approaches the Cramér-Rao Lower Bound (CRLB) with a stable RMSE of approximately 0.12° under sensor position errors and a frequency-invariant precision of ~0.23°, significantly outperforming the PIA estimator, which suffers from systematic errors reaching 1.1° and 0.75°, respectively. Furthermore, the proposed method exhibits superior noise resilience by extending the operational range to −24 dB, surpassing the −15 dB breakdown threshold of Multiple Signal Classification (MUSIC). Additionally, complexity analysis and geometric evaluations confirm that the method retains a low computational burden suitable for real-time deployment and can be effectively generalized to arbitrary array geometries without accuracy loss. Full article
(This article belongs to the Section Ocean Engineering)
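The PIA baseline the abstract compares against can be sketched as follows: given the coarse CBF scan's peak sample and its two neighbours, fit a parabola and return the sub-grid peak angle. This illustrates the polynomial approximation whose bias the proposed MSA avoids by solving the exact beam pattern equation instead; the grid and power values below are illustrative assumptions, not the paper's simulation setup.

```python
# Sketch of classical parabolic-interpolation (PIA-style) peak refinement
# on a uniformly spaced CBF scan. The three-point parabola fit is the
# textbook method; values used below are illustrative only.
def parabolic_peak(theta_grid, power, k):
    """Refine the peak at index k of a uniformly spaced scan.

    Fits a parabola through (k-1, k, k+1) and returns the angle of its
    vertex; exact when the local response is truly parabolic, biased
    otherwise (the model mismatch the MSA estimator is said to remove).
    """
    d = theta_grid[1] - theta_grid[0]          # uniform scan interval
    y0, y1, y2 = power[k - 1], power[k], power[k + 1]
    offset = 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)
    return theta_grid[k] + offset * d

# Usage: samples of an exact parabola peaking at 1.3 deg on a 1-deg grid.
theta_hat = parabolic_peak([0.0, 1.0, 2.0], [-1.69, -0.09, -0.49], 1)
```

For a true parabola the refinement is exact (here it recovers 1.3°); for a real beam pattern the residual approximation error is the systematic bias the abstract quantifies at up to 1.1°.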
