
Search Results (754)

Search Parameters:
Keywords = polynomial-time algorithms

19 pages, 4190 KB  
Article
A Novel DOA Estimation Method for a Far-Field Narrow-Band Point Source via the Conventional Beamformer
by Xuejie Dai and Shuai Yao
J. Mar. Sci. Eng. 2026, 14(3), 271; https://doi.org/10.3390/jmse14030271 - 28 Jan 2026
Abstract
Far-field narrow-band Direction-of-Arrival (DOA) estimation is a practical challenge in passive and active sonar applications. While the Conventional Beamformer (CBF) is a robust Maximum Likelihood Estimator (MLE), its precision is inherently constrained by the discrete scanning interval. To overcome this limitation, this paper proposes a novel Model Solution Algorithm (MSA) estimator that leverages the exact theoretical beam pattern of the array to resolve the DOA. Unlike the classical Parabolic Interpolation Algorithm (PIA) estimator, which exhibits significant estimation bias due to polynomial approximation errors, the proposed MSA estimator numerically solves the deterministic beam pattern equation to eliminate such model mismatch. Quantitative simulation results demonstrate that the MSA estimator approaches the Cramér-Rao Lower Bound (CRLB) with a stable RMSE of approximately 0.12° under sensor position errors and a frequency-invariant precision of ~0.23°, significantly outperforming the PIA estimator, which suffers from systematic errors reaching 1.1° and 0.75°, respectively. Furthermore, the proposed method exhibits superior noise resilience by extending the operational range to −24 dB, surpassing the −15 dB breakdown threshold of Multiple Signal Classification (MUSIC). Additionally, complexity analysis and geometric evaluations confirm that the method retains a low computational burden suitable for real-time deployment and can be effectively generalized to arbitrary array geometries without accuracy loss. Full article
(This article belongs to the Section Ocean Engineering)
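The grid-refinement step the abstract contrasts against (the classical PIA estimator) is compact enough to sketch. Below is a toy NumPy model of a CBF scan followed by parabolic peak interpolation, assuming a uniform half-wavelength line array and a single noiseless snapshot; all function names are illustrative, not the authors' code.

```python
import numpy as np

def cbf_doa_estimate(snapshots, positions, wavelength, grid_deg):
    """Conventional beamformer: scan steering angles over a discrete grid
    and return the beam power at each angle (toy far-field model)."""
    k = 2 * np.pi / wavelength
    powers = []
    for theta in np.deg2rad(grid_deg):
        steering = np.exp(1j * k * positions * np.sin(theta))
        powers.append(np.mean(np.abs(steering.conj() @ snapshots) ** 2))
    return np.array(powers)

def parabolic_refine(grid_deg, powers):
    """Classical PIA step: fit a parabola through the grid peak and its
    two neighbours to interpolate a sub-grid DOA estimate."""
    i = int(np.argmax(powers))
    if i == 0 or i == len(powers) - 1:
        return grid_deg[i]
    y0, y1, y2 = powers[i - 1], powers[i], powers[i + 1]
    delta = 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)  # vertex offset, grid units
    return grid_deg[i] + delta * (grid_deg[1] - grid_deg[0])
```

With a 1° scan grid, the refined estimate lands far inside the grid cell, which is exactly the gap between coarse scanning and sub-grid estimators that the paper's MSA targets.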

25 pages, 7941 KB  
Article
A Multi-Stage Algorithm of Fringe Map Reconstruction for Fiber-End Surface Analysis and Non-Phase-Shifting Interferometry
by Ilya Galaktionov and Vladimir Toporovsky
Appl. Syst. Innov. 2026, 9(2), 31; https://doi.org/10.3390/asi9020031 - 27 Jan 2026
Abstract
Interferometers are essential tools for quality control of optical surfaces. While interferometric techniques like phase-shifting interferometry offer high accuracy, they involve complex setups, require stringent calibration, and are sensitive to phase shift errors, noise, and surface inhomogeneities. In this research, we introduce an alternative algorithm that integrates Moving Average and Fast Fourier Transform (MAFFT) techniques with Polynomial Fitting. The proposed method achieves results comparable to a Zygo interferometer under standard conditions, with an error margin under 2%. It also maintains measurement stability in noisy environments and in the presence of significant local inhomogeneities, operating in real-time to enable wavefront measurements at 30 Hz. We have validated the algorithm through simulations assessing noise-induced errors and through experimental comparisons with a Zygo interferometer. Full article
(This article belongs to the Section Information Systems)
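The moving-average + FFT idea can be sketched in one dimension: smooth the fringe profile, keep only the strongest spectral bins, then fit a polynomial for the surface profile. This is an assumed pipeline for illustration; the authors' MAFFT algorithm operates on full 2-D fringe maps and differs in detail.

```python
import numpy as np

def denoise_fringe(fringe, window=5, keep=8):
    """Moving-average smoothing followed by FFT truncation:
    keep only the `keep` strongest spectral bins."""
    kernel = np.ones(window) / window
    smoothed = np.convolve(fringe, kernel, mode="same")  # moving average
    spec = np.fft.rfft(smoothed)
    weak = np.argsort(np.abs(spec))[:-keep]  # all but the top bins
    spec[weak] = 0.0
    return np.fft.irfft(spec, n=len(fringe))

def fit_profile(x, profile, degree=4):
    """Polynomial-fitting stage: smooth parametric surface profile."""
    return np.polyval(np.polyfit(x, profile, degree), x)
```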
23 pages, 1195 KB  
Article
Deeply Pipelined NTT Accelerator with Ping-Pong Memory and LUT-Only Barrett Reduction for Post-Quantum Cryptography
by Omar S. Sonbul, Muhammad Rashid, Muhammad I. Masud, Mohammed Aman and Amar Y. Jaffar
Electronics 2026, 15(3), 513; https://doi.org/10.3390/electronics15030513 - 25 Jan 2026
Abstract
Lattice-based post-quantum cryptography relies on fast polynomial multiplication. The Number-Theoretic Transform (NTT) is the key operation that enables this acceleration. To provide high throughput and low latency while keeping the area overhead small, hardware implementations of the NTT are essential. This is particularly true for resource-constrained devices. However, existing NTT accelerators either achieve high throughput at the cost of large area overhead or provide compact designs with limited pipelining and low operating frequency. Therefore, this article presents a compact, seven-stage pipelined NTT accelerator architecture for post-quantum cryptography, using the CRYSTALS–Kyber algorithm as a case study. The CRYSTALS–Kyber algorithm is selected due to its NIST standardization, strong security guarantees, and suitability for hardware acceleration. Specifically, a unified three-stage pipelined butterfly unit is designed using a single DSP48E1 block for the required integer multiplication. In contrast, the modular reduction stage is implemented using a four-stage pipelined, lookup-table (LUT)-only Barrett reduction unit. The term “LUT-only” refers strictly to the reduction logic and not to the butterfly multiplication. Furthermore, two dual-port BRAM18 blocks are used in a ping-pong manner to hold intermediate and final coefficients. In addition, a simple finite-state machine controller is implemented, which manages all forward NTT (FNTT) and inverse NTT (INTT) stages. For validation, the proposed design is realized on a Xilinx Artix-7 FPGA. It uses only 503 LUTs, 545 flip-flops, 1 DSP48E1 block, and 2 BRAM18 blocks. The complete FNTT and INTT with final rescaling require 1029 and 1285 clock cycles, respectively. At 200 MHz, these correspond to execution times of 5.14 µs for the FNTT and 6.42 µs for the INTT. Full article
(This article belongs to the Section Computer Science & Engineering)
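The arithmetic at the core of such a reduction unit is easy to model in software. Below is a sketch of shift-and-multiply Barrett reduction for the Kyber modulus q = 3329, paired with an NTT butterfly; the constant K = 24 is an assumption chosen here so any product of two residues fits, and the hardware pipeline staging is not modeled.

```python
Q = 3329                # CRYSTALS-Kyber modulus
K = 24                  # 2**K exceeds Q*Q, so one correction step suffices
M = (1 << K) // Q       # precomputed Barrett constant floor(2**K / Q)

def barrett_reduce(a):
    """Reduce 0 <= a < Q*Q modulo Q using only multiply, shift, subtract."""
    t = (a * M) >> K    # t equals a // Q or one less
    r = a - t * Q       # hence 0 <= r < 2*Q
    return r - Q if r >= Q else r

def ct_butterfly(u, v, twiddle):
    """Cooley-Tukey NTT butterfly over Z_Q built on the reduction above."""
    t = barrett_reduce(v * twiddle)
    return barrett_reduce(u + t), barrett_reduce(u - t + Q)
```

The single conditional subtraction works because a·M/2^K underestimates a/Q by less than one for all a below Q², which is the same bound a hardware unit must respect.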

15 pages, 2579 KB  
Article
An Integrated Approach for Generating Reduced Order Models of the Effective Thermal Conductivity of Nuclear Fuels
by Fergany Badry, Merve Gencturk and Karim Ahmed
J. Nucl. Eng. 2026, 7(1), 8; https://doi.org/10.3390/jne7010008 - 22 Jan 2026
Abstract
Accurate prediction of the effective thermal conductivity (ETC) of nuclear fuels is essential for optimizing fuel performance and ensuring reactor safety. However, the experimental determination of ETC is often limited by cost and complexity, while high-fidelity simulations are computationally intensive. This study presents a novel hybrid framework that integrates experimental data, validated mesoscale finite element simulations, and machine-learning (ML) models to efficiently and accurately estimate ETC for advanced uranium-based nuclear fuels. The framework was demonstrated on three fuel systems: UO2-BeO composites, UO2-Mo composites, and U-10Zr metallic alloys. Mesoscale simulations incorporating microstructural features and interfacial thermal resistance were validated against experimental data, producing synthetic datasets for training and testing ML algorithms. Among the three regression methods evaluated (Bayesian Ridge, Random Forest, and Multi-Polynomial Regression), Multi-Polynomial Regression showed the highest accuracy, with prediction errors below 10% across all fuel types. The selected multi-polynomial model was subsequently used to predict ETC over extended temperature and composition ranges, offering high computational efficiency and analytical convenience. The results closely matched those from the validated simulations, confirming the robustness of the model. This integrated approach not only reduces reliance on costly experiments and long simulation times but also provides an analytical form suitable for embedding in engineering-scale fuel performance codes. The framework represents a scalable and generalizable tool for thermal property prediction in nuclear materials. Full article
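Multi-polynomial regression of this kind reduces to least squares on a polynomial basis. A minimal NumPy sketch with an assumed quadratic basis in temperature T and composition x (the paper's actual model terms and fuel data are not reproduced):

```python
import numpy as np

def poly_basis(T, x):
    """Quadratic basis in temperature T and composition x (assumed form)."""
    return np.column_stack([np.ones_like(T), T, x, T * x, T**2, x**2])

def fit_etc(T, x, k):
    """Least-squares coefficients of the polynomial surface k(T, x)."""
    coeffs, *_ = np.linalg.lstsq(poly_basis(T, x), k, rcond=None)
    return coeffs

def predict_etc(coeffs, T, x):
    """Evaluate the fitted analytical form at new (T, x) points."""
    return poly_basis(T, x) @ coeffs
```

The closed-form basis is what makes such a model cheap to embed in an engineering-scale fuel performance code, in contrast to re-running mesoscale simulations.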

26 pages, 2427 KB  
Article
Alternating Optimization-Based Joint Power and Phase Design for RIS-Empowered FANETs
by Muhammad Shoaib Ayub, Renata Lopes Rosa and Insoo Koo
Drones 2026, 10(1), 66; https://doi.org/10.3390/drones10010066 - 19 Jan 2026
Abstract
The integration of reconfigurable intelligent surfaces (RISs) with flying ad hoc networks (FANETs) offers new opportunities to enhance performance in aerial communications. This paper proposes a novel FANET architecture in which each unmanned aerial vehicle (UAV) or drone is equipped with an RIS comprising M passive elements, enabling dynamic manipulation of the wireless propagation environment. We address the joint power allocation and RIS configuration problem to maximize the sum spectral efficiency, subject to constraints on maximum transmit power and unit-modulus phase shifts. The formulated optimization problem is non-convex due to coupled variables and interference. We develop an alternating optimization-based joint power and phase shift (AO-JPPS) algorithm that decomposes the problem into two subproblems: power allocation via successive convex approximation and phase optimization via Riemannian manifold optimization. A key contribution is addressing the RIS coupling effect, where the configuration of each RIS simultaneously influences multiple communication links. Complexity analysis reveals polynomial-time scalability, while derived performance bounds provide theoretical insights. Numerical simulations demonstrate that our approach achieves significant spectral efficiency gains over conventional FANETs, establishing the effectiveness of RIS-assisted drone networks for future wireless applications. Full article
Show Figures

Figure 1

28 pages, 652 KB  
Article
A Generalized Fractional Legendre-Type Differential Equation Involving the Atangana–Baleanu–Caputo Derivative
by Muath Awadalla and Dalal Alhwikem
Fractal Fract. 2026, 10(1), 54; https://doi.org/10.3390/fractalfract10010054 - 13 Jan 2026
Abstract
This paper introduces a fractional generalization of the classical Legendre differential equation based on the Atangana–Baleanu–Caputo (ABC) derivative. A novel fractional Legendre-type operator is rigorously defined within a functional framework of continuously differentiable functions with absolutely continuous derivatives. The associated initial value problem is reformulated as an equivalent Volterra integral equation, and existence and uniqueness of classical solutions are established via the Banach fixed-point theorem, supported by a proved Lipschitz estimate for the ABC derivative. A constructive solution representation is obtained through a Volterra–Neumann series, explicitly revealing the role of Mittag–Leffler functions. We prove that the fractional solutions converge uniformly to the classical Legendre polynomials as the fractional order approaches unity, with a quantitative convergence rate of order O(1 − α) under mild regularity assumptions on the Volterra kernel. A fully reproducible quadrature-based numerical scheme is developed, with explicit kernel formulas and implementation algorithms provided in appendices. Numerical experiments for the quadratic Legendre mode confirm the theoretical convergence and illustrate the smooth interpolation between fractional and classical regimes. An application to time-fractional diffusion in spherical coordinates demonstrates that the operator arises naturally in physical models, providing a mathematically consistent tool for extending classical angular analysis to fractional settings with memory. Full article
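The Mittag–Leffler functions appearing in such series representations can be evaluated by direct truncation for moderate arguments. A minimal sketch (the truncation length is an ad hoc choice, and dedicated algorithms are needed for large arguments or high accuracy):

```python
import math

def mittag_leffler(alpha, z, terms=64):
    """Truncated series E_alpha(z) = sum_{k>=0} z**k / Gamma(alpha*k + 1).
    For alpha = 1 this reduces to exp(z); for alpha = 2 and z >= 0 it
    equals cosh(sqrt(z)), which gives convenient sanity checks."""
    return sum(z**k / math.gamma(alpha * k + 1) for k in range(terms))
```

As alpha approaches 1 the function interpolates smoothly toward the exponential, mirroring the fractional-to-classical limit discussed in the abstract.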

23 pages, 1141 KB  
Article
Randomized Algorithms and Neural Networks for Communication-Free Multiagent Singleton Set Cover
by Guanchu He, Colton Hill, Joshua H. Seaton and Philip N. Brown
Games 2026, 17(1), 3; https://doi.org/10.3390/g17010003 - 12 Jan 2026
Abstract
This paper considers how a system designer can program a team of autonomous agents to coordinate with one another such that each agent selects (or covers) an individual resource with the goal that all agents collectively cover the maximum number of resources. Specifically, we study how agents can formulate strategies without information about other agents’ actions so that system-level performance remains robust in the presence of communication failures. First, we use an algorithmic approach to study the scenario in which all agents lose the ability to communicate with one another, have a symmetric set of resources to choose from, and select actions independently according to a probability distribution over the resources. We show that the distribution that maximizes the expected system-level objective under this approach can be computed by solving a convex optimization problem, and we introduce a novel polynomial-time heuristic based on subset selection. Further, both of the methods are guaranteed to be within 1 − 1/e of the system’s optimal in expectation. Second, we use a learning-based approach to study how a system designer can employ neural networks to approximate optimal agent strategies in the presence of communication failures. The neural network, trained on system-level optimal outcomes obtained through brute-force enumeration, generates utility functions that enable agents to make decisions in a distributed manner. Empirical results indicate the neural network often outperforms greedy and randomized baseline algorithms. Collectively, these findings provide a broad study of optimal agent behavior and its impact on system-level performance when the information available to agents is extremely limited. Full article
(This article belongs to the Section Algorithmic and Computational Game Theory)
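The 1 − 1/e constant has a short origin story in the simplest symmetric case: if n communication-free agents each pick one of n resources independently and uniformly, linearity of expectation gives the expected covered fraction directly. This is a simplified illustration of where the constant comes from, not the paper's optimized distribution.

```python
import math

def expected_coverage_fraction(n):
    """Expected fraction of n resources covered when n agents each pick a
    resource independently and uniformly at random: every resource is
    missed by all agents with probability (1 - 1/n)**n, so it is covered
    with probability 1 - (1 - 1/n)**n."""
    return 1.0 - (1.0 - 1.0 / n) ** n
```

The fraction decreases monotonically in n and converges to 1 − 1/e ≈ 0.632 from above, matching the flavor of the expectation guarantee stated in the abstract.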

16 pages, 336 KB  
Article
An Exact Algorithm for Counting the Number of Independent Sets of a Graph
by Guillermo De Ita Luna, J. Raymundo Marcial-Romero, Pedro Bello López and Meliza Contreras González
Mathematics 2026, 14(2), 275; https://doi.org/10.3390/math14020275 - 12 Jan 2026
Abstract
For a graph G with degree greater than or equal to 3, counting the number of independent sets (denoted as i(G)) is a classical #P-complete problem. Here, we establish a new worst-case upper bound time complexity for computing i(G) for any unconstrained undirected graph. Our proposal applies the vertex division rule i(G) = i(G − {x}) + i(G − N[x]) over a vertex x which satisfies some conditions, and considers cactus and outerplanar graphs as basic subgraphs. Our algorithm establishes a leading worst-case upper bound of O*(1.2321^n), where n is the number of vertices in the graph and O* omits polynomial factors in n. Full article
(This article belongs to the Special Issue Computational Algorithms and Models for Graph Problems)
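The vertex-division rule can be run directly as a brute-force recursion. The sketch below omits the machinery (branching conditions, cactus and outerplanar base cases) that gives the paper its O*(1.2321^n) bound, so it is exponential but correct on small graphs.

```python
def count_independent_sets(adj):
    """Count independent sets of a graph given as {vertex: set(neighbours)},
    using the vertex-division rule i(G) = i(G - {x}) + i(G - N[x]):
    either x is excluded, or x is included and all of N[x] is removed."""
    if not adj:
        return 1  # the empty set is always independent

    x = max(adj, key=lambda v: len(adj[v]))  # branch on a max-degree vertex

    def remove(vertices):
        keep = set(adj) - set(vertices)
        return {v: adj[v] & keep for v in keep}

    return (count_independent_sets(remove({x}))             # x excluded
            + count_independent_sets(remove(adj[x] | {x}))) # x included
```

On a path with n vertices this recursion reproduces the Fibonacci-like counts i(P_n) = F_{n+2}, a standard sanity check.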

18 pages, 4180 KB  
Article
Machine Learning and SHapley Additive exPlanation-Based Interpretation for Predicting Mastitis in Dairy Cows
by Xiaojing Zhou, Yongli Qu, Chuang Xu, Hao Wang, Di Lang, Bin Jia and Nan Jiang
Animals 2026, 16(2), 204; https://doi.org/10.3390/ani16020204 - 9 Jan 2026
Abstract
SHapley Additive exPlanations (SHAP) analysis has been applied in disease diagnosis and treatment effect evaluation. However, its application in the prediction and diagnosis of dairy cow diseases remains limited. We investigated whether the variance and autocorrelation of deviations in daily activity, rumination time, and milk electrical conductivity, along with daily milk yield, could be used to predict clinical mastitis in dairy cows using popular machine learning (ML) algorithms, and we identified key predictive features using SHAP analysis. Quantile regression (QR) with second- or third-order polynomial models with the median or upper quantiles was used to process raw data from mastitic and healthy cows. Nine variables from the 14-day period preceding mastitis onset were identified as significantly associated with mastitis through logistic regression. These variables were used to train and validate prediction models using eleven classical ML algorithms. Among them, the partial least squares model demonstrated superior performance, achieving an AUC of 0.789, sensitivity of 0.500, specificity of 0.947, accuracy of 0.793, precision of 0.833, and F1-score of 0.625. SHAP analysis results revealed positive contributions of three features to mastitis prediction, whereas two features had negative contributions. These findings provide a theoretical basis for developing clinical decision-support tools in commercial farming settings. Full article
(This article belongs to the Section Cattle)

24 pages, 3734 KB  
Article
Probabilistic Analysis of Rainfall-Induced Slope Stability Using KL Expansion and Polynomial Chaos Kriging Surrogate Model
by Binghao Zhou, Kepeng Hou, Huafen Sun, Qunzhi Cheng and Honglin Wang
Geosciences 2026, 16(1), 36; https://doi.org/10.3390/geosciences16010036 - 9 Jan 2026
Abstract
Rainfall infiltration is one of the main factors inducing slope instability, while the spatial heterogeneity and uncertainty of soil parameters have profound impacts on slope response characteristics and stability evolution. Traditional deterministic analysis methods struggle to reveal the dynamic risk evolution process of the system under heavy rainfall. Therefore, this paper proposes an uncertainty analysis framework combining Karhunen–Loève Expansion (KLE) random field theory, Polynomial Chaos Kriging (PCK) surrogate modeling, and Monte Carlo simulation to efficiently quantify the probabilistic characteristics and spatial risks of rainfall-induced slope instability. First, for key strength parameters such as cohesion and internal friction angle, a two-dimensional random field with spatial correlation is constructed to realistically depict the regional variability of soil mechanical properties. Second, a PCK surrogate model optimized by the LARS algorithm is developed to achieve high-precision replacement of finite element calculation results. Then, large-scale Monte Carlo simulations are conducted based on the surrogate model to obtain the probability distribution characteristics of slope safety factors and potential instability areas at different times. The research results show that the slope enters the most unstable stage during the middle of rainfall (36–54 h), with severe system response fluctuations and highly concentrated instability risks. Deterministic analysis generally overestimates slope safety and ignores extreme responses in tail samples. The proposed method can effectively identify the multi-source uncertainty effects of slope systems, providing theoretical support and technical pathways for risk early warning, zoning design, and protection optimization of slope engineering during rainfall periods. Full article
(This article belongs to the Special Issue New Advances in Landslide Mechanisms and Prediction Models)
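The Monte Carlo stage of such a framework reduces to sampling uncertain inputs and pushing them through a cheap surrogate. In the sketch below, the lognormal cohesion and normal friction angle are illustrative assumptions standing in for the KLE random field, and any callable stands in for the PCK surrogate; none of this reproduces the paper's slope model.

```python
import numpy as np

def failure_probability(surrogate_fs, n_samples=100_000, seed=0):
    """Estimate P_f = P(FS < 1) by Monte Carlo over uncertain soil
    strength parameters, evaluated through a cheap surrogate for the
    factor of safety FS (assumed distributions, for illustration only)."""
    rng = np.random.default_rng(seed)
    cohesion = rng.lognormal(mean=np.log(20.0), sigma=0.2, size=n_samples)  # kPa
    phi = rng.normal(30.0, 2.0, size=n_samples)                            # degrees
    fs = surrogate_fs(cohesion, phi)
    return float(np.mean(fs < 1.0))
```

Because the surrogate is orders of magnitude cheaper than a finite element run, sample counts in the hundreds of thousands become practical, which is the point of the PCK step.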

19 pages, 512 KB  
Article
Limiting the Number of Possible CFG Derivative Trees During Grammar Induction with Catalan Numbers
by Aybeyan Selim, Muzafer Saracevic and Arsim Susuri
Mathematics 2026, 14(2), 249; https://doi.org/10.3390/math14020249 - 9 Jan 2026
Abstract
Grammar induction runs into a serious problem due to the exponential growth of the number of possible derivation trees as sentence length increases, which makes unsupervised parsing both computationally demanding and highly indeterminate. This paper proposes a mathematics-based approach that alleviates this combinatorial complexity by introducing structural constraints based on Catalan and Fuss–Catalan numbers. By limiting the depth of the tree, the degree of branching and the form of derivation, the method significantly narrows the search space, while retaining the full generative power of context-free grammars. A filtering algorithm guided by Catalan structures is developed that incorporates these combinatorial constraints directly into the execution process, with formal analysis showing that the search complexity, under realistic assumptions about depth and richness, decreases from exponential to approximately polynomial. Experimental results on synthetic and natural-language datasets show that the Catalan-constrained model reduces candidate derivation trees by approximately 60%, improves F1 accuracy over unconstrained and depth-bounded baselines, and nearly halves average parsing time. Qualitative evaluation further indicates that the induced grammars exhibit more balanced and linguistically plausible structures. These findings demonstrate that Catalan-based structural constraints provide an elegant and effective mechanism for controlling ambiguity in grammar induction, bridging formal combinatorics with practical syntactic learning. Full article
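The counts that drive the combinatorial explosion (and its containment) are one-liners: the nth Catalan number counts binary derivation trees with n internal nodes, and the Fuss–Catalan numbers generalize the count to p-ary branching.

```python
from math import comb

def catalan(n):
    """C_n = binom(2n, n) / (n + 1): number of binary trees
    with n internal nodes."""
    return comb(2 * n, n) // (n + 1)

def fuss_catalan(p, n):
    """Fuss-Catalan number A_p(n) = binom(p*n, n) / ((p - 1)*n + 1):
    number of full p-ary trees with n internal nodes."""
    return comb(p * n, n) // ((p - 1) * n + 1)
```

Since C_n grows roughly like 4^n / n^{3/2}, an unconstrained parser faces an exponential space of derivation trees, which is exactly what the Catalan-based constraints prune.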

24 pages, 6216 KB  
Article
Three-Dimensional Surface High-Precision Modeling and Loss Mechanism Analysis of Motor Efficiency Map Based on Driving Cycles
by Jiayue He, Yan Sui, Qiao Liu, Zehui Cai and Nan Xu
Energies 2026, 19(2), 302; https://doi.org/10.3390/en19020302 - 7 Jan 2026
Abstract
Amid fossil-fuel depletion and worsening environmental impacts, battery electric vehicles (BEVs) are pivotal to the energy transition. Energy management in BEVs relies on accurate motor efficiency maps, yet real-time onboard control demands models that balance fidelity with computational cost. To address map inaccuracy under real driving and the high runtime cost of 2-D interpolation, we propose a driving-cycle-aware, physically interpretable quadratic polynomial-surface framework. We extract priority operating regions on the speed–torque plane from typical driving cycles and model electrical power Pe as a function of motor speed n and mechanical power Pm. A nested model family (M3–M6) and three fitting strategies (global, local, and region-weighted) are assessed using R², RMSE, a computational complexity index (CCI), and an Integrated Criterion for accuracy–complexity and stability (ICS). Simulations on the Worldwide Harmonized Light Vehicles Test Cycle, the China Light-Duty Vehicle Test Cycle, and the Urban Dynamometer Driving Schedule show that region-weighted fitting consistently achieves the best or near-best ICS; relative to global fitting, mean ICS decreases by 49.0%, 46.4%, and 90.6%, with the smallest variance. Regarding model order, the four-term M4 + Pm² model offers the best accuracy–complexity trade-off. Finally, the region-weighted M4 + Pm² polynomial model was integrated into the vehicle-level economic speed planning model based on the dynamic programming algorithm. In simulations covering a 27 km driving distance, this model reduced computational time by approximately 87% compared to a linear interpolation method based on a two-dimensional lookup table, while achieving an energy consumption deviation of about 0.01% relative to the lookup table approach. Results demonstrate that the proposed model significantly alleviates computational burden while maintaining high energy consumption prediction accuracy, thereby providing robust support for real-time in-vehicle applications in whole-vehicle energy management. Full article
(This article belongs to the Special Issue Challenges and Research Trends of Energy Management)
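The region-weighted idea can be sketched as weighted least squares, with weights proportional to how often the driving cycle visits each operating point. This is an assumed weighting mechanism for illustration; the paper's ICS-based model selection is not reproduced.

```python
import numpy as np

def fit_power_surface(n, pm, pe, weights=None):
    """Fit Pe ~ c0 + c1*n + c2*Pm + c3*n*Pm + c4*n**2 + c5*Pm**2 by
    (optionally weighted) least squares. Dropping columns of A would
    yield smaller members of a nested model family like M3-M6."""
    A = np.column_stack([np.ones_like(n), n, pm, n * pm, n**2, pm**2])
    if weights is not None:
        w = np.sqrt(weights)          # standard WLS row scaling
        A, pe = A * w[:, None], pe * w
    coeffs, *_ = np.linalg.lstsq(A, pe, rcond=None)
    return coeffs
```

At runtime the fitted surface is a handful of multiply-adds per query, which is where the speedup over 2-D table interpolation comes from.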

20 pages, 3754 KB  
Article
Scheduling Intrees with Unavailability Constraints on Two Parallel Machines
by Khaoula Ben Abdellafou, Kamel Zidi and Wad Ghaban
Symmetry 2026, 18(1), 103; https://doi.org/10.3390/sym18010103 - 6 Jan 2026
Abstract
This paper considers the two parallel-machine scheduling problem with intree-precedence constraints where machines are subject to non-availability constraints. In the literature, this problem is considered to be an open problem of unknown complexity. The proposed solution proves that the problem under consideration has polynomial complexity. Periods of machine unavailability are predetermined, and both task execution and inter-task communication are modeled as requiring one unit of time. The optimization criterion central to this study is the minimization of the makespan. Such a scheduling challenge is directly applicable to manufacturing environments, where production equipment can be intermittently offline for reasons such as unscheduled repairs or planned preventative maintenance. Adopting a unit-time task model offers a valuable framework for subsequently scheduling larger, preemptable jobs. This work presents a new method, called Scheduling Intrees with Unavailability Constraints (SIwUC), which operates by aggregating tasks into distinct groups. The analysis establishes that the SIwUC algorithm produces optimal schedules and reveals how the underlying problem architecture and its solutions demonstrate a symmetrical property in the distribution of tasks across the two parallel machines. Full article
(This article belongs to the Special Issue Symmetry in Process Optimization)

26 pages, 6799 KB  
Article
Research on Anomaly Detection and Correction Methods for Nuclear Power Plant Operation Data
by Ren Yu, Yudong Zhao, Shaoxuan Yin, Wei Mao, Chunyuan Wang and Kai Xiao
Processes 2026, 14(2), 192; https://doi.org/10.3390/pr14020192 - 6 Jan 2026
Abstract
The data collection and analytical capabilities of the Instrumentation and Control (I&C) system in nuclear power plants (NPPs) continue to advance, thereby enhancing operational state awareness and enabling more precise control. However, the data acquisition, transmission, and storage devices in nuclear power plant (NPP) I&C systems typically operate in harsh environments. This exposure can lead to device failures and susceptibility to external interference, potentially resulting in data anomalies such as missing samples, signal skipping, and measurement drift. This paper presents a Gated Recurrent Unit and Multilayer Perceptron (GRU-MLP)-based method for anomaly detection and correction in NPP I&C system data. The goal is to improve operational data quality, thereby supplying more reliable input for system analysis and automatic controllers. Firstly, the short-term prediction algorithm of operation data based on the GRU model is studied to provide a reference for operation data anomaly detection. Secondly, the MLP model is connected to the GRU model to recognize the difference between the collected value and the prediction value so as to distinguish and correct the anomalies. Finally, a series of experiments were conducted using operational data from a pressurized water reactor (PWR) to evaluate the proposed method. The experiments were designed as follows: (1) The model’s prediction performance was assessed across varying time horizons: prediction steps of 1, 3, 5, 10, and 20 were configured to verify the accuracy and robustness of the data prediction capability over short and long terms. (2) The model’s effectiveness in identifying anomalies was validated using three typical patterns: random jump, fixed-value drift, and growth drift. The growth drift category was further subdivided into linear, polynomial, and logarithmic growth to comprehensively test detection performance. (3) A comparative analysis was performed to demonstrate the superiority of the proposed GRU-MLP algorithm. It was compared against the interactive window center value method and the ARIMA algorithm. The results confirm the advantages of the proposed method for anomaly detection, and the underlying reasons are analyzed. (4) Additional experiments were carried out to verify the transferability of the prediction algorithm, ensuring its applicability under different operational conditions. Full article
(This article belongs to the Section Petroleum and Low-Carbon Energy Process Engineering)

12 pages, 2357 KB  
Article
Real-Time Cr(VI) Concentration Monitoring in Chrome Plating Wastewater Using RGB Sensor and Machine Learning
by Hanui Yang and Donghee Park
Eng 2026, 7(1), 17; https://doi.org/10.3390/eng7010017 - 1 Jan 2026
Abstract
The transition to the 4th Industrial Revolution (4IR) in the electroplating industry necessitates intelligent, real-time monitoring systems to replace traditional, time-consuming offline analysis. In this study, we developed a cost-effective, automated measurement system for hexavalent chromium (Cr(VI)) in plating wastewater using an Arduino-based RGB sensor. Unlike conventional single-variable approaches, we conducted a comprehensive feature sensitivity analysis on multi-sensor data (including pH, ORP, and EC). While electrochemical sensors were found to be susceptible to pH interference, the analysis identified that the Red and Green optical channels are the most critical indicators due to the distinct chromatic characteristics of Cr(VI). Specifically, the combination of these two channels effectively functions as a dual-variable sensing mechanism, compensating for potential interferences. To optimize prediction accuracy, a systematic machine learning strategy was employed. While the Convolutional Neural Network (CNN) achieved the highest classification accuracy of 89% for initial screening, a polynomial regression algorithm was ultimately implemented to model the non-linear relationship between sensor outputs and concentration. The derived regression model achieved an excellent coefficient of determination (R² = 0.997), effectively compensating for optical saturation effects at high concentrations. Furthermore, by integrating this sensing model with the chemical stoichiometry of the reduction process, the proposed system enables the precise, automated dosing of reducing agents. This capability facilitates the establishment of a “Digital Twin” for wastewater treatment, offering a practical ICT (Information and Communication Technology)-based solution for autonomous process control and strict environmental compliance. Full article
(This article belongs to the Section Chemical, Civil and Environmental Engineering)
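A dual-channel calibration of this kind can be sketched as a polynomial fit on a fused optical feature plus the R² figure of merit. The Red/Green ratio used below is an assumed fusion for illustration; the paper's dual-variable model may combine the channels differently.

```python
import numpy as np

def calibrate(red, green, conc, degree=2):
    """Fit concentration against the Red/Green ratio with a polynomial
    and report the coefficient of determination R**2."""
    feature = red / green
    coeffs = np.polyfit(feature, conc, degree)
    pred = np.polyval(coeffs, feature)
    ss_res = np.sum((conc - pred) ** 2)
    ss_tot = np.sum((conc - conc.mean()) ** 2)
    return coeffs, 1.0 - ss_res / ss_tot
```

A quadratic term is one simple way to absorb mild optical saturation at high concentrations, which is the non-linearity the abstract mentions.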
