Search Results (1,158)

Search Parameters:
Keywords = linear benchmark

24 pages, 2628 KB  
Article
From Binary Scores to Risk Tiers: An Interpretable Hybrid Stacking Model for Multi-Class Loan Default Prediction
by Ghazi Abbas, Zhou Ying and Muzaffar Iqbal
Systems 2026, 14(1), 78; https://doi.org/10.3390/systems14010078 - 11 Jan 2026
Abstract
Accurate credit risk assessment for small firms and farmers is crucial for financial stability and inclusion; however, many models still rely on binary default labels, overlooking the continuum of borrower vulnerability. To address this, we propose Transformer–LightGBM–Stacked Logistic Regression (TL-StackLR), a hybrid stacking framework for multi-class loan default prediction. The framework combines three learners: a Feature Tokenizer Transformer (FT-Transformer) for feature interactions, LightGBM for non-linear pattern recognition, and a stacked LR meta-learner for calibrated probability fusion. We transform binary labels into three risk tiers, Low, Medium, and High, based on quantile-based stratification of default probabilities, aligning the model with real-world risk management. Evaluated on datasets from 3045 firms and 2044 farmers in China, TL-StackLR achieves state-of-the-art ROC-AUC scores of 0.986 (firms) and 0.972 (farmers), with superior calibration and discrimination across all risk classes, outperforming all standalone and partial-hybrid benchmarks. The framework provides SHapley Additive exPlanations (SHAP) interpretability, showing how key risk drivers, such as income, industry experience, and mortgage score for firms and loan purpose, Engel coefficient, and income for farmers, influence risk tiers. This transparency transforms TL-StackLR into a decision-support tool, enabling targeted interventions for inclusive lending, thus offering a practical foundation for equitable credit risk management. Full article
(This article belongs to the Section Artificial Intelligence and Digital Systems Engineering)
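The quantile-based risk stratification step described above can be sketched in plain Python. The 1/3 and 2/3 cut points and the tier names below are illustrative assumptions, not the paper's calibration:

```python
# Sketch of quantile-based risk stratification: default probabilities from
# a base model are split into Low/Medium/High tiers at the 1/3 and 2/3
# empirical quantiles. Cut points and labels are illustrative assumptions.

def quantile_risk_tiers(probs, cuts=(1 / 3, 2 / 3)):
    """Map default probabilities to tier labels by empirical quantiles."""
    ranked = sorted(probs)
    n = len(ranked)
    lo = ranked[int(cuts[0] * (n - 1))]   # lower quantile threshold
    hi = ranked[int(cuts[1] * (n - 1))]   # upper quantile threshold
    tiers = []
    for p in probs:
        if p <= lo:
            tiers.append("Low")
        elif p <= hi:
            tiers.append("Medium")
        else:
            tiers.append("High")
    return tiers

probs = [0.02, 0.10, 0.15, 0.40, 0.55, 0.70, 0.81, 0.90, 0.95]
print(quantile_risk_tiers(probs))
# → ['Low', 'Low', 'Low', 'Medium', 'Medium', 'Medium', 'High', 'High', 'High']
```

In the paper's framework the probabilities would come from the stacked meta-learner; here they are hard-coded for illustration.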
24 pages, 9522 KB  
Article
Precise Mapping of Linear Shelterbelt Forests in Agricultural Landscapes: A Deep Learning Benchmarking Study
by Wenjie Zhou, Lizhi Liu, Ruiqi Liu, Fei Chen, Liyu Yang, Linfeng Qin and Ruiheng Lyu
Forests 2026, 17(1), 91; https://doi.org/10.3390/f17010091 - 9 Jan 2026
Abstract
Farmland shelterbelts are crucial elements in safeguarding agricultural ecological security and sustainable development, with their precise extraction being vital for regional ecological monitoring and precision agriculture management. However, constrained by their narrow linear distribution, complex farmland backgrounds, and spectral confusion issues, traditional remote sensing methods encounter significant challenges in terms of accuracy and generalization capability. In this study, six representative deep learning semantic segmentation models—U-Net, Attention U-Net (AttU_Net), ResU-Net, U2-Net, SwinUNet, and TransUNet—were systematically evaluated for farmland shelterbelt extraction using high-resolution Gaofen-6 imagery. Model performance was assessed through four-fold cross-validation and independent test set validation. The results indicate that convolutional neural network (CNN)-based models show overall better performance than Transformer-based architectures; on the independent test set, the best-performing CNN model (U-Net) achieved a Dice Similarity Coefficient (DSC) of 91.45%, while the lowest DSC (88.86%) was obtained by the Transformer-based TransUNet model. Among the evaluated models, U-Net demonstrated a favorable balance between accuracy, stability, and computational efficiency. The trained U-Net was applied to large-scale farmland shelterbelt mapping in the study area (Alar City, Xinjiang), achieving a belt-level visual accuracy of 95.58% based on 385 manually interpreted samples. Qualitative demonstrations in Aksu City and Shaya County illustrated model transferability. This study provides empirical guidance for model selection in high-resolution agricultural remote sensing and offers a feasible technical solution for large-scale and precise farmland shelterbelt extraction. Full article
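The Dice Similarity Coefficient (DSC) used above to score segmentation quality is a simple overlap measure; a minimal sketch over flat binary masks (real masks would be 2-D arrays):

```python
# Dice Similarity Coefficient: DSC = 2|P ∩ T| / (|P| + |T|) over binary
# masks, here flattened to lists of 0/1 for illustration.

def dice_coefficient(pred, truth, eps=1e-8):
    """DSC between two binary masks; eps guards against empty masks."""
    inter = sum(p * t for p, t in zip(pred, truth))
    return (2 * inter + eps) / (sum(pred) + sum(truth) + eps)

pred  = [1, 1, 0, 0, 1, 0]
truth = [1, 0, 0, 0, 1, 1]
print(round(dice_coefficient(pred, truth), 4))  # → 0.6667
```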
45 pages, 1032 KB  
Article
Linearization Strategies for Energy-Aware Optimization of Single-Truck, Multiple-Drone Last-Mile Delivery Systems
by Ornela Gordani, Eglantina Kalluci and Fatos Xhafa
Future Internet 2026, 18(1), 45; https://doi.org/10.3390/fi18010045 - 9 Jan 2026
Abstract
The increasing demand for rapid and sustainable parcel delivery has motivated the exploration of innovative logistics systems that integrate drones with traditional ground vehicles. Among these, the single-truck, multiple-drone last-mile delivery configuration has attracted significant attention due to its potential to reduce both delivery time and environmental impact. However, optimizing such systems remains computationally challenging because of the nonlinear energy consumption behavior of drones, which depends on factors such as payload weight and travel time, among others. This study investigates the energy-aware optimization of truck–drone collaborative delivery systems, with a particular focus on mixed-integer nonlinear programming (MINLP) formulations and the linearization of drone energy consumption constraints. Building upon prior models from the literature, we analyze the computational complexity of the MINLP and introduce alternative linearization strategies that preserve model accuracy while improving solvability. The resulting mixed-integer linear programming (MILP) formulations are solved using PuLP, a Python optimization library, to evaluate the effect of linearization on computation time and solution quality across diverse problem sizes taken from a benchmark of instances in the literature. Extensive computational results, obtained by running the solver on a cluster infrastructure, demonstrate that the proposed linearization methods can reduce the optimization time of nonlinear solvers by several orders of magnitude without compromising energy estimation accuracy, enabling the model to handle larger problem instances effectively. 
This performance improvement opens the door to a real-time or near-real-time solution of the problem, allowing the delivery system to dynamically react to operational changes and uncertainties during delivery. Full article
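The general idea behind linearizing a convex energy curve can be sketched without a solver: replace the nonlinear function by the maximum of tangent lines, which a MILP encodes as linear constraints e ≥ m_j·w + c_j. The energy model e(w) = a + b·w^1.5 below is a made-up stand-in, not the paper's formulation:

```python
# Illustrative tangent-cut linearization of a convex drone-energy curve.
# The model e(w) = a + b*w**1.5 (energy vs. payload) is an assumed example.

def tangent_cuts(f, df, points):
    """Return (slope, intercept) pairs of tangents at the given points."""
    return [(df(w), f(w) - df(w) * w) for w in points]

def pw_linear_lower(w, cuts):
    """Piecewise-linear lower approximation: max over the tangent lines."""
    return max(m * w + c for m, c in cuts)

a, b = 50.0, 4.0
f  = lambda w: a + b * w ** 1.5      # nonlinear energy model (convex, w >= 0)
df = lambda w: 1.5 * b * w ** 0.5    # its derivative
cuts = tangent_cuts(f, df, [1.0, 4.0, 9.0])

# Tangents never exceed a convex function, and are exact at the cut points:
for w in [0.5, 2.0, 4.0, 6.0, 10.0]:
    assert pw_linear_lower(w, cuts) <= f(w) + 1e-9
```

Adding more cut points tightens the approximation; in a MILP each cut becomes one linear constraint on the energy variable.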
20 pages, 3960 KB  
Article
Prediction and Performance of BDS Satellite Clock Bias Based on CNN-LSTM-Attention Model
by Junwei Ma, Jun Tang, Hanyang Teng and Xuequn Wu
Sensors 2026, 26(2), 422; https://doi.org/10.3390/s26020422 - 8 Jan 2026
Abstract
Satellite Clock Bias (SCB) is a major source of error in Precise Point Positioning (PPP). The real-time service products from the International GNSS Service (IGS) are susceptible to network interruptions. Such disruptions can compromise product availability and, consequently, degrade positioning accuracy. We introduce the CNN-LSTM-Attention model to address this challenge. The model enhances a Long Short-Term Memory (LSTM) network by integrating Convolutional Neural Networks (CNNs) and an Attention mechanism. The proposed model can efficiently extract data features and balance the weight allocation in the Attention mechanism, thereby improving both the accuracy and stability of predictions. Across various forecasting horizons (1, 2, 4, and 6 h), the CNN-LSTM-Attention model demonstrates prediction accuracy improvements of (76.95%, 66.84%, 65.92%, 84.33%, and 43.87%), (72.59%, 65.61%, 74.60%, 82.98%, and 51.13%), (70.45%, 68.52%, 81.63%, 88.44%, and 60.49%), and (70.26%, 70.51%, 84.28%, 93.66%, and 66.76%), respectively, across the five benchmark models: Linear Polynomial (LP), Quadratic Polynomial (QP), Autoregressive Integrated Moving Average (ARIMA), Backpropagation Neural Network (BP), and LSTM models. Furthermore, in dynamic PPP experiments utilizing IGS tracking stations, the model predictions achieve positioning accuracy comparable to that of post-processed products. This proves that the proposed model demonstrates superior accuracy and stability for predicting SCB, while also satisfying the demands of positioning applications. Full article
(This article belongs to the Section Navigation and Positioning)
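The Quadratic Polynomial (QP) benchmark mentioned above fits c(t) = a₀ + a₁t + a₂t² to past clock-bias epochs by least squares and extrapolates forward; a minimal sketch with synthetic, noise-free data:

```python
# Least-squares quadratic fit of a clock-bias series via normal equations,
# then extrapolation. Data values are synthetic, for illustration only.

def solve3(A, b):
    """Gauss-Jordan elimination with partial pivoting for a 3x3 system."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(3):
        piv = max(range(i, 3), key=lambda r: abs(M[r][i]))
        M[i], M[piv] = M[piv], M[i]
        for r in range(3):
            if r != i:
                fct = M[r][i] / M[i][i]
                M[r] = [x - fct * y for x, y in zip(M[r], M[i])]
    return [M[i][3] / M[i][i] for i in range(3)]

def qp_fit(ts, cs):
    """Solve the normal equations (X^T X) a = X^T c for a quadratic model."""
    X = [[1.0, t, t * t] for t in ts]
    XtX = [[sum(X[k][i] * X[k][j] for k in range(len(ts))) for j in range(3)]
           for i in range(3)]
    Xtc = [sum(X[k][i] * cs[k] for k in range(len(ts))) for i in range(3)]
    return solve3(XtX, Xtc)

ts = [0, 1, 2, 3, 4]
cs = [1.0 + 0.5 * t + 0.01 * t * t for t in ts]   # noise-free quadratic
a0, a1, a2 = qp_fit(ts, cs)
pred = a0 + a1 * 6 + a2 * 36                      # extrapolate to epoch t = 6
```

On noise-free data the fit recovers the generating coefficients exactly; the paper's point is that such polynomial baselines degrade over longer horizons relative to the learned model.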
25 pages, 5517 KB  
Article
A Novel Online Real-Time Prediction Method for Copper Particle Content in the Oil of Mining Equipment Based on Neural Networks
by Long Yuan, Zibin Du, Xun Gao, Yukang Zhang, Liusong Yang, Yuehui Wang and Junzhe Lin
Machines 2026, 14(1), 76; https://doi.org/10.3390/machines14010076 - 8 Jan 2026
Abstract
For the problem of online real-time prediction of copper particle content in the lubricating oil of the main spindle-bearing system of mining equipment, the traditional direct detection method is costly and has insufficient real-time performance. To this end, this paper proposes an indirect prediction method based on data-driven neural networks. The method rests on a core assumption: during the stable wear stage of the equipment, there exists a modelable statistical correlation between the copper particle content in the oil and the total amount of non-ferromagnetic particles, which is easy to measure online. Based on this, a neural network prediction model was constructed, with the online metal abrasive particle sensor signal (non-ferromagnetic particle content) as the input and the copper particle content as the output. The experimental data are derived from 100 real oil samples collected on-site from the lubrication system of the main shaft bearing of a mine mill. To enhance the model's performance in the small-sample setting, data augmentation techniques were adopted. The verification results show that the average prediction accuracy of the proposed neural network model reaches 95.66%, the coefficient of determination (R²) is 0.91, and the mean absolute error (MAE) is 0.3398. Its performance is significantly superior to that of the linear regression model used as the benchmark (average accuracy of approximately 80%, R² = 0.71, and MAE = 1.5628). This comparison not only provides preliminary verification of the assumed correlation between non-ferromagnetic particles and copper particles in this specific scenario, but also reveals the nonlinear nature of the relationship between them. 
This research explores and preliminarily validates a low-cost technical path for the online prediction of copper particle content in the stable wear stage of the main shaft bearing system, suggesting its potential for engineering application within specific, well-defined scenarios. Full article
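The two reported error metrics, mean absolute error (MAE) and the coefficient of determination (R²), can be computed in a few lines; the values below are made up for illustration:

```python
# MAE and R² as used to compare the neural network against the linear
# regression benchmark. Inputs are illustrative, not the paper's data.

def mae(y_true, y_pred):
    """Mean absolute error."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def r2_score(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

y_true = [3.0, 5.0, 2.5, 7.0]
y_pred = [2.5, 5.0, 3.0, 8.0]
print(mae(y_true, y_pred))       # → 0.5
print(r2_score(y_true, y_pred))
```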
38 pages, 2642 KB  
Article
Capturing Short- and Long-Term Temporal Dependencies Using Bahdanau-Enhanced Fused Attention Model for Financial Data—An Explainable AI Approach
by Rasmi Ranjan Khansama, Rojalina Priyadarshini, Surendra Kumar Nanda and Rabindra Kumar Barik
FinTech 2026, 5(1), 4; https://doi.org/10.3390/fintech5010004 - 7 Jan 2026
Abstract
Prediction of the stock closing price plays a critical role in financial planning, risk management, and informed investment decision-making. In this study, we propose a novel model that synergistically amalgamates a Bidirectional GRU (BiGRU) with three complementary attention techniques—Top-k Sparse, Global, and Bahdanau Attention—to tackle the complex, intricate, and non-linear temporal dependencies in financial time series. The proposed Fused Attention Model is validated on two highly volatile, non-linear, and complex-patterned stock indices, NIFTY 50 and S&P 500, with 80% of the historical price data used for model learning and the remaining 20% for testing. A comprehensive analysis of the results, benchmarked against various baseline and hybrid deep learning architectures across multiple regression performance metrics such as Mean Absolute Error (MAE), Root Mean Squared Error (RMSE), Mean Absolute Percentage Error (MAPE), and R² Score, demonstrates the superiority of our proposed Fused Attention Model. Most significantly, the proposed model yields the highest prediction accuracy and generalization capability, with R² scores of 0.9955 on NIFTY 50 and 0.9961 on S&P 500. Additionally, to mitigate the issues of interpretability and transparency of deep learning models for financial forecasting, we utilized three different Explainable Artificial Intelligence (XAI) techniques, namely Integrated Gradients, SHapley Additive exPlanations (SHAP), and Attention Weight Analysis. The results of these three XAI techniques validated the use of the three attention mechanisms alongside the BiGRU model. The explainability of the proposed model, named BiGRU-based Fused Attention (BiG-FA), in addition to its superior performance, thus offers a robust and interpretable deep learning model for time-series prediction, making it applicable beyond the financial domain. Full article
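The Top-k Sparse Attention idea named above can be sketched in isolation: keep only the k largest attention scores, zero out the rest, and renormalize. The scores and k below are illustrative, not taken from the model:

```python
# Minimal Top-k sparse attention: softmax restricted to the k highest
# scores; all other positions receive exactly zero weight.

import math

def topk_sparse_attention(scores, k):
    """Return attention weights that are non-zero only for the top-k scores."""
    kept = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]
    exp = [math.exp(scores[i]) if i in kept else 0.0 for i in range(len(scores))]
    z = sum(exp)
    return [e / z for e in exp]

w = topk_sparse_attention([2.0, 0.1, 1.0, -1.0], k=2)
print([round(x, 3) for x in w])  # → [0.731, 0.0, 0.269, 0.0]
```

Sparsifying before the softmax forces the model to concentrate weight on the most informative time steps, which is the stated motivation for fusing it with the global and Bahdanau variants.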
26 pages, 9426 KB  
Article
Advancing Concession-Scale Carbon Stock Prediction in Oil Palm Using Machine Learning and Multi-Sensor Satellite Indices
by Amir Noviyanto, Fadhlullah Ramadhani, Valensi Kautsar, Yovi Avianto, Sri Gunawan, Yohana Theresia Maria Astuti and Siti Maimunah
Resources 2026, 15(1), 12; https://doi.org/10.3390/resources15010012 - 6 Jan 2026
Abstract
Reliable estimation of oil palm carbon stock is essential for climate mitigation, concession management, and sustainability certification. While satellite-based approaches offer scalable solutions, redundancy among spectral indices and inter-sensor variability complicate model development. This study evaluates machine learning regressors for predicting oil palm carbon stock at tree (CO_tree, kg C tree⁻¹) and hectare (CO_ha, Mg C ha⁻¹) scales using spectral indices derived from Landsat-8, Landsat-9, and Sentinel-2. Fourteen vegetation indices were screened for multicollinearity, resulting in a lean feature set dominated by NDMI, EVI, MSI, NDWI, and sensor-specific indices such as NBR2 and ARVI. Ten regression algorithms were benchmarked through cross-validation. Ensemble models, particularly Random Forest, Gradient Boosting, and XGBoost, outperformed linear and kernel methods, achieving R² values of 0.86–0.88 and RMSE of 59–64 kg tree⁻¹ or 8–9 Mg ha⁻¹. Feature importance analysis consistently identified NDMI as the strongest predictor of standing carbon. Spatial predictions showed stable carbon patterns across sensors, with CO_tree ranging from 200 to 500 kg C tree⁻¹ and CO_ha from 20 to 70 Mg C ha⁻¹, consistent with published values for mature plantations. The study demonstrates that ensemble learning with sensor-specific index sets provides accurate, dual-scale carbon monitoring for oil palm. Limitations include geographic scope, dependence on allometric equations, and omission of belowground carbon. Future work should integrate age dynamics, multi-year composites, and deep learning approaches for operational carbon accounting. Full article
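NDMI, identified above as the strongest predictor, is a simple normalized band ratio. The sketch below assumes the common convention of NIR and SWIR1 reflectances (e.g., Sentinel-2 bands B8 and B11); the reflectance values are made up:

```python
# Normalized Difference Moisture Index from NIR and SWIR1 reflectances.
# Band choice (e.g., Sentinel-2 B8/B11) is an assumed convention.

def ndmi(nir, swir1):
    """NDMI = (NIR - SWIR1) / (NIR + SWIR1), in [-1, 1]."""
    return (nir - swir1) / (nir + swir1)

print(round(ndmi(0.45, 0.25), 3))  # → 0.286
```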
25 pages, 14552 KB  
Article
Transfer Learning-Driven Large-Scale CNN Benchmarking with Explainable AI for Image-Based Dust Detection on Solar Panels
by Hafeez Anwar
Information 2026, 17(1), 52; https://doi.org/10.3390/info17010052 - 6 Jan 2026
Abstract
Solar panel power plants are typically established in regions with maximum solar irradiation, yet these conditions result in heavy dust accumulation on the panels, causing significant performance degradation and reduced power output. This paper addresses the issue via an image-based dust detection solution powered by deep learning, particularly convolutional neural networks (CNNs). Most such solutions use state-of-the-art CNNs either as backbones/feature extractors or propose custom models built upon them. Given this reliance, future research requires a comprehensive benchmarking of CNN models to identify the ones that achieve superior performance in classifying clean vs. dusty solar panels with respect to both accuracy and efficiency. To this end, we evaluate 100 CNN models belonging to 16 families for image-based dust detection on solar panels, where the pre-trained models of these CNN architectures are used to encode solar panel images. Upon these image encodings, we then train and test a linear support vector machine (SVM) to determine the best-performing models in terms of classification accuracy and training time. The use of such a simple classifier ensures a fair comparison in which the encodings do not benefit from the classifier itself and their performance reflects each CNN's ability to capture the underlying image features. Experiments were conducted on a publicly available dust detection dataset, using stratified shuffle-split with 70–30, 80–20, and 90–10 splits, repeated 10 times. convnext_xxlarge and resnetv2_152 achieved the best classification rates of above 90%, with resnetv2_152 offering superior efficiency, which is further supported by feature analyses such as t-SNE and UMAP, and explainable AI (XAI) techniques such as LIME visualizations. To prove their generalization capability, we tested the image encodings of resnetv2_152 on an unseen real-world image dataset captured via a drone camera, achieving a remarkable accuracy of 96%. 
Consequently, our findings guide the selection of optimal CNN backbones for future image-based dust detection systems. Full article
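The stratified shuffle-split protocol described above draws repeated random train/test splits that preserve the clean-vs-dusty class ratio; a pure-Python sketch with illustrative labels (the real study splits image encodings, not label strings):

```python
# Stratified shuffle-split: shuffle indices within each class, then take a
# fixed fraction of each class for the test set. Repeated with fresh seeds.

import random

def stratified_split(labels, test_frac, seed):
    """Return (train_idx, test_idx) preserving per-class proportions."""
    rng = random.Random(seed)
    train, test = [], []
    for cls in set(labels):
        idx = [i for i, y in enumerate(labels) if y == cls]
        rng.shuffle(idx)
        n_test = round(len(idx) * test_frac)
        test += idx[:n_test]
        train += idx[n_test:]
    return sorted(train), sorted(test)

labels = ["clean"] * 70 + ["dusty"] * 30
for seed in range(10):                  # 10 repetitions, as in the protocol
    tr, te = stratified_split(labels, 0.3, seed)
    assert len(tr) == 70 and len(te) == 30
    assert sum(labels[i] == "dusty" for i in te) == 9   # 30% of each class
```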
23 pages, 701 KB  
Article
Improving Energy Efficiency and Reliability of Parallel Pump Systems Using Hybrid PSO–ADMM–LQR
by Samir Nassiri, Ahmed Abbou and Mohamed Cherkaoui
Processes 2026, 14(2), 186; https://doi.org/10.3390/pr14020186 - 6 Jan 2026
Abstract
This paper proposes a hybrid optimization–control framework that combines the Particle Swarm Optimization (PSO) algorithm, the Alternating Direction Method of Multipliers (ADMM), and a Linear–Quadratic Regulator (LQR) for energy-efficient and reliable operation of parallel pump systems. The PSO layer performs global exploration over mixed discrete–continuous design variables, while the ADMM layer coordinates distributed flows under head and reliability constraints, yielding hydraulically feasible operating points. The inner LQR controller achieves optimal speed tracking with guaranteed asymptotic stability and improved robustness against nonlinear load disturbances. The overall PSO–ADMM–LQR co-design minimizes a composite objective that accounts for steady-state efficiency, transient performance, and control effort. Simulation results on benchmark multi-pump systems demonstrate that the proposed framework outperforms conventional PSO- and PID-based methods in terms of energy savings, dynamic response, and robustness. The method exhibits low computational complexity, scalability to large systems, and practical suitability for real-time implementation in smart water distribution and industrial pumping applications. Full article
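The inner LQR layer can be illustrated on a scalar system: iterate the discrete-time Riccati recursion to a fixed point, then form the state-feedback gain. The pump dynamics (a, b) and weights (q, r) below are illustrative numbers, not the paper's model:

```python
# Scalar discrete-time LQR sketch for x' = a*x + b*u with stage cost
# q*x^2 + r*u^2. All parameter values are assumed for illustration.

def scalar_dlqr(a, b, q, r, iters=500):
    """Iterate the Riccati recursion, then return the optimal gain k (u = -k*x)."""
    p = q
    for _ in range(iters):
        # Riccati recursion specialized to scalar state and input
        p = q + a * a * p - (a * b * p) ** 2 / (r + b * b * p)
    return a * b * p / (r + b * b * p)

k = scalar_dlqr(a=1.1, b=0.5, q=1.0, r=0.1)
# The closed-loop pole a - b*k lies inside the unit circle:
assert abs(1.1 - 0.5 * k) < 1.0
```

A real implementation for a multi-pump state-space model would solve the matrix Riccati equation instead (e.g., with a control-systems library), but the fixed-point structure is the same.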
18 pages, 4244 KB  
Article
Semantic-Guided Kernel Low-Rank Sparse Preserving Projections for Hyperspectral Image Dimensionality Reduction and Classification
by Junjun Li, Jinyan Hu, Lin Huang, Chao Hu and Meinan Zheng
Appl. Sci. 2026, 16(1), 561; https://doi.org/10.3390/app16010561 - 5 Jan 2026
Abstract
Hyperspectral images present significant challenges for conventional dimensionality reduction methods due to their high dimensionality, spectral redundancy, and complex spatial–spectral dependencies. While kernel-based sparse representation methods have shown promise in handling spectral non-linearities, they often fail to preserve spatial consistency and semantic discriminability during feature transformation. To address these limitations, we propose a novel semantic-guided kernel low-rank sparse preserving projection (SKLSPP) framework. Unlike previous approaches that primarily focus on spectral information, our method introduces three key innovations: a semantic-aware kernel representation that maintains discriminability through label constraints, a spatially adaptive manifold regularization term that preserves local pixel affinities in the reduced subspace, and an efficient optimization framework that jointly learns sparse codes and projection matrices. Extensive experiments on benchmark datasets demonstrate that SKLSPP achieves superior performance compared to state-of-the-art methods, showing enhanced feature discrimination, reduced redundancy, and improved robustness to noise while maintaining spatial coherence in the dimensionality-reduced features. Full article
32 pages, 5625 KB  
Article
Multi-Source Concurrent Renewable Energy Estimation: A Physics-Informed Spatio-Temporal CNN-LSTM Framework
by Razan Mohammed Aljohani and Amal Almansour
Sustainability 2026, 18(1), 533; https://doi.org/10.3390/su18010533 - 5 Jan 2026
Abstract
Accurate and reliable estimation of renewable energy generation is critical for modern power grid management, yet the inherent volatility and distinct physical drivers of multi-source renewables present significant modeling challenges. This paper proposes a unified deep learning framework for the concurrent estimation of power generation from solar, wind, and hydro sources. This methodology, termed nowcasting, utilizes real-time weather inputs to estimate immediate power generation. We introduce a hybrid spatio-temporal CNN-LSTM architecture that leverages a two-branch design to process both sequential weather data and static, plant-specific attributes in parallel. A key innovation of our approach is the use of a physics-informed Capacity Factor as the normalized target variable, which is customized for each energy source and notably employs a non-linear, S-shaped tanh-based power curve to model wind generation. To ensure high-fidelity spatial feature integration, a cKDTree algorithm was implemented to accurately match each power plant with its nearest corresponding weather data. To guarantee methodological rigor and prevent look-ahead bias, the model was trained and validated using a strict chronological data splitting strategy and was rigorously benchmarked against Linear Regression and XGBoost models. The framework demonstrated exceptional robustness on a large-scale dataset of over 1.5 million records spanning five European countries, achieving R-squared (R2) values of 0.9967 for solar, 0.9993 for wind, and 0.9922 for hydro. While traditional ensemble models performed competitively on linear solar data, the proposed CNN-LSTM architecture demonstrated superior performance in capturing the complex, non-linear dynamics of wind energy, confirming its superiority in capturing intricate meteorological dependencies. 
This study validates the significant contribution of a spatio-temporal and physics-informed framework, establishing a foundational model for real-time energy assessment and enhanced grid sustainability. Full article
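The tanh-based wind power curve mentioned above maps wind speed to a capacity factor in [0, 1] with an S shape. The midpoint and steepness below are illustrative assumptions, not the paper's calibrated values (and a production curve would also clip below cut-in and above cut-out speeds):

```python
# S-shaped tanh capacity-factor curve for wind: near 0 in calm air,
# 0.5 at the assumed midpoint speed, saturating toward 1 near rated speed.
# Parameters v_mid and steep are illustrative assumptions.

import math

def wind_capacity_factor(v, v_mid=9.0, steep=0.5):
    """Capacity factor in [0, 1] as a smooth function of wind speed v (m/s)."""
    return 0.5 * (1.0 + math.tanh(steep * (v - v_mid)))

for v in [0.0, 9.0, 25.0]:
    print(round(wind_capacity_factor(v), 3))  # → 0.0, 0.5, 1.0
```

Normalizing the target to a capacity factor in this way lets one model serve plants of different rated power, which is the stated role of the physics-informed target.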
20 pages, 14945 KB  
Article
Study on the Transport Law and Corrosion Behavior of Sulfate Ions of a Solution Soaking FA-PMPC Paste
by Yuying Hou, Qiang Xu, Tao Li, Sha Sa, Yante Mao, Caiqiang Xiong, Xiamin Hu, Kan Xu and Jianming Yang
Materials 2026, 19(1), 202; https://doi.org/10.3390/ma19010202 - 5 Jan 2026
Abstract
To study the sulfate corrosion behavior of potassium magnesium phosphate cement (PMPC) paste, the sulfate content, strength, and length of PMPC specimens were measured at different corrosion ages under 5% Na₂SO₄ solution soaking conditions, and the phase composition and microstructure were analyzed. The conclusions are as follows: In PMPC specimens subjected to one-dimensional SO₄²⁻ corrosion, the relation between the diffusion depth of SO₄²⁻ (h) and the SO₄²⁻ concentration c(h, t) can be fitted well by a polynomial. The sulfate diffusion coefficient (D) of PMPC specimens was one order of magnitude lower than that of Portland cement concrete (on the order of 10⁻⁷ mm²/s). The surface SO₄²⁻ concentration c(0, t), the computed SO₄²⁻ corrosion depth h00, and D of the FM2 specimen containing 20% fly ash (FA) were all less than those of the FM0 specimen (reference). At a 360-day immersion age, c(0, 360 d) and h00 in FM2 were markedly smaller than those in FM0, and the D of FM2 was 64.2% of that of FM0. The strengths of FM2 specimens soaked for 2 days (the benchmark strength) were greater than those of FM0 specimens. At a 360-day immersion age, the residual flexural/compressive strength ratios (360-day strength/benchmark strength) of both FM0 and FM2 specimens were larger than 95%. The volume linear expansion rate (Sn) of PMPC specimens continued to increase with the immersion age, and the Sn of the FM2 specimen was only 49.5% of that of the FM0 specimen at a 360-day immersion age. The results provide an experimental basis for the application of PMPC-based materials. Full article
(This article belongs to the Topic Advanced Composite Materials)
40 pages, 1118 KB  
Article
FORCE: Fast Outlier-Robust Correlation Estimation via Streaming Quantile Approximation for High-Dimensional Data Streams
by Sooyoung Jang and Changbeom Choi
Mathematics 2026, 14(1), 191; https://doi.org/10.3390/math14010191 - 4 Jan 2026
Abstract
The estimation of correlation matrices in high-dimensional data streams presents a fundamental conflict between computational efficiency and statistical robustness. Moment-based estimators, such as Pearson's correlation, offer linear O(N) complexity but lack robustness. In contrast, high-breakdown methods like the minimum covariance determinant (MCD) are computationally prohibitive (O(Np² + p³)) for real-time applications. This paper introduces Fast Outlier-Robust Correlation Estimation (FORCE), a streaming algorithm that performs adaptive coordinate-wise trimming using the P² algorithm for streaming quantile approximation, requiring only O(p) memory independent of stream length. We evaluate FORCE against six baseline algorithms—including exact trimmed methods (TP-Exact, TP-TER) that use O(N log N) sorting with O(Np) storage—across five benchmark datasets spanning synthetic, financial, medical, and genomic domains. FORCE achieves speedups of approximately 470× over FastMCD and 3.9× over Spearman's rank correlation. On S&P 500 financial data, coordinate-wise trimmed methods substantially outperform FastMCD: TP-Exact achieves the best RMSE (0.0902), followed by TP-TER (0.0909) and FORCE (0.1186), compared to FastMCD's 0.1606. This result demonstrates that coordinate-wise trimming better accommodates volatility clustering in financial time series than multivariate outlier exclusion. FORCE achieves 76% of TP-Exact's accuracy while requiring 10⁴× less memory, enabling robust estimation in true streaming environments where data cannot be retained for batch processing. We validate the 25% breakdown point shared by all IQR-based trimmed methods using the ODDS-satellite benchmark (31.7% contamination), confirming identical degradation for FORCE, TP-Exact, and TP-TER. For memory-constrained streaming applications with contamination below 25%, FORCE provides the only viable path to robust correlation estimation with bounded memory. Full article
(This article belongs to the Special Issue Modeling and Simulation for Optimizing Complex Dynamical Systems)
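The core idea of coordinate-wise IQR trimming can be sketched in batch form. The sketch below is a minimal illustration in the spirit of the exact trimmed baselines (TP-Exact style), not the FORCE algorithm itself: FORCE replaces the exact per-coordinate quantiles with streaming P² estimates so that only O(p) state is kept. The function name, the whole-row exclusion policy, and the fence factor k=1.5 are illustrative assumptions.

```python
import numpy as np

def trimmed_correlation(X, k=1.5):
    """Batch analogue of coordinate-wise IQR trimming (illustrative).

    Rows with a value outside [Q1 - k*IQR, Q3 + k*IQR] in any
    coordinate are excluded before computing Pearson correlation.
    A streaming variant would replace the exact percentiles below
    with P^2 quantile estimates maintained in O(p) memory.
    """
    q1 = np.percentile(X, 25, axis=0)
    q3 = np.percentile(X, 75, axis=0)
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    mask = np.all((X >= lo) & (X <= hi), axis=1)   # keep inlier rows only
    return np.corrcoef(X[mask], rowvar=False)
```

On data with a planted gross outlier, the trimmed estimate stays near the true correlation while the raw Pearson matrix is dragged toward the outlier, which is the robustness gap the abstract quantifies.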

34 pages, 2671 KB  
Article
A Tuning-Free Constrained Team-Oriented Swarm Optimizer (CTOSO) for Engineering Problems
by Adel BenAbdennour and Abdulmajeed M. Alenezi
Mathematics 2026, 14(1), 176; https://doi.org/10.3390/math14010176 - 2 Jan 2026
Abstract
Constrained optimization problems (COPs) are frequent in engineering design yet remain challenging due to complex search spaces and strict feasibility requirements. Existing swarm-based optimizers often rely on penalty functions or algorithm-specific control parameters, whose performance is sensitive to problem-dependent tuning and may lead to premature convergence or infeasible solutions when feasible regions are narrow. This paper introduces the Constrained Team-Oriented Swarm Optimizer (CTOSO), a tuning-free metaheuristic that adapts the ETOSO framework by replacing linear exploiter movement with spiral search and integrating Deb’s feasibility rule. The population divides into Explorers, promoting diversity through neighbor-guided navigation, and Exploiters, performing intensified local search around the global best solution. Extensive evaluation on twelve constrained engineering benchmark problems shows that CTOSO achieves a 100% feasibility rate and attains the highest overall composite performance score among the compared algorithms under limited function-evaluation budgets. On the CEC 2017 constrained benchmark suite, CTOSO attains an average feasibility rate of 79.78%, generating feasible solutions on 14 out of 15 problems. Statistical analysis using Wilcoxon signed-rank tests and Friedman ranking with Nemenyi post hoc comparison indicates that CTOSO performs significantly better than several baseline optimizers, while exhibiting no statistically significant differences with leading evolutionary methods under the same experimental conditions. The algorithm’s design, requiring no tuning of algorithm-specific control parameters, makes it suitable for real-world engineering applications where tuning effort must be minimized. Full article
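Deb’s feasibility rule, which CTOSO integrates in place of penalty functions, is a simple tournament comparison. The sketch below shows the standard three-case rule for minimization; the tuple representation and function name are illustrative, not taken from the paper.

```python
def deb_better(a, b):
    """Deb's feasibility rule: True if candidate `a` is preferred
    over `b` for a minimization problem.

    Each candidate is (objective_value, total_constraint_violation),
    where violation == 0.0 means feasible.
    """
    fa, va = a
    fb, vb = b
    if va == 0.0 and vb == 0.0:   # both feasible: lower objective wins
        return fa < fb
    if va == 0.0 or vb == 0.0:    # exactly one feasible: it wins
        return va == 0.0
    return va < vb                # both infeasible: smaller violation wins
```

Because the rule never mixes objective value and violation into a single penalized score, it needs no penalty coefficients, which is what makes the overall algorithm tuning-free in this respect.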

16 pages, 1797 KB  
Article
Intelligent Prediction of Subway Tunnel Settlement: A Novel Approach Using a Hybrid HO-GPR Model
by Jiangming Chai, Xinlin Yang and Wenbin Deng
Buildings 2026, 16(1), 192; https://doi.org/10.3390/buildings16010192 - 1 Jan 2026
Abstract
Precise prediction of structural settlement in subway tunnels is crucial for ensuring safety during both construction and operational phases; however, the non-linear characteristics of monitoring data pose a significant challenge to achieving this goal. To address this issue, this study proposes a hybrid predictive model, termed HO-GPR. This model integrates the Hippopotamus Optimization (HO) algorithm—a novel bio-inspired meta-heuristic—with Gaussian Process Regression (GPR), a non-parametric probabilistic machine learning method. Specifically, HO is utilized to globally optimize the hyperparameters of GPR to enhance its adaptability to complex deformation patterns. The model was validated using 52 months of field settlement monitoring data collected from the Urumqi Metro Line 1 tunnel. Through a series of comparative and generalization experiments, the accuracy and adaptability of the model were systematically evaluated. The results demonstrate that the HO-GPR model is superior to five benchmark models—namely Gated Recurrent Unit (GRU), Support Vector Regression (SVR), HO-optimized Back Propagation Neural Network (HO-BP), standard GPR, and ARIMA—in terms of accuracy and stability. It achieved a Coefficient of Determination (R²) of 0.979, while the Root Mean Square Error (RMSE), Mean Absolute Error (MAE), and Mean Absolute Percentage Error (MAPE) were as low as 0.318 mm, 0.240 mm, and 1.83%, respectively, proving its capability for effective prediction with non-linear data. The findings of this research can provide valuable technical support for the structural safety management of subway tunnels. Full article
(This article belongs to the Section Construction Management, and Computers & Digitization)
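The general pattern the abstract describes, a global optimizer selecting GPR hyperparameters by marginal likelihood, can be sketched without the HO algorithm itself. The sketch below substitutes plain random search as a stand-in for HO and uses a minimal RBF-kernel GPR; every function name, the search ranges, and the fixed noise level are illustrative assumptions, not details from the paper.

```python
import numpy as np

def rbf_kernel(a, b, lengthscale, variance):
    """Squared-exponential kernel between two 1-D input arrays."""
    d = (a[:, None] - b[None, :]) ** 2
    return variance * np.exp(-0.5 * d / lengthscale ** 2)

def gpr_nll(x, y, lengthscale, variance, noise):
    """Negative log marginal likelihood (up to an additive constant)."""
    K = rbf_kernel(x, x, lengthscale, variance) + noise * np.eye(len(x))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return 0.5 * y @ alpha + np.log(np.diag(L)).sum()

def fit_gpr_random_search(x, y, n_trials=200, seed=0):
    """Stand-in for the HO step: pick the (lengthscale, variance)
    pair minimizing the negative log marginal likelihood."""
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(n_trials):
        ls = rng.uniform(0.1, 5.0)
        var = rng.uniform(0.1, 5.0)
        nll = gpr_nll(x, y, ls, var, noise=1e-3)
        if best is None or nll < best[0]:
            best = (nll, ls, var)
    return best[1], best[2]

def gpr_predict(x_train, y_train, x_test, lengthscale, variance, noise=1e-3):
    """Posterior mean of the GP at the test inputs."""
    K = rbf_kernel(x_train, x_train, lengthscale, variance) \
        + noise * np.eye(len(x_train))
    Ks = rbf_kernel(x_test, x_train, lengthscale, variance)
    return Ks @ np.linalg.solve(K, y_train)
```

Swapping the random-search loop for any population-based metaheuristic (HO included) changes only how candidate hyperparameters are proposed; the marginal-likelihood objective and the GPR prediction step stay the same.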
