Search Results (1,737)

Search Parameters:
Keywords = error decomposition

24 pages, 12711 KB  
Article
Evidentially Driven Uncertainty Decomposition for Weakly Supervised Point Cloud Semantic Segmentation
by Qingyan Wang, Yixin Wang, Junping Zhang, Yujing Wang and Shouqiang Kang
ISPRS Int. J. Geo-Inf. 2026, 15(4), 167; https://doi.org/10.3390/ijgi15040167 (registering DOI) - 12 Apr 2026
Abstract
Point cloud semantic segmentation is a core component in indoor scene understanding and autonomous driving. Under weak point-level supervision, only a small subset of points is annotated, making effective use of unlabeled points critical yet non-trivial. Many existing approaches rely on prediction confidence to filter pseudo labels or enforce consistency, which can bias training toward easy points and amplify early mistakes. Consequently, confidently wrong predictions may be reinforced, while uncertain points around class boundaries or in geometrically complex regions are less utilized, limiting further gains. An evidential uncertainty decomposition framework is introduced for weakly supervised point cloud semantic segmentation. Network outputs are interpreted as evidential distributions, and uncertainty is decomposed to separate lack-of-knowledge uncertainty from boundary-related ambiguity, providing a more informative reliability signal for unlabeled points. Based on this signal, different constraints are applied to different subsets: reliable points are trained with pseudo labels together with prototype-based regularization to encourage intra-class compactness; boundary-ambiguous points are guided by evidential consistency to improve boundary learning; and points with high epistemic uncertainty are excluded from pseudo-label-based supervision to mitigate error reinforcement. In addition, an uncertainty calibration term on sparsely labeled points helps stabilize training. Experiments on S3DIS, ScanNet-V2, and SemanticKITTI yield 67.7%, 59.7%, and 53.3% mIoU, respectively, with only 0.1% labeled points, comparing favorably with prior weakly supervised point cloud segmentation methods.
(This article belongs to the Special Issue Indoor Mobile Mapping and Location-Based Knowledge Services)
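The three-way routing of unlabeled points described above hinges on separating lack-of-knowledge uncertainty from class-conflict ambiguity. A minimal sketch of one common subjective-logic formulation over Dirichlet evidence follows; the abstract does not give the authors' exact decomposition or thresholds, so the formulas and example evidence values here are assumptions.

```python
# Sketch: decompose Dirichlet evidence into vacuity (epistemic uncertainty) and
# dissonance (boundary ambiguity), subjective-logic style; not the paper's code.
import numpy as np

def decompose_uncertainty(evidence):
    """evidence: (N, K) non-negative per-class evidence for N points."""
    K = evidence.shape[1]
    alpha = evidence + 1.0                       # Dirichlet parameters
    S = alpha.sum(axis=1, keepdims=True)         # Dirichlet strength
    belief = evidence / S                        # per-class belief mass
    vacuity = K / S[:, 0]                        # lack-of-knowledge term

    diss = np.zeros(len(evidence))               # conflict among belief masses
    for k in range(K):
        bk = belief[:, k][:, None]
        others = np.delete(belief, k, axis=1)
        bal = 1.0 - np.abs(others - bk) / np.maximum(others + bk, 1e-12)
        diss += bk[:, 0] * (others * bal).sum(1) / np.maximum(others.sum(1), 1e-12)
    return vacuity, diss

# Hypothetical evidence for three points, mirroring the paper's three subsets:
ev = np.array([[9.0, 0.2, 0.1],    # confident    -> pseudo-label + prototypes
               [4.0, 3.8, 0.1],    # dissonant    -> boundary consistency
               [0.1, 0.2, 0.1]])   # high vacuity -> excluded from pseudo-labels
print(decompose_uncertainty(ev))
```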
25 pages, 3222 KB  
Article
CoFiWaveMamba: A Coarse-to-Fine Wavelet-Guided Mamba Network for Single Image Dehazing
by Qiang Fu, Boyu Lu and Chongyao Yan
Electronics 2026, 15(8), 1599; https://doi.org/10.3390/electronics15081599 (registering DOI) - 11 Apr 2026
Abstract
Single image dehazing remains challenging because haze simultaneously distorts global illumination, scene structure, and fine textures, making rigid low–high frequency decoupling prone to error propagation and detail inconsistency. To address this issue, we propose CoFiWaveMamba, a coarse-to-fine wavelet-guided Mamba network for single image dehazing. The proposed method first employs wavelet decomposition to separate low- and high-frequency components. For low-frequency restoration, a 2D selective-scan Mamba-based module is introduced to capture long-range dependencies, combined with lightweight high-frequency-guided spatial modulation and Shuffle-guided Sequence Attention. We further design a progressive coarse-to-fine refinement strategy that combines Fourier-domain global spectral consistency with wavelet-domain directional detail representation, enabling more targeted recovery of edges and textures. Experiments on synthetic and real dehazing benchmarks, including Haze4K, RESIDE-6K, HSTS-SYNTHETIC, I-Haze, NH-Haze, Dense-Haze, and O-HAZE, as well as ablation studies, verify the effectiveness of the proposed design. Overall, CoFiWaveMamba provides a more coordinated solution for global haze removal and local detail reconstruction, helping suppress residual haze, ringing artifacts, oversharpening, and texture inconsistency while restoring clearer and more natural images.
(This article belongs to the Topic Computer Vision and Image Processing, 3rd Edition)
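The wavelet front end that the pipeline builds on can be sketched with PyWavelets; the Mamba, modulation, and attention modules are not reproduced, and the wavelet choice here is an illustrative assumption.

```python
# Sketch: one-level 2D wavelet split into the low-frequency band (global
# illumination/structure) and three directional high-frequency bands (details).
import numpy as np
import pywt

img = np.random.rand(256, 256).astype(np.float32)  # stand-in for a hazy image channel
cA, (cH, cV, cD) = pywt.dwt2(img, "haar")

# cA would feed the Mamba-based low-frequency branch; cH/cV/cD guide the
# coarse-to-fine edge and texture refinement. The split is exactly invertible:
recon = pywt.idwt2((cA, (cH, cV, cD)), "haar")
print(np.allclose(recon, img, atol=1e-5))
```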
22 pages, 1240 KB  
Article
Single-Ended Fault Location Method for DC Distribution Network Based on Bi-LSTM
by Jiamin Lv, Ying Wang, Mingshen Wang, Qikai Zhao and Manqian Yu
Energies 2026, 19(8), 1866; https://doi.org/10.3390/en19081866 - 10 Apr 2026
Abstract
When a line short-circuit fault occurs in a DC distribution network, the fault current rises quickly and affects a wide area, jeopardizing the safe operation of the system. To locate the fault quickly and accurately, this study proposes a fault localization method based on Variational Mode Decomposition (VMD) and Bidirectional Long Short-Term Memory (Bi-LSTM) networks. First, the nonlinear relationship between the intrinsic principal frequency and the fault distance is analyzed; then, the intrinsic principal frequency of the fault-generated traveling wave is extracted using VMD, and the nonlinear relationship between the spectral energy of the intrinsic principal frequency and the fault distance is fitted by training a Bi-LSTM network incorporating an attention mechanism. Finally, because the small amount of fault data available in practical engineering cannot supply the data volume deep learning requires, transfer learning is used to locate faults in the target domain, and a small-sample test on the target domain is carried out. The experimental results show that the proposed method has high localization accuracy and good robustness to high fault resistance and noise; compared with conventional network training, the transfer-learning-based localization error is smaller and the network converges better.
(This article belongs to the Section F1: Electrical Power System)
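The regression backbone can be sketched in PyTorch as a bidirectional LSTM with additive attention mapping a VMD-derived feature sequence to a fault distance; the layer sizes, attention form, and input featurization are illustrative assumptions rather than the paper's configuration.

```python
# Sketch: Bi-LSTM + attention regressor for fault distance; sizes are illustrative.
import torch
import torch.nn as nn

class BiLSTMLocator(nn.Module):
    def __init__(self, in_dim=1, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)    # scores each time step
        self.head = nn.Linear(2 * hidden, 1)    # regresses fault distance

    def forward(self, x):                       # x: (batch, time, in_dim)
        h, _ = self.lstm(x)                     # (batch, time, 2*hidden)
        w = torch.softmax(self.attn(h), dim=1)  # attention weights over time
        return self.head((w * h).sum(dim=1)).squeeze(-1)

model = BiLSTMLocator()
x = torch.randn(8, 100, 1)                      # 8 spectral-energy sequences
print(model(x).shape)                           # torch.Size([8])
```

For the transfer step described in the abstract, the same network would be pre-trained on simulated source-domain faults and fine-tuned on the few target-domain records.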
22 pages, 7572 KB  
Article
Spatial Heterogeneity and Drivers of Vertical Error in Global DEMs: An Explainable Machine Learning Approach in Complex Subtropical Coastal Zones
by Junhui Chen, Fei Tang, Heshan Lin, Bo Huang and Xueping Lin
Remote Sens. 2026, 18(8), 1125; https://doi.org/10.3390/rs18081125 - 10 Apr 2026
Abstract
Digital elevation models (DEMs) are foundational for critical tasks such as flood inundation simulation, disaster risk assessment, and ecosystem monitoring in coastal zones, yet their vertical accuracy is significantly compromised by complex terrain and surface characteristics. This study quantitatively decomposes the vertical errors of three 30 m global DEMs (COP30, NASADEM, and AW3D30) across the subtropical coastal region of Southeast China using ICESat-2 ATL08 data as a reference. By integrating an eXtreme Gradient Boosting (XGBoost) model with SHapley Additive exPlanations (SHAP), we successfully decoupled systematic biases from random noise. The results show that NASADEM achieved the lowest RMSE (7.775 m), followed by COP30 and AW3D30. While the Terrain Ruggedness Index (TRI) and categorically encoded Land Cover were identified as the universally dominant error drivers across all datasets, explainable analysis revealed distinct secondary mechanisms: X-band COP30 is notably susceptible to canopy height, exhibiting significant positive bias in forests exceeding 15 m; C-band NASADEM shows a systematic bias related to topographic position, typically overestimating ridges and underestimating valleys; and optical AW3D30 is significantly affected by stereo-matching errors. Furthermore, the analysis quantified a systematic error component of ~40%. These findings provide a data-driven basis for DEM selection and highlight that accuracy improvements should prioritize vegetation removal for radar DEMs and enhanced stereo-matching for optical models.
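The error-attribution step pairs a gradient-boosted regressor with SHAP values. A minimal sketch on synthetic stand-in features follows; the feature set, hyperparameters, and data are illustrative, not the study's.

```python
# Sketch: fit XGBoost to per-point DEM vertical error, then rank drivers by SHAP.
import numpy as np
import xgboost as xgb
import shap

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))          # stand-ins: slope, TRI, canopy height, land cover
y = 2.0 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=1000)  # synthetic error

model = xgb.XGBRegressor(n_estimators=200, max_depth=4).fit(X, y)
shap_values = shap.TreeExplainer(model).shap_values(X)   # (1000, 4) contributions

# Mean |SHAP| per feature ranks error drivers, mirroring the TRI/land-cover finding.
print(np.abs(shap_values).mean(axis=0))
```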
35 pages, 856 KB  
Article
Stock Forecasting Based on Informational Complexity Representation: A Framework of Wavelet Entropy, Multiscale Entropy, and Dual-Branch Network
by Guisheng Tian, Chengjun Xu and Yiwen Yang
Entropy 2026, 28(4), 424; https://doi.org/10.3390/e28040424 - 10 Apr 2026
Abstract
Stock price sequences are characterized by pronounced nonlinearity, non-stationarity, and multi-scale volatility. They are further influenced by complex, multi-source factors, such as macroeconomic conditions and market behavior, making high-precision forecasting highly challenging. Existing approaches are limited by noise and multi-dimensional market features, as well as difficulties in balancing prediction accuracy with model complexity. To address these challenges, we propose Wavelet Entropy and Cross-Attention Network (WECA-Net), which combines wavelet decomposition with a multimodal cross-attention mechanism. From an information-theoretic perspective, stock price dynamics reflect the time-varying uncertainty and informational complexity of the market. We employ wavelet entropy to quantify the dispersion and uncertainty of energy distribution across frequency bands, and multiscale entropy to measure the scale-dependent complexity and regularity of the time series. These entropy-derived descriptors provide an interpretable prior of "information content" for cross-modal attention fusion, thereby improving robustness and generalization under non-stationary market conditions. Experiments on Chinese stock indices, A-Share, and CSI 300 component stock datasets demonstrate that WECA-Net consistently outperforms mainstream models in Mean Absolute Error (MAE) and R² across all datasets. Notably, on the CSI 300 dataset, WECA-Net achieves an R² of 0.9895, underscoring its strong predictive accuracy and practical applicability. This framework is also well aligned with sensor data fusion and intelligent perception paradigms, offering a robust solution for financial signal processing and real-time market state awareness.
(This article belongs to the Section Complexity)
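The wavelet-entropy descriptor has a compact definition: the Shannon entropy of the relative energy across decomposition bands. A minimal sketch follows; the wavelet basis and level count are illustrative assumptions.

```python
# Sketch: wavelet entropy = entropy of the band-wise relative energy distribution.
import numpy as np
import pywt

def wavelet_entropy(x, wavelet="db4", level=5):
    coeffs = pywt.wavedec(x, wavelet, level=level)
    energy = np.array([np.sum(c ** 2) for c in coeffs])
    p = energy / energy.sum()                  # relative energy per band
    return -np.sum(p * np.log(p + 1e-12))      # higher = more dispersed, less regular

prices = np.cumsum(np.random.randn(1024))      # stand-in for a price series
print(wavelet_entropy(prices))
```

Multiscale entropy would complement this with sample entropy computed on coarse-grained copies of the series at increasing scales.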
23 pages, 557 KB  
Article
A Multi-Stage Decomposition and Hybrid Statistical Framework for Time Series Forecasting
by Swera Zeb Abbasi, Mahmoud M. Abdelwahab, Imam Hussain, Moiz Qureshi, Moeeba Rind, Paulo Canas Rodrigues, Ijaz Hussain and Mohamed A. Abdelkawy
Axioms 2026, 15(4), 273; https://doi.org/10.3390/axioms15040273 - 9 Apr 2026
Abstract
Modeling and forecasting nonstationary and nonlinear economic time series remain fundamentally challenging due to structural breaks, volatility clustering, and noise contamination that distort the intrinsic stochastic structure. To address these limitations, this study proposes a novel three-stage hybrid statistical framework that systematically integrates multi-level signal decomposition with structured parametric modeling to enhance predictive accuracy. The proposed hybrid architectures—EMD–EEMD–ARIMA, EMD–EEMD–GMDH, and EMD–EEMD–ETS—employ a hierarchical decomposition–reconstruction strategy before forecasting. In the first stage, Empirical Mode Decomposition (EMD) decomposes the observed series into intrinsic mode functions (IMFs) and a residual component. In the second stage, Ensemble Empirical Mode Decomposition (EEMD) is applied to further refine the extracted components, mitigating mode mixing and improving signal separability. In the final stage, each reconstructed component is modeled using ARIMA, Exponential Smoothing State Space (ETS), and Group Method of Data Handling (GMDH) frameworks, and the individual forecasts are aggregated to obtain the final prediction. Empirical evaluation based on a recursive one-step-ahead forecasting scheme demonstrates consistent numerical improvements across all standard accuracy measures. In particular, the proposed EMD–EEMD–ARIMA model achieves the lowest forecasting error, reducing the root-mean-square error (RMSE) by approximately 6–7% relative to the best-performing single-stage model and by about 3–4% relative to the two-stage EMD-based hybrids. Similar improvements are observed in mean squared error (MSE), mean absolute error (MAE), and mean absolute percentage error (MAPE), indicating enhanced stability and robustness of the three-stage architecture. The results provide strong numerical evidence that multi-level decomposition combined with structured statistical modeling yields superior predictive performance for complex nonlinear and nonstationary time series. The proposed framework offers a mathematically coherent, computationally tractable, and systematically structured hybrid modeling strategy that effectively integrates noise-assisted decomposition with parametric and data-driven forecasting techniques.
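The decompose-model-aggregate pattern can be sketched with PyEMD and statsmodels; for brevity this collapses the two decomposition stages into a single EEMD pass, and the ARIMA order and trial count are illustrative assumptions.

```python
# Sketch: decompose, forecast each component, aggregate the one-step forecasts.
import numpy as np
from PyEMD import EEMD
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(1)
y = np.sin(np.linspace(0, 20, 300)) + 0.01 * np.arange(300) + rng.normal(0, 0.2, 300)

imfs = EEMD(trials=50).eemd(y)            # noise-assisted IMFs + residue
forecasts = []
for comp in imfs:                         # model each component separately
    fit = ARIMA(comp, order=(2, 0, 1)).fit()
    forecasts.append(fit.forecast(1)[0])  # recursive one-step-ahead forecast

print(sum(forecasts))                     # aggregated final prediction
```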
26 pages, 2531 KB  
Article
Underwater Acoustic Source DOA Estimation for Non-Uniform Circular Arrays Based on EMD and PWLS Correction
by Chuang Han, Boyuan Zheng and Tao Shen
Symmetry 2026, 18(4), 627; https://doi.org/10.3390/sym18040627 - 9 Apr 2026
Abstract
Uniform circular arrays (UCAs) are widely used in underwater source localization due to their omnidirectional coverage. However, random sensor position errors caused by installation inaccuracies and environmental disturbances convert UCAs into non-uniform circular arrays (NCAs), severely degrading the performance of high-resolution direction of arrival (DOA) estimation algorithms. To address this issue, this paper proposes a robust DOA estimation method that integrates empirical mode decomposition (EMD) denoising with prior-weighted iterative least squares (PWLS) correction. The method first applies EMD to adaptively denoise received signals by selecting intrinsic mode functions based on a combined energy-correlation criterion. An initial DOA estimate is then obtained using the MUSIC algorithm. Finally, a PWLS correction algorithm leverages prior knowledge of deviated sensors to iteratively fit the circle center and gradually pull sensor positions toward the ideal circumference, using a differentiated relaxation mechanism to suppress outliers while preserving geometric features. Systematic Monte Carlo simulations compare five correction algorithms under multi-frequency and wideband signals. The results show that estimation errors fall below 0.1° for both multi-frequency and wideband signals; the proposed PWLS achieves the best accuracy under multi-frequency signals, while all algorithms approach zero error under wideband signals. The PWLS algorithm converges in about 10 iterations with high computational efficiency, providing a reliable solution for practical underwater NCA applications.
(This article belongs to the Section Engineering and Materials)
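The EMD denoising stage with a combined energy-correlation criterion can be sketched as follows; the two thresholds are illustrative assumptions, not the paper's values.

```python
# Sketch: keep IMFs that are either energy-dominant or well correlated with the input.
import numpy as np
from PyEMD import EMD

def emd_denoise(sig, corr_min=0.2, energy_min=0.05):
    imfs = EMD().emd(sig)
    total = np.sum(sig ** 2)
    keep = [imf for imf in imfs
            if abs(np.corrcoef(imf, sig)[0, 1]) >= corr_min
            or np.sum(imf ** 2) / total >= energy_min]
    return np.sum(keep, axis=0) if keep else sig

t = np.linspace(0, 1, 2000)
clean = np.sin(2 * np.pi * 50 * t)
noisy = clean + 0.5 * np.random.randn(t.size)
print(np.std(emd_denoise(noisy) - clean))      # residual error after denoising
```

The denoised snapshots would then feed MUSIC for the initial DOA estimate before the PWLS position correction.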
29 pages, 6506 KB  
Article
A Hybrid VMD–Informer Framework for Forecasting Volatile Pork Prices
by Xudong Lin, Guobao Liu, Zhiguo Du, Bin Wen, Zhihui Wu, Xianzhi Tu and Yongjie Zhang
Agriculture 2026, 16(8), 827; https://doi.org/10.3390/agriculture16080827 - 8 Apr 2026
Abstract
Accurate forecasting of pork prices is important yet challenging because pork price series are highly volatile and non-stationary. Existing hybrid forecasting models often rely on fixed-weight integration, which may limit their ability to adapt to multi-scale temporal variation and complex temporal dependencies. To address these issues, this study proposes VMD–EMSA–HCTM–Informer, a hybrid forecasting framework that combines signal decomposition with an enhanced encoder–decoder architecture. Variational Mode Decomposition (VMD) is first used to reduce signal non-stationarity by extracting intrinsic mode functions. Within the Informer backbone, an Enhanced Multi-Scale Attention (EMSA) encoder is introduced to capture local fluctuations at different temporal scales, while a Hybrid Convolutional–Temporal Module (HCTM) decoder is used to strengthen temporal feature extraction and channel interaction modeling. Empirical evaluation was conducted on daily pork price data from the China Pig Industry Network and a large-scale intensive breeding enterprise in southern China over the period 2013–2025. Under the current experimental setting, the proposed framework achieved the lowest average errors among the compared baselines across five independent runs, with an average MAE of 0.4875 and an average MAPE of 3.0540%. These results suggest that the proposed framework provides a useful and relatively stable univariate forecasting approach for volatile pork prices. However, the findings should be interpreted within the scope of the present dataset and experimental design, and future work will extend the framework to multivariate forecasting with exogenous drivers and uncertainty quantification.
(This article belongs to the Section Agricultural Economics, Policies and Rural Management)
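The VMD preprocessing stage can be sketched with the vmdpy package; the mode count and penalty factor below are illustrative assumptions, and the EMSA/HCTM Informer components are not reproduced.

```python
# Sketch: decompose a price series into K band-limited intrinsic mode functions.
import numpy as np
from vmdpy import VMD

prices = np.cumsum(np.random.randn(512)) + 20.0   # stand-in for daily pork prices
alpha, tau, K, DC, init, tol = 2000, 0.0, 5, 0, 1, 1e-7
u, u_hat, omega = VMD(prices, alpha, tau, K, DC, init, tol)

# u: (K, T) modes ordered by center frequency; each mode (or the stacked set)
# would be encoded by the Informer backbone and the forecasts recombined.
print(u.shape, omega[-1])                          # modes and final center frequencies
```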
30 pages, 1924 KB  
Article
TinyML for Sustainable Edge Intelligence: Practical Optimization Under Extreme Resource Constraints
by Mohamed Echchidmi and Anas Bouayad
Technologies 2026, 14(4), 215; https://doi.org/10.3390/technologies14040215 - 7 Apr 2026
Abstract
Deep learning has emerged as an effective tool for automatic waste classification, supporting cleaner cities and more sustainable recycling systems. Because environmental protection is central to the United Nations Sustainable Development Goals (SDGs), improving the sorting and processing of everyday waste is a practical step toward this broader objective. In many real-world settings, however, waste is still sorted manually, which is slow, labor-intensive, and prone to human error. Although convolutional neural networks (CNNs) can automate this task with high accuracy, many state-of-the-art models remain too large and computationally demanding for low-cost edge devices intended for deployment in homes, schools, and small recycling facilities. In this work, we investigate lightweight waste-classification models suitable for TinyML deployment while preserving competitive accuracy. We first benchmark multiple CNN architectures to establish a strong baseline, then apply complementary compression strategies including quantization, pruning, singular value decomposition (SVD) low-rank approximation, and knowledge distillation. In addition, we evaluate an RL-guided multi-teacher selection benchmark that adaptively chooses one teacher per minibatch during distillation to improve student training stability, achieving up to 85% accuracy with only 0.496 M parameters (FP32 ≈ 1.89 MB; INT8 ≈ 0.47 MB). Across all experiments, the best accuracy–size trade-off is obtained by combining knowledge distillation with post-training quantization, reducing the model footprint from approximately 16 MB to 281 KB while maintaining 82% accuracy. The resulting model is feasible for deployment in mobile applications and on resource-constrained embedded devices, given its model size and TensorFlow Lite Micro compatibility.
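The final compression step, post-training quantization of the distilled student, can be sketched with the TensorFlow Lite converter; the toy architecture and class count below are illustrative stand-ins.

```python
# Sketch: dynamic-range (INT8 weight) post-training quantization of a Keras student.
import tensorflow as tf

student = tf.keras.Sequential([                   # stand-in for the distilled student
    tf.keras.Input(shape=(96, 96, 3)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(6, activation="softmax"),  # e.g., six waste classes
])

converter = tf.lite.TFLiteConverter.from_keras_model(student)
converter.optimizations = [tf.lite.Optimize.DEFAULT]   # enable quantization
tflite_model = converter.convert()

print(f"{len(tflite_model) / 1024:.1f} KB")       # footprint vs. the FP32 model
```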
28 pages, 4886 KB  
Article
Equivariant Transition Matrices for Explainable Deep Learning: A Lie Group Linearization Approach
by Pavlo Radiuk, Oleksander Barmak, Leonid Bedratyuk and Iurii Krak
Mach. Learn. Knowl. Extr. 2026, 8(4), 92; https://doi.org/10.3390/make8040092 - 6 Apr 2026
Abstract
Deep learning systems deployed in regulated settings require explanations that are accurate and stable under nuisance transformations, yet classical post hoc transition matrices rely on fidelity-only fitting that fails to guarantee consistent explanations under spatial rotations or other group actions. In this work, we propose Equivariant Transition Matrices, a post hoc approach that augments transition matrices with Lie-group-aware structural constraints to bridge this research gap. Our method estimates infinitesimal generators in the formal and mental feature spaces, enforces an approximate intertwining relation at the Lie algebra level, and solves the resulting convex least-squares problem via singular value decomposition for small networks or implicit operators for large systems. We introduce diagnostics for symmetry validation and an unsupervised strategy for regularization weight selection. On a controlled synthetic benchmark, our approach reduces the symmetry defect from 13,100 to 0.0425 while increasing the mean squared error marginally from 0.00367 to 0.00524. On the MNIST dataset, the symmetry defect decreases by 72.6 percent (141.19 to 38.65) with changes in structural similarity and peak signal-to-noise ratio below 0.03 percent and 0.06 percent, respectively. These results demonstrate that explanation-level equivariance can be reliably imposed post-training, providing geometrically consistent interpretations for fixed deep models.
(This article belongs to the Special Issue Trustworthy AI: Integrating Knowledge, Retrieval, and Reasoning)
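The fidelity-plus-intertwining fit has a direct least-squares form: minimize ||TX − Y||² + λ||TGx − GyT||² over T, which vectorizes via Kronecker products and is solvable by SVD-based least squares. A toy-dimension sketch follows; the regularization weight and random data are illustrative.

```python
# Sketch: fit a transition matrix T with an intertwining penalty at the generator level.
import numpy as np

def fit_equivariant_T(X, Y, Gx, Gy, lam=10.0):
    m, n = Y.shape[0], X.shape[0]
    A_fid = np.kron(X.T, np.eye(m))                            # vec(T X)
    A_sym = np.kron(Gx.T, np.eye(m)) - np.kron(np.eye(n), Gy)  # vec(T Gx - Gy T)
    A = np.vstack([A_fid, np.sqrt(lam) * A_sym])
    b = np.concatenate([Y.flatten(order="F"), np.zeros(m * n)])
    t, *_ = np.linalg.lstsq(A, b, rcond=None)                  # SVD-based solve
    return t.reshape(m, n, order="F")

rng = np.random.default_rng(0)
n, m, p = 6, 4, 50
Gx, Gy = rng.normal(size=(n, n)), rng.normal(size=(m, m))
X, Y = rng.normal(size=(n, p)), rng.normal(size=(m, p))
T = fit_equivariant_T(X, Y, Gx, Gy)
print(np.linalg.norm(T @ Gx - Gy @ T))   # symmetry defect shrinks as lam grows
```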
17 pages, 9423 KB  
Article
Photovoltaic Power Prediction Based on Multi-Source Environmental Information Fusion Using a VMD-ZOA-LSTM Hybrid Model
by Zixiu Qin, Hai Wei, Xiaoning Deng, Yi Zhang and Xuecheng Wang
Processes 2026, 14(7), 1166; https://doi.org/10.3390/pr14071166 - 4 Apr 2026
Abstract
Renewable energy generation has become the first choice for low-carbon reform of the energy industry owing to its emission-reduction benefits and environmental friendliness. However, due to the fluctuating nature of renewable energy, sustaining consistent reliability and secure performance within the power network has become increasingly challenging. A novel ensemble prediction scheme for photovoltaic (PV) output is presented, leveraging multi-source environmental data fusion to enhance forecast precision. The relationship between environmental variables and PV generation is quantitatively assessed using Pearson's correlation coefficient to isolate the most influential factors. Subsequently, the PV time-series data are decomposed via variational mode decomposition (VMD) to extract multi-scale dynamic patterns. The refined features are then utilized within a long short-term memory (LSTM) network, whose parameters are adaptively optimized by the zebra optimization algorithm (ZOA). Historical datasets comprising environmental observations and corresponding PV generation records from a representative power station serve as the empirical basis. Results reveal that the VMD-ZOA-LSTM framework achieves the lowest RMSE and MAE, reducing errors by over 50% relative to comparative models. Furthermore, its R² metric outperforms that of the baseline LSTM and VMD-LSTM configurations by 2.05% and 1.19%, respectively, thereby substantiating the efficiency and validity of the proposed modeling strategy.
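The Pearson screening step that selects the dominant environmental drivers can be sketched with pandas; the variables, threshold, and synthetic data are illustrative assumptions.

```python
# Sketch: rank candidate drivers of PV output by Pearson correlation.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
df = pd.DataFrame({
    "irradiance": rng.uniform(0, 1000, 500),
    "temperature": rng.uniform(-5, 35, 500),
    "humidity": rng.uniform(10, 90, 500),
})
df["pv_power"] = 0.8 * df["irradiance"] + 2.0 * df["temperature"] + rng.normal(0, 20, 500)

corr = df.corr(method="pearson")["pv_power"].drop("pv_power")
selected = corr[corr.abs() > 0.3].index.tolist()   # keep strongly correlated drivers
print(corr.round(3).to_dict(), selected)
```

The selected series would then be decomposed by VMD and fed to the LSTM whose hyperparameters the ZOA searches.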
33 pages, 442 KB  
Article
Learning-Augmented Quasi-Gradient Operators for Constrained Optimization: A Contraction–Bias–Variance Decomposition
by Gilberto Pérez-Lechuga, Marco Antonio Coronel García and Ana Lidia Martínez Salazar
Mathematics 2026, 14(7), 1202; https://doi.org/10.3390/math14071202 - 3 Apr 2026
Abstract
This paper develops a rigorous operator-theoretic framework for learning-augmented quasi-gradient methods in constrained optimization. We consider the minimization of an objective function over a closed convex feasible set, where feasibility is enforced via projection and directional updates may incorporate data-driven corrections. Such settings arise naturally in modern optimization algorithms that integrate artificial intelligence components under structural constraints. The proposed formulation introduces an explicit contraction–bias–variance decomposition of the iterative dynamics. Curvature induces deterministic contraction, alignment distortion—quantified by a geometric parameter—modifies the effective contraction margin, and stochastic learning components inject controlled dispersion. Explicit error recursions yield convergence guarantees under strong convexity, the Polyak–Łojasiewicz condition, and smooth nonconvexity. The analysis establishes that stability regions and first-order complexity bounds are preserved whenever alignment distortion remains below unity and bounded second-moment conditions hold. A fully reproducible computational study provides quantitative validation: the empirically observed steady-state error closely matches the theoretical prediction, proportional to σ²/(μ(1−η)). Comparative experiments with gradient, stochastic gradient, and momentum methods confirm that the proposed operator retains classical stability margins and conditioning sensitivity while enabling principled integration of learned directional components. The results provide a transparent mathematical bridge between stochastic approximation theory and contemporary AI-enhanced constrained optimization.
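The steady-state claim can be probed numerically on a toy problem: a projected noisy quasi-gradient iteration on a strongly convex quadratic, with the learned correction modelled as bounded directional distortion. This sketch is an illustrative stand-in for the paper's operator, not its reproducible study; all constants are assumptions.

```python
# Sketch: projected quasi-gradient with distortion eta and noise sigma; the
# late-iterate squared error should scale roughly like sigma^2 / (mu * (1 - eta)).
import numpy as np

rng = np.random.default_rng(3)
mu, eta, sigma, step, d = 1.0, 0.3, 0.1, 0.1, 10

def project(x, radius=5.0):                # projection onto a Euclidean ball
    nrm = np.linalg.norm(x)
    return x if nrm <= radius else x * (radius / nrm)

x = rng.normal(size=d) * 3.0
errs = []
for t in range(20000):
    g = mu * x                             # exact gradient of (mu/2)||x||^2
    distort = eta * np.linalg.norm(g) * rng.normal(size=d) / np.sqrt(d)
    x = project(x - step * (g + distort + sigma * rng.normal(size=d)))
    if t > 10000:
        errs.append(np.sum(x ** 2))        # squared distance to the optimum at 0

print(np.mean(errs))                       # empirical steady-state error
```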
26 pages, 16222 KB  
Article
Comparative Performance of LSTM, ANN, and GAM in Predicting Precipitation and Temperature Anomalies Under Accelerated Warming: Evidence from Thohoyandou, South Africa (1990–2025)
by Mueletshedzi Mukhaninga, Caston Sigauke and Thakhani Ravele
Earth 2026, 7(2), 57; https://doi.org/10.3390/earth7020057 - 2 Apr 2026
Abstract
Accurate forecasting of local weather patterns is essential for climate resilience and sustainable planning. This study analysed 35 years (1990–2025) of hourly temperature and precipitation data from Thohoyandou, South Africa, to assess the impacts of climate change and improve anomaly prediction. Exploratory analysis and Bayesian Estimator of Abrupt change, Seasonal change, and Trend (BEAST) decomposition revealed accelerated warming trends of 0.025 °C per year in temperature anomalies, alongside highly irregular rainfall patterns characterised by extreme events rather than systematic changes. Three models, Artificial Neural Networks (ANN), Long Short-Term Memory (LSTM) networks, and a Generalised Additive Model (GAM), were evaluated for anomaly forecasting, with feature selection guided by LASSO regression. For temperature, the LSTM performed better than the ANN and GAM, with MSE = 0.458, MAE = 0.457, MBE = 0.087, and MASE = 0.510. For temperature anomalies, the LSTM model performed best, followed by the GAM and ANN models. For precipitation anomalies, the LSTM model also achieved the lowest prediction error, with MSE = 0.187, MAE = 0.111, MBE = −0.009, and MASE = 1.873; however, MASE values above 1 indicate that rainfall forecasting remains challenging. These results show the LSTM model's ability to handle temperature anomalies and the difficulty of modelling rainfall. The GAM performed less accurately but more consistently in modelling precipitation.
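The MASE criterion behind the rainfall caveat is worth making explicit: it scales the forecast MAE by the in-sample MAE of a naive one-step method, so values above 1 mean the model does not beat naive persistence. A minimal sketch with stand-in data:

```python
# Sketch: MASE = forecast MAE / in-sample one-step naive MAE.
import numpy as np

def mase(y_true, y_pred, y_train):
    mae = np.mean(np.abs(y_true - y_pred))
    naive_mae = np.mean(np.abs(np.diff(y_train)))   # naive one-step baseline error
    return mae / naive_mae

series = np.random.randn(200).cumsum()              # stand-in anomaly series
y_true, y_pred = series[-10:], series[-10:] + 0.1   # hypothetical forecasts
print(mase(y_true, y_pred, series[:-10]))           # < 1 beats the naive baseline
```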
15 pages, 1434 KB  
Article
Two-Signal Set and Adaptive Spectral Decomposition Algorithm for Estimating the Phase Velocity of Dispersive Lamb Wave Mode
by Lina Draudvilienė, Asta Meškuotienė, Aušra Gadeikytė and Paulius Lapienis
Sensors 2026, 26(7), 2190; https://doi.org/10.3390/s26072190 - 1 Apr 2026
Abstract
This study introduces an automated computational tool to evaluate the phase velocity of the highly dispersive A0 mode using only two signals measured along the wave propagation path. The algorithm combines the zero-crossing technique with automated spectral decomposition, utilizing a bank of bandpass filters with adaptive bandwidths. Validated through theoretical and experimental analysis of an aluminium plate near 300 kHz, the results demonstrate that using a two-signal set and variable filter widths significantly improves accuracy and extends the measurable frequency range of the dispersion curve. Experimental results demonstrate that by applying various filter widths, the phase velocity dispersion curve segment can be reconstructed over a frequency range exceeding 65% of the signal's spectral width at the −40 dB level. The reconstruction yielded an average relative error of 0.8% ± 1.2%, while the best-case scenario showed an error of just 0.3% ± 0.4%. Implementing automated filter parameter selection on a signal pair offers a time-efficient alternative to traditional spatial scanning, significantly simplifying data collection while reducing labour and time requirements.
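The core two-signal idea can be sketched numerically: band-pass both signals around a centre frequency, locate matching zero-crossings, and divide the gauge length by the crossing delay. The synthetic, non-dispersive wave packet and all parameters below are illustrative assumptions; the adaptive filter bank is not reproduced.

```python
# Sketch: phase velocity from the zero-crossing delay between two signals.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs, dx = 10e6, 0.02                               # 10 MHz sampling; 20 mm gauge length
t = np.arange(0, 200e-6, 1 / fs)
c_true = 2900.0                                   # assumed A0 phase velocity, m/s

def packet(tt):                                   # 300 kHz burst, Gaussian envelope
    return np.sin(2 * np.pi * 300e3 * tt) * np.exp(-((tt - 50e-6) / 15e-6) ** 2)

s1, s2 = packet(t), packet(t - dx / c_true)       # second signal is a delayed copy
b, a = butter(4, [280e3, 320e3], btype="band", fs=fs)
f1, f2 = filtfilt(b, a, s1), filtfilt(b, a, s2)

def crossing_near_peak(x):
    """Time of the rising zero-crossing closest to the envelope peak."""
    idx = np.where((x[:-1] < 0) & (x[1:] >= 0))[0]
    zc = (idx + x[idx] / (x[idx] - x[idx + 1])) / fs   # interpolated crossing times
    peak = t[np.argmax(np.abs(hilbert(x)))]
    return zc[np.argmin(np.abs(zc - peak))]

dt = crossing_near_peak(f2) - crossing_near_peak(f1)
print(dx / dt)                                    # estimated phase velocity, m/s
```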
22 pages, 5107 KB  
Article
Adaptive Filtering Method for Low-SNR Rock Mass Fracture Microseismic Signals in Deep-Buried Tunnels Considering Noise Intrusion Characteristics
by Tao Lin, Weiwei Tao, Yakang Xu and Wenjing Niu
Geosciences 2026, 16(4), 143; https://doi.org/10.3390/geosciences16040143 - 1 Apr 2026
Abstract
Microseismic signals from rock mass fracture in deep-buried tunnels have low signal-to-noise ratios (SNRs) and suffer coupled interference from multi-source noise, while traditional filtering methods rely on fixed parameters and handle spectral aliasing poorly. To address these problems, this study proposes a ternary coupled adaptive filtering method integrating the Sparrow Search Algorithm, Variational Mode Decomposition, and Wavelet Threshold Denoising (SSA-VMD-DWT). First, the noise intrusion characteristics of low-SNR microseismic signals in deep-buried tunnels were analyzed, and the filtering difficulties posed by white noise, low-frequency noise, high-frequency noise, and non-stationary noise were clarified. Subsequently, a parameter optimization framework with the Sparrow Search Algorithm (SSA) at its core was constructed to optimize the key parameters: the penalty factor α and mode number K of Variational Mode Decomposition (VMD), and the wavelet basis and decomposition depth of Wavelet Threshold Denoising (DWT). A dual-index threshold decision function based on kurtosis and the correlation coefficient, together with a wavelet packet entropy weighted reconstruction algorithm, was designed to realize the collaborative adaptive adjustment of decomposition depth and threshold rules. Finally, the performance of the algorithm was verified through simulation experiments and an engineering case of a deep-buried tunnel in Southwest China. The results show that for a simulated signal with a low SNR of 2 dB, the algorithm raises the SNR to 12.43 dB and reduces the root mean square error to 2.36 × 10⁻⁷, significantly outperforming Empirical Mode Decomposition (EMD) and traditional DWT methods. In the engineering case, the filtered signal has the lowest information entropy among all methods, effectively suppressing multi-band noise while retaining the core characteristics of microseismic signals from rock mass fracture, and thereby overcoming the spectral aliasing, detail loss, and empirical parameter setting problems of traditional methods. This method provides a new technical paradigm for processing low-quality microseismic signals in deep tunnel engineering and can improve the accuracy of monitoring and early warning for rock mass dynamic disasters.
(This article belongs to the Special Issue New Trends in Numerical Methods in Rock Mechanics)
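The DWT stage and the kurtosis/correlation dual check can be sketched as follows; the SSA search and VMD stage are omitted, and the universal soft threshold shown is a standard stand-in for the optimized rule.

```python
# Sketch: wavelet-threshold denoising plus a kurtosis/correlation quality check.
import numpy as np
import pywt
from scipy.stats import kurtosis

def dwt_denoise(sig, wavelet="db6", level=4):
    coeffs = pywt.wavedec(sig, wavelet, level=level)
    noise = np.median(np.abs(coeffs[-1])) / 0.6745      # noise scale from finest band
    thr = noise * np.sqrt(2 * np.log(len(sig)))         # universal threshold
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(sig)]

t = np.linspace(0, 0.5, 4000)
clean = np.sin(2 * np.pi * 80 * t) * np.exp(-8 * t)     # stand-in microseismic event
noisy = clean + 0.4 * np.random.randn(t.size)
den = dwt_denoise(noisy)

# Dual indices: impulsiveness (kurtosis) and fidelity (correlation with the truth).
print(kurtosis(den), np.corrcoef(den, clean)[0, 1])
```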