Search Results (5,211)

Search Parameters:
Keywords = robust feature extraction

36 pages, 6026 KB  
Article
CNN-LSTM Assisted Multi-Objective Aerodynamic Optimization Method for Low-Reynolds-Number Micro-UAV Airfoils
by Jinzhao Peng, Enying Li and Hu Wang
Aerospace 2026, 13(1), 78; https://doi.org/10.3390/aerospace13010078 (registering DOI) - 11 Jan 2026
Abstract
The optimization of low-Reynolds-number airfoils for micro unmanned aerial vehicles (UAVs) is challenging due to strong geometric nonlinearities, tight endurance requirements, and the need to maintain performance across multiple operating conditions. Classical surrogate-assisted optimization (SAO) methods combined with genetic algorithms become increasingly expensive and less reliable when class–shape transformation (CST)-based geometries are coupled with several flight conditions. Although deep learning surrogates have higher expressive power, their use in this context is often limited by insufficient local feature extraction, weak adaptation to changes in operating conditions, and a lack of robustness analysis. In this study, we construct a task-specific convolutional neural network–long short-term memory (CNN–LSTM) surrogate that jointly predicts the power factor, lift, and drag coefficients at three representative operating conditions (cruise, forward flight, and maneuver) for the same CST-parameterized airfoil and integrate it into a Non-dominated Sorting Genetic Algorithm II (NSGA-II)-based three-objective optimization framework. The CNN encoder captures local geometric sensitivities, while the LSTM aggregates dependencies across operating conditions, forming a compact encoder–aggregator tailored to low-Re micro-UAV design. Trained on a computational fluid dynamics (CFD) dataset from a validated SD7032-based pipeline, the proposed surrogate achieves substantially lower prediction errors than several fully connected and single-condition baselines and maintains more favorable error distributions on CST-family parameter-range extrapolation samples (±40%, geometry-valid) under the same CFD setup, while being about three orders of magnitude faster than conventional CFD during inference. When embedded in NSGA-II under thickness and pitching-moment constraints, the surrogate enables efficient exploration of the design space and yields an optimized airfoil that simultaneously improves power factor, reduces drag, and increases lift compared with the baseline SD7032. This work therefore contributes a three-condition surrogate–optimizer workflow and physically interpretable low-Re micro-UAV design insights, rather than introducing a new generic learning or optimization algorithm. Full article
(This article belongs to the Section Aeronautics)
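For readers who want a concrete picture of the encoder–aggregator idea, a minimal PyTorch sketch is given below: a small 1-D CNN encodes the CST parameter vector and an LSTM aggregates it across three operating conditions. Layer sizes, the scalar condition encoding, and the output head are illustrative assumptions, not the authors' architecture.

```python
# Hypothetical sketch of a CNN encoder + LSTM aggregator surrogate (PyTorch).
# Shapes and layer sizes are illustrative, not the paper's configuration.
import torch
import torch.nn as nn

class CNNLSTMSurrogate(nn.Module):
    def __init__(self, n_cst=12, n_conditions=3, n_outputs=3, hidden=64):
        super().__init__()
        # CNN encoder over the CST parameter vector (treated as a 1-D signal)
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        # LSTM aggregates the encoded geometry across operating conditions
        self.lstm = nn.LSTM(input_size=32 + 1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_outputs)  # power factor, lift, drag per condition
        self.n_conditions = n_conditions

    def forward(self, cst, condition_codes):
        # cst: (B, n_cst); condition_codes: (B, n_conditions) scalar condition descriptors
        z = self.encoder(cst.unsqueeze(1)).squeeze(-1)          # (B, 32)
        z = z.unsqueeze(1).expand(-1, self.n_conditions, -1)    # repeat per condition
        seq = torch.cat([z, condition_codes.unsqueeze(-1)], dim=-1)
        out, _ = self.lstm(seq)                                  # (B, n_conditions, hidden)
        return self.head(out)                                    # (B, n_conditions, 3)

# Smoke test with random data
model = CNNLSTMSurrogate()
pred = model(torch.randn(4, 12), torch.rand(4, 3))
print(pred.shape)  # torch.Size([4, 3, 3])
```
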
23 pages, 5900 KB  
Article
Hybrid Attention Mechanism Combined with U-Net for Extracting Vascular Branching Points in Intracavitary Images
by Kaiyang Xu, Haibin Wu, Liang Yu and Xin He
Electronics 2026, 15(2), 322; https://doi.org/10.3390/electronics15020322 (registering DOI) - 11 Jan 2026
Abstract
To address the application requirements of Visual Simultaneous Localization and Mapping (VSLAM) in intracavitary environments and the scarcity of gold-standard datasets for deep learning methods, this study proposes a hybrid attention mechanism combined with U-Net for vascular branch point extraction in endoluminal images (SuperVessel). The network is initialized via transfer learning with pre-trained SuperRetina model parameters and integrated with a vascular feature detection and matching method based on dual branch fusion and structure enhancement, generating a pseudo-gold-standard vascular branch point dataset. The framework employs a dual-decoder architecture, incorporates a dynamic up-sampling module (CBAM-Dysample) to refine local vessel features through hybrid attention mechanisms, designs a Dice-Det loss function weighted by branching features to prioritize vessel junctions, and introduces a dynamically weighted Triplet-Des loss function optimized for descriptor discrimination. Experiments on the Vivo test set demonstrate that the proposed method achieves an average Area Under Curve (AUC) of 0.760, with mean feature points, accuracy, and repeatability scores of 42,795, 0.5294, and 0.46, respectively. Compared to SuperRetina, the method maintains matching stability while exhibiting superior repeatability, feature point density, and robustness in low-texture/deformation scenarios. Ablation studies confirm the CBAM-Dysample module’s efficacy in enhancing feature expression and convergence speed, offering a robust solution for intracavitary SLAM systems. Full article
(This article belongs to the Section Computer Science & Engineering)
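The hybrid attention component can be illustrated with a standard CBAM-style block (channel attention followed by spatial attention); the sketch below omits the Dysample up-sampler and the dual-decoder context, and its sizes are assumptions rather than the paper's implementation.

```python
# Minimal CBAM-style channel + spatial attention block (PyTorch), illustrating the
# hybrid attention idea; module sizes are assumptions, not the paper's code.
import torch
import torch.nn as nn

class CBAM(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.channel_mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels),
        )
        self.spatial_conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):                      # x: (B, C, H, W)
        b, c, _, _ = x.shape
        # Channel attention from global average- and max-pooled descriptors
        avg = self.channel_mlp(x.mean(dim=(2, 3)))
        mx = self.channel_mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        # Spatial attention from channel-wise mean and max maps
        s = torch.cat([x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial_conv(s))

feat = torch.randn(2, 32, 64, 64)
print(CBAM(32)(feat).shape)  # torch.Size([2, 32, 64, 64])
```
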
28 pages, 5634 KB  
Article
UCA-Net: A Transformer-Based U-Shaped Underwater Enhancement Network with a Compound Attention Mechanism
by Cheng Yu, Jian Zhou, Lin Wang, Guizhen Liu and Zhongjun Ding
Electronics 2026, 15(2), 318; https://doi.org/10.3390/electronics15020318 (registering DOI) - 11 Jan 2026
Abstract
Images captured underwater frequently suffer from color casts, blurring, and distortion, which are mainly attributable to the unique optical characteristics of water. Although conventional underwater image enhancement (UIE) methods rooted in physics are available, their effectiveness is often constrained, particularly in challenging aquatic and illumination conditions. More recently, deep learning has become a leading paradigm for UIE, recognized for its superior performance and operational efficiency. This paper proposes UCA-Net, a lightweight CNN-Transformer hybrid network. It incorporates multiple attention mechanisms and utilizes composite attention to effectively enhance textures, reduce blur, and correct color. A novel adaptive sparse self-attention module is introduced to jointly restore global color consistency and fine local details. The model employs a U-shaped encoder–decoder architecture with three-stage up- and down-sampling, facilitating multi-scale feature extraction and global context fusion for high-quality enhancement. Experimental results on multiple public datasets demonstrate UCA-Net’s superior performance, achieving a PSNR of 24.75 dB and an SSIM of 0.89 on the UIEB dataset, while maintaining an extremely low computational cost with only 1.44M parameters. Its effectiveness is further validated by improvements in various downstream image tasks. UCA-Net achieves an optimal balance between performance and efficiency, offering a robust and practical solution for underwater vision applications. Full article
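A simplified stand-in for the adaptive sparse self-attention idea is sketched below: plain self-attention whose weakest scores per query are masked out before the softmax. The fixed keep ratio replaces whatever adaptive criterion the paper uses.

```python
# Illustrative top-k "sparse" self-attention; the keep ratio and shapes are assumptions.
import torch
import torch.nn.functional as F

def sparse_self_attention(x, keep_ratio=0.5):
    # x: (B, N, C) token features; keep only the top-k attention scores per query
    q, k, v = x, x, x
    scores = q @ k.transpose(-2, -1) / (x.shape[-1] ** 0.5)      # (B, N, N)
    k_keep = max(1, int(scores.shape[-1] * keep_ratio))
    threshold = scores.topk(k_keep, dim=-1).values[..., -1, None]  # k-th largest per row
    scores = scores.masked_fill(scores < threshold, float("-inf"))  # drop weak links
    return F.softmax(scores, dim=-1) @ v

tokens = torch.randn(1, 16, 32)
print(sparse_self_attention(tokens).shape)  # torch.Size([1, 16, 32])
```
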
22 pages, 1453 KB  
Article
SER-YOLOv8: An Early Forest Fire Detection Model Integrating Multi-Path Attention and NWD
by Juan Liu, Jiaxin Feng, Shujie Wang, Yian Ding, Jianghua Guo, Yuhang Li, Wenxuan Xue and Jie Hu
Forests 2026, 17(1), 93; https://doi.org/10.3390/f17010093 (registering DOI) - 10 Jan 2026
Abstract
Forest ecosystems, as vital natural resources, are increasingly endangered by wildfires. Effective forest fire management relies on the accurate and early detection of small-scale flames and smoke. However, the complex and dynamic forest environment, along with the small size and irregular shape of early fire indicators, poses significant challenges to reliable early warning systems. To address these issues, this paper introduces SER-YOLOv8, an enhanced detection model based on the YOLOv8 architecture. The model incorporates the RepNCSPELAN4 module and an SPPELAN structure to strengthen multi-scale feature representation. Furthermore, to improve small target localization, the Normalized Wasserstein Distance (NWD) loss is adopted, providing a more robust similarity measure than traditional IoU-based losses. The newly designed SERDet module deeply integrates a multi-scale feature extraction mechanism with a multi-path fused attention mechanism, significantly enhancing the recognition capability for flame targets under complex backgrounds. Depthwise separable convolution (DWConv) is utilized to reduce parameters and boost inference efficiency. Experiments on the M4SFWD dataset show that the proposed method improves mAP50 by 1.2% for flames and 2.4% for smoke, with a 1.5% overall gain in mAP50-95 over the baseline YOLOv8, outperforming existing mainstream models and offering a reliable solution for forest fire prevention. Full article
(This article belongs to the Section Natural Hazards and Risk Management)
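The NWD similarity itself has a compact closed form (boxes modelled as 2-D Gaussians, exponentiated 2-Wasserstein distance); a hedged sketch follows, with the normalizing constant C treated as a dataset-dependent choice.

```python
# Hedged sketch of the Normalized Wasserstein Distance (NWD) between two axis-aligned
# boxes, following the small-object formulation; the constant C is dataset-dependent.
import math

def nwd(box_a, box_b, c=12.8):
    # boxes as (cx, cy, w, h); each box is modelled as a 2-D Gaussian
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    # Squared 2-Wasserstein distance between the two Gaussians
    w2_sq = (ax - bx) ** 2 + (ay - by) ** 2 + (aw / 2 - bw / 2) ** 2 + (ah / 2 - bh / 2) ** 2
    return math.exp(-math.sqrt(w2_sq) / c)

print(nwd((10, 10, 4, 4), (11, 10, 4, 5)))  # close small boxes -> value near 1
# An NWD-based loss is then typically 1 - nwd(pred, target).
```
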
26 pages, 3990 KB  
Article
Neural Vessel Segmentation and Gaussian Splatting for 3D Reconstruction of Cerebral Angiography
by Oleh Kryvoshei, Patrik Kamencay and Ladislav Polak
AI 2026, 7(1), 22; https://doi.org/10.3390/ai7010022 (registering DOI) - 10 Jan 2026
Abstract
Cerebrovascular diseases are a leading cause of global mortality, underscoring the need for objective and quantitative 3D visualization of cerebral vasculature from dynamic imaging modalities. Conventional analysis is often labor-intensive, subjective, and prone to errors due to image noise and subtraction artifacts. This study tackles the challenge of achieving fast and accurate volumetric reconstruction from angiography sequences. We propose a multi-stage pipeline that begins with image restoration to enhance input quality, followed by neural segmentation to extract vascular structures. Camera poses and sparse geometry are estimated through Structure-from-Motion, and these reconstructions are refined by leveraging the segmentation maps to isolate vessel-specific features. The resulting data are then used to initialize and optimize a 3D Gaussian Splatting model, enabling anatomically precise representation of cerebral vasculature. The integration of deep neural segmentation priors with explicit geometric initialization yields highly detailed 3D reconstructions of cerebral angiography. The resulting models leverage the computational efficiency of 3D Gaussian Splatting, achieving near-real-time rendering performance competitive with state-of-the-art reconstruction methods. The segmentation of brain vessels using nnU-Net and our trained model achieved an accuracy of 84.21%, highlighting the improvement in the performance of the proposed approach. Overall, our pipeline significantly improves both the efficiency and accuracy of volumetric cerebral vasculature reconstruction, providing a robust foundation for quantitative clinical analysis and enhanced guidance during endovascular procedures. Full article
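One concrete step in such a pipeline, initializing the 3D Gaussian Splatting model from vessel-only SfM points, can be sketched as below; the parameter names and initial values are hypothetical, not the authors' code.

```python
# Toy illustration of initializing 3D Gaussian Splatting parameters from a sparse
# SfM point cloud restricted to segmented vessel pixels; all names are hypothetical.
import torch

def init_gaussians(points, colors, init_scale=0.01):
    # points: (N, 3) vessel-only SfM points; colors: (N, 3) RGB in [0, 1]
    n = points.shape[0]
    return {
        "means": points.clone(),                                    # Gaussian centres
        "scales": torch.full((n, 3), init_scale),                   # isotropic initial extent
        "rotations": torch.tensor([[1.0, 0, 0, 0]]).repeat(n, 1),   # identity quaternions
        "opacities": torch.full((n, 1), 0.5),
        "colors": colors.clone(),
    }

params = init_gaussians(torch.randn(1000, 3), torch.rand(1000, 3))
print(params["means"].shape, params["rotations"].shape)  # (1000, 3) (1000, 4)
```
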
17 pages, 6045 KB  
Article
Estimation of Citrus Leaf Relative Water Content Using CWT Combined with Chlorophyll-Sensitive Bands
by Xiangqian Qi, Yanfang Li, Shiqing Dou, Wei Li, Yanqing Yang and Mingchao Wei
Sensors 2026, 26(2), 467; https://doi.org/10.3390/s26020467 (registering DOI) - 10 Jan 2026
Abstract
In citrus cultivation practice, regular monitoring of leaf relative water content (RWC) can effectively guide water management, thereby improving fruit quality and yield. When applying hyperspectral technology to citrus leaf moisture monitoring, the precise quantification of RWC still needs to address issues such as data noise and algorithm adaptability. Noise interference and spectral aliasing in RWC-sensitive bands reduce the accuracy of moisture inversion from hyperspectral data, and overlap with the sensitive bands of leaf chlorophyll content (LCC) in citrus leaves can further affect estimation accuracy. To identify the optimal prediction model for citrus leaf RWC and support precise irrigation control that improves citrus quality and yield, this study uses 401–2400 nm spectral data and extracts noise-robust features through continuous wavelet transform (CWT) multi-scale decomposition. A high-precision estimation model for citrus leaf RWC is established, and the potential of CWT in RWC quantitative inversion is systematically evaluated. The multi-scale analysis characteristics of CWT are used to probe the time–frequency characteristic patterns associated with RWC and LCC in citrus leaf spectra. Pearson correlation analysis is used to evaluate the effectiveness of features at different decomposition scales, and the successive projections algorithm (SPA) is further used to eliminate band collinearity and extract the optimal sensitive band combination. Finally, based on the selected RWC- and LCC-sensitive bands, a high-precision predictive model for citrus leaf RWC was established using partial least squares regression (PLSR). The results revealed that (1) CWT preprocessing markedly boosts the estimation accuracy of RWC and LCC relative to the original spectrum (max improvements: 6% and 3%), proving it enhances spectral sensitivity to these two indices in citrus leaves. (2) Combining CWT and SPA, the resulting predictive model showed higher inversion accuracy than the original spectra. (3) Integrating RWC Scale7 and LCC Scale5-2224/2308 features, the CWT-SPA fusion model showed optimal predictive performance (R2 = 0.756, RMSE = 0.0214), confirming the value of multi-scale feature joint modeling. Overall, CWT-SPA coupled with LCC spectral traits can boost the spectral response signal of citrus leaf RWC, enhancing its prediction capability and stability. Full article
(This article belongs to the Section Smart Agriculture)
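A minimal end-to-end sketch of the CWT-to-PLSR workflow is shown below using pywt and scikit-learn; the Mexican-hat wavelet, dyadic scales, and a simple correlation screen standing in for SPA are all assumptions made for illustration.

```python
# Hedged sketch of the CWT -> band selection -> PLSR workflow on synthetic spectra;
# wavelet choice, scales, and the correlation-based screen (in place of SPA) are assumptions.
import numpy as np
import pywt
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
spectra = rng.random((60, 2000))          # 60 leaves x 2000 bands (illustrative)
rwc = rng.random(60)                      # relative water content targets

# Multi-scale CWT decomposition of each spectrum (dyadic scales 2..128)
scales = 2 ** np.arange(1, 8)
coeffs = np.stack([pywt.cwt(s, scales, "mexh")[0] for s in spectra])   # (60, 7, 2000)
features = coeffs.reshape(len(spectra), -1)

# Screen features by absolute Pearson correlation with RWC (stand-in for SPA)
r = np.array([np.corrcoef(features[:, j], rwc)[0, 1] for j in range(features.shape[1])])
selected = np.argsort(-np.abs(r))[:30]

model = PLSRegression(n_components=5).fit(features[:, selected], rwc)
print(model.score(features[:, selected], rwc))
```
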
26 pages, 2173 KB  
Article
Multi-Scale and Interpretable Daily Runoff Forecasting with IEWT and ModernTCN
by Qing Li, Yunwei Zhou, Yongshun Zheng, Chu Zhang and Tian Peng
Water 2026, 18(2), 183; https://doi.org/10.3390/w18020183 - 9 Jan 2026
Abstract
Daily runoff series exhibit high complexity and significant fluctuations, which often lead to large prediction errors and limit the scientific basis of water resource scheduling and management. This study proposes a runoff prediction framework that incorporates upstream–downstream hydrological correlation information and integrates Improved Empirical Wavelet Transform (IEWT), SHAP-based interpretable feature selection, Improved Population-Based Training (IPBT), and the Modern Temporal Convolutional Network (ModernTCN) to enhance forecasting accuracy and model robustness. First, IEWT is employed to perform multi-scale decomposition of the daily runoff sequence, extracting structural features at different temporal scales. Then, upstream–downstream hydrological correlation information is introduced, and the SHAP method is used to evaluate the importance of multi-source basin features, eliminating redundant variables to improve input quality and training efficiency. Finally, IPBT is applied to optimize ModernTCN hyperparameters, thereby constructing a high-performance forecasting model. Case studies at the Hankou station demonstrate that the proposed IPBT-IEWT-SHAP-ModernTCN model significantly outperforms benchmark methods such as LSTM, iTransformer, and TCN in terms of accuracy, stability, and generalization. Specifically, the model achieves a root mean square error of 342.14, a mean absolute error of 251.01, and a Nash–Sutcliffe efficiency of 0.9992. These results indicate that the proposed method can effectively capture the nonlinear correlation characteristics between upstream and downstream hydrological processes, thus providing an efficient and widely adaptable framework for daily runoff prediction and scientific water resources management. Full article
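The SHAP-based feature screening step can be illustrated independently of the ModernTCN forecaster; the sketch below uses a gradient-boosting surrogate and a fixed keep-count, both assumptions, to rank and retain basin features.

```python
# Illustrative SHAP-based feature screening for runoff inputs; the surrogate model
# and the keep-count are assumptions, not the paper's IPBT-tuned ModernTCN setup.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
X = rng.random((500, 20))                       # multi-source basin features (toy)
y = X[:, 0] * 3 + X[:, 5] + rng.normal(0, 0.1, 500)

model = GradientBoostingRegressor().fit(X, y)
shap_values = shap.TreeExplainer(model).shap_values(X)      # (500, 20)
importance = np.abs(shap_values).mean(axis=0)
keep = np.argsort(-importance)[:8]                           # retain the 8 most informative inputs
print("selected feature indices:", keep)
```
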
22 pages, 3391 KB  
Article
Artificial Neural Network-Based Conveying Object Measurement Automation System Using Distance Sensor
by Hyo Beom Heo and Seung Hwan Park
Sensors 2026, 26(2), 455; https://doi.org/10.3390/s26020455 - 9 Jan 2026
Abstract
Measuring technology is used in various ways in the logistics industry for defect inspection and loading optimization. Recently, in the context of the fourth industrial revolution, research has focused on measurement automation combining AI, IoT technologies, and measuring equipment. The 3D scanner used for field logistics measurements offers high performance and can handle large volumes quickly; however, its high unit price limits adoption across all lines. Entry-level sensors are challenging to use due to measurement reliability issues: their performance varies with changes in object location, shape, and logistics environment. To bridge this gap, this study proposes a systematic framework for geometry measurement that enables reliable length and width estimation using only a single entry-level distance sensor. We design and build a conveyor-belt-based data acquisition setup that emulates realistic logistics transfer scenarios and systematically varies transfer conditions to capture representative measurement disturbances. Based on the collected data, we perform robust feature extraction tailored to noisy, condition-dependent signals and train an artificial neural network to map sensor observations to geometric dimensions. We then verified the model’s performance in measuring object length and width using test data. The experimental results show that the proposed method provides reliable measurement results even under varying transfer conditions. Full article
(This article belongs to the Section Intelligent Sensors)
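As a toy illustration of the overall idea, the sketch below extracts a few hand-crafted features from a simulated distance-sensor profile and regresses length and width with a small neural network; the belt geometry, feature set, and labels are synthetic.

```python
# Toy sketch: hand-crafted features from a noisy distance-sensor profile feeding a
# small neural network that regresses object length and width; all numbers are synthetic.
import numpy as np
from sklearn.neural_network import MLPRegressor

def profile_features(profile, belt_distance=200.0, threshold=5.0):
    # profile: successive distance readings as the object passes under the sensor
    drop = belt_distance - profile                 # apparent height above the belt
    on_object = drop > threshold
    if not on_object.any():
        return np.zeros(4)
    h = drop[on_object]
    return np.array([on_object.sum(), h.mean(), h.max(), h.std()])

rng = np.random.default_rng(0)
X, y = [], []
for _ in range(300):
    n_on = rng.integers(40, 120)                   # samples spent over the object
    height = rng.uniform(20, 60)                   # object height above the belt
    profile = np.full(300, 200.0)
    profile[:n_on] -= height
    profile += rng.normal(0, 1.0, 300)             # sensor noise
    X.append(profile_features(profile))
    y.append([n_on * 2.5, height * 4.0])           # toy length / width labels in mm
X, y = np.array(X), np.array(y)

model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0).fit(X, y)
print(model.predict(X[:2]).round(1))
```
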
29 pages, 6321 KB  
Article
Pose-Perceptive Convolution: Learning Geometry-Adaptive Receptive Fields for Robust 6D Pose Estimation
by Yi Lai, Yaqing Song, Qixian Zhang, Yue Wang, Kang An and Hui Zhang
Sensors 2026, 26(2), 453; https://doi.org/10.3390/s26020453 - 9 Jan 2026
Abstract
6D object pose estimation is crucial for applications such as robotic manipulation and augmented reality, yet it remains highly challenging when dealing with objects of significantly different aspect ratios or the drastic appearance variations of a single object caused by pose changes. Most existing methods focus on designing more complex backend fusion modules, while largely overlooking a fundamental problem at the feature extraction frontend: the geometric mismatch between the fixed, square receptive fields of standard convolutions and the varied projected morphologies of objects. This mismatch, along with noise in fused features and ambiguity in regression, limits the performance ceiling of current methods. To this end, this paper proposes a novel Pose-Perceptive Convolution (PPC) and constructs a new Pose-Perceptive Fusion Network (PPF-Net). Its core component, the Pose-Perceptive Convolution, fundamentally resolves the aforementioned geometric mismatch by dynamically adapting the shape and sampling density of its receptive field. Experiments on four benchmarks show that PPF-Net improves the VSD score by 19.4% over FFB6D on MP6D, and achieves 96.7% ADD-S on YCB-Video, approaching state-of-the-art accuracy. Crucially, these gains are realized with minimal computational overhead, avoiding the heavy latency of backend-intensive approaches. This validates that frontend feature extraction is an efficient strategy for robust 6D pose estimation. Full article
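A geometry-adaptive receptive field can be approximated with an offset-predicting deformable convolution, as sketched below with torchvision; the paper's PPC additionally adapts sampling density, which this stand-in does not.

```python
# Geometry-adaptive convolution sketch built on torchvision's deformable convolution,
# as a simplified stand-in for Pose-Perceptive Convolution; layer sizes are illustrative.
import torch
import torch.nn as nn
from torchvision.ops import deform_conv2d

class AdaptiveReceptiveConv(nn.Module):
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.offset_pred = nn.Conv2d(in_ch, 2 * k * k, kernel_size=k, padding=k // 2)
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, k, k) * 0.01)
        self.k = k

    def forward(self, x):
        offsets = self.offset_pred(x)                       # per-location sampling offsets
        return deform_conv2d(x, offsets, self.weight, padding=self.k // 2)

feat = torch.randn(1, 16, 32, 32)
print(AdaptiveReceptiveConv(16, 32)(feat).shape)  # torch.Size([1, 32, 32, 32])
```
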
31 pages, 4648 KB  
Article
GF-NGB: A Graph-Fusion Natural Gradient Boosting Framework for Pavement Roughness Prediction Using Multi-Source Data
by Yuanjiao Hu, Mengyuan Niu, Liumei Zhang, Lili Pei, Zhenzhen Fan and Yang Yang
Symmetry 2026, 18(1), 134; https://doi.org/10.3390/sym18010134 - 9 Jan 2026
Abstract
Pavement roughness is a critical indicator for road maintenance decisions and driving safety assessment. Existing methods primarily rely on multi-source explicit features, which have limited capability in capturing implicit information such as spatial topology between road segments. Furthermore, their accuracy and stability remain insufficient in cross-regional and small-sample prediction scenarios. To address these limitations, we propose a Graph-Fused Natural Gradient Boosting framework (GF-NGB), which combines the spatial topology modeling capability of graph neural networks with the small-sample robustness of natural gradient boosting for high-precision cross-regional roughness prediction. The method first extracts an 18-dimensional set of multi-source features from the U.S. Long-Term Pavement Performance (LTPP) database and derives an 8-dimensional set of implicit spatial features using a graph neural network. These features are then concatenated and fed into a natural gradient boosting model, which is optimized by Optuna, to predict the dual objectives of left and right wheel-track roughness. To evaluate the generalization capability of the proposed method, we employ a spatially partitioned data split: the training set includes 1648 segments from Arizona, California, Florida, Ontario, and Missouri, while the test set comprises 330 segments from Manitoba and Nevada with distinct geographic and climatic conditions. Experimental results show that GF-NGB achieves the best performance on cross-regional tests, with average prediction accuracy improved by 1.7% and 3.6% compared to Natural Gradient Boosting (NGBoost) and a Graph Neural Network–Multilayer Perceptron hybrid model (GNN-MLP), respectively. This study reveals the synergistic effect of multi-source texture features and spatial topology information, providing a generalizable framework and technical pathway for cross-regional, small-sample intelligent pavement monitoring and smart maintenance. Full article
(This article belongs to the Special Issue Symmetry/Asymmetry in Intelligent Transportation)
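The graph-fusion idea can be sketched with a one-hop neighbourhood aggregation standing in for the learned GNN embedding, concatenated with explicit features and fed to NGBoost (pip install ngboost); the feature counts and toy target below are assumptions.

```python
# Sketch of graph fusion: propagate explicit segment features over a road-adjacency
# graph, concatenate, and fit NGBoost; a simplification of GF-NGB, not its implementation.
import numpy as np
from ngboost import NGBRegressor

rng = np.random.default_rng(0)
n_segments, n_explicit = 200, 18
X = rng.random((n_segments, n_explicit))               # explicit multi-source features
A = (rng.random((n_segments, n_segments)) < 0.02).astype(float)
A = np.maximum(A, A.T)                                  # symmetric segment adjacency

# One-hop mean aggregation as a minimal stand-in for the learned GNN embedding
deg = A.sum(axis=1, keepdims=True).clip(min=1.0)
spatial = (A @ X) / deg                                 # neighbourhood summary per segment
fused = np.hstack([X, spatial[:, :8]])                  # keep an 8-dim implicit block

y = X[:, 0] * 2 + spatial[:, 0] + rng.normal(0, 0.05, n_segments)  # toy roughness target
model = NGBRegressor(verbose=False).fit(fused, y)
print(model.predict(fused[:3]))
```
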
31 pages, 10745 KB  
Article
CNN-GCN Coordinated Multimodal Frequency Network for Hyperspectral Image and LiDAR Classification
by Haibin Wu, Haoran Lv, Aili Wang, Siqi Yan, Gabor Molnar, Liang Yu and Minhui Wang
Remote Sens. 2026, 18(2), 216; https://doi.org/10.3390/rs18020216 - 9 Jan 2026
Abstract
The existing multimodal image classification methods often suffer from several key limitations: difficulty in effectively balancing local detail and global topological relationships in hyperspectral image (HSI) feature extraction; insufficient multi-scale characterization of terrain features from light detection and ranging (LiDAR) elevation data; and neglect of deep inter-modal interactions in traditional fusion methods, often accompanied by high computational complexity. To address these issues, this paper proposes a comprehensive deep learning framework combining convolutional neural network (CNN), a graph convolutional network (GCN), and wavelet transform for the joint classification of HSI and LiDAR data, including several novel components: a Spectral Graph Mixer Block (SGMB), where a CNN branch captures fine-grained spectral–spatial features by multi-scale convolutions, while a parallel GCN branch models long-range contextual features through an enhanced gated graph network. This dual-path design enables simultaneous extraction of local detail and global topological features from HSI data; a Spatial Coordinate Block (SCB) to enhance spatial awareness and improve the perception of object contours and distribution patterns; a Multi-Scale Elevation Feature Extraction Block (MSFE) for capturing terrain representations across varying scales; and a Bidirectional Frequency Attention Encoder (BiFAE) to enable efficient and deep interaction between multimodal features. These modules are intricately designed to work in concert, forming a cohesive end-to-end framework, which not only achieves a more effective balance between local details and global contexts but also enables deep yet computationally efficient interaction across features, significantly strengthening the discriminability and robustness of the learned representation. To evaluate the proposed method, we conducted experiments on three multimodal remote sensing datasets: Houston2013, Augsburg, and Trento. Quantitative results demonstrate that our framework outperforms state-of-the-art methods, achieving OA values of 98.93%, 88.05%, and 99.59% on the respective datasets. Full article
(This article belongs to the Section AI Remote Sensing)
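A compact dual-branch toy version of the CNN + GCN pairing is sketched below: a CNN branch over local HSI patches and a single graph-convolution step over a pixel-similarity graph, concatenated. It simplifies the paper's SGMB and omits the LiDAR, coordinate, and frequency-attention modules.

```python
# Compact dual-branch illustration of the CNN + GCN idea for HSI features; shapes,
# the toy graph, and the single graph layer are assumptions, not the paper's design.
import torch
import torch.nn as nn

class DualBranch(nn.Module):
    def __init__(self, bands=30, hidden=32):
        super().__init__()
        self.cnn = nn.Sequential(nn.Conv2d(bands, hidden, 3, padding=1), nn.ReLU(),
                                 nn.AdaptiveAvgPool2d(1))
        self.gcn_w = nn.Linear(bands, hidden)

    def forward(self, patches, node_feats, adj):
        # patches: (N, bands, 7, 7); node_feats: (N, bands); adj: (N, N) row-normalised
        local = self.cnn(patches).flatten(1)                     # fine spectral-spatial detail
        global_ctx = torch.relu(adj @ self.gcn_w(node_feats))    # long-range context
        return torch.cat([local, global_ctx], dim=1)

n = 16
adj = torch.softmax(torch.randn(n, n), dim=1)                    # toy row-normalised graph
out = DualBranch()(torch.randn(n, 30, 7, 7), torch.randn(n, 30), adj)
print(out.shape)  # torch.Size([16, 64])
```
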
26 pages, 8324 KB  
Article
Two-Stage Harmonic Optimization-Gram Based on Spectral Amplitude Modulation for Rolling Bearing Fault Diagnosis
by Qihui Feng, Qinge Dai, Jun Wang, Yongqi Chen, Jiqiang Hu, Linqiang Wu and Rui Qin
Machines 2026, 14(1), 83; https://doi.org/10.3390/machines14010083 - 9 Jan 2026
Abstract
To address the challenge of effectively extracting early-stage failure features in rolling bearings, this paper proposes a two-stage harmonic optimization-gram method based on spectral amplitude modulation (SAM-TSHOgram). The method first employs amplitude spectra with varying weighting exponents to preprocess the signal, performing nonlinear adjustments to the vibration signal’s spectrum to enhance weak periodic impact characteristics. Subsequently, a two-stage evaluation strategy based on spectral coherence (SCoh) was designed to adaptively identify the optimal frequency band (OFB). The first stage employs the Periodic Harmonic Correlation Strength (PHCS) metric, based on autocorrelation, to coarsely screen candidate bands with strong periodic structures. The second stage utilizes the Sparse Harmonic Significance (SHS) metric, based on spectral negative entropy, to refine the candidate set, selecting bands with the most prominent harmonic features. Finally, SCoh is integrated over the selected OFB to generate an Improved Envelope Spectrum (IES). The proposed method was validated using both simulated and experimental vibration signals from bearings and gearboxes. The results demonstrate that SAM-TSHOgram significantly outperforms conventional approaches such as EES, Fast Kurtogram, and IESFOgram in terms of signal-to-noise ratio (SNR) enhancement, harmonic clarity, and diagnostic robustness. These findings confirm its potential for reliable early fault detection in rolling bearings. Full article
(This article belongs to the Section Machines Testing and Maintenance)
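The spectral amplitude modulation step has a simple signal-processing core, reweighting the amplitude spectrum while preserving phase; the sketch below applies it to a synthetic bearing-like signal and reads off the envelope spectrum. The exponent and the plain Hilbert-envelope step are illustrative choices, not the paper's full two-stage band selection.

```python
# Hedged sketch of spectral amplitude modulation (SAM) followed by an envelope spectrum;
# the exponent value and the synthetic signal are illustrative.
import numpy as np
from scipy.signal import hilbert

def spectral_amplitude_modulation(x, alpha=0.3):
    spec = np.fft.fft(x)
    mod = np.abs(spec) ** alpha * np.exp(1j * np.angle(spec))   # reweight amplitudes, keep phase
    return np.real(np.fft.ifft(mod))

fs = 10_000
t = np.arange(0, 1.0, 1 / fs)
impacts = np.sin(2 * np.pi * 3_000 * t) * (np.mod(t, 1 / 100) < 0.0005)   # ~100 Hz impact train
signal = impacts + 0.5 * np.random.default_rng(0).normal(size=t.size)      # heavy background noise

x_mod = spectral_amplitude_modulation(signal)
envelope = np.abs(hilbert(x_mod))
env_spec = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(envelope.size, 1 / fs)
print("strongest envelope line at", freqs[np.argmax(env_spec)], "Hz")
# The 100 Hz impact rate (or one of its harmonics) should stand out after modulation.
```
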
19 pages, 36644 KB  
Article
Global Lunar FeO Mapping via Wavelet–Autoencoder Feature Learning from M3 Hyperspectral Data
by Julia Fernández–Díaz, Fernando Sánchez Lasheras, Javier Gracia Rodríguez, Santiago Iglesias Álvarez, Antonio Luis Marqués Sierra and Francisco Javier de Cos Juez
Mathematics 2026, 14(2), 254; https://doi.org/10.3390/math14020254 - 9 Jan 2026
Abstract
Accurate global mapping of lunar iron oxide (FeO) abundance is essential for understanding the Moon’s geological evolution and for supporting future in situ resource utilization (ISRU). While hyperspectral data from the Moon Mineralogy Mapper (M3) provide a unique combination of high spectral dimensionality, hectometre-scale spatial resolution, and near-global coverage, existing FeO retrieval approaches struggle to fully exploit the high dimensionality, nonlinear spectral variability, and planetary-scale volume of the Global Mode dataset. To address these limitations, we present an integrated machine learning pipeline for estimating lunar FeO abundance from M3 hyperspectral observations. Unlike traditional methods based on raw reflectance or empirical spectral indices, the proposed framework combines Discrete Wavelet Transform (DWT), deep autoencoder-based feature compression, and ensemble regression to achieve robust and scalable FeO prediction. M3 spectra (83 bands, 475–3000 nm) are transformed using a Daubechies-4 (db4) DWT to extract 42 representative coefficients per pixel, capturing the dominant spectral information while filtering high-frequency noise. These features are further compressed into a six-dimensional latent space via a deep autoencoder and used as input to a Random Forest regressor, which outperforms kernel-based and linear Support Vector Regression (SVR) as well as Lasso regression in predictive accuracy and stability. The proposed model achieves an average prediction error of 1.204 wt.% FeO and demonstrates consistent performance across diverse lunar geological units. Applied to 806 orbital tracks (approximately 3.5 × 10⁹ pixels), covering more than 95% of the lunar surface, the pipeline produces a global FeO abundance map at 150 m per pixel resolution. These results demonstrate the potential of integrating multiscale wavelet representations with nonlinear feature learning to enable large-scale, geochemically constrained planetary mineral mapping. Full article
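A scaled-down sketch of the wavelet-compression-regression chain is given below on synthetic 83-band spectra; PCA stands in for the deep autoencoder, and all numbers are toy values rather than M3 results.

```python
# Sketch of the DWT -> compression -> Random Forest pipeline on toy 83-band spectra;
# PCA replaces the deep autoencoder here, and all data are synthetic.
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
spectra = rng.random((300, 83))                  # toy reflectance spectra, 475-3000 nm
feo = rng.uniform(2, 20, 300)                    # toy FeO abundance (wt.%)

def dwt_features(s):
    # Multi-level db4 DWT; keep the approximation and first detail level (low-frequency content)
    coeffs = pywt.wavedec(s, "db4", level=2)
    return np.concatenate(coeffs[:2])

X = np.stack([dwt_features(s) for s in spectra])
latent = PCA(n_components=6).fit_transform(X)    # 6-dim latent, mirroring the autoencoder stage
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(latent, feo)
print("train R^2:", round(model.score(latent, feo), 3))
```
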
37 pages, 7151 KB  
Review
A Review of In Situ Quality Monitoring in Additive Manufacturing Using Acoustic Emission Technology
by Wenbiao Chang, Qifei Zhang, Wei Chen, Yuan Gao, Bin Liu, Zhonghua Li and Changying Dang
Sensors 2026, 26(2), 438; https://doi.org/10.3390/s26020438 - 9 Jan 2026
Abstract
Additive manufacturing (AM) has emerged as a pivotal technology in component fabrication, renowned for its capabilities in freeform fabrication, material efficiency, and integrated design-to-manufacturing processes. As a critical branch of AM, metal additive manufacturing (MAM) has garnered significant attention for producing metal parts. However, process anomalies during MAM can pose safety risks, while internal defects in as-built parts detrimentally affect their service performance. These concerns underscore the necessity for robust in-process monitoring of both the MAM process and the quality of the resulting components. This review first delineates common MAM techniques and popular in-process monitoring methods. It then elaborates on the fundamental principles of acoustic emission (AE), including the configuration of AE systems and methods for extracting characteristic AE parameters. The core of the review synthesizes applications of AE technology in MAM, categorizing them into three key aspects: (1) hardware setup, which involves a comparative analysis of sensor selection, mounting strategies, and noise suppression techniques; (2) parametric characterization, which establishes correlations between AE features and process dynamics (e.g., process parameter deviations, spattering, melting/pool stability) as well as defect formation (e.g., porosity and cracking); and (3) intelligent monitoring, which focuses on the development of classification models and the integration of feedback control systems. By providing a systematic overview, this review aims to highlight the potential of AE as a powerful tool for real-time quality assurance in MAM. Full article
(This article belongs to the Section Fault Diagnosis & Sensors)
20 pages, 2616 KB  
Article
MS-TSEFNet: Multi-Scale Spatiotemporal Efficient Feature Fusion Network
by Weijie Wu, Lifei Liu, Weijie Chen, Yixin Chen, Xingyu Wang, Andrzej Cichocki, Yunhe Lu and Jing Jin
Sensors 2026, 26(2), 437; https://doi.org/10.3390/s26020437 - 9 Jan 2026
Abstract
Motor imagery signal decoding is an important research direction in the field of brain–computer interfaces, which aim to judge the motor imagery state of an individual by analyzing electroencephalogram (EEG) signals. Deep learning technology has been gradually applied to EEG classification, which can automatically extract features. However, when processing complex EEG signals, the existing decoding models cannot effectively fuse features at different levels, resulting in limited classification performance. This study proposes a multi-scale spatiotemporal efficient feature fusion network (MS-TSEFNet), which learns the dynamic changes in EEG signals at different time scales through multi-scale convolution modules and combines the spatial attention mechanism to efficiently capture the spatial correlation between electrodes in EEG signals. In addition, the network adopts an efficient feature fusion strategy to deeply fuse features at different levels, thereby improving the expression ability of the model. In the task of motor imagery signal decoding, MS-TSEFNet shows higher accuracy and robustness. We use the public BCIC-IV2a, BCIC-IV2b and ECUST datasets for evaluation. The experimental results show that the average classification accuracy of MS-TSEFNet reaches 80.31%, 86.69% and 71.14%, respectively, which is better than the current state-of-the-art algorithms. We conducted an ablation experiment to further verify the effectiveness of the model. The experimental results showed that each module played an important role in improving the final performance. In particular, the combination of the multi-scale convolution module and the feature fusion module significantly improved the model’s ability to extract the spatiotemporal features of EEG signals. Full article
(This article belongs to the Special Issue EEG Signal Processing Techniques and Applications—3rd Edition)
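The multi-scale temporal convolution idea can be sketched as parallel temporal convolutions with different kernel lengths followed by a spatial convolution over electrodes; kernel lengths, channel counts, and pooling below are assumptions, and the feature-fusion and attention modules of MS-TSEFNet are omitted.

```python
# Minimal multi-scale temporal convolution block for EEG (PyTorch), illustrating the
# multi-scale idea; all sizes are assumptions, not the paper's configuration.
import torch
import torch.nn as nn

class MultiScaleTemporalBlock(nn.Module):
    def __init__(self, n_channels=22, kernels=(15, 31, 63), out_per_branch=8):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(1, out_per_branch, (1, k), padding=(0, k // 2)),   # temporal conv
                nn.Conv2d(out_per_branch, out_per_branch, (n_channels, 1)),  # spatial conv
                nn.BatchNorm2d(out_per_branch), nn.ELU(),
                nn.AvgPool2d((1, 8)),
            ) for k in kernels
        ])

    def forward(self, x):               # x: (B, 1, channels, time)
        return torch.cat([b(x) for b in self.branches], dim=1)

eeg = torch.randn(4, 1, 22, 1000)       # 4 trials, 22 electrodes, 1000 samples
print(MultiScaleTemporalBlock()(eeg).shape)  # torch.Size([4, 24, 1, 125])
```
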