Search Results (120)

Search Parameters:
Keywords = Gaussian white noise process

25 pages, 6100 KB  
Article
UAV Image Denoising and Its Impact on Performance of Object Localization and Classification in UAV Images
by Rostyslav Tsekhmystro, Vladimir Lukin and Dmytro Krytskyi
Computation 2025, 13(10), 234; https://doi.org/10.3390/computation13100234 - 3 Oct 2025
Viewed by 354
Abstract
Unmanned aerial vehicles (UAVs) have become a tool for solving numerous practical tasks. UAV sensors provide images and videos for on-line or off-line processing by trained convolutional neural networks (CNNs) for object localization, classification, and tracking. However, the quality of images acquired by UAV-based sensors is not always high; one common degradation is noise, which may arise for several reasons. Intensive noise can significantly worsen the performance of CNN-based object localization and classification. We analyze such degradation for a set of eleven modern CNNs under an additive white Gaussian noise model and determine for what noise intensity, and for which CNN, the performance reduction becomes essential enough that special countermeasures are desirable. Representatives of the two most popular denoiser families, the block-matching 3-D (BM3D) filter and the DRUNet denoiser, are employed to enhance images under the condition of a priori known noise properties. It is shown that preliminary denoising can restore CNN performance almost to the noise-free level without CNN retraining. Performance is analyzed using several criteria typical of image denoising, object localization, and classification, and examples of localization and classification demonstrate possible object misses due to noise. Computational efficiency is also taken into account. Using a large set of test data, it is demonstrated that (1) the best results are usually provided by the SSD MobileNet V2 and VGG16 networks, and (2) the performance with the BM3D filter and the DRUNet denoiser is similar, but DRUNet is preferable since it provides slightly better results. Full article
(This article belongs to the Section Computational Engineering)
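The degradation model studied in this paper, additive white Gaussian noise applied to sensor images, can be sketched in a few lines of NumPy (an illustrative sketch, not the authors' pipeline; the `add_awgn` helper and the toy mid-gray image are invented for the example):

```python
import numpy as np

def add_awgn(image: np.ndarray, sigma: float, seed: int = 0) -> np.ndarray:
    """Add zero-mean additive white Gaussian noise with standard deviation
    `sigma` to an 8-bit-range image, clipping back to [0, 255]."""
    rng = np.random.default_rng(seed)
    noisy = image.astype(np.float64) + rng.normal(0.0, sigma, image.shape)
    return np.clip(noisy, 0.0, 255.0)

# PSNR of the noisy image falls as the noise intensity sigma grows,
# which is what drives the CNN performance degradation analyzed above.
clean = np.full((64, 64), 128.0)
psnrs = []
for sigma in (5.0, 15.0, 30.0):
    noisy = add_awgn(clean, sigma)
    mse = np.mean((noisy - clean) ** 2)
    psnrs.append(10.0 * np.log10(255.0 ** 2 / mse))
```

With a mid-gray image (so clipping is negligible), the measured MSE stays close to sigma squared, so PSNR drops by roughly 6 dB each time sigma doubles.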

13 pages, 793 KB  
Article
Red Noise Suppression in Pulsar Timing Array Data Using Adaptive Splines
by Yi-Qian Qian, Yan Wang and Soumya D. Mohanty
Universe 2025, 11(8), 268; https://doi.org/10.3390/universe11080268 - 15 Aug 2025
Viewed by 436
Abstract
Noise in Pulsar Timing Array (PTA) data is commonly modeled as a mixture of white and red noise components. While the former is related to the receivers, and easily characterized by three parameters (EFAC, EQUAD and ECORR), the latter arises from a mix of hard to model sources and, potentially, a stochastic gravitational wave background (GWB). Since their frequency ranges overlap, GWB search methods must model the non-GWB red noise component in PTA data explicitly, typically as a set of mutually independent Gaussian stationary processes having power-law power spectral densities. However, in searches for continuous wave (CW) signals from resolvable sources, the red noise is simply a component that must be filtered out, either explicitly or implicitly (via the definition of the matched filtering inner product). Due to the technical difficulties associated with irregular sampling, CW searches have generally used implicit filtering with the same power law model as GWB searches. This creates the data analysis burden of fitting the power-law parameters, which increase in number with the size of the PTA and hamper the scaling up of CW searches to large PTAs. Here, we present an explicit filtering approach that overcomes the technical issues associated with irregular sampling. The method uses adaptive splines, where the spline knots are included in the fitted model. Besides illustrating its application on real data, the effectiveness of this approach is investigated on synthetic data that has the same red noise characteristics as the NANOGrav 15-year dataset and contains a single non-evolving CW signal. Full article
(This article belongs to the Special Issue Supermassive Black Hole Mass Measurements)

17 pages, 3854 KB  
Article
Research on Signal Processing Algorithms Based on Wearable Laser Doppler Devices
by Yonglong Zhu, Yinpeng Fang, Jinjiang Cui, Jiangen Xu, Minghang Lv, Tongqing Tang, Jinlong Ma and Chengyao Cai
Electronics 2025, 14(14), 2761; https://doi.org/10.3390/electronics14142761 - 9 Jul 2025
Viewed by 500
Abstract
Wearable laser Doppler devices are susceptible to complex noise interferences, such as Gaussian white noise, baseline drift, and motion artifacts, with motion artifacts significantly impacting clinical diagnostic accuracy. Addressing the limitations of existing denoising methods—including traditional adaptive filtering that relies on prior noise information, modal decomposition techniques that depend on empirical parameter optimization and are prone to modal aliasing, wavelet threshold functions that struggle to balance signal preservation with smoothness, and the high computational complexity of deep learning approaches—this paper proposes an ISSA-VMD-AWPTD denoising algorithm. This innovative approach integrates an improved sparrow search algorithm (ISSA), variational mode decomposition (VMD), and adaptive wavelet packet threshold denoising (AWPTD). The ISSA is enhanced through cubic chaotic mapping, butterfly optimization, and sine–cosine search strategies, targeting the minimization of the envelope entropy of modal components for adaptive optimization of VMD’s decomposition levels and penalty factors. A correlation coefficient-based selection mechanism is employed to separate target and mixed modes effectively, allowing for the efficient removal of noise components. Additionally, an exponential adaptive threshold function is introduced, combining wavelet packet node energy proportion analysis to achieve efficient signal reconstruction. By leveraging the rapid convergence property of ISSA (completing parameter optimization within five iterations), the computational load of traditional VMD is reduced while maintaining the denoising accuracy. Experimental results demonstrate that for a 200 Hz test signal, the proposed algorithm achieves a signal-to-noise ratio (SNR) of 24.47 dB, an improvement of 18.8% over the VMD method (20.63 dB), and a root-mean-square-error (RMSE) of 0.0023, a reduction of 69.3% compared to the VMD method (0.0075). 
The processing results for measured human blood flow signals achieve an SNR of 24.11 dB, an RMSE of 0.0023, and a correlation coefficient (R) of 0.92, all outperforming other algorithms such as VMD and WPTD. This study effectively addresses parameter sensitivity and incomplete noise separation in traditional methods, providing a high-precision, low-complexity real-time signal processing solution for wearable devices. However, the parameter optimization still needs improvement when dealing with large datasets. Full article
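The SNR and RMSE criteria quoted above are standard and easy to reproduce; a minimal sketch follows (the 200 Hz tone echoes the paper's test frequency, but the sampling and noise level here are illustrative assumptions, not the paper's exact setup):

```python
import numpy as np

def snr_db(clean: np.ndarray, estimate: np.ndarray) -> float:
    """Output SNR in dB: clean-signal power over residual-error power."""
    err = clean - estimate
    return float(10.0 * np.log10(np.sum(clean ** 2) / np.sum(err ** 2)))

def rmse(clean: np.ndarray, estimate: np.ndarray) -> float:
    """Root-mean-square error between the clean and processed signals."""
    return float(np.sqrt(np.mean((clean - estimate) ** 2)))

# A 200 Hz test tone corrupted by white Gaussian noise
t = np.linspace(0.0, 1.0, 4000, endpoint=False)
clean = np.sin(2.0 * np.pi * 200.0 * t)
noisy = clean + 0.1 * np.random.default_rng(1).normal(size=t.size)
```

A denoiser is then judged by how much `snr_db` rises and `rmse` falls between the noisy input and its output.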

28 pages, 6459 KB  
Article
Soil Porosity Detection Method Based on Ultrasound and Multi-Scale Feature Extraction
by Hang Xing, Zeyang Zhong, Wenhao Zhang, Yu Jiang, Xinyu Jiang, Xiuli Yang, Weizi Cai, Shuanglong Wu and Long Qi
Sensors 2025, 25(10), 3223; https://doi.org/10.3390/s25103223 - 20 May 2025
Viewed by 851
Abstract
Soil porosity is an essential indicator of soil quality and plays a key role in guiding agricultural production, so detecting it is beneficial. However, currently available methods do not support high-precision, rapid detection of field soil, which has a black-box nature, so this paper proposes a soil porosity detection method based on ultrasound and a multi-scale CNN-LSTM. Firstly, a series of ring cutter soil samples with different porosities were prepared manually to simulate soil collected in the field with a ring cutter, followed by ultrasonic signal acquisition. The acquired signals were enriched with three kinds of data augmentation: adding Gaussian white noise, time-shift transformation, and random perturbation. Since the collected ultrasonic signals are long time series containing features at different frequencies and positions, this study constructs a multi-scale CNN-LSTM deep neural network that uses multiple large convolution kernels of different sizes to downsize the ultra-long sequences and extract local features, while the LSTM captures global and long-term dependencies to enhance the model's feature expression. A multi-head self-attention mechanism is added at the end of the model to infer relationships across the sequence and counteract the performance degradation caused by waveform distortion. Finally, the model was trained, validated, and tested on ultrasonic signals collected from soil samples to demonstrate the accuracy of the detection method. The model achieves a coefficient of determination of 0.9990 for soil porosity, with a percentage root mean square error of only 0.66%. It outperforms other advanced comparative models, making it very promising for application. Full article
(This article belongs to the Section Smart Agriculture)
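The three augmentation transforms named above (additive Gaussian white noise, time shift, random perturbation) are simple to sketch for 1-D ultrasonic traces; the parameter values below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(42)

def augment_awgn(sig: np.ndarray, snr_db: float = 20.0) -> np.ndarray:
    """Add white Gaussian noise scaled to a target SNR in dB."""
    noise_power = np.mean(sig ** 2) / (10.0 ** (snr_db / 10.0))
    return sig + rng.normal(0.0, np.sqrt(noise_power), sig.shape)

def augment_shift(sig: np.ndarray, max_shift: int = 50) -> np.ndarray:
    """Circularly shift the trace by a random number of samples."""
    return np.roll(sig, int(rng.integers(-max_shift, max_shift + 1)))

def augment_perturb(sig: np.ndarray, scale: float = 0.02) -> np.ndarray:
    """Apply a small random multiplicative amplitude perturbation."""
    return sig * (1.0 + rng.uniform(-scale, scale, sig.shape))

trace = np.sin(2.0 * np.pi * np.arange(2048) / 64.0)
augmented = [f(trace) for f in (augment_awgn, augment_shift, augment_perturb)]
```

Each transform preserves the trace length, so the augmented copies can be fed to the same network input as the originals.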

24 pages, 977 KB  
Article
On the Design of Kalman Filter with Low Complexity for 6G-Based ISAC: Alpha and Alpha–Beta Filter Perspectives
by Jung-Beom Kim and Sang-Won Choi
Electronics 2025, 14(10), 1938; https://doi.org/10.3390/electronics14101938 - 9 May 2025
Viewed by 679
Abstract
This study addresses the performance degradation of α and α-β filters in 6G Integrated Sensing and Communications (ISAC) scenarios, attributed to violations of linearity and steady-state assumptions. These filters are originally designed with time-invariant gains derived under such assumptions to ensure low computational complexity. However, deviations from ideal conditions—such as non-white, biased, or non-Gaussian process noise—necessitate corrective mechanisms. We propose a weighted process noise approach that accounts for increased uncertainty due to assumption violations while preserving the filters’ closed-form structure and computational efficiency. By integrating uncertainty into the conventional gain formulation, the proposed method achieves performance closer to the optimal filter. Numerical results demonstrate superior accuracy over conventional filters across various noise variances and scenarios, without requiring parameter tuning. Notably, performance improvements become more pronounced as the measurement interval decreases. Full article
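For context, the conventional fixed-gain alpha-beta recursion that this paper builds on is only a few lines; this sketch shows the closed-form, low-complexity structure with time-invariant gains (the gain values and toy constant-velocity trajectory are assumptions for illustration, not the proposed weighted-process-noise design):

```python
import numpy as np

def alpha_beta_track(measurements, dt, alpha, beta):
    """Fixed-gain alpha-beta tracker: predict position with a
    constant-velocity model, then correct position and velocity
    with time-invariant gains."""
    x, v = measurements[0], 0.0
    estimates = []
    for z in measurements[1:]:
        x_pred = x + v * dt          # predict
        r = z - x_pred               # innovation (residual)
        x = x_pred + alpha * r       # correct position
        v = v + (beta / dt) * r      # correct velocity
        estimates.append(x)
    return np.array(estimates)

# Constant-velocity target observed in white measurement noise
rng = np.random.default_rng(0)
t = np.arange(200) * 0.1
truth = 2.0 * t
meas = truth + rng.normal(0.0, 0.5, t.size)
est = alpha_beta_track(meas, dt=0.1, alpha=0.3, beta=0.01)
```

Under the ideal linear, steady-state assumptions the filtered track has a lower error variance than the raw measurements; the paper's contribution is keeping this structure while compensating for violations of those assumptions.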

26 pages, 8765 KB  
Article
Precision in Brief: The Bayesian Hurst–Kolmogorov Method for the Assessment of Long-Range Temporal Correlations in Short Behavioral Time Series
by Madhur Mangalam and Aaron D. Likens
Entropy 2025, 27(5), 500; https://doi.org/10.3390/e27050500 - 6 May 2025
Cited by 2 | Viewed by 875
Abstract
Various fields within biological and psychological inquiry recognize the significance of exploring long-range temporal correlations to study phenomena. However, these fields face challenges during this transition, primarily stemming from the impracticality of acquiring the considerably longer time series demanded by canonical methods. The Bayesian Hurst–Kolmogorov (HK) method estimates the Hurst exponents of time series—quantifying the strength of long-range temporal correlations or "fractality"—more accurately than the canonical detrended fluctuation analysis (DFA), especially when the time series is short. Therefore, the systematic application of the HK method has been encouraged for assessing the strength of long-range temporal correlations in empirical time series in the behavioral sciences. However, the Bayesian foundation of the HK method fuels reservations about its performance when artifacts corrupt time series. Here, we compare the performance of the HK method and DFA in estimating the Hurst exponents of synthetic long-range correlated time series in the presence of additive white Gaussian noise, fractional Gaussian noise, short-range correlations, and various periodic and non-periodic trends. These artifacts can affect the accuracy and variability of the estimated Hurst exponent and, therefore, the interpretation and generalizability of behavioral research findings. We show that the HK method outperforms DFA in most contexts: while both methods break down for anti-persistent time series, the HK method continues to provide reasonably accurate H values for persistent time series as short as N=64 samples. The HK method detects long-range temporal correlations accurately, shows minimal dispersion around the central tendency, is unaffected by the time series length, and is more immune to artifacts than DFA. 
This information is particularly valuable when choosing between the HK method and DFA, especially when acquiring a longer time series proves challenging due to methodological constraints, such as in studies involving psychological phenomena that rely on self-reports. It also matters when the researcher knows in advance that the empirical time series may be contaminated by these processes. Full article
(This article belongs to the Section Complexity)
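DFA, the baseline that the HK method is compared against, fits the log-log slope of detrended fluctuation versus window size; a compact sketch follows (plain DFA only, not the Bayesian HK estimator; the scale choices and series length are illustrative assumptions):

```python
import numpy as np

def dfa_hurst(x, scales=(8, 16, 32, 64, 128)):
    """Detrended fluctuation analysis: integrate the series, remove a
    linear trend in non-overlapping windows at several scales, and fit
    the log-log slope of fluctuation versus scale (the Hurst exponent)."""
    y = np.cumsum(x - np.mean(x))
    flucts = []
    for s in scales:
        n_win = len(y) // s
        t = np.arange(s)
        f2 = []
        for i in range(n_win):
            seg = y[i * s:(i + 1) * s]
            trend = np.polyval(np.polyfit(t, seg, 1), t)
            f2.append(np.mean((seg - trend) ** 2))
        flucts.append(np.sqrt(np.mean(f2)))
    return float(np.polyfit(np.log(scales), np.log(flucts), 1)[0])

rng = np.random.default_rng(3)
h_white = dfa_hurst(rng.normal(size=4096))  # white noise: H near 0.5
```

For uncorrelated white Gaussian noise the estimate sits near H = 0.5; the paper's point is that such estimates degrade badly for short series, where the HK method remains usable.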

11 pages, 42434 KB  
Proceeding Paper
Enhanced Adaptive Wiener Filtering for Frequency-Varying Noise with Convolutional Neural Network-Based Feature Extraction
by Chun-Lin Liao, Jian-Jiun Ding and De-Yan Lu
Eng. Proc. 2025, 92(1), 47; https://doi.org/10.3390/engproc2025092047 - 2 May 2025
Viewed by 791
Abstract
Denoising has long been a challenge in image processing. Noise appears in various forms, such as additive white Gaussian noise (AWGN) and Poisson noise, across different frequencies. This study aims to denoise images without prior knowledge of the noise distribution. First, we estimate the noise power in the frequency domain to approximate the local signal-to-noise ratio (SNR) and guide an adaptive Wiener filter; the initial denoised result is obtained by assembling the locally filtered patches. However, since the Wiener filter is a low-pass filter, it can remove fine details along with the noise. To overcome this limitation, we post-process the noise and interpolate between the denoised and original noisy patches to enhance the denoised image, and we mask the frequency domain to avoid grid-like artifacts. Additionally, we introduce a convolutional neural network-based refinement in the spatial domain to recover latent textures lost during denoising. Results demonstrate the effectiveness of the masking and feature extraction steps. Full article
(This article belongs to the Proceedings of 2024 IEEE 6th Eurasia Conference on IoT, Communication and Engineering)
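The local-SNR-guided Wiener idea can be illustrated with the classic pixelwise (Lee-style) adaptive Wiener filter, which shrinks each pixel toward its local mean by a gain derived from the local SNR. This is a generic textbook sketch, not the authors' frequency-domain implementation; the window size, noise variance, and ramp test image are assumptions:

```python
import numpy as np

def wiener_denoise(img: np.ndarray, noise_var: float, win: int = 5) -> np.ndarray:
    """Pixelwise adaptive Wiener (Lee) filter: shrink each pixel toward its
    local mean by a gain set from the estimated local signal-to-noise ratio."""
    pad = win // 2
    padded = np.pad(img, pad, mode="reflect")
    out = np.empty_like(img, dtype=np.float64)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            patch = padded[i:i + win, j:j + win]
            mu, var = patch.mean(), patch.var()
            gain = max(var - noise_var, 0.0) / var if var > 0 else 0.0
            out[i, j] = mu + gain * (img[i, j] - mu)
    return out

# Smooth ramp image plus unit-variance AWGN
rng = np.random.default_rng(9)
clean = np.outer(np.linspace(0.0, 10.0, 32), np.ones(32))
noisy = clean + rng.normal(0.0, 1.0, clean.shape)
denoised = wiener_denoise(noisy, noise_var=1.0)
```

In smooth regions the gain approaches zero (heavy smoothing), while in high-variance regions it approaches one, which is also why such a filter tends to blur fine detail, the limitation the paper addresses.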

40 pages, 8617 KB  
Article
Research on Stochastic Evolutionary Game and Simulation of Carbon Emission Reduction Among Participants in Prefabricated Building Supply Chains
by Heyi Wang, Lihong Li, Chunbing Guo and Rui Zhu
Appl. Sci. 2025, 15(9), 4982; https://doi.org/10.3390/app15094982 - 30 Apr 2025
Cited by 2 | Viewed by 712
Abstract
Developing prefabricated buildings (PBs) and optimizing the construction supply chain are effective strategies for reducing carbon emissions in the construction industry. Carbon reduction in the prefabricated building supply chain (PBSC) suffers from synergistic difficulties, limited rationality, and environmental complexity, so investigating carbon emission reduction in the PBSC is essential. In this study, PBSC participants are divided into four categories according to the operation process, Gaussian white noise is introduced to simulate random perturbation factors, and a four-party stochastic evolutionary game model is constructed and numerically simulated. The study found the following: stochastic perturbation factors strongly affect the speed at which participants' strategies evolve; each participant's emission reduction benefits and costs significantly affect strategy selection; the operating status of the PBSC is key to strategy selection, and attention should be paid to the synergy of participants at the first and last ends of the PBSC; the external environment influences strategies mainly through the losses it causes and the assistance it provides; and information on emission reduction is an important factor influencing strategies. Finally, we provide suggestions for promoting carbon emission reduction by PBSC participants from the perspectives of resisting stochastic perturbations, enhancing participants' abilities, and strengthening PBSC management; external punishment and the establishment of a cross-industry information sharing platform are more important than rewards. Full article

29 pages, 2381 KB  
Article
Direction-of-Arrival Estimation Based on Variational Bayesian Inference Under Model Errors
by Can Wang, Kun Guo, Jiarong Zhang, Xiaoying Fu and Hai Liu
Remote Sens. 2025, 17(7), 1319; https://doi.org/10.3390/rs17071319 - 7 Apr 2025
Viewed by 716
Abstract
The current self-calibration approaches based on sparse Bayesian learning (SBL) demonstrate robust performance under uniform white noise conditions. However, their efficacy degrades significantly in non-uniform noise environments due to acute sensitivity to noise power estimation inaccuracies. To address this limitation, this paper proposes an orientation estimation method based on variational Bayesian inference to combat non-uniform noise and gain/phase error. The gain and phase errors of the array are modeled separately for calibration purposes, with the objective of improving the accuracy of the fit during the iterative process. Subsequently, the noise of each element of the array is characterized via independent Gaussian distributions, and the correlation between the array gain deviation and the noise power is incorporated to enhance the robustness of this method when operating in non-uniform noise environments. Furthermore, the Cramér–Rao Lower Bound (CRLB) under non-uniform noise and gain-phase deviation is presented. Numerical simulations and experimental results are provided to validate the superiority of this proposed method. Full article

30 pages, 5284 KB  
Article
Blind Interference Suppression with Uncalibrated Phased-Array Processing
by Lauren O. Lusk and Joseph D. Gaeddert
Sensors 2025, 25(7), 2125; https://doi.org/10.3390/s25072125 - 27 Mar 2025
Viewed by 575
Abstract
As the number of devices using wireless communications increases, the amount of usable radio frequency spectrum becomes increasingly congested. As a result, the need for robust, adaptive communications to improve spectral efficiency and ensure reliable communication in the presence of interference is apparent. One solution is using beamforming techniques on digital phased-array receivers to maximize the energy in a desired direction and steer nulls to remove interference; however, traditional phased-array beamforming techniques used for interference removal rely on perfect calibration between antenna elements and precise knowledge of the array configuration. Consequently, if the exact array configuration is not known (unknown or imperfect assumption of element locations, unknown mutual coupling between elements, etc.), these traditional beamforming techniques are not viable, so a beamforming approach with relaxed requirements (blind beamforming) is required. This paper proposes a novel blind beamforming approach to address complex narrowband interference in spectrally congested environments where the precise array configuration is unknown. The resulting process is shown to suppress numerous interference sources, all without any knowledge of the primary signal of interest. The results are validated through wireless laboratory experimentation conducted with a two-element array, verifying that the proposed beamforming approach achieves a similar performance to the theoretical performance bound of receiving packets in additive white Gaussian noise (AWGN) with no interference present. Full article
(This article belongs to the Section Communications)

29 pages, 15339 KB  
Article
A Noise Reduction Algorithm for White Noise and Periodic Narrowband Interference Noise in Partial Discharge Signals
by Jiyuan Cao, Yanwen Wang, Weixiong Zhu and Yihe Zhang
Appl. Sci. 2025, 15(4), 1760; https://doi.org/10.3390/app15041760 - 9 Feb 2025
Cited by 3 | Viewed by 1460
Abstract
Partial discharge (PD) detection plays an important role in online condition monitoring of electrical equipment and power cables. However, measurement noise significantly reduces the performance of detection algorithms. In this paper, we study a PD noise reduction algorithm based on improved singular value decomposition (SVD) and multivariate variational mode decomposition (MVMD) for white Gaussian noise (WGN) and periodic narrowband interference. The algorithm comprises three noise reduction stages. The first stage suppresses narrowband interference in the noisy PD signal using the SVD algorithm with a guidance signal, composed of sinusoids at the accurately estimated narrowband interference frequencies with an amplitude twice the maximum amplitude of the noisy PD signal. The second stage decomposes the filtered signal into k optimal intrinsic mode functions using MVMD after parameter optimization; the kurtosis of each intrinsic mode function determines whether it is PD-dominant or noise-dominant, and the noise-dominant components are subjected to 3σ filtering before the PD signal is reconstructed. The third stage applies a new wavelet threshold algorithm to the reconstructed PD signal to obtain the denoised signal. Compared with several existing methods, the proposed algorithm is effective at reducing noise in both simulated and field-measured PD signals. Full article
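The kurtosis test used to label each decomposed mode can be sketched directly: Gaussian noise has kurtosis near 3, while sparse impulsive PD-like pulses push it far higher. The pulse train below is a synthetic stand-in for a decomposed mode, not measured PD data:

```python
import numpy as np

def kurtosis(x: np.ndarray) -> float:
    """Sample (non-excess) kurtosis: E[(x - mu)^4] / sigma^4.
    Values near 3 indicate Gaussian-noise-dominant content; sparse
    impulsive pulses give much larger values."""
    x = x - np.mean(x)
    return float(np.mean(x ** 4) / np.mean(x ** 2) ** 2)

rng = np.random.default_rng(7)
noise_mode = rng.normal(size=5000)      # WGN-dominant component
pd_mode = np.zeros(5000)
pd_mode[::500] = 8.0                    # sparse discharge-like pulses
pd_mode += 0.1 * rng.normal(size=5000)  # plus a little background noise
```

Thresholding the kurtosis thus separates PD-dominant modes (kept) from noise-dominant modes (sent to the 3σ filtering step).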

20 pages, 6712 KB  
Article
A Parallel Image Denoising Network Based on Nonparametric Attention and Multiscale Feature Fusion
by Jing Mao, Lianming Sun, Jie Chen and Shunyuan Yu
Sensors 2025, 25(2), 317; https://doi.org/10.3390/s25020317 - 7 Jan 2025
Cited by 1 | Viewed by 1137
Abstract
Convolutional neural networks have achieved excellent results in image denoising; however, there are still some problems: (1) The majority of single-branch models cannot fully exploit the image features and often suffer from the loss of information. (2) Most of the deep CNNs have inadequate edge feature extraction and saturated performance problems. To solve these problems, this paper proposes a two-branch convolutional image denoising network based on nonparametric attention and multiscale feature fusion, aiming to improve the denoising performance while better recovering the image edge and texture information. Firstly, ordinary convolutional layers were used to extract shallow features of noise in the image. Then, a combination of two-branch networks with different and complementary structures was used to extract deep features from the noise information in the image to solve the problem of insufficient feature extraction by the single-branch network model. The upper branch network used densely connected blocks to extract local features of the noise in the image. The lower branch network used multiple dilation convolution residual blocks with different dilation rates to increase the receptive field and extend more contextual information to obtain the global features of the noise in the image. It not only solved the problem of insufficient edge feature extraction but also solved the problem of the saturation of deep CNN performance. In this paper, a nonparametric attention mechanism is introduced in the two-branch feature extraction module, which enabled the network to pay attention to and learn the key information in the feature map, and improved the learning performance of the network. The enhanced features were then processed through the multiscale feature fusion module to obtain multiscale image feature information at different depths to obtain more robust fused features. 
Finally, the shallow and deep features were summed via a long skip connection and passed through an ordinary convolutional layer to output a residual image. Set12, BSD68, Set5, CBSD68, and SIDD were used as test datasets, to which Gaussian white noise of different intensities was added, and the method was compared with several mainstream denoising methods. The experimental results showed that the proposed algorithm achieved better objective metrics on all test sets and outperformed the comparison algorithms. The method not only achieved a good denoising effect but also effectively retained the edge and texture information of the original image, providing a new idea for the study of deep neural networks in image denoising. Full article
(This article belongs to the Section Sensing and Imaging)

15 pages, 4229 KB  
Article
Self-Adaptive Moving Least Squares Measurement Based on Digital Image Correlation
by Hengsi Zhu, Yurong Guo and Xiao Tan
Optics 2024, 5(4), 566-580; https://doi.org/10.3390/opt5040042 - 2 Dec 2024
Cited by 1 | Viewed by 1833
Abstract
Digital image correlation (DIC) is a non-contact measurement technique used to evaluate the surface deformation of objects. Typically, pointwise moving least squares (PMLS) fitting is applied to the noisy data from DIC to obtain an accurate strain field. In this study, a self-adaptive pointwise moving least squares (SPMLS) method was developed to optimize window size selection and thereby attain superior measurement accuracy. Under the premise that the noise in the displacement field is white and Gaussian, the method analyzes the random and systematic errors of the PMLS method for different calculation window sizes, determines the optimal window size by minimizing these errors, and then computes the strain field with the optimized window. The results were compared with a typical PMLS method: for both low-gradient and high-gradient strain fields, the computational accuracy of SPMLS is close to the optimal accuracy of PMLS. This study effectively addresses the inherent challenge of manually selecting the window size in the PMLS method. Full article
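The window-size trade-off that SPMLS automates shows up even in a minimal 1-D PMLS sketch: a local quadratic is fitted around each point and its linear coefficient taken as the strain, so a larger window averages away more white Gaussian noise (at the cost of bias near strain gradients, absent in this uniform-strain toy). The displacement field, noise level, and window sizes are illustrative assumptions:

```python
import numpy as np

def pmls_strain(u: np.ndarray, half_win: int, dx: float = 1.0) -> np.ndarray:
    """Pointwise moving least squares: fit a local quadratic to the
    displacement in a window centred on each point; the linear
    coefficient of the fit is the strain (du/dx) at that point."""
    n = len(u)
    strain = np.full(n, np.nan)
    t = np.arange(-half_win, half_win + 1) * dx
    for i in range(half_win, n - half_win):
        seg = u[i - half_win:i + half_win + 1]
        coef = np.polyfit(t, seg, 2)   # [a2, a1, a0]; a1 = slope at centre
        strain[i] = coef[1]
    return strain

# Uniform strain field (0.01) with white Gaussian displacement noise
rng = np.random.default_rng(5)
x = np.arange(400, dtype=float)
u = 0.01 * x + rng.normal(0.0, 0.05, x.size)
eps_small = pmls_strain(u, half_win=5)    # small window: noisier strain
eps_large = pmls_strain(u, half_win=25)   # large window: smoother strain
```

SPMLS's contribution is choosing the window size per point by balancing this random error against the systematic error that appears when the true strain varies within the window.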
25 pages, 4001 KB  
Article
CASSAD: Chroma-Augmented Semi-Supervised Anomaly Detection for Conveyor Belt Idlers
by Fahad Alharbi, Suhuai Luo, Abdullah Alsaedi, Sipei Zhao and Guang Yang
Sensors 2024, 24(23), 7569; https://doi.org/10.3390/s24237569 - 27 Nov 2024
Cited by 6 | Viewed by 2012
Abstract
Idlers are essential to conveyor systems, supporting and guiding belts to ensure production efficiency. Proper idler maintenance prevents failures, reduces downtime, cuts costs, and improves reliability. Most studies on idler fault detection rely on supervised methods, which depend on large labelled datasets for training. However, acquiring such labelled data is often challenging in industrial environments due to the rarity of faults and the labour-intensive nature of the labelling process. To address this, we propose the chroma-augmented semi-supervised anomaly detection (CASSAD) method, designed to perform effectively with limited labelled data. At the core of CASSAD is the one-class SVM (OC-SVM), a model specifically developed for anomaly detection in cases where labelled anomalies are scarce. We also compare CASSAD’s performance with other common models such as the local outlier factor (LOF) and isolation forest (iForest), evaluating each with the area under the curve (AUC) to assess their ability to distinguish between normal and anomalous data. CASSAD introduces chroma features, such as chroma energy normalised statistics (CENS), the constant-Q transform (CQT), and the chroma short-time Fourier transform (STFT), enhanced through filtering to capture rich harmonic information from idler sounds. To reduce feature complexity, we use the mean and standard deviation (std) across chroma features. The dataset is further augmented using additive white Gaussian noise (AWGN). Testing on an industrial dataset of idler sounds, CASSAD achieved an AUC of 96% and an accuracy of 91%, surpassing a baseline autoencoder and other traditional models. These results demonstrate the model’s robustness in detecting anomalies with minimal dependence on labelled data, offering a practical solution for industries with limited labelled datasets. Full article
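The AWGN-augmentation-plus-OC-SVM core of CASSAD can be sketched as below. This is a minimal illustration, not the authors' pipeline: random vectors stand in for the chroma mean/std features, the 20 dB SNR and `nu=0.05` are assumed values, and the actual chroma extraction and filtering are omitted.

```python
import numpy as np
from sklearn.svm import OneClassSVM

def awgn_augment(features, snr_db, rng):
    """Augment feature vectors with additive white Gaussian noise at a target SNR."""
    power = np.mean(features ** 2, axis=1, keepdims=True)
    noise_power = power / (10.0 ** (snr_db / 10.0))
    return features + rng.normal(0.0, 1.0, features.shape) * np.sqrt(noise_power)

rng = np.random.default_rng(2)

# Stand-ins for chroma mean/std features of healthy idlers (12 chroma bins x 2 stats)
normal = rng.normal(0.0, 1.0, (200, 24))
train = np.vstack([normal, awgn_augment(normal, snr_db=20, rng=rng)])

# Semi-supervised: the OC-SVM is trained on (augmented) normal data only
model = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(train)

faulty = rng.normal(5.0, 1.0, (50, 24))  # shifted cluster as a stand-in for faults
pred = model.predict(faulty)              # -1 marks anomalies
print(np.mean(pred == -1) > 0.9)
```

Because the model only ever sees normal data, rare faults need no labels at training time, which is the point of the semi-supervised setup.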
15 pages, 70999 KB  
Article
Lightweight Infrared Image Denoising Method Based on Adversarial Transfer Learning
by Wen Guo, Yugang Fan and Guanghui Zhang
Sensors 2024, 24(20), 6677; https://doi.org/10.3390/s24206677 - 17 Oct 2024
Cited by 4 | Viewed by 2266
Abstract
A lightweight infrared image denoising method based on adversarial transfer learning is proposed. The method adopts a generative adversarial network (GAN) framework and optimizes the model through a phased transfer learning strategy. In the initial stage, the generator is pre-trained on a large-scale grayscale visible-light image dataset. Subsequently, the generator is fine-tuned on an infrared image dataset using feature transfer techniques. This phased transfer strategy helps address the insufficient quantity and variety of infrared image samples. Through the adversarial process of the GAN, the generator is continuously optimized to enhance its feature extraction capabilities in environments with limited data. Moreover, the generator incorporates structural reparameterization technology, edge convolution modules, and a progressive multi-scale attention block (PMAB), significantly improving the model’s ability to recognize edge and texture features. During the inference stage, structural reparameterization further optimizes the network architecture, significantly reducing model parameters and complexity and thereby improving denoising efficiency. Experimental results on public and real-world datasets demonstrate that this method effectively removes additive white Gaussian noise from infrared images, showing outstanding denoising performance. Full article
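Structural reparameterization, as used above, folds training-time branches into a single convolution at inference, so the deployed network is cheaper but numerically identical. A single-channel NumPy sketch of the idea (the conv-plus-identity branch layout is an assumption for illustration, not the paper's exact architecture):

```python
import numpy as np

def conv3x3(img, kernel):
    """Single-channel 3x3 'same' cross-correlation with zero padding."""
    p = np.pad(img, 1)
    out = np.zeros_like(img, dtype=float)
    for dy in range(3):
        for dx in range(3):
            out += kernel[dy, dx] * p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

rng = np.random.default_rng(3)
img = rng.normal(size=(8, 8))
k = rng.normal(size=(3, 3))

# Training time: a 3x3 conv branch plus an identity skip branch
train_out = conv3x3(img, k) + img

# Inference time: fold the identity into the kernel centre, leaving one
# plain convolution whose output is identical to the two-branch version
k_merged = k.copy()
k_merged[1, 1] += 1.0
infer_out = conv3x3(img, k_merged)

print(np.allclose(train_out, infer_out))  # True
```

The merged kernel removes the extra branch's parameters and memory traffic at inference, which is where the reported efficiency gain comes from.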