Special Issue "Sensor Signal and Information Processing II"

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: closed (15 June 2019).

Special Issue Editors

Guest Editor
Prof. Dr. Wai Lok Woo

Faculty of Engineering and Environment, Northumbria University, England, UK
Interests: statistical signal processing; machine learning; pattern recognition and analysis; computational intelligence
Guest Editor
Prof. Dr. Bin Gao

School of Automation Engineering, University of Electronic Science and Technology of China, China
Interests: audio and image processing; social signal processing; multi-physics mathematical modeling; non-destructive evaluation

Special Issue Information

Dear Colleagues,

Sensor Signal and Information Processing (SSIP) is an overarching field of research focusing on the mathematical foundations and practical applications of signal processing algorithms that learn, reason, and act. It bridges the boundary between theory and application, developing novel, theoretically inspired methodologies that target both longstanding and emergent signal processing applications. The core of SSIP lies in its use of nonlinear and non-Gaussian signal processing methodologies combined with convex and non-convex optimization. SSIP encompasses new theoretical frameworks for statistical signal processing (e.g., hidden Markov models, latent component analysis, tensor factorization, Bayesian methods) coupled with information-theoretic learning, as well as novel developments in these areas specialized to the processing of a variety of signal modalities, including audio, bio-signals, multi-physics signals, images, multispectral data, and video, among others. In recent years, many signal processing algorithms have incorporated some form of computational intelligence as part of their core problem-solving framework. These algorithms have the capacity to generalize, to discover knowledge for themselves, and to learn new information whenever unseen data are captured.

The focus of the Special Issue will be on a broad range of sensor, signal, and information processing topics involving the introduction and development of new, advanced theoretical and practical algorithms. Potential topics include, but are not limited to:

  • Biomedical signal processing and instrumentation
  • Pattern recognition and analysis
  • Machine learning for signal and image processing
  • Multimodality sensor fusion techniques
  • Compressed sensing and sparsity aware processing
  • Data science and analytics for big data
  • Deep learning: Theory, algorithms and applications
  • Multi-objective signal processing optimization
  • Multimodal information processing for healthcare, monitoring and surveillance
  • Computer vision and 3D reconstruction with multimodal data fusion
  • Wearable sensors and IoT for personalized health monitoring and social computing
  • Non-destructive testing and evaluation for material characterization, structural integrity, defect detection and identification, stress and lifecycle assessment
  • Signal processing for smart grid, load forecasting and energy management
  • Precision farming combining sensors and imaging with real-time data analytics
  • Other emerging applications of signal and information processing

Prof. Dr. Wai Lok Woo
Prof. Dr. Bin Gao
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Sensors
  • Signal processing
  • Image processing
  • Video processing
  • Information fusion
  • Machine learning
  • Compressive sensing
  • Latent component analysis
  • Low-rank sparse decomposition
  • Deep learning neural network
  • Computational intelligence
  • Social signal processing
  • Non-destructive testing and evaluation

Published Papers (23 papers)


Research


Open Access Article
The Effect of the Color Filter Array Layout Choice on State-of-the-Art Demosaicing
Sensors 2019, 19(14), 3215; https://doi.org/10.3390/s19143215
Received: 20 June 2019 / Revised: 18 July 2019 / Accepted: 18 July 2019 / Published: 21 July 2019
Abstract
Interpolation from a Color Filter Array (CFA) is the most common method for obtaining full color image data. Its success relies on the smart combination of a CFA and a demosaicing algorithm. Demosaicing, on the one hand, has been extensively studied. Algorithmic development in the past 20 years ranges from simple linear interpolation to modern neural-network-based (NN) approaches that encode the prior knowledge of millions of training images to fill in missing data in an inconspicuous way. CFA design, on the other hand, is less well studied, although still recognized to strongly impact demosaicing performance. This is because demosaicing algorithms are typically limited to one particular CFA pattern, impeding straightforward CFA comparison. This is starting to change with newer classes of demosaicing that may be considered generic or CFA-agnostic. In this study, by comparing the performance of two state-of-the-art generic algorithms, we evaluate the potential of modern CFA-demosaicing. We test the hypothesis that, with the increasing power of NN-based demosaicing, the influence of optimal CFA design on system performance decreases. This hypothesis is supported by the experimental results. Such a finding would herald the possibility of relaxing CFA requirements, providing more freedom in the CFA design choice and producing high-quality cameras. Full article
(This article belongs to the Special Issue Sensor Signal and Information Processing II)

Open Access Article
Differential Run-Length Encryption in Sensor Networks
Sensors 2019, 19(14), 3190; https://doi.org/10.3390/s19143190
Received: 28 May 2019 / Revised: 4 July 2019 / Accepted: 17 July 2019 / Published: 19 July 2019
Abstract
Energy is a major concern in the design and deployment of Wireless Sensor Networks because sensor nodes are constrained by limited battery, memory, and processing capacity. A number of techniques have been presented to solve this power problem. Among the proposed solutions, data compression is one that can be used to reduce the volume of data for transmission. This article presents a data compression algorithm called Differential Run Length Encryption (D-RLE) consisting of three steps. First, sensor readings are divided into groups using a threshold based on Chauvenet's criterion. Second, each group is subdivided into subgroups whose consecutive member values are determined by a subtraction scheme under a K-RLE-based threshold. Third, the member values are encoded to binary based on an ad hoc scheme to compress the data. The experimental results show that the data rate savings from D-RLE can reach 90%, and energy savings can exceed 90%, compared to data transmission without compression. Full article
(This article belongs to the Special Issue Sensor Signal and Information Processing II)
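The three-step compression pipeline described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the Chauvenet grouping and K-RLE thresholding steps are omitted, and only the delta-plus-run-length core is shown; all names are ours.

```python
import numpy as np

def delta_rle_encode(readings):
    """Encode a sequence as (first value, runs of identical deltas).

    A simplified stand-in for D-RLE: the paper additionally groups
    readings with Chauvenet's criterion and packs deltas into a compact
    binary format; here we show only the delta + run-length core.
    """
    deltas = np.diff(np.asarray(readings))
    runs = []                      # list of [delta, count] pairs
    for d in deltas:
        if runs and runs[-1][0] == d:
            runs[-1][1] += 1       # extend the current run
        else:
            runs.append([d, 1])    # start a new run
    return readings[0], runs

def delta_rle_decode(first, runs):
    out = [first]
    for d, n in runs:
        for _ in range(n):
            out.append(out[-1] + d)
    return out

# Slowly varying sensor data compresses well: long runs of equal deltas.
data = [20, 20, 20, 21, 22, 23, 23, 23]
first, runs = delta_rle_encode(data)
```

Eight readings collapse to one starting value plus three runs, which is where the transmission savings come from.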

Open Access Article
Signal Amplification Gains of Compressive Sampling for Photocurrent Response Mapping of Optoelectronic Devices
Sensors 2019, 19(13), 2870; https://doi.org/10.3390/s19132870
Received: 17 May 2019 / Revised: 24 June 2019 / Accepted: 25 June 2019 / Published: 28 June 2019
Abstract
Spatial characterisation methods for photodetectors and other optoelectronic devices are necessary for determining local performance, as well as detecting local defects and the non-uniformities of devices. Light beam induced current measurements provide local performance information about devices at their actual operating conditions. Compressed sensing current mapping offers additional specific advantages, such as high speed without the use of complicated experimental layouts or lock-in amplifiers. In this work, the signal amplification advantages of compressed sensing current mapping are presented. It is demonstrated that the sparsity of the patterns used for compressive sampling can be controlled to achieve significant signal amplification of at least two orders of magnitude, while maintaining or increasing the accuracy of measurements. Accurate measurements can be acquired even when a point-by-point scan yields high noise levels, which distort the accuracy of measurements. Pixel-by-pixel comparisons of photocurrent maps are realised using different sensing matrices and reconstruction algorithms for different samples. The results additionally demonstrate that such an optical system would be ideal for investigating compressed sensing procedures for other optical measurement applications, where experimental noise is included. Full article
(This article belongs to the Special Issue Sensor Signal and Information Processing II)
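The amplification effect described above can be illustrated with a toy simulation: each binary pattern sums the photocurrent of many pixels at once, so raw readings dwarf those of a point-by-point scan. This is only a hedged sketch — the paper's sensing matrices, sparsity control, and reconstruction algorithms are more sophisticated than the plain least-squares inversion used here, and the photocurrent map is synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 8x8 photocurrent map, flattened to a vector.
n = 64
x = rng.uniform(0.0, 1.0, n)

# Binary sampling patterns: each measurement sums the response of many
# pixels at once, so raw readings are far larger than single-pixel values.
density = 0.5                       # fraction of pixels "on" per pattern
A = (rng.random((n, n)) < density).astype(float)

y = A @ x                           # compressive measurements

# With a full set of patterns the map is recovered by least squares;
# real compressed sensing uses fewer measurements plus a sparsity prior.
x_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
gain = y.mean() / x.mean()          # amplification relative to point scan
```

With 50% pattern density, each reading aggregates roughly half the pixels, giving an amplification of roughly n/2 over a point scan while the map is still recovered exactly in this noise-free setting.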

Open Access Article
Time Difference of Arrival (TDoA) Localization Combining Weighted Least Squares and Firefly Algorithm
Sensors 2019, 19(11), 2554; https://doi.org/10.3390/s19112554
Received: 11 April 2019 / Revised: 28 May 2019 / Accepted: 2 June 2019 / Published: 4 June 2019
Abstract
Time difference of arrival (TDoA) based on a group of sensor nodes with known locations has been widely used to locate targets. Two-step weighted least squares (TSWLS), constrained weighted least squares (CWLS), and Newton–Raphson (NR) iteration are commonly used passive localization methods, all of which require an initial position and have high complexity. This paper proposes a hybrid firefly algorithm (hybrid-FA) method, combining the weighted least squares (WLS) algorithm and the FA, which reduces computation while achieving high accuracy. The WLS algorithm is performed first, and its result is used to restrict the search region for the FA. Simulations showed that the hybrid-FA method required far fewer iterations than the FA alone to achieve the same accuracy. Additionally, two experiments were conducted to compare the results of hybrid-FA with other methods. The findings indicated that the root-mean-square error (RMSE) and mean distance error of the hybrid-FA method were lower than those of the NR, TSWLS, and genetic algorithm (GA) methods. On the whole, the hybrid-FA outperformed the NR, TSWLS, and GA for TDoA measurement. Full article
(This article belongs to the Special Issue Sensor Signal and Information Processing II)
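The least-squares stage that seeds the firefly search can be sketched with the standard linearization of the TDoA equations, introducing the reference range r0 as an auxiliary unknown. The firefly refinement itself is omitted; this is a generic least-squares fix under noise-free assumptions, not the paper's exact formulation.

```python
import numpy as np

def tdoa_wls(sensors, range_diffs):
    """Least-squares TDoA fix from range differences to sensor 0.

    Linearization: expanding ||p - s_i||^2 = (r0 + d_i)^2 and subtracting
    the i = 0 equation gives, for each i > 0,
        2 (s_i - s_0) . p + 2 d_i r0 = |s_i|^2 - |s_0|^2 - d_i^2,
    which is linear in the target p and the reference range r0.
    """
    s0 = sensors[0]
    A, b = [], []
    for s_i, d_i in zip(sensors[1:], range_diffs[1:]):
        A.append(np.concatenate([2.0 * (s_i - s0), [2.0 * d_i]]))
        b.append(s_i @ s_i - s0 @ s0 - d_i**2)
    sol, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return sol[:-1]                 # drop the auxiliary r0 estimate

sensors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
target = np.array([3.0, 4.0])
ranges = np.linalg.norm(sensors - target, axis=1)
est = tdoa_wls(sensors, ranges - ranges[0])   # noise-free range differences
```

With noisy measurements the solution is only approximate, which is why the abstract's hybrid method uses it to bound the firefly search region rather than as the final answer.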

Open Access Article
A Weak Selection Stochastic Gradient Matching Pursuit Algorithm
Sensors 2019, 19(10), 2343; https://doi.org/10.3390/s19102343
Received: 20 April 2019 / Revised: 15 May 2019 / Accepted: 15 May 2019 / Published: 21 May 2019
Abstract
In the existing stochastic gradient matching pursuit algorithm, the preliminary atomic set includes atoms that do not fully match the original signal. This weakens the reconstruction capability and increases the computational complexity. To solve these two problems, a new method is proposed. Firstly, a weak selection threshold method is used to select the atoms that best match the original signal: if an absolute gradient coefficient is greater than the product of the maximum absolute gradient coefficient and a threshold set by experiment, the corresponding atom is selected as a preliminary atom. Secondly, if the scale of the current candidate atomic set equals that of the previous support atomic set, the loop is exited; otherwise, it continues. Finally, before the transition estimate of the original signal is calculated, the method checks whether the number of columns of the candidate atomic set is smaller than the number of rows of the measurement matrix. If this condition is satisfied, the current candidate atomic set is taken as the support atomic set and the loop continues; otherwise, the loop is exited. The simulation results show that the proposed method has better reconstruction performance than stochastic gradient algorithms when the original signal is a one-dimensional sparse signal, a two-dimensional image, or a low-rank matrix. Full article
(This article belongs to the Special Issue Sensor Signal and Information Processing II)
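The weak selection rule described in the abstract — keep atoms whose absolute gradient coefficient exceeds a set fraction of the maximum — reduces to a one-line test. The threshold value below is illustrative, not the paper's experimentally tuned one.

```python
import numpy as np

def weak_select(gradient, mu=0.5):
    """Return indices of preliminary atoms: those whose |gradient
    coefficient| is at least mu times the largest one.

    mu is the weak selection threshold; the paper tunes it by
    experiment, 0.5 here is only an illustrative value."""
    g = np.abs(gradient)
    return np.flatnonzero(g >= mu * g.max())

g = np.array([0.1, -0.9, 0.5, 0.05, 0.8])
atoms = weak_select(g, mu=0.5)
```

Here the rule keeps the three atoms whose coefficients reach at least half of 0.9 and discards the two weak ones, shrinking the preliminary atomic set before the more expensive pursuit iterations.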

Open Access Article
Variational Bayesian Based Adaptive Shifted Rayleigh Filter for Bearings-Only Tracking in Clutters
Sensors 2019, 19(7), 1512; https://doi.org/10.3390/s19071512
Received: 11 February 2019 / Revised: 23 March 2019 / Accepted: 26 March 2019 / Published: 28 March 2019
Cited by 2
Abstract
This paper considers bearings-only target tracking in clutter with uncertain clutter probability. The traditional shifted Rayleigh filter (SRF), which assumes a known clutter probability, may have degraded performance in challenging scenarios. To improve the tracking performance, a variational Bayesian-based adaptive shifted Rayleigh filter (VB-SRF) is proposed in this paper. The target state and the clutter probability are jointly estimated to account for the uncertainty in clutter probability. Performance of the proposed filter is evaluated by comparison with the SRF and probability data association (PDA)-based filters in two scenarios. Simulation results show that the proposed VB-SRF algorithm outperforms the traditional SRF and PDA-based filters, especially in complex adverse scenarios, in terms of track continuity, track accuracy, and robustness, at slightly higher computational complexity. Full article
(This article belongs to the Special Issue Sensor Signal and Information Processing II)

Open Access Article
Detail Preserved Surface Reconstruction from Point Cloud
Sensors 2019, 19(6), 1278; https://doi.org/10.3390/s19061278
Received: 18 February 2019 / Revised: 9 March 2019 / Accepted: 11 March 2019 / Published: 13 March 2019
Abstract
In this paper, we put forward a new method for surface reconstruction from image-based point clouds. In particular, we introduce a new visibility model for each line of sight to preserve scene details without decreasing the noise filtering ability. To make the proposed method suitable for point clouds with heavy noise, we introduce a new likelihood energy term into the total energy of the binary labeling problem of Delaunay tetrahedra, and we give its s-t graph implementation. In addition, we further improve the performance of the proposed method with the dense visibility technique, which helps to keep object edges sharp. The experimental results show that the proposed method rivalled the state-of-the-art methods in terms of accuracy and completeness, and performed better with respect to detail preservation. Full article
(This article belongs to the Special Issue Sensor Signal and Information Processing II)

Open Access Article
Collision Detection and Identification on Robot Manipulators Based on Vibration Analysis
Sensors 2019, 19(5), 1080; https://doi.org/10.3390/s19051080
Received: 25 December 2018 / Revised: 11 February 2019 / Accepted: 27 February 2019 / Published: 3 March 2019
Abstract
Robot manipulators should be able to quickly detect collisions to limit damage due to physical contact. Traditional model-based detection methods in robotics mainly focus on the difference between the estimated and actual applied torque. In this paper, a model-independent collision detection method is presented, based on the vibration features generated by collisions. Firstly, the natural frequencies and vibration modal features of the manipulator under collisions are extracted with illustrative examples. Then, a peak-frequency-based method is developed for the estimation of the vibration modes along the manipulator structure. The vibration modal features are used to construct and train an artificial neural network for the collision detection task. Furthermore, the proposed networks also provide the location and direction of the contact. The experimental results show the validity of the collision detection and identification scheme, and that it can achieve considerable accuracy. Full article
(This article belongs to the Special Issue Sensor Signal and Information Processing II)
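The peak-frequency step can be sketched as a simple FFT peak pick on a vibration trace. The signal below is a synthetic ring-down; the modal-feature extraction and neural network classifier of the paper are omitted.

```python
import numpy as np

def peak_frequency(signal, fs):
    """Return the dominant frequency (Hz) of a real-valued signal
    via the magnitude spectrum, ignoring the DC bin."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return freqs[1 + np.argmax(spectrum[1:])]

# Synthetic "collision ring-down": a decaying 120 Hz oscillation,
# standing in for an accelerometer trace on the manipulator link.
fs = 2000.0
t = np.arange(0, 1.0, 1.0 / fs)
vib = np.exp(-5.0 * t) * np.sin(2 * np.pi * 120.0 * t)
f0 = peak_frequency(vib, fs)
```

In the paper's pipeline, peaks like this one (per sensor location) feed the modal features used to train the collision classifier.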

Open Access Article
DOA Estimation and Self-Calibration under Unknown Mutual Coupling
Sensors 2019, 19(4), 978; https://doi.org/10.3390/s19040978
Received: 22 January 2019 / Revised: 18 February 2019 / Accepted: 21 February 2019 / Published: 25 February 2019
Abstract
In practical applications, the assumption of omnidirectional elements does not hold in general, which leads to direction-dependent mutual coupling (MC). Under this condition, the performance of traditional calibration algorithms suffers. This paper proposes a new self-calibration method based on time-frequency distributions (TFDs) in the presence of direction-dependent MC. Firstly, the time-frequency (TF) transformation is used to calculate the space-time-frequency distribution (STFD) matrix of the received signals. After that, the steering vector and the corresponding noise subspace are estimated through noise removal, single-source TF point extraction, and clustering. Then, according to the transformation relationship between the MC coefficients, the steering vector, and the MC matrix, we deduce a set of linear equations. Finally, with a two-step alternating iteration, the equations are solved by the least-squares method to estimate the DOA and MC coefficients. Simulation results show that the proposed algorithm achieves direction-dependent MC self-calibration and outperforms existing algorithms. Full article
(This article belongs to the Special Issue Sensor Signal and Information Processing II)

Open Access Article
Improved Bound Fit Algorithm for Fine Delay Scheduling in a Multi-Group Scan of Ultrasonic Phased Arrays
Sensors 2019, 19(4), 906; https://doi.org/10.3390/s19040906
Received: 19 December 2018 / Revised: 15 February 2019 / Accepted: 16 February 2019 / Published: 21 February 2019
Cited by 1
Abstract
Multi-group scanning of ultrasonic phased arrays (UPAs) is a research field in distributed sensor technology. Interpolation filters intended for fine delay modules can provide high-accuracy time delays during the multi-group scanning of large-number-array elements in UPA instruments. However, increasing focus precision requires a large increase in the number of fine delay modules. In this paper, an architecture with fine delay modules for time division scheduling is explained in detail. An improved bound fit (IBF) algorithm is proposed, and an analysis of its mathematical model and time complexity is provided. The IBF algorithm was verified by experiment, wherein the performances of the list, longest processing time, bound fit, and IBF algorithms were compared in terms of frame data scheduling in the multi-group scan. The experimental results show that the scheduling algorithm decreased the makespan by 8.76–21.48% and achieved a frame rate of 78 fps. The architecture reduced resource consumption by 30–40%. Therefore, the proposed architecture, model, and algorithm can reduce makespan, improve real-time performance, and decrease resource consumption. Full article
(This article belongs to the Special Issue Sensor Signal and Information Processing II)

Open Access Article
A Novel Method for Early Gear Pitting Fault Diagnosis Using Stacked SAE and GBRBM
Sensors 2019, 19(4), 758; https://doi.org/10.3390/s19040758
Received: 16 January 2019 / Revised: 3 February 2019 / Accepted: 9 February 2019 / Published: 13 February 2019
Abstract
Research on data-driven fault diagnosis methods has received much attention in recent years. The deep belief network (DBN) is a commonly used deep learning method for fault diagnosis. In the past, when DBNs were used to diagnose gear pitting faults, it was found that the diagnosis results were not good when continuous time-domain vibration signals were used as direct inputs. Therefore, most researchers extracted features from time-domain vibration signals as inputs to the DBN. However, it is desirable to use raw vibration signals as direct inputs and still achieve good fault diagnosis results. Therefore, this paper proposes a novel method that stacks a sparse autoencoder (SAE) and a Gauss-Binary restricted Boltzmann machine (GBRBM) for early gear pitting fault diagnosis with raw vibration signals as direct inputs. The SAE layer is used to compress the raw vibration data, and the GBRBM layer is used to effectively process continuous time-domain vibration signals. Vibration signals of seven early gear pitting faults collected from a gear test rig are used to validate the proposed method. The validation results show that the proposed method maintains good diagnosis performance under different working conditions and gives higher diagnosis accuracy compared to other traditional methods. Full article
(This article belongs to the Special Issue Sensor Signal and Information Processing II)

Open Access Article
Ultrasonic Flaw Echo Enhancement Based on Empirical Mode Decomposition
Sensors 2019, 19(2), 236; https://doi.org/10.3390/s19020236
Received: 3 December 2018 / Revised: 29 December 2018 / Accepted: 6 January 2019 / Published: 9 January 2019
Abstract
The detection of flaw echoes in backscattered signals in ultrasonic nondestructive testing can be challenging due to the existence of backscattering noise and electronic noise. In this article, an empirical mode decomposition (EMD) methodology is proposed for flaw echo enhancement. The backscattered signal was first decomposed into several intrinsic mode functions (IMFs) using EMD or ensemble EMD (EEMD). The sample entropies (SampEn) of all IMFs were used to select the relevant modes. Otsu’s method was used for interval thresholding of the first relevant mode, and a window was used to separate the flaw echoes in the relevant modes. The flaw echo was reconstructed by adding the residue and the separated flaw echoes. The established methodology was successfully employed for simulated signal and experimental signal processing. For the simulated signals, an improvement of 9.42 dB in the signal-to-noise ratio (SNR) and an improvement of 0.0099 in the modified correlation coefficient (MCC) were achieved. For experimental signals obtained from two cracks at different depths, the flaw echoes were also significantly enhanced. Full article
(This article belongs to the Special Issue Sensor Signal and Information Processing II)
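The sample entropy (SampEn) used above to rank IMF relevance can be sketched as follows — a naive O(N²) implementation; the EMD/EEMD decomposition itself would require a dedicated library and is omitted. Regular signals score low and noise-like signals score high, which is what makes SampEn useful for separating signal-bearing IMFs from noise-dominated ones.

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """Naive sample entropy: -log of the ratio of (m+1)-length to
    m-length template matches within tolerance r (Chebyshev distance).
    Quadratic in len(x); fine for short IMFs."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()          # conventional tolerance choice
    def count(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        c = 0
        for i in range(len(templates)):
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            c += np.count_nonzero(d <= r)
        return c
    B, A = count(m), count(m + 1)
    return -np.log(A / B)

rng = np.random.default_rng(1)
t = np.linspace(0, 4 * np.pi, 400)
regular = np.sin(t)                 # structured: low SampEn
noisy = rng.standard_normal(400)    # white noise: high SampEn
```

An IMF-selection rule in the spirit of the paper would keep the modes whose SampEn falls below some threshold and treat the rest as noise.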

Open Access Article
Adaptive Multiclass Mahalanobis Taguchi System for Bearing Fault Diagnosis under Variable Conditions
Sensors 2019, 19(1), 26; https://doi.org/10.3390/s19010026
Received: 16 November 2018 / Revised: 18 December 2018 / Accepted: 19 December 2018 / Published: 21 December 2018
Abstract
Bearings are vital components in industrial machines. Diagnosing faults of rolling element bearings and ensuring normal operation is essential. However, faults of rolling element bearings under variable conditions and adaptive feature selection have rarely been discussed until now. Thus, it is essential to develop a practicable method to handle faults under variable conditions. Considering these issues, this paper builds on the Mahalanobis Taguchi System (MTS) and overcomes two of its shortcomings: (1) MTS is an effective tool to classify faults and has strong robustness to operating conditions, but it can only handle binary classification problems; this paper constructs a multiclass measurement scale to deal with multi-classification problems. (2) MTS can determine important features but uses a hard threshold to select them; this paper selects a proper feature sequence instead of a threshold to overcome the limited adaptivity of the threshold configuration for signal-to-noise gain. Hence, a novel method named the adaptive multiclass Mahalanobis Taguchi system (aMMTS) is proposed, in conjunction with variational mode decomposition (VMD) and singular value decomposition (SVD), and is employed to diagnose faults under variable conditions. Finally, the method is verified using signal data collected from the Case Western Reserve University Bearing Data Center. The results show that it is accurate for bearing fault diagnosis under variable conditions. Full article
(This article belongs to the Special Issue Sensor Signal and Information Processing II)
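The multiclass measurement-scale idea — one Mahalanobis space per fault class, with assignment by smallest distance — can be sketched as below. The VMD/SVD feature extraction and the signal-to-noise-based feature selection of aMMTS are omitted; the features and class layout here are synthetic.

```python
import numpy as np

class MahalanobisSpace:
    """Mahalanobis distance to one class, fitted from its samples."""
    def __init__(self, samples):
        samples = np.asarray(samples, dtype=float)
        self.mean = samples.mean(axis=0)
        self.inv_cov = np.linalg.inv(np.cov(samples, rowvar=False))
    def distance(self, x):
        d = np.asarray(x, dtype=float) - self.mean
        return float(np.sqrt(d @ self.inv_cov @ d))

def classify(x, spaces):
    """Multiclass rule: the class whose Mahalanobis space is nearest."""
    return min(spaces, key=lambda label: spaces[label].distance(x))

# Synthetic 2-D "features" for three hypothetical bearing states.
rng = np.random.default_rng(2)
spaces = {
    "normal": MahalanobisSpace(rng.normal([0, 0], 0.5, (200, 2))),
    "inner_race": MahalanobisSpace(rng.normal([3, 0], 0.5, (200, 2))),
    "outer_race": MahalanobisSpace(rng.normal([0, 3], 0.5, (200, 2))),
}
```

Classical MTS builds a single space from the normal group and thresholds the distance; the multiclass extension above instead compares distances across per-class spaces.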

Open Access Article
Integration of Terrestrial Laser Scanning and NURBS Modeling for the Deformation Monitoring of an Earth-Rock Dam
Sensors 2019, 19(1), 22; https://doi.org/10.3390/s19010022
Received: 9 November 2018 / Revised: 17 December 2018 / Accepted: 19 December 2018 / Published: 21 December 2018
Cited by 2
Abstract
A complete picture of the deformation characteristics (distribution and evolution) of geotechnical infrastructures serves as superior information for understanding their potential instability mechanisms. How to monitor the deformation of these infrastructures (whether artificial or natural) in the field more completely, accurately, and expediently remains an open scientific question. Conventional deformation monitoring methods are mostly carried out at a limited number of discrete points and cannot acquire deformation data for the whole structure. In this paper, a new monitoring methodology for dam deformation and the associated interpretation of results is presented by taking advantage of terrestrial laser scanning (TLS), which, in contrast with most conventional methods, is capable of capturing geometric information at a huge number of points over an object in a relatively fast manner. By employing non-uniform rational B-splines (NURBS) technology, high-spatial-resolution models of the monitored geotechnical objects can be created with sufficient accuracy from the point cloud data obtained from the TLS. Finally, the deformation characteristics to which the geotechnical infrastructures have been subjected are interpreted more completely according to the models created from a series of consecutive monitoring exercises at different times. The present methodology is applied to the Changheba earth-rock dam, which allows the visualization of deformation over the entire dam during different periods. Results from analysis of the surface deformation distribution show that the surface deformations in the middle are generally larger than those on both sides near the bank, and the deformations increase with elevation. The results from the present application highlight that the application of TLS and NURBS technology permits a better understanding of the deformation behavior of large geotechnical objects in the field. Full article
(This article belongs to the Special Issue Sensor Signal and Information Processing II)

Open AccessArticle
Efficient Fiducial Point Detection of ECG QRS Complex Based on Polygonal Approximation
Sensors 2018, 18(12), 4502; https://doi.org/10.3390/s18124502
Received: 20 November 2018 / Revised: 10 December 2018 / Accepted: 13 December 2018 / Published: 19 December 2018
PDF Full-text (1948 KB) | HTML Full-text | XML Full-text
Abstract
Electrocardiogram signal analysis is based on detecting fiducial points consisting of the onset, offset, and peak of each waveform. The accurate diagnosis of arrhythmias depends on the accuracy of fiducial point detection. Detecting the onset and offset fiducial points is ambiguous because their feature values are similar to those of the surrounding samples. To improve detection accuracy, the signal is represented by a small number of vertices through a curvature-based vertex selection technique using polygonal approximation. The proposed method minimizes the number of candidate samples for fiducial point detection and emphasizes these samples' feature values to enable reliable detection. It is also sensitive to the morphological changes of various QRS complexes because it generates an accumulated signal of the amplitude change rate between vertices as an auxiliary signal. To verify the superiority of the proposed algorithm, the error distribution is measured through comparison with the QT-DB annotations provided by PhysioNet. The mean and standard deviation of the onset and offset errors were stable at 4.02 ± 7.99 ms and 5.45 ± 8.04 ms, respectively. The results show that the proposed method, using a small number of vertices, is acceptable for practical applications. We also confirmed the method's effectiveness through clustering of the QRS complexes. Experiments on the arrhythmia data of the MIT-BIH ADB confirmed reliable fiducial point detection for various types of QRS complexes. Full article
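The core idea of representing a waveform by a few salient vertices can be sketched with a generic polygonal approximation. The snippet below uses the classic Douglas–Peucker simplification as a stand-in (an assumption for illustration; the paper's actual curvature-based vertex selection is not reproduced here) on a toy QRS-like spike:

```python
import numpy as np

def douglas_peucker(points, epsilon):
    """Recursively simplify a polyline, keeping vertices whose
    perpendicular distance from the chord exceeds `epsilon`."""
    points = np.asarray(points, dtype=float)
    if len(points) < 3:
        return points
    start, end = points[0], points[-1]
    chord = end - start
    norm = np.hypot(chord[0], chord[1])
    if norm == 0:
        dists = np.hypot(points[:, 0] - start[0], points[:, 1] - start[1])
    else:
        dists = np.abs(chord[0] * (points[:, 1] - start[1])
                       - chord[1] * (points[:, 0] - start[0])) / norm
    idx = int(np.argmax(dists))
    if dists[idx] > epsilon:
        left = douglas_peucker(points[: idx + 1], epsilon)
        right = douglas_peucker(points[idx:], epsilon)
        return np.vstack([left[:-1], right])
    return np.vstack([start, end])

# toy "ECG-like" trace: flat baseline with a sharp QRS-like spike
t = np.arange(20, dtype=float)
x = np.zeros_like(t)
x[9], x[10], x[11] = 2.0, 10.0, 1.5
verts = douglas_peucker(np.column_stack([t, x]), epsilon=0.5)
print(len(verts), "vertices kept out of", len(t))
```

Only the vertices around the spike and the segment endpoints survive, which is exactly the property that lets onset/offset candidates stand out among far fewer samples.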
(This article belongs to the Special Issue Sensor Signal and Information Processing II)

Open AccessArticle
A Regularized Weighted Smoothed L0 Norm Minimization Method for Underdetermined Blind Source Separation
Sensors 2018, 18(12), 4260; https://doi.org/10.3390/s18124260
Received: 22 October 2018 / Revised: 28 November 2018 / Accepted: 30 November 2018 / Published: 4 December 2018
Cited by 5 | PDF Full-text (30948 KB) | HTML Full-text | XML Full-text
Abstract
Compressed sensing (CS) theory has attracted widespread attention in recent years and has been widely used in signal and image processing, for example in underdetermined blind source separation (UBSS) and magnetic resonance imaging (MRI). As the main step of CS, the goal of sparse signal reconstruction is to recover the original signal accurately and efficiently from an underdetermined linear system of equations (ULSE). For this problem, we propose a new algorithm called the weighted regularized smoothed L 0 -norm minimization algorithm (WReSL0). Within this framework, we make three contributions: (1) a new smoothed function called the compound inverse proportional function (CIPF); (2) a new weighted function; and (3) a new regularization form, derived and constructed to enhance de-noising performance. In this algorithm, the weighted function and the new smoothed function are combined as the sparsity-promoting objective. Performance simulation experiments on both real signals and real images show that the proposed WReSL0 algorithm outperforms other popular approaches, such as SL0, BPDN, NSL0, and L p -RLS, and achieves better performance when used for UBSS. Full article
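For context, the SL0 baseline that WReSL0 builds on replaces the L0 "norm" with a smooth Gaussian surrogate and alternates gradient steps with projection onto the constraint set {x : Ax = b}. The sketch below implements plain SL0 (not the paper's CIPF/weighted/regularized variant) on a synthetic sparse-recovery problem:

```python
import numpy as np

def sl0(A, b, sigma_min=1e-3, sigma_decay=0.7, mu=2.0, inner_iters=3):
    """Smoothed-L0 sparse recovery for Ax = b (underdetermined A).
    Classic SL0 baseline, not the WReSL0 variant from the paper."""
    A_pinv = np.linalg.pinv(A)
    x = A_pinv @ b                       # minimum-L2-norm feasible start
    sigma = 2.0 * np.max(np.abs(x))
    while sigma > sigma_min:
        for _ in range(inner_iters):
            # gradient step on the Gaussian-smoothed sparsity measure
            delta = x * np.exp(-x**2 / (2 * sigma**2))
            x = x - mu * delta
            # project back onto the feasible set {x : Ax = b}
            x = x - A_pinv @ (A @ x - b)
        sigma *= sigma_decay             # gradually sharpen the surrogate
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))        # 20 measurements, 50 unknowns
x_true = np.zeros(50)
x_true[[3, 17, 40]] = [1.0, -2.0, 0.5]   # 3-sparse ground truth
b = A @ x_true
x_hat = sl0(A, b)
print(np.max(np.abs(x_hat - x_true)))    # small recovery error
```

The gradual decrease of sigma is what lets the smooth surrogate approach the true L0 count without getting trapped; WReSL0's weighted function and regularization term slot into the same loop.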
(This article belongs to the Special Issue Sensor Signal and Information Processing II)

Open AccessArticle
Less Data Same Information for Event-Based Sensors: A Bioinspired Filtering and Data Reduction Algorithm
Sensors 2018, 18(12), 4122; https://doi.org/10.3390/s18124122
Received: 18 September 2018 / Revised: 21 November 2018 / Accepted: 22 November 2018 / Published: 24 November 2018
Cited by 2 | PDF Full-text (978 KB) | HTML Full-text | XML Full-text
Abstract
Sensors provide data that need to be processed after acquisition to remove noise and extract relevant information. When the sensor is a network node and the acquired data are to be transmitted to other nodes (e.g., over Ethernet), the data generated by multiple nodes can overload the communication channel. Reducing the generated data allows lower hardware requirements and lower power consumption for the hardware devices. This work proposes a filtering algorithm (LDSI—Less Data Same Information) which reduces the data generated by event-based sensors without loss of relevant information. It is a bioinspired filter, i.e., event data are processed using a structure resembling biological neuronal information processing. The filter is fully configurable, from a “transparent mode” to a very restrictive mode. Based on an analysis of the configuration parameters, three main configurations are given: weak, medium, and restrictive. Using data from a DVS event camera, results for a similarity detection algorithm show that the event data can be reduced by up to 30% while maintaining the same similarity index as the unfiltered data. Data reduction can reach 85% with a 15% penalty in similarity index compared to the original data. An object tracking algorithm was also used to compare the proposed filter with an existing filter: the LDSI filter yields a lower error (4.86 ± 1.87) than the background activity filter (5.01 ± 1.93). The algorithm was tested on a PC using pre-recorded datasets, and an FPGA implementation was also carried out. A Xilinx Virtex-6 FPGA received data from a 128 × 128 DVS camera, applied the LDSI algorithm, created an AER dataflow, and sent the data to the PC for analysis and visualization. The FPGA runs at a 177 MHz clock with very low resource usage (671 LUTs and 40 block RAMs for the whole system), demonstrating real-time operation capability.
The results show that, with adequate filter parameter tuning, the relevant information in the scene is kept while fewer events (i.e., fewer data) are generated. Full article
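The background activity filter used above as the comparison baseline is commonly described as follows: an event passes only if a spatial neighbour produced an event within a short time window, so temporally isolated (noise) events are dropped. A minimal sketch from that usual description (an assumption; not the authors' code, and not the LDSI algorithm itself):

```python
import numpy as np

def background_activity_filter(events, dt=10_000, size=(128, 128)):
    """Keep an event only if one of its 8 spatial neighbours fired
    within `dt` microseconds; isolated noise events are dropped."""
    last_ts = np.full(size, -np.inf)     # most recent timestamp per pixel
    kept = []
    for t, x, y, p in events:            # (timestamp_us, x, y, polarity)
        x0, x1 = max(x - 1, 0), min(x + 2, size[0])
        y0, y1 = max(y - 1, 0), min(y + 2, size[1])
        support = (t - last_ts[x0:x1, y0:y1] <= dt).sum()
        # the pixel itself lies inside the window; require a true neighbour
        if support - (t - last_ts[x, y] <= dt) >= 1:
            kept.append((t, x, y, p))
        last_ts[x, y] = t
    return kept

# two correlated events at adjacent pixels; one isolated event far away
events = [(100, 10, 10, 1), (150, 11, 10, 1), (200, 90, 90, 0)]
print(len(background_activity_filter(events)))  # prints 1
```

Only the second event survives: it has a fresh neighbour at (10, 10), while the first event arrives before any neighbour has fired and the third is spatially isolated.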
(This article belongs to the Special Issue Sensor Signal and Information Processing II)

Open AccessArticle
An Intelligent Fault Diagnosis Method for Bearings with Variable Rotating Speed Based on Pythagorean Spatial Pyramid Pooling CNN
Sensors 2018, 18(11), 3857; https://doi.org/10.3390/s18113857
Received: 16 October 2018 / Revised: 4 November 2018 / Accepted: 7 November 2018 / Published: 9 November 2018
Cited by 3 | PDF Full-text (2369 KB) | HTML Full-text | XML Full-text
Abstract
Deep learning methods have been introduced for the fault diagnosis of rotating machinery. Most methods perform well when processing bearing data collected at a fixed rotating speed. However, most rotating machinery in industrial practice operates at variable speed, and when processing bearing data with variable rotating speed, the existing methods either have low accuracy or need complex parameter adjustments. To solve this problem, a fault diagnosis method based on continuous wavelet transform scalograms (CWTS) and a Pythagorean spatial pyramid pooling convolutional neural network (PSPP-CNN) is proposed in this paper. In this method, the continuous wavelet transform is used to decompose vibration signals into CWTS with different scale ranges according to the rotating speed. By adding a PSPP layer, the CNN can process CWTS of different sizes, so fault diagnosis for variable-speed bearings can be carried out by a single CNN model without complex parameter adjustment. Compared with the spatial pyramid pooling (SPP) layer previously used in CNNs, the PSPP layer is located at the front of the CNN; thus, the features it obtains can be delivered to the convolutional layers for further feature extraction. According to the experimental results, this method achieves higher diagnosis accuracy for variable-speed bearings than other methods. In addition, a PSPP-CNN model trained with data at some rotating speeds can be used to diagnose bearing faults across the full working speed range. Full article
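The first stage of the pipeline, turning a vibration signal into a CWT scalogram whose scale range can be chosen per rotating speed, can be sketched with a NumPy-only Morlet transform (the wavelet choice and scale range here are illustrative assumptions, not the paper's exact settings):

```python
import numpy as np

def morlet(scale, w0=6.0):
    """Real part of a Morlet wavelet sampled at integer points."""
    n = int(10 * scale)
    t = (np.arange(n) - n / 2) / scale
    return np.cos(w0 * t) * np.exp(-t**2 / 2) / np.sqrt(scale)

def cwt_scalogram(x, scales):
    """|CWT| image: one row per scale, columns aligned with time."""
    return np.array([
        np.abs(np.convolve(x, morlet(s), mode="same")) for s in scales
    ])

fs = 1000
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 50 * t)   # 50 Hz tone as a stand-in vibration signal
scales = np.arange(2, 64)        # scale range would follow rotating speed
scalogram = cwt_scalogram(x, scales)
print(scalogram.shape)           # prints (62, 1000)
```

Choosing a different `scales` array per rotating speed yields scalogram images of different heights, which is precisely the varying input size the PSPP layer is designed to absorb before the convolutional stack.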
(This article belongs to the Special Issue Sensor Signal and Information Processing II)

Open AccessArticle
Self-Adaptive Spectrum Analysis Based Bearing Fault Diagnosis
Sensors 2018, 18(10), 3312; https://doi.org/10.3390/s18103312
Received: 16 September 2018 / Revised: 30 September 2018 / Accepted: 1 October 2018 / Published: 2 October 2018
Cited by 1 | PDF Full-text (2701 KB) | HTML Full-text | XML Full-text
Abstract
Bearings are critical parts of rotating machines, making signal-based bearing fault diagnosis a long-standing research focus. In real application scenarios, bearing signals are normally non-linear and non-stationary, and thus difficult to analyze in the time or frequency domain alone. Meanwhile, fault feature vectors conventionally extracted with fixed dimensions may cause insufficiency or redundancy of diagnostic information and result in poor diagnostic performance. In this paper, Self-adaptive Spectrum Analysis (SSA) and an SSA-based diagnosis framework are proposed to solve these problems. Firstly, signals are decomposed into components that are easier to analyze. Then, SSA is developed to extract fault features adaptively and construct feature vectors of non-fixed dimension. Finally, a Support Vector Machine (SVM) is applied to classify the different fault features. Data collected under different working conditions are selected for the experiments. The results show that the diagnosis method based on the proposed framework performs better. In conclusion, combined with signal decomposition methods, the proposed SSA method achieves higher reliability and robustness than the other tested feature extraction methods, and the SSA-based diagnosis methods achieve higher accuracy and stability under different working conditions with different sample division schemes. Full article
(This article belongs to the Special Issue Sensor Signal and Information Processing II)

Open AccessArticle
A Non-Linear Filtering Algorithm Based on Alpha-Divergence Minimization
Sensors 2018, 18(10), 3217; https://doi.org/10.3390/s18103217
Received: 25 August 2018 / Revised: 15 September 2018 / Accepted: 16 September 2018 / Published: 24 September 2018
PDF Full-text (405 KB) | HTML Full-text | XML Full-text
Abstract
A non-linear filtering algorithm based on the alpha-divergence is proposed, which uses an exponential family distribution to approximate the actual state distribution and the alpha-divergence to measure the degree of approximation between the two distributions; adjusting the value of α during the update of the state and measurement equations of the non-linear dynamic system thus provides more choices for the similarity measure. Firstly, an α -mixed probability density function satisfying the normalization condition is defined, and the properties of its mean and variance are analyzed when the probability density functions p ( x ) and q ( x ) are one-dimensional normal distributions. Secondly, a sufficient condition for the alpha-divergence to attain its minimum is proven: when α → 1 , the expectations of the natural statistic vector under the exponential family distribution equal those under the α -mixed probability state density function. Finally, this conclusion is applied to non-linear filtering, and the non-linear filtering algorithm based on alpha-divergence minimization is proposed, providing more non-linear processing strategies for non-linear filtering. The algorithm’s validity is verified by experimental results, and a better filtering effect is achieved for non-linear filtering by adjusting the value of α . Full article
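For reference, the alpha-divergence between two densities is commonly written (in Amari's parameterization; the paper's exact convention is not reproduced in the abstract) as

```latex
D_\alpha(p \,\|\, q) =
  \frac{1}{\alpha(1-\alpha)}
  \left( 1 - \int p(x)^{\alpha}\, q(x)^{1-\alpha}\, dx \right),
\qquad \alpha \in \mathbb{R} \setminus \{0, 1\},
```

with the limits $\alpha \to 1$ recovering $\mathrm{KL}(p \,\|\, q)$ and $\alpha \to 0$ recovering $\mathrm{KL}(q \,\|\, p)$, which is why tuning α interpolates between different similarity measures in the filter update.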
(This article belongs to the Special Issue Sensor Signal and Information Processing II)

Review

Jump to: Research, Other

Open AccessReview
Mathematical Methods and Algorithms for Improving Near-Infrared Tunable Diode-Laser Absorption Spectroscopy
Sensors 2018, 18(12), 4295; https://doi.org/10.3390/s18124295
Received: 22 October 2018 / Revised: 26 November 2018 / Accepted: 27 November 2018 / Published: 6 December 2018
Cited by 1 | PDF Full-text (5030 KB) | HTML Full-text | XML Full-text
Abstract
Tunable diode laser absorption spectroscopy (TDLAS) has been widely applied to gaseous component analysis based on gas molecular absorption spectroscopy. When dealing with molecular absorption signals, the desired signal is usually corrupted by various noises from electronic components and optical paths. This paper introduces TDLAS-specific signal processing issues and summarizes effective algorithms to solve them. Full article
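One denoising technique frequently applied to absorption lineshapes is Savitzky–Golay smoothing, which fits a local polynomial and therefore suppresses noise without flattening the peak. The snippet below is an illustrative sketch on a synthetic Lorentzian line (the data and parameters are assumptions, not taken from the review):

```python
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(1)
x = np.linspace(-1, 1, 400)
# Lorentzian-shaped absorption line (illustrative, not real TDLAS data)
clean = 1.0 / (1.0 + (x / 0.08) ** 2)
noisy = clean + 0.05 * rng.standard_normal(x.size)

# Savitzky-Golay smoothing: local cubic fit over a 21-sample window
smoothed = savgol_filter(noisy, window_length=21, polyorder=3)
print(np.std(noisy - clean), np.std(smoothed - clean))
```

The residual against the clean line shrinks after smoothing; the window length must stay comfortably below the linewidth, or the filter starts biasing the peak height it is supposed to preserve.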
(This article belongs to the Special Issue Sensor Signal and Information Processing II)

Other

Jump to: Research, Review

Open AccessLetter
A Switched-Element System Based Direction of Arrival (DOA) Estimation Method for Un-Cooperative Wideband Orthogonal Frequency Division Multi Linear Frequency Modulation (OFDM-LFM) Radar Signals
Sensors 2019, 19(1), 132; https://doi.org/10.3390/s19010132
Received: 4 December 2018 / Revised: 28 December 2018 / Accepted: 28 December 2018 / Published: 2 January 2019
PDF Full-text (527 KB) | HTML Full-text | XML Full-text
Abstract
This paper proposes a switched-element direction finding (SEDF) system based Direction of Arrival (DOA) estimation method for un-cooperative wideband Orthogonal Frequency Division Multi Linear Frequency Modulation (OFDM-LFM) radar signals. The method is designed to address the problem that most DOA algorithms require many channels and considerable computational resources to perform direction finding on wideband signals. An iterative spatial parameter estimator is then designed by deriving the analytical steering vector of the OFDM-LFM signal intercepted by the SEDF system, which remarkably mitigates the dispersion effect caused by the high chirp rate. Finally, the algorithm flow and numerical simulations are given to corroborate the feasibility and validity of the proposed DOA method. Full article
(This article belongs to the Special Issue Sensor Signal and Information Processing II)

Open AccessLetter
Iterative High-Accuracy Parameter Estimation of Uncooperative OFDM-LFM Radar Signals Based on FrFT and Fractional Autocorrelation Interpolation
Sensors 2018, 18(10), 3550; https://doi.org/10.3390/s18103550
Received: 3 September 2018 / Revised: 3 October 2018 / Accepted: 17 October 2018 / Published: 19 October 2018
Cited by 1 | PDF Full-text (295 KB) | HTML Full-text | XML Full-text
Abstract
To improve the parameter estimation performance for uncooperative Orthogonal Frequency Division Multi- (OFDM) Linear Frequency Modulation (LFM) radar signals, this paper proposes an iterative high-accuracy method based on the Fractional Fourier Transform (FrFT) and Fractional Autocorrelation (FA) interpolation. Two iterative estimators, for the rotation angle and the center frequencies, are derived from the analytical formulation of the OFDM-LFM signal. Both estimators work by measuring the residual between the quasi peak and the real peak in the fractional spectrum obtained from the finite sampling data. Spectral leakage caused by the multiple components of the OFDM-LFM signal is eliminated by sequentially removing the strongest coefficient from the fractional spectrum through an iterative process. The method flow is given and its superior performance is demonstrated by the simulation results. Full article
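The "quasi peak versus real peak" residual exploited above is the generic off-grid peak problem: a finite-length transform samples the spectrum on a grid, so the true maximum falls between bins. A minimal sketch of the idea on an ordinary FFT, using parabolic interpolation (an illustrative stand-in, not the paper's FA interpolation formula):

```python
import numpy as np

def parabolic_peak(mag, k):
    """Refine a spectral peak location by fitting a parabola through the
    peak bin and its two neighbours; returns a fractional bin index."""
    a, b, c = mag[k - 1], mag[k], mag[k + 1]
    return k + 0.5 * (a - c) / (a - 2 * b + c)

fs, n = 1000.0, 256
f_true = 123.4                            # deliberately off the FFT grid
t = np.arange(n) / fs
x = np.cos(2 * np.pi * f_true * t)
mag = np.abs(np.fft.rfft(x * np.hanning(n)))
k = int(np.argmax(mag))                   # quasi peak: nearest grid bin
f_est = parabolic_peak(mag, k) * fs / n   # interpolated "real" peak
print(round(f_est, 1))
```

The grid spacing here is fs/n ≈ 3.9 Hz, yet the interpolated estimate lands within a small fraction of a bin of the true frequency; the paper applies the same principle in the fractional (FrFT/FA) domain to refine chirp-rate and center-frequency estimates iteratively.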
(This article belongs to the Special Issue Sensor Signal and Information Processing II)

Sensors EISSN 1424-8220 Published by MDPI AG, Basel, Switzerland