Special Issue "Sensors Signal Processing and Visual Computing 2019"

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: closed (30 September 2019).

Special Issue Editor

Dr. Chen Chen
Guest Editor
Department of Electrical and Computer Engineering, The University of North Carolina at Charlotte, Charlotte, NC 28223
Interests: compressed sensing; signal and image processing; pattern recognition; computer vision; hyperspectral image analysis

Special Issue Information

Dear Colleagues,

Signal processing and visual computing research plays an important role in industrial and scientific applications. With the rapid advance of sensor technology, a vast and ever-growing amount of data (i.e., Big Data) in various domains and modalities is readily available, for example, videos captured by a camera network. The emergence of Big Data has brought about a paradigm shift to many fields of data analytics such as signal and image processing and computer vision, namely from handcrafted feature extraction to high level feature learning through deep learning techniques. Therefore, the primary goal of this Special Issue of Sensors is to provide the opportunity for researchers and product developers to discuss the state-of-the-art and trends of architectures, techniques and systems for signal processing and visual understanding.

Topics of Interest:

This Special Issue aims to solicit contributions reporting the most recent progress in signal processing and visual computing. The list of possible topics includes, but is not limited to, the following:

  • Speech analysis
  • Radar signal processing
  • Remote sensing image processing
  • Biomedical signal/image analysis
  • High dimensional signal processing
  • Real-time signal/image processing algorithms and architectures (e.g., FPGA, DSP, GPU)
  • Wearable sensor signal processing and its applications
  • Sensor data fusion and integration
  • Image and video processing (e.g., denoising, deblurring, super-resolution, etc.)
  • Image and video understanding (e.g., novel feature extraction, classification, semantic segmentation, object detection and recognition, action recognition, tracking, etc.)
  • Machine learning (e.g., deep learning) in signal processing and visual computing
  • Big data processing infrastructures and systems, such as cloud computing, high performance computing, Web computing

Dr. Chen Chen
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website; once registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2000 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (7 papers)


Research

Open Access Article
Multi-GPU, Multi-Node Algorithms for Acceleration of Image Reconstruction in 3D Electrical Capacitance Tomography in Heterogeneous Distributed System
Sensors 2020, 20(2), 391; https://doi.org/10.3390/s20020391 - 10 Jan 2020
Abstract
Electrical capacitance tomography (ECT) is a non-invasive visualization technique that can be used for industrial process monitoring. However, acquiring images through 3D ECT often requires performing time-consuming, complex computations on large matrices. Therefore, a new parallel approach for 3D ECT image reconstruction is proposed, based on multi-GPU, multi-node algorithms in a heterogeneous distributed system. This solution speeds up the required data processing. The paper presents a distributed measurement system with a new framework for parallel computing and a special plugin dedicated to ECT. The computing system architecture and its main features are described, and both data distribution and transmission between the computing nodes are discussed. System performance was measured using the LBP and Landweber reconstruction algorithms, which were implemented as part of the ECT plugin. Applying the framework with a new network communication layer reduced data transfer times significantly and improved the overall system efficiency.
(This article belongs to the Special Issue Sensors Signal Processing and Visual Computing 2019)
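The Landweber reconstruction mentioned in the abstract is a simple gradient-style iteration on the sensitivity matrix; a minimal NumPy sketch follows (the tiny matrix, step size, and iteration count are illustrative assumptions, not values from the paper):

```python
import numpy as np

def landweber(S, c, alpha=0.1, iters=100):
    """Iteratively solve S @ x ~= c via x <- x + alpha * S.T @ (c - S @ x)."""
    x = np.zeros(S.shape[1])
    for _ in range(iters):
        x += alpha * S.T @ (c - S @ x)
    return x

# Tiny illustrative example: recover x from capacitance-like measurements c = S @ x_true.
S = np.array([[1.0, 0.2], [0.1, 1.0], [0.5, 0.5]])
x_true = np.array([1.0, 2.0])
c = S @ x_true
x_hat = landweber(S, c, alpha=0.3, iters=500)
```

The step size must satisfy alpha < 2 / sigma_max(S)^2 for convergence; in practice the iteration is stopped early, which acts as regularization for the ill-posed ECT problem.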

Open Access Article
Exploring Inter-Instance Relationships within the Query Set for Robust Image Set Matching
Sensors 2019, 19(22), 5051; https://doi.org/10.3390/s19225051 - 19 Nov 2019
Abstract
Image set matching (ISM) has attracted increasing attention in computer vision and pattern recognition. Some studies model query and gallery sets under a joint or collaborative representation framework, achieving impressive performance. However, existing models consider only the competition and collaboration among gallery sets, neglecting the inter-instance relationships within the query set, which are also an important clue for ISM. In this paper, inter-instance relationships within the query set are explored for robust image set matching. Specifically, we propose to represent the query set instances jointly via a combined dictionary learned from the gallery sets. To exploit the commonality and variations within the query set simultaneously, both low-rank and class-level sparsity constraints are imposed on the representation coefficients. To deal with nonlinear data in real scenarios, a kernelized version is also proposed, and to tackle gross corruptions mixed into the query set, the model is extended for robust ISM. The optimization problems are solved efficiently by employing singular value thresholding and block soft thresholding operators in an alternating direction manner. Experiments on five public datasets demonstrate the effectiveness of the proposed method, which compares favorably with state-of-the-art methods.
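The singular value thresholding operator used in the optimization above is the proximal operator of the nuclear norm: each singular value is shrunk toward zero by a threshold tau, which enforces the low-rank constraint. A minimal sketch (the matrix below is illustrative, not data from the paper):

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: shrink each singular value of X by tau."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

# A matrix with singular values 3 and 0.5: thresholding at tau = 1 removes
# the weaker component entirely, leaving a rank-1 (low-rank) approximation.
X = np.diag([3.0, 0.5])
X_lr = svt(X, tau=1.0)
```

Block soft thresholding plays the analogous role for the class-level sparsity term, shrinking whole coefficient blocks rather than singular values.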

Open Access Article
Anti-Interference Deep Visual Identification Method for Fault Localization of Transformer Using a Winding Model
Sensors 2019, 19(19), 4153; https://doi.org/10.3390/s19194153 - 25 Sep 2019
Abstract
The idea of the Ubiquitous Power Internet of Things (UPIoT) accelerates the development of intelligent monitoring and diagnostic technologies. This paper proposes a diagnostic method for power equipment in an interference environment, combining a deep Convolutional Neural Network (CNN), MobileNet-V2, with Digital Image Processing (DIP) methods to perform fault identification, including fault type classification and fault localization. A data visualization approach is put forward and applied to transformer frequency response (FR) curves to obtain the dataset. After image augmentation, the dataset was used to train MobileNet-V2. A spatial-probabilistic mapping relationship was then established based on the traditional Frequency Response Analysis (FRA) fault diagnosis method, and each image in the dataset was compared with fingerprint values to obtain the traditional diagnosis results. Next, the anti-interference ability of the proposed CNN-DIP method was compared with that of the traditional method as the magnitude of the interference gradually increased. Finally, the fault tolerance of the proposed method was verified by analyzing the deviations between wrong diagnosis results and the corresponding actual labels. Experimental results show that the proposed deep visual identification (CNN-DIP) method achieves higher diagnosis accuracy, stronger anti-interference ability, and better fault tolerance.
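The method above feeds rendered frequency response curves to a CNN. The paper's exact rendering is not specified here, so the sketch below is only a plausible illustration of the idea: rasterizing a magnitude curve onto a small binary grid suitable as network input (the example curve is a hypothetical first-order roll-off, not transformer data):

```python
import numpy as np

def curve_to_image(freqs, mags, h=64, w=64):
    """Rasterize a frequency-response curve into a binary h x w image
    (hypothetical rendering: log-frequency on x, magnitude on y)."""
    img = np.zeros((h, w), dtype=np.uint8)
    logf = np.log10(freqs)
    x = np.interp(logf, [logf[0], logf[-1]], [0, w - 1]).astype(int)
    y = np.interp(mags, [mags.min(), mags.max()], [h - 1, 0]).astype(int)
    img[y, x] = 1  # mark each sampled curve point
    return img

freqs = np.logspace(3, 6, 200)                    # 1 kHz .. 1 MHz sweep
mags = 20 * np.log10(1.0 / (1.0 + freqs / 1e5))   # illustrative roll-off, in dB
img = curve_to_image(freqs, mags)
```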

Open Access Article
MLC-LSTM: Exploiting the Spatiotemporal Correlation between Multi-Level Weather Radar Echoes for Echo Sequence Extrapolation
Sensors 2019, 19(18), 3988; https://doi.org/10.3390/s19183988 - 15 Sep 2019
Cited by 1
Abstract
Weather radar echoes, the data detected by weather radar sensors, reflect the intensity of meteorological targets. Radar echo extrapolation, the prediction of future echoes from historical observations, allows approaching short-term weather conditions to be forecasted and warnings to be raised for disastrous weather. Recently, deep learning based extrapolation methods have been proposed and show significant application potential. However, existing extrapolation methods have two limitations. First, few have investigated the impact of the evolutionary process of weather systems on extrapolation accuracy. Second, current deep learning methods usually produce increasingly blurry echo predictions as extrapolation goes deeper. In this paper, we address both problems by proposing a Multi-Level Correlation Long Short-Term Memory (MLC-LSTM) and integrating adversarial training into our approach. The MLC-LSTM exploits the spatiotemporal correlation between multi-level radar echoes and models their evolution, while adversarial training helps the model extrapolate realistic and sharp echoes. To train and test our model, we built a real-life multi-level weather radar echo dataset from raw CINRAD/SA radar observations provided by the National Meteorological Information Center, China. Extrapolation experiments show that our model accurately forecasts the motion and evolution of an echo while keeping the prediction realistic and fine-grained. In quantitative evaluation on the probability of detection (POD), false alarm rate (FAR), critical success index (CSI), and Heidke skill score (HSS) metrics, our model reaches average scores of 0.6538 POD, 0.2818 FAR, 0.5348 CSI, and 0.6298 HSS when extrapolating 15 echoes into the future, outperforming current state-of-the-art extrapolation methods. Both the qualitative and quantitative results demonstrate the effectiveness of our model, suggesting that it can be applied to operational weather forecasting practice.
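The four verification scores quoted above are all computed from a 2x2 contingency table of thresholded echo pixels (hits, misses, false alarms, correct negatives); a sketch of the standard formulas, with made-up illustrative counts:

```python
def radar_scores(hits, misses, false_alarms, correct_negatives):
    """POD, FAR, CSI and HSS from a 2x2 contingency table of thresholded echoes."""
    pod = hits / (hits + misses)                       # fraction of events detected
    far = false_alarms / (hits + false_alarms)         # fraction of alarms that were wrong
    csi = hits / (hits + misses + false_alarms)        # threat score
    total = hits + misses + false_alarms + correct_negatives
    # HSS: skill relative to the number of correct forecasts expected by chance.
    expected = ((hits + misses) * (hits + false_alarms)
                + (correct_negatives + misses) * (correct_negatives + false_alarms)) / total
    hss = (hits + correct_negatives - expected) / (total - expected)
    return pod, far, csi, hss

# Illustrative counts only (not the paper's data):
pod, far, csi, hss = radar_scores(hits=40, misses=10, false_alarms=20, correct_negatives=30)
```

Higher POD, CSI, and HSS and lower FAR are better, which is the sense in which the scores reported above outperform the baselines.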

Open Access Article
Fuzzy Clustering Algorithm with Non-Neighborhood Spatial Information for Surface Roughness Measurement Based on the Reflected Aliasing Images
Sensors 2019, 19(15), 3285; https://doi.org/10.3390/s19153285 - 26 Jul 2019
Abstract
Due to the fixed structure of neighborhood windows, the quality of spatial information obtained from neighborhood pixels may be degraded by noise. To compensate for this drawback, a robust fuzzy c-means clustering with non-neighborhood spatial information (FCM_NNS) is presented. By incorporating non-neighborhood spatial information, the robustness of the proposed FCM_NNS with respect to noise is significantly improved. The results indicate that FCM_NNS is effective and robust on noisy aliasing images. Moreover, a comparison with seven other roughness indexes indicates that the proposed FCM_NNS-based F index can characterize the aliasing degree in surface images and is highly correlated with surface roughness (R² = 0.9327 for thirty grinding samples).
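For reference, the plain fuzzy c-means baseline that FCM_NNS builds on alternates membership and weighted-centroid updates; this sketch omits any spatial term, and the data and deterministic initialization are illustrative assumptions:

```python
import numpy as np

def fcm(X, c=2, m=2.0, iters=50):
    """Plain fuzzy c-means: alternate membership and weighted-centroid updates."""
    centers = X[np.linspace(0, len(X) - 1, c).astype(int)].astype(float)  # spread init
    p = 2.0 / (m - 1.0)
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** p * (d ** -p).sum(axis=1, keepdims=True))  # memberships, rows sum to 1
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]               # fuzzy-weighted means
    return U, centers

# Two well-separated 1-D intensity clusters; memberships should split them cleanly.
X = np.array([[0.0], [0.1], [0.2], [5.0], [5.1], [5.2]])
U, centers = fcm(X)
```

FCM_NNS modifies the membership update by adding a spatial penalty drawn from non-neighborhood pixels, which is what gives it its noise robustness.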

Open Access Article
WePBAS: A Weighted Pixel-Based Adaptive Segmenter for Change Detection
Sensors 2019, 19(12), 2672; https://doi.org/10.3390/s19122672 - 13 Jun 2019
Abstract
The pixel-based adaptive segmenter (PBAS) is a classic background modeling algorithm for change detection. However, the PBAS method has difficulty detecting foreground targets in dynamic background regions. To solve this problem, a weighted pixel-based adaptive segmenter named WePBAS is proposed in this paper, building on PBAS. WePBAS uses weighted background samples as its background model; in the original PBAS method, the samples in the background model are not weighted. In the weighted background sample set, low-weight samples typically represent wrong background pixels and need to be replaced, whereas high-weight samples need to be preserved. Following this principle, a directional background model update mechanism is proposed to improve segmentation of foreground targets in dynamic background regions. In addition, due to the "background diffusion" mechanism, the PBAS method often identifies small, intermittently moving foreground targets as background. To solve this problem, an adaptive foreground counter is added to WePBAS to limit background diffusion; the counter automatically adjusts its own parameters based on the video's characteristics. Experiments showed that the proposed method is competitive with state-of-the-art background modeling methods for change detection.
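The directional update principle described above can be illustrated with a toy per-pixel model: reinforce samples that match the incoming pixel, decay the rest, and overwrite the lowest-weight (least trusted) sample instead of a random one as in the original PBAS. All numbers and the exact weighting rule here are illustrative assumptions, not the paper's parameters:

```python
def directional_update(samples, weights, pixel, match_tol=20, w_inc=1.0, w_decay=0.05):
    """Toy weighted background-sample update for one pixel (grayscale values)."""
    matched = False
    for i, s in enumerate(samples):
        if abs(s - pixel) < match_tol:
            weights[i] += w_inc                     # reinforce confirmed samples
            matched = True
        else:
            weights[i] = max(weights[i] - w_decay, 0.0)
    if not matched:                                 # no sample explains the pixel:
        worst = min(range(len(samples)), key=weights.__getitem__)
        samples[worst] = pixel                      # replace the least trusted sample
        weights[worst] = w_inc
    return samples, weights

# Sample 200 (weight 0.2) is likely a wrong background value; an incoming
# pixel of 11 reinforces the two nearby samples and decays it further.
samples, weights = directional_update([10, 200, 12], [3.0, 0.2, 2.5], pixel=11)
```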

Open Access Article
RT-Seg: A Real-Time Semantic Segmentation Network for Side-Scan Sonar Images
Sensors 2019, 19(9), 1985; https://doi.org/10.3390/s19091985 - 28 Apr 2019
Abstract
Real-time processing of high-resolution sonar images is of great significance for the autonomy and intelligence of autonomous underwater vehicles (AUVs) in complex marine environments. In this paper, we propose a real-time semantic segmentation network termed RT-Seg for Side-Scan Sonar (SSS) images. The architecture is based on a novel encoder-decoder structure in which the encoder blocks use Depth-Wise Separable Convolution and a 2-way branch to improve performance, and a corresponding decoder network restores the details of the targets, followed by a pixel-wise classification layer. Moreover, we use a patch-wise strategy, splitting each high-resolution image into local patches for network training. The well-trained model is then used to test high-resolution SSS images produced by a sonar sensor on an onboard Graphics Processing Unit (GPU). Experimental results show that RT-Seg greatly reduces the number of parameters and floating-point operations compared to other networks. It runs at 25.67 frames per second on an NVIDIA Jetson AGX Xavier on 500x500 inputs with excellent segmentation results. Further insights on the speed-accuracy trade-off are discussed in the paper.
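The parameter savings from the Depth-Wise Separable Convolutions used in the encoder come from factoring a standard k x k convolution into a per-channel spatial filter plus a 1x1 pointwise mix, roughly a k²-fold reduction for large channel counts. A sketch of the arithmetic (the channel counts below are illustrative, not RT-Seg's actual layer sizes):

```python
def conv_params(c_in, c_out, k):
    """Weights in a standard k x k convolution layer (biases ignored)."""
    return c_in * c_out * k * k

def separable_params(c_in, c_out, k):
    """Depthwise k x k filter per input channel, then 1 x 1 pointwise mixing."""
    return c_in * k * k + c_in * c_out

# e.g. a 3x3 layer mapping 64 -> 128 channels:
standard = conv_params(64, 128, 3)        # 64 * 128 * 9 = 73728 weights
separable = separable_params(64, 128, 3)  # 64 * 9 + 64 * 128 = 8768 weights
```

Here the factored form uses roughly an eighth of the weights, which is the kind of reduction in parameters and floating-point operations the abstract refers to.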
