
Data, Signal and Image Processing and Applications in Sensors

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Physical Sensors".

Deadline for manuscript submissions: closed (31 December 2020) | Viewed by 99617

Special Issue Editor


Guest Editor
Department of Engineering/IEETA, University of Trás-os-Montes e Alto Douro, 5000-801 Vila Real, Portugal
Interests: signal & image processing and applications; study and development of devices & systems for friendly smart environments; development of multimedia-based teaching/learning methods and tools, with particular emphasis on the use of the internet

Special Issue Information

Dear Colleagues,

With the rapid advance of sensor technology, a vast and ever-growing amount of data in various domains and modalities is readily available. However, raw signal data collected directly from sensors are sometimes unsuitable for direct use, due to the presence of noise or distortion, for example. To obtain relevant and insightful metrics from sensor signal data, the acquired signals must be further enhanced, for instance through noise reduction in one-dimensional electroencephalographic (EEG) signals or color correction in endoscopic images, and then analyzed by computer-based systems, such as medical ones. The processing of the data itself and the consequent extraction of useful information are also vital, and are included in the topics of this Special Issue.

This Special Issue of Sensors aims to highlight advances in the development, testing, and application of data, signal, and image processing algorithms and techniques to all types of sensors and sensing methodologies. Experimental and theoretical results, reported in as much detail as possible, are welcome, as are review papers. There is no restriction on the length of papers.

Topics include but are not limited to:

  • Advanced sensor characterization techniques
  • Ambient assisted living
  • Biomedical signal and image analysis
  • Signal and image processing (e.g., deblurring, denoising, super-resolution)
  • Signal and image understanding (e.g., object detection and recognition, action recognition, semantic segmentation, novel feature extraction)
  • Internet of things (IoT)
  • Machine learning (e.g., deep learning) in signal and image processing
  • Radar signal processing
  • Real-time signal and image processing algorithms and architectures (e.g., FPGA, DSP, GPU)
  • Remote sensing processing
  • Sensor data fusion and integration
  • Sensor error modelling and online calibration
  • Smart environments and smart cities
  • Wearable sensor signal processing and its applications

Dr. Manuel J.C.S. Reis
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Sensor data, signal, and image processing
  • Sensor data, signal, and image applications
  • Sensor applications

Published Papers (29 papers)


Editorial


6 pages, 187 KiB  
Editorial
Data, Signal and Image Processing and Applications in Sensors
by Manuel J. C. S. Reis
Sensors 2021, 21(10), 3323; https://doi.org/10.3390/s21103323 - 11 May 2021
Viewed by 2085
Abstract
With the rapid advance of sensor technology, a vast and ever-growing amount of data in various domains and modalities are readily available [...] Full article
(This article belongs to the Special Issue Data, Signal and Image Processing and Applications in Sensors)

Research


13 pages, 1034 KiB  
Article
Classification of Critical Levels of CO Exposure of Firefighters through Monitored Heart Rate
by Raquel Sebastião, Sandra Sorte, José M. Fernandes and Ana I. Miranda
Sensors 2021, 21(5), 1561; https://doi.org/10.3390/s21051561 - 24 Feb 2021
Cited by 3 | Viewed by 2467
Abstract
Smoke inhalation poses a serious health threat to firefighters (FFs), with potential effects including respiratory and cardiac disorders. In this work, environmental and physiological data were collected from FFs during experimental fires performed in 2015 and 2019. Extending a previous work, which allowed us to conclude that changes in heart rate (HR) were associated with alterations in the inhalation of carbon monoxide (CO), we performed an HR analysis according to different levels of CO exposure during firefighting, based on data collected from three FFs. Based on the collected HR and on CO occupational exposure standards (OES), we propose a classifier that identifies CO exposure levels from measured HR values. An ensemble of 100 bagged classification trees was used, and the classification of CO levels achieved an overall accuracy of 91.9%. The classification can be performed in real time and can be embedded in a firefighting decision-support system. This classification of FFs' exposure to critical CO levels, through minimally invasive HR monitoring, opens the possibility of identifying hazardous situations, preventing possible severe effects on FFs' health due to inhaled pollutants. The obtained results also show the importance of future studies on the relevance and influence of pollutant exposure and inhalation on FFs' health, especially with regard to hazardous levels of toxic air pollutants. Full article
(This article belongs to the Special Issue Data, Signal and Image Processing and Applications in Sensors)
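The core of the pipeline above, an ensemble of 100 bagged classification trees over heart-rate features, can be sketched as follows. This is not the authors' code: the features, labels, and thresholds below are synthetic stand-ins, whereas the real labels come from CO occupational exposure standards.

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for monitored HR windows: mean HR and short-term variability.
n = 600
hr_mean = rng.uniform(60, 180, n)
hr_std = rng.uniform(1, 20, n)
X = np.column_stack([hr_mean, hr_std])
# Hypothetical 3-level exposure labels (0 = low, 1 = moderate, 2 = critical),
# loosely tied to HR plus noise; the thresholds 100/140 are illustrative only.
y = np.digitize(hr_mean + rng.normal(0, 10, n), [100, 140])

# Ensemble of 100 bagged classification trees, as in the paper.
clf = BaggingClassifier(DecisionTreeClassifier(), n_estimators=100, random_state=0)
clf.fit(X[:500], y[:500])
acc = clf.score(X[500:], y[500:])
print(f"held-out accuracy: {acc:.2f}")
```

Because each tree sees a bootstrap resample of the training windows, the ensemble's vote smooths over the noisy HR-to-exposure mapping, which is what makes real-time classification from a single wearable signal plausible.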

26 pages, 11142 KiB  
Article
Intelligent Video Highlights Generation with Front-Camera Emotion Sensing
by Hugo Meyer, Peter Wei and Xiaofan Jiang
Sensors 2021, 21(4), 1035; https://doi.org/10.3390/s21041035 - 3 Feb 2021
Cited by 4 | Viewed by 2981
Abstract
In this paper, we present HOMER, a cloud-based system for video highlight generation which enables the automated, relevant, and flexible segmentation of videos. Our system outperforms state-of-the-art solutions by fusing internal video content-based features with the user’s emotion data. While current research mainly focuses on creating video summaries without the use of affective data, our solution achieves the subjective task of detecting highlights by leveraging human emotions. In two separate experiments, one with videos filmed with a dual-camera setup and one with home videos randomly picked from Microsoft’s Video Titles in the Wild (VTW) dataset, HOMER demonstrates an improvement of up to 38% in F1-score over the baseline, while not requiring any external hardware. We demonstrated both the portability and scalability of HOMER through the implementation of two smartphone applications. Full article
(This article belongs to the Special Issue Data, Signal and Image Processing and Applications in Sensors)
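The F1-score used to benchmark highlight detection can be illustrated with a minimal frame-level computation (the highlight sets below are hypothetical, not the HOMER evaluation code):

```python
def f1_score(predicted, reference):
    """Frame-level F1 between predicted and reference highlight frame sets."""
    if not predicted or not reference:
        return 0.0
    tp = len(predicted & reference)          # correctly flagged frames
    precision = tp / len(predicted)
    recall = tp / len(reference)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

pred = {3, 4, 5, 6}       # frames flagged as highlights by the system
ref = {4, 5, 6, 7, 8}     # ground-truth highlight frames
print(round(f1_score(pred, ref), 3))  # 0.667
```

F1 balances precision and recall, which matters here because a highlight generator can trivially maximize recall by flagging every frame.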

20 pages, 7679 KiB  
Article
DLNR-SIQA: Deep Learning-Based No-Reference Stitched Image Quality Assessment
by Hayat Ullah, Muhammad Irfan, Kyungjin Han and Jong Weon Lee
Sensors 2020, 20(22), 6457; https://doi.org/10.3390/s20226457 - 12 Nov 2020
Cited by 16 | Viewed by 3573
Abstract
Due to recent advancements in virtual reality (VR) and augmented reality (AR), the demand for high-quality immersive content is a primary concern for production companies and consumers. Similarly, the recent record-breaking performance of deep learning in various domains of artificial intelligence has drawn the attention of researchers to different fields of computer vision. To ensure the quality of immersive media content using these advanced deep learning technologies, several learning-based stitched image quality assessment methods have been proposed, with reasonable performance. However, these methods are unable to localize, segment, and extract the stitching errors in panoramic images, and they use computationally complex procedures for quality assessment. With these motivations, in this paper, we propose a novel three-fold Deep Learning-based No-Reference Stitched Image Quality Assessment (DLNR-SIQA) approach to evaluate the quality of immersive content. In the first fold, we fine-tuned the state-of-the-art Mask R-CNN (Region-based Convolutional Neural Network) on manually annotated cropped images of various stitching errors from two publicly available datasets. In the second fold, we segment and localize the various stitching errors present in the immersive content. Finally, based on the distorted regions present in the immersive content, we measure the overall quality of the stitched images. Unlike existing methods that only measure image quality using deep features, our proposed method can efficiently segment and localize stitching errors and estimate image quality by investigating the segmented regions. We also carried out an extensive qualitative and quantitative comparison with full-reference image quality assessment (FR-IQA) and no-reference image quality assessment (NR-IQA) methods on two publicly available datasets, where the proposed system outperformed the existing state-of-the-art techniques. Full article
(This article belongs to the Special Issue Data, Signal and Image Processing and Applications in Sensors)

17 pages, 1179 KiB  
Article
User-Experience with Haptic Feedback Technologies and Text Input in Interactive Multimedia Devices
by Bruno Silva, Hugo Costelha, Luis C. Bento, Marcio Barata and Pedro Assuncao
Sensors 2020, 20(18), 5316; https://doi.org/10.3390/s20185316 - 17 Sep 2020
Cited by 4 | Viewed by 3529
Abstract
Remote control devices are commonly used for interaction with multimedia equipment and applications (e.g., smart TVs, gaming, etc.). To improve on conventional keypad-based technologies, haptic feedback and user input capabilities are being developed to enhance the user experience (UX) and provide advanced functionalities in remote control devices. Although the sensation provided by haptic feedback is similar to that of mechanical push buttons, the former offers much greater flexibility, due to the possibility of dynamically choosing different mechanical effects and associating a different function with each of them. However, selecting the best haptic feedback effects among the wide variety currently enabled by recent technologies remains a challenge for design engineers aiming to optimise the UX. Rich interaction further requires text input capability, which greatly influences the UX. This work is a contribution towards UX evaluation of remote control devices with haptic feedback and text input. A user evaluation study of a wide variety of haptic feedback effects and text input methods is presented, considering different technologies and different numbers of actuators on a device. The user preferences, given by subjective evaluation scores, demonstrate that haptic feedback undoubtedly has a positive impact on the UX. Moreover, it is also shown that different levels of UX are obtained according to the technological characteristics of the haptic actuators and how many of them are used on the device. Full article
(This article belongs to the Special Issue Data, Signal and Image Processing and Applications in Sensors)

16 pages, 2517 KiB  
Article
Deep-Net: A Lightweight CNN-Based Speech Emotion Recognition System Using Deep Frequency Features
by Tursunov Anvarjon, Mustaqeem and Soonil Kwon
Sensors 2020, 20(18), 5212; https://doi.org/10.3390/s20185212 - 12 Sep 2020
Cited by 106 | Viewed by 9623
Abstract
Artificial intelligence (AI) and machine learning (ML) are employed to make systems smarter. Today, speech emotion recognition (SER) systems evaluate the emotional state of a speaker by investigating his/her speech signal. Emotion recognition is a challenging task for a machine, and making machines smart enough to recognize emotions efficiently is equally challenging. The speech signal is quite hard to examine using signal processing methods because it consists of different frequencies and features that vary according to emotions such as anger, fear, sadness, happiness, boredom, disgust, and surprise. Even though different algorithms are being developed for SER, success rates remain low and depend on the language, the emotions, and the database. In this paper, we propose a new lightweight and effective SER model with low computational complexity and high recognition accuracy. The suggested method uses a convolutional neural network (CNN) to learn deep frequency features, using a plain rectangular filter with a modified pooling strategy that has more discriminative power for SER. The proposed CNN model was trained on frequency features extracted from the speech data and was then tested to predict emotions. The proposed SER model was evaluated on two benchmarks, the Interactive Emotional Dyadic Motion Capture (IEMOCAP) and Berlin Emotional Speech Database (EMO-DB) speech datasets, obtaining recognition accuracies of 77.01% and 92.02%, respectively. The experimental results demonstrate that the proposed CNN-based SER system achieves better recognition performance than state-of-the-art SER systems. Full article
(This article belongs to the Special Issue Data, Signal and Image Processing and Applications in Sensors)
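The "plain rectangular filter" idea, a convolution kernel that spans several frequency bins but a single time step, can be sketched with hand-set weights (illustrative only; in the paper the filter weights are learned by the CNN):

```python
import numpy as np

def conv2d_valid(x, k):
    """Valid-mode 2D cross-correlation of x with kernel k."""
    kh, kw = k.shape
    h, w = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

spec = np.arange(24, dtype=float).reshape(6, 4)  # toy (freq x time) "spectrogram"
kernel = np.full((3, 1), 1 / 3)                  # rectangular 3x1 frequency-axis filter
smoothed = conv2d_valid(spec, kernel)
print(smoothed.shape)  # (4, 4)
```

A tall, narrow kernel like this aggregates energy across neighbouring frequency bins at one instant, which is one way a filter shape can emphasize spectral (rather than temporal) structure.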

20 pages, 1884 KiB  
Article
Minimum Eigenvector Collaborative Representation Discriminant Projection for Feature Extraction
by Haoshuang Hu and Da-Zheng Feng
Sensors 2020, 20(17), 4778; https://doi.org/10.3390/s20174778 - 24 Aug 2020
Cited by 3 | Viewed by 1594
Abstract
High-dimensional signals, such as image signals and audio signals, usually have a sparse or low-dimensional manifold structure, which can be projected into a low-dimensional subspace to improve the efficiency and effectiveness of data processing. In this paper, we propose a linear dimensionality reduction method, minimum eigenvector collaborative representation discriminant projection, to address high-dimensional feature extraction problems. On the one hand, unlike the existing collaborative representation method, we use the eigenvector corresponding to the smallest non-zero eigenvalue of the sample covariance matrix to reduce the error of collaborative representation. On the other hand, we maintain the collaborative representation relationship of samples in the projection subspace to enhance the discriminability of the extracted features. In addition, the between-class scatter of the reconstructed samples is used to improve the robustness of the projection space. The experimental results on the COIL-20 object image database, the ORL and FERET face databases, and the Isolet database demonstrate the effectiveness of the proposed method, especially at low dimensions and small training sample sizes. Full article
(This article belongs to the Special Issue Data, Signal and Image Processing and Applications in Sensors)
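The key ingredient named above, the eigenvector associated with the smallest non-zero eigenvalue of the sample covariance matrix, can be computed as follows (synthetic data, not the paper's experiments):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 5))           # 50 samples, 5 features
Xc = X - X.mean(axis=0)                # center the data
cov = Xc.T @ Xc / (len(X) - 1)         # 5 x 5 sample covariance matrix

vals, vecs = np.linalg.eigh(cov)       # eigh returns eigenvalues in ascending order
nonzero = vals > 1e-10                 # drop numerically zero eigenvalues
v_min = vecs[:, nonzero][:, 0]         # eigenvector of the smallest non-zero eigenvalue
print(v_min.shape)  # (5,)
```

This is the direction along which the centered data vary least, which is why it can serve as a low-variance (small-error) direction in the collaborative representation.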

13 pages, 4900 KiB  
Article
Demodulation Method for Loran-C at Low SNR Based on Envelope Correlation–Phase Detection
by Jiangbin Yuan, Wenhe Yan, Shifeng Li and Yu Hua
Sensors 2020, 20(16), 4535; https://doi.org/10.3390/s20164535 - 13 Aug 2020
Cited by 9 | Viewed by 2758
Abstract
Loran-C is the most important backup and supplement to the global navigation satellite system (GNSS). However, existing Loran-C demodulation methods are easily affected by noise and skywave interference (SWI). Therefore, this article proposes a demodulation method based on Loran-C pulse envelope correlation–phase detection (EC–PD), in which the EC stage has two implementation schemes, moving average-cross correlation and matched correlation, to reduce the effects of noise and SWI. Mathematical models of the EC, the calculation of the signal-to-noise ratio (SNR) gain, and the selection between the EC schemes are given. The simulation results show that, compared with an existing method, the proposed method has clear advantages: (1) the demodulation SNR threshold under a Gaussian channel is only −2 dB, a reduction of 12.5 dB; (2) the probability of the demodulation SNR threshold being less than zero in the SWI environment can reach 0.78, a 26-fold increase. The test results show that the average data availability of the proposed method is 3.3 times higher than that of the existing method. Thus, our demodulation method has higher engineering application value and will improve the performance of the modern Loran-C system, making it a more reliable backup for the GNSS. Full article
(This article belongs to the Special Issue Data, Signal and Image Processing and Applications in Sensors)
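As background for the SNR-gain calculation mentioned above: for white noise, coherently averaging N pulses improves SNR by 10·log10(N) dB. A minimal illustration of this generic textbook relation (not the paper's EC-specific derivation):

```python
import math

def averaging_gain_db(n_pulses):
    """SNR gain (dB) from coherently averaging n_pulses pulses in white noise.

    The signal adds in amplitude (power grows as N^2) while independent noise
    adds in power (grows as N), so the SNR improves by a factor of N.
    """
    return 10 * math.log10(n_pulses)

print(round(averaging_gain_db(16), 2))  # 12.04 dB
```

This is why correlating/averaging over many Loran-C pulses lets the demodulator operate at SNR thresholds well below 0 dB.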

20 pages, 8374 KiB  
Article
High-Efficiency Microsatellite-Using Super-Resolution Algorithm Based on the Multi-Modality Super-CMOS Sensor
by Ke Zhang, Cankun Yang, Xiaojuan Li, Chunping Zhou and Ruofei Zhong
Sensors 2020, 20(14), 4019; https://doi.org/10.3390/s20144019 - 20 Jul 2020
Cited by 6 | Viewed by 2701
Abstract
To bring super-resolution technology from theory to practice, and to improve microsatellite spatial resolution, we propose a special super-resolution algorithm based on the multi-modality super-CMOS sensor which can adapt to the limited computing capacity of microsatellite computers. First, we designed an oblique sampling mode, with the sensor rotated by an angle of 26.56° (arctan(1/2)), to obtain images with a high overlap ratio and sub-pixel displacement. Second, the proposed super-resolution algorithm was applied to reconstruct the final high-resolution image. Because the satellite equipped with this sensor is scheduled to be launched this year, we also designed simulation modes for conventional sampling and oblique sampling of the sensor to obtain comparison and experimental data. Lastly, we evaluated the super-resolution quality of the images and the effectiveness, practicality, and efficiency of the algorithm. The results of the experiments showed that the satellite-using super-resolution algorithm, combined with multi-modality super-CMOS sensor oblique-mode sampling, can increase the spatial resolution of an image by about 2 times. The algorithm is simple and highly efficient, and can perform the super-resolution reconstruction of two remote-sensing images within 0.713 s, which is good performance for a microsatellite. Full article
(This article belongs to the Special Issue Data, Signal and Image Processing and Applications in Sensors)

19 pages, 21475 KiB  
Article
Implicit and Explicit Regularization for Optical Flow Estimation
by Konstantinos Karageorgos, Anastasios Dimou, Federico Alvarez and Petros Daras
Sensors 2020, 20(14), 3855; https://doi.org/10.3390/s20143855 - 10 Jul 2020
Cited by 1 | Viewed by 2737
Abstract
In this paper, two novel and practical regularizing methods are proposed to improve existing neural network architectures for monocular optical flow estimation. The proposed methods aim to alleviate deficiencies of current methods, such as flow leakage across objects and motion consistency within rigid objects, by exploiting contextual information. More specifically, the first regularization method utilizes semantic information during the training process to explicitly regularize the produced optical flow field. The novelty of this method lies in the use of semantic segmentation masks to teach the network to implicitly identify the semantic edges of an object and better reason about local motion flow. A novel loss function is introduced that takes into account the objects’ boundaries as derived from the semantic segmentation mask to selectively penalize motion inconsistency within an object. The method is architecture agnostic and can be integrated into any neural network without modifying or adding complexity at inference. The second regularization method adds spatial awareness to the input data of the network in order to improve training stability and efficiency. The coordinates of each pixel are used as an additional feature, breaking the invariance properties of the neural network architecture. The additional features are shown to implicitly regularize the optical flow estimation, enforcing a consistent flow, while improving both the performance and the convergence time. Finally, the combination of both regularization methods further improves the performance of existing cutting-edge architectures in a complementary way, both quantitatively and qualitatively, on popular flow estimation benchmark datasets. Full article
(This article belongs to the Special Issue Data, Signal and Image Processing and Applications in Sensors)
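The second regularization method, feeding pixel coordinates as extra input features, resembles the CoordConv-style augmentation sketched below (synthetic tensor; the function name and [0, 1] normalization are this sketch's assumptions, not the authors' code):

```python
import numpy as np

def add_coord_channels(img):
    """img: (H, W, C) -> (H, W, C + 2), appending x and y channels in [0, 1]."""
    h, w, _ = img.shape
    ys, xs = np.meshgrid(np.linspace(0, 1, h), np.linspace(0, 1, w), indexing="ij")
    return np.concatenate([img, xs[..., None], ys[..., None]], axis=-1)

frame = np.zeros((4, 6, 3))          # toy H x W x RGB frame
aug = add_coord_channels(frame)
print(aug.shape)  # (4, 6, 5)
```

Because plain convolutions are translation invariant, they cannot tell where in the frame a patch sits; the extra channels break that invariance and give the network explicit spatial awareness.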

19 pages, 13335 KiB  
Article
Image Deblurring Using Multi-Stream Bottom-Top-Bottom Attention Network and Global Information-Based Fusion and Reconstruction Network
by Quan Zhou, Mingyue Ding and Xuming Zhang
Sensors 2020, 20(13), 3724; https://doi.org/10.3390/s20133724 - 3 Jul 2020
Cited by 6 | Viewed by 3030
Abstract
Image deblurring has been a challenging ill-posed problem in computer vision. Gaussian blur is a common model for image and signal degradation. Deep learning-based deblurring methods have attracted much attention due to their advantages over traditional methods relying on hand-designed features. However, existing deep learning-based deblurring techniques still cannot perform well in restoring fine details and reconstructing sharp edges. To address this issue, we have designed an effective end-to-end deep learning-based non-blind image deblurring algorithm. In the proposed method, a multi-stream bottom-top-bottom attention network (MBANet) with an encoder-to-decoder structure is designed to integrate low-level cues and high-level semantic information, which can facilitate extracting image features more effectively and improve the computational efficiency of the network. Moreover, the MBANet adopts a coarse-to-fine multi-scale strategy to process the input images to improve image deblurring performance. Furthermore, a global information-based fusion and reconstruction network is proposed to fuse multi-scale output maps to improve the global spatial information and recurrently refine the output deblurred image. Experiments were performed on the public GoPro dataset and the REalistic and Dynamic Scenes (REDS) dataset to evaluate the effectiveness and robustness of the proposed method. The experimental results show that the proposed method generally outperforms traditional deblurring methods and state-of-the-art deep learning-based deblurring methods, such as the scale-recurrent network (SRN) and the denoising prior driven deep neural network (DPDNN), in terms of quantitative indexes such as peak signal-to-noise ratio (PSNR) and structural similarity (SSIM), as well as human visual assessment. Full article
(This article belongs to the Special Issue Data, Signal and Image Processing and Applications in Sensors)
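The peak signal-to-noise ratio (PSNR) used as a quantitative index above can be computed as follows for 8-bit images (toy arrays, not the paper's evaluation harness):

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images of equal shape."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    if mse == 0:
        return float("inf")          # identical images
    return 10 * np.log10(peak ** 2 / mse)

ref = np.full((4, 4), 128, dtype=np.uint8)
noisy = ref.copy()
noisy[0, 0] = 138                    # a single 10-level pixel error
print(round(psnr(ref, noisy), 2))  # 40.17
```

Higher PSNR means the deblurred output is closer, pixel-wise, to the sharp ground truth; SSIM complements it by scoring structural agreement rather than raw error.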

16 pages, 3703 KiB  
Article
A Combined Deep-Learning and Lattice Boltzmann Model for Segmentation of the Hippocampus in MRI
by Yingqian Liu and Zhuangzhi Yan
Sensors 2020, 20(13), 3628; https://doi.org/10.3390/s20133628 - 28 Jun 2020
Cited by 13 | Viewed by 2926
Abstract
Segmentation of the hippocampus (HC) in magnetic resonance imaging (MRI) is an essential step for the diagnosis and monitoring of several clinical conditions, such as Alzheimer’s disease (AD), schizophrenia, and epilepsy. Automatic segmentation of HC structures is challenging due to their small volume, complex shape, low contrast, and discontinuous boundaries. The active contour model (ACM) with a statistical shape prior is robust. However, it is difficult to build a shape prior that is general enough to cover all possible shapes of the HC, and such priors suffer from the complicated registration of the shape prior to the target object and from low efficiency. In this paper, we propose a semi-automatic model that combines a deep belief network (DBN) and the lattice Boltzmann (LB) method for the segmentation of the HC. The training process of the DBN consists of unsupervised bottom-up training and supervised training of a top restricted Boltzmann machine (RBM). Given an input image, the trained DBN is used to infer the patient-specific shape prior of the HC. This specific shape prior is not only used to determine the initial contour, but is also introduced into the LB model as part of the external force to refine the segmentation. We used a subset of OASIS-1 as the training set and the preliminary release of EADC-ADNI as the testing set. The segmentation results of our method show good correlation and consistency with the manual segmentation results. Full article
(This article belongs to the Special Issue Data, Signal and Image Processing and Applications in Sensors)

17 pages, 926 KiB  
Article
Speech Quality Feature Analysis for Classification of Depression and Dementia Patients
by Brian Sumali, Yasue Mitsukura, Kuo-ching Liang, Michitaka Yoshimura, Momoko Kitazawa, Akihiro Takamiya, Takanori Fujita, Masaru Mimura and Taishiro Kishimoto
Sensors 2020, 20(12), 3599; https://doi.org/10.3390/s20123599 - 26 Jun 2020
Cited by 14 | Viewed by 4172
Abstract
Loss of cognitive ability is commonly associated with dementia, a broad category of progressive brain diseases. However, major depressive disorder may also cause temporary deterioration of one’s cognition, known as pseudodementia. Differentiating true dementia from pseudodementia is still difficult even for an experienced clinician, and extensive and careful examinations must be performed. Although mental disorders such as depression and dementia have been studied, there is still no solution for shorter and undemanding pseudodementia screening. This study inspects and compares the distributions and statistical characteristics of acoustic features from dementia patients and depression patients. It was found that some acoustic features were shared by both dementia and depression, albeit with reversed correlations. Statistical significance was also found when comparing the features. Additionally, the possibility of utilizing machine learning for automatic pseudodementia screening was explored. The machine learning part includes feature selection using the LASSO algorithm and a support vector machine (SVM) with a linear kernel as the predictive model, with age-matched symptomatic depression patients and dementia patients as the database. High accuracy, sensitivity, and specificity were obtained in both the training session and the testing session. The resulting model was also tested against other datasets that were not included and still performed considerably well. These results imply that dementia and depression might be both detected and differentiated based on acoustic features alone. Automated screening is also possible based on the high accuracy of the machine learning results. Full article
(This article belongs to the Special Issue Data, Signal and Image Processing and Applications in Sensors)

13 pages, 2085 KiB  
Article
Multimodal Emotion Evaluation: A Physiological Model for Cost-Effective Emotion Classification
by Gisela Pinto, João M. Carvalho, Filipa Barros, Sandra C. Soares, Armando J. Pinho and Susana Brás
Sensors 2020, 20(12), 3510; https://doi.org/10.3390/s20123510 - 21 Jun 2020
Cited by 23 | Viewed by 3937
Abstract
Emotional responses are associated with distinct body alterations and are crucial to foster adaptive responses, well-being, and survival. Emotion identification may improve people's emotion regulation strategies and their interaction with multiple life contexts. Several studies have investigated emotion classification systems, but most of them are based on the analysis of only one, a few, or isolated physiological signals. Understanding how informative the individual signals are, and how their combination works, would allow the development of more cost-effective, informative, and objective systems for emotion detection, processing, and interpretation. In the present work, the electrocardiogram, electromyogram, and electrodermal activity were processed in order to find a physiological model of emotions. Both a unimodal and a multimodal approach were used to analyze which signal, or combination of signals, best describes an emotional response, using a sample of 55 healthy subjects. The method was divided into: (1) signal preprocessing; (2) feature extraction; (3) classification using random forests and neural networks. Results suggest that the electrocardiogram (ECG) signal is the most effective for emotion classification. Yet, the combination of all signals provides the best emotion identification performance, with every signal providing crucial information for the system. This physiological model of emotions has important research and clinical implications: it provides valuable information about the value and weight of physiological signals for emotion classification, which can critically drive effective evaluation, monitoring, and intervention regarding emotional processing and regulation across multiple contexts.
(This article belongs to the Special Issue Data, Signal and Image Processing and Applications in Sensors)

18 pages, 5167 KiB  
Article
Weak Signal Enhance Based on the Neural Network Assisted Empirical Mode Decomposition
by Kai Chen, Kai Xie, Chang Wen and Xin-Gong Tang
Sensors 2020, 20(12), 3373; https://doi.org/10.3390/s20123373 - 15 Jun 2020
Cited by 5 | Viewed by 2494
Abstract
In order to enhance weak signals against a strong noise background, a weak signal enhancement method based on EMDNN (neural-network-assisted empirical mode decomposition) is proposed. The method combines CEEMD (complementary ensemble empirical mode decomposition), GANs (generative adversarial networks), and LSTM (long short-term memory) networks; it improves the efficiency of selecting effective intrinsic mode components in empirical mode decomposition, and thus the SNR (signal-to-noise ratio). It can also reconstruct and enhance weak signals. The experimental results show that the SNR of this method is improved from 4.1 to 6.2, and the weak signal is clearly recovered.
(This article belongs to the Special Issue Data, Signal and Image Processing and Applications in Sensors)
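The core idea — decompose, keep only the informative mode components, and reconstruct — can be illustrated without the neural-network selector. The sketch below is a crude stand-in: moving-average differences replace CEEMD's intrinsic mode functions, and a lag-1 autocorrelation test replaces the GAN/LSTM-based component selection:

```python
import numpy as np

def smooth(x, w):
    return np.convolve(x, np.ones(w) / w, mode='same')

def decompose(x, widths=(2, 4, 8, 16, 32)):
    """Multi-scale moving-average differences -- a crude stand-in for the
    intrinsic mode functions produced by (CE)EMD sifting."""
    comps, residual = [], x
    for w in widths:
        s = smooth(residual, w)
        comps.append(residual - s)    # detail at this scale
        residual = s
    comps.append(residual)            # final trend
    return comps

def select_and_reconstruct(comps):
    """Keep components whose lag-1 autocorrelation is high (structured
    signal) and drop noise-like ones -- standing in for the learned selector."""
    keep = []
    for c in comps:
        c0 = c - c.mean()
        rho = (c0[:-1] * c0[1:]).sum() / ((c0 ** 2).sum() + 1e-12)
        if rho > 0.5:
            keep.append(c)
    return np.sum(keep, axis=0)

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1024)
clean = np.sin(2 * np.pi * 12 * t)                 # weak structured signal
noisy = clean + 1.0 * rng.normal(size=t.size)      # strong noise background

def snr_db(x):
    return 10 * np.log10((clean ** 2).sum() / ((x - clean) ** 2).sum())

enhanced = select_and_reconstruct(decompose(noisy))
```

The noise-dominated high-frequency components fail the autocorrelation test and are discarded, so the reconstruction has a higher SNR than the input.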

21 pages, 5821 KiB  
Article
A Robust Nonrigid Point Set Registration Method Based on Collaborative Correspondences
by Xiang-Wei Feng and Da-Zheng Feng
Sensors 2020, 20(11), 3248; https://doi.org/10.3390/s20113248 - 7 Jun 2020
Cited by 3 | Viewed by 2142
Abstract
Nonrigid point set registration is a bottleneck problem with wide applications in computer vision, pattern recognition, image fusion, video processing, and so on. In a nonrigid point set registration problem, finding the point-to-point correspondences is challenging because of the various image degradations. In this paper, a robust method is proposed to accurately determine the correspondences by fusing two complementary structural features: the spatial location of a point and the local structure around it. The former is used to define the absolute distance (AD), and the latter is exploited to define the relative distance (RD). AD-correspondences and RD-correspondences can be established based on AD and RD, respectively. Neighboring correspondence consistency is employed to assign a confidence to each RD-correspondence. The proposed heuristic method combines the AD-correspondences and the RD-correspondences to determine the corresponding relationship between two point sets, which significantly improves the correspondence accuracy. Subsequently, the thin plate spline (TPS) is employed as the transformation function. At each step, the closed-form solutions for the affine and nonaffine parts of the TPS can be solved independently and robustly, which facilitates analyzing and controlling the registration process. Experimental results demonstrate that our method achieves better performance than several existing state-of-the-art methods.
(This article belongs to the Special Issue Data, Signal and Image Processing and Applications in Sensors)
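A minimal sketch of the two complementary distances: the absolute distance (AD) compares point locations directly, while a local-structure signature (here, sorted nearest-neighbour distances, a hypothetical stand-in for the paper's RD feature) compares neighbourhood shape. The two costs are blended and the minimum taken:

```python
import numpy as np

def local_structure(P, k=2):
    """Sorted distances to the k nearest neighbours -- a crude local-structure
    signature standing in for the relative-distance (RD) feature."""
    D = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=-1)
    D.sort(axis=1)
    return D[:, 1:k + 1]              # skip the zero self-distance

def match(P, Q, alpha=0.5):
    """Blend the absolute distance (AD) between locations with the difference
    of local-structure signatures (RD) and take the best match per point."""
    AD = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=-1)
    SP, SQ = local_structure(P), local_structure(Q)
    RD = np.linalg.norm(SP[:, None, :] - SQ[None, :, :], axis=-1)
    cost = alpha * AD + (1 - alpha) * RD
    return cost.argmin(axis=1)        # index in Q matched to each point of P

P = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [2.0, 2.0]])
Q = P[::-1] + 0.05                    # same points, permuted and slightly shifted
matches = match(P, Q)
```

The paper additionally weights RD-correspondences by neighbouring consistency and alternates matching with a TPS update; this sketch shows only a single correspondence step.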

25 pages, 3622 KiB  
Article
An Efficient Orthonormalization-Free Approach for Sparse Dictionary Learning and Dual Principal Component Pursuit
by Xiaoyin Hu and Xin Liu
Sensors 2020, 20(11), 3041; https://doi.org/10.3390/s20113041 - 27 May 2020
Cited by 7 | Viewed by 2353
Abstract
Sparse dictionary learning (SDL) is a classic representation learning method and has been widely used in data analysis. Recently, ℓm-norm (m ≥ 3, m ∈ ℕ) maximization has been proposed to solve SDL, which reshapes the problem into an optimization problem with orthogonality constraints. In this paper, we first propose an ℓm-norm maximization model for solving dual principal component pursuit (DPCP), based on the similarities between DPCP and SDL. Then, we propose a smooth unconstrained exact penalty model and show its equivalence with the ℓm-norm maximization model. Based on our penalty model, we develop an efficient first-order algorithm (PenNMF) and show its global convergence. Extensive experiments illustrate the high efficiency of PenNMF compared with other state-of-the-art algorithms for solving ℓm-norm maximization with orthogonality constraints.
(This article belongs to the Special Issue Data, Signal and Image Processing and Applications in Sensors)
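To illustrate the problem class (not the PenNMF algorithm itself, which uses an exact penalty instead of explicit projection), here is a toy projected-gradient ascent for ℓ4-norm maximization under orthogonality constraints; an SVD-based polar projection enforces XᵀX = I at every step:

```python
import numpy as np

def polar_project(Y):
    """Project onto matrices with orthonormal columns (the polar factor)."""
    U, _, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ Vt

def l4_maximize(A, p, steps=200, lr=0.1, seed=0):
    """Projected gradient ascent for max ||A X||_4^4 s.t. X^T X = I --
    a toy stand-in for penalty-based solvers, shown here with m = 4."""
    rng = np.random.default_rng(seed)
    X = polar_project(rng.normal(size=(A.shape[1], p)))
    for _ in range(steps):
        G = 4.0 * A.T @ (A @ X) ** 3      # gradient of the l4^4 objective
        X = polar_project(X + lr * G)     # retract back to the constraint set
    return X

A = np.random.default_rng(1).normal(size=(50, 8))
X = l4_maximize(A, p=3)
orth_err = np.linalg.norm(X.T @ X - np.eye(3))
```

The point of the paper's penalty formulation is precisely to avoid the per-iteration orthonormalization (the SVD) that this naive sketch relies on.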

22 pages, 8245 KiB  
Article
Outlier Detection Based on Residual Histogram Preference for Geometric Multi-Model Fitting
by Xi Zhao, Yun Zhang, Shoulie Xie, Qianqing Qin, Shiqian Wu and Bin Luo
Sensors 2020, 20(11), 3037; https://doi.org/10.3390/s20113037 - 27 May 2020
Cited by 7 | Viewed by 3049
Abstract
Geometric model fitting is a fundamental issue in computer vision, and fitting accuracy is affected by outliers. In order to eliminate the impact of outliers, an inlier threshold or scale estimator is usually adopted. However, a single inlier threshold cannot satisfy multiple models in the data, and scale estimators assuming a particular noise distribution work poorly in geometric model fitting. It can be observed that the residuals of outliers are large for all true models in the data, which constitutes a consensus among the outliers. Based on this observation, we propose a preference analysis method based on residual histograms that exploits this outlier consensus for outlier detection. We found that the outlier consensus makes the outliers gather away from the inliers in the designed residual histogram preference space, where it is convenient to separate outliers from inliers through linkage clustering. After the outliers are detected and removed, linkage clustering with permutation preference is introduced to segment the inliers. In addition, to make the linkage clustering process stable and robust, an alternative sampling and clustering framework is proposed for both the outlier detection and inlier segmentation stages. The experimental results show that the outlier detection scheme based on residual histogram preference can detect most of the outliers in the data sets, and the fitting results are better than those of most state-of-the-art methods in geometric multi-model fitting.
(This article belongs to the Special Issue Data, Signal and Image Processing and Applications in Sensors)

23 pages, 15638 KiB  
Article
Deep Learning Based Switching Filter for Impulsive Noise Removal in Color Images
by Krystian Radlak, Lukasz Malinski and Bogdan Smolka
Sensors 2020, 20(10), 2782; https://doi.org/10.3390/s20102782 - 14 May 2020
Cited by 26 | Viewed by 4920
Abstract
Noise reduction is one of the most important and still active research topics in low-level image processing, due to its high impact on object detection and scene understanding in computer vision systems. Recently, we have observed substantially increased interest in the application of deep learning algorithms; many computer vision systems use them because of their impressive capability for feature extraction and classification. While these methods have also been successfully applied to image denoising, significantly improving its performance, most of the proposed approaches were designed for Gaussian noise suppression. In this paper, we present a switching filtering technique intended for impulsive noise removal using deep learning. In the proposed method, the distorted pixels are detected using a deep neural network and restored with a fast adaptive mean filter. The performed experiments show that the proposed approach is superior to the state-of-the-art filters designed for impulsive noise removal in color digital images.
(This article belongs to the Special Issue Data, Signal and Image Processing and Applications in Sensors)
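The switching principle — detect impulsive pixels first, then restore only those — can be sketched with a simple detector in place of the paper's deep network: a pixel is flagged when it deviates strongly from its local median, and flagged pixels are replaced by the mean of their uncorrupted neighbours (a simplified adaptive mean filter):

```python
import numpy as np

def switching_filter(img, thresh=60, radius=1):
    """Flag pixels far from their local median (a simple stand-in for the
    deep detector) and replace only those with the mean of clean neighbours."""
    pad = np.pad(img.astype(float), radius, mode='edge')
    h, w = img.shape
    out = img.astype(float).copy()
    win = 2 * radius + 1
    # local medians used for detection
    windows = np.lib.stride_tricks.sliding_window_view(pad, (win, win))
    med = np.median(windows.reshape(h, w, -1), axis=-1)
    corrupted = np.abs(img - med) > thresh
    for i, j in zip(*np.nonzero(corrupted)):
        patch = pad[i:i + win, j:j + win]
        mask = np.abs(patch - med[i, j]) <= thresh   # keep clean neighbours only
        out[i, j] = patch[mask].mean() if mask.any() else med[i, j]
    return out

clean = np.full((8, 8), 100.0)
noisy = clean.copy()
noisy[3, 3] = 255.0                                  # salt impulse
restored = switching_filter(noisy)
```

Because the filter switches, uncorrupted pixels pass through untouched — the key advantage over blanket smoothing.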

15 pages, 6407 KiB  
Article
Precise Loran-C Signal Acquisition Based on Envelope Delay Correlation Method
by Wenhe Yan, Kunjuan Zhao, Shifeng Li, Xinghui Wang and Yu Hua
Sensors 2020, 20(8), 2329; https://doi.org/10.3390/s20082329 - 19 Apr 2020
Cited by 14 | Viewed by 4097
Abstract
The Loran-C system is an internationally standardized positioning, navigation, and timing service. It is the most important backup for, and supplement to, the global navigation satellite system (GNSS). However, the existing Loran-C signal acquisition methods are easily affected by noise and cross-rate interference (CRI). Therefore, this article proposes an envelope delay correlation acquisition method that, combined with linear digital averaging (LDA) technology, can effectively suppress noise and CRI. The selection of the key parameters and the performance of the acquisition method are analyzed through simulation. When the signal-to-noise ratio (SNR) is −16 dB, the acquisition probability is more than 90% and the acquisition error is less than 1 μs. When the signal-to-interference ratio (SIR) of the CRI is −5 dB, the CRI can also be suppressed and the acquisition error is less than 5 μs. These results show that our acquisition method is accurate. The performance of the method is also verified with actual signals emitted by a Loran-C system. These tests show that our method can reliably detect Loran-C pulse group signals over distances of up to 1500 km, even at low SNR. This will enable the modern Loran-C system to be a more reliable backup for GNSS.
(This article belongs to the Special Issue Data, Signal and Image Processing and Applications in Sensors)
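The acquisition idea can be sketched as correlating the rectified received signal with the standard Loran-C pulse envelope t²·exp(−2t/65) (t in µs); the 1 MHz sampling rate, noise level, and single-pulse setup below are illustrative assumptions, and the LDA averaging stage is omitted:

```python
import numpy as np

fs = 1.0e6                                          # assumed 1 MHz sampling
t = np.arange(0, 300e-6, 1 / fs)                    # 300 us pulse window
env = (t * 1e6) ** 2 * np.exp(-2 * t * 1e6 / 65.0)  # standard Loran-C envelope
env /= env.max()
carrier = np.sin(2 * np.pi * 100e3 * t)             # 100 kHz Loran-C carrier
pulse = env * carrier

rng = np.random.default_rng(0)
delay = 500                                         # true delay in samples
sig = np.zeros(4000)
sig[delay:delay + len(pulse)] += pulse
sig += 0.1 * rng.normal(size=sig.size)              # additive noise

# envelope delay correlation: slide the envelope template over the
# rectified signal and pick the lag with maximum correlation
corr = np.correlate(np.abs(sig), env, mode='valid')
est = int(corr.argmax())
```

This coarse single-pulse estimate only lands within a few tens of microseconds; the paper's full method, with phase decoding and LDA averaging, reports sub-microsecond accuracy.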

13 pages, 2485 KiB  
Article
Small Foreign Object Debris Detection for Millimeter-Wave Radar Based on Power Spectrum Features
by Peishuang Ni, Chen Miao, Hui Tang, Mengjie Jiang and Wen Wu
Sensors 2020, 20(8), 2316; https://doi.org/10.3390/s20082316 - 18 Apr 2020
Cited by 17 | Viewed by 4179
Abstract
Foreign object debris (FOD) detection can be considered a kind of classification that distinguishes whether the measured signal contains FOD targets or only ground clutter. In this paper, we propose a support vector domain description (SVDD) classifier with the particle swarm optimization (PSO) algorithm for FOD detection. The echo features of FOD and ground clutter received by the millimeter-wave radar are first extracted in the power spectrum domain as the input eigenvectors of the classifier; the classifier parameters are then optimized by the PSO algorithm, establishing a PSO-SVDD classifier. However, since only ground clutter samples are used to train the SVDD classifier, overfitting inevitably occurs. Thus, a small number of FOD samples are added in the training stage to construct a PSO-NSVDD (SVDD with negative examples) classifier that achieves better classification performance. Experimental results based on measured data showed that the proposed methods not only achieve good detection performance but also significantly reduce the false alarm rate.
(This article belongs to the Special Issue Data, Signal and Image Processing and Applications in Sensors)
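A minimal sketch of the one-class idea behind SVDD: describe the training clutter by a closed boundary and flag anything outside it as potential FOD. A centre-plus-radius description replaces the real SVDD optimization (which weights boundary samples as support vectors and admits kernels and negative examples):

```python
import numpy as np

class SphereDescription:
    """Minimal data description: a centre and radius enclosing most training
    clutter samples -- a crude stand-in for SVDD."""

    def fit(self, X, quantile=0.99):
        self.center = X.mean(axis=0)
        d = np.linalg.norm(X - self.center, axis=1)
        self.radius = np.quantile(d, quantile)       # soft boundary
        return self

    def is_clutter(self, X):
        """True for samples inside the sphere (ground clutter),
        False for suspected FOD."""
        return np.linalg.norm(X - self.center, axis=1) <= self.radius

rng = np.random.default_rng(0)
clutter = rng.normal(0.0, 1.0, size=(500, 4))        # toy power-spectrum features
fod = rng.normal(6.0, 0.5, size=(20, 4))             # echoes far from clutter
clf = SphereDescription().fit(clutter)
```

The quantile plays the role of the SVDD slack trade-off: tightening it lowers the false-alarm rate on clutter at the cost of missing marginal FOD echoes.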

14 pages, 2970 KiB  
Article
Double-Constraint Inpainting Model of a Single-Depth Image
by Wu Jin, Li Zun and Liu Yong
Sensors 2020, 20(6), 1797; https://doi.org/10.3390/s20061797 - 24 Mar 2020
Cited by 4 | Viewed by 2567
Abstract
In real applications, the obtained depth images are often incomplete; therefore, depth image inpainting is studied here. A novel model characterised by both a low-rank structure and nonlocal self-similarity is proposed. As a double constraint, the low-rank structure and nonlocal self-similarity can fully exploit the features of single-depth images to complete the inpainting task. First, according to the characteristics of the pixel values, we divide the image into blocks, and similar block groups and three-dimensional arrangements are then formed. Then, the variable splitting technique is applied to divide the inpainting problem into a low-rank-constraint sub-problem and a nonlocal self-similarity-constraint sub-problem. Finally, different strategies are used to solve the different sub-problems, resulting in greater reliability. Experiments show that the proposed algorithm attains state-of-the-art performance.
(This article belongs to the Special Issue Data, Signal and Image Processing and Applications in Sensors)
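The low-rank constraint can be illustrated on a stack of similar patches with missing depth samples: alternate between a truncated-SVD (low-rank) projection and re-imposing the known entries. This is a minimal sketch of one sub-problem only; the paper combines it with a nonlocal self-similarity constraint via variable splitting:

```python
import numpy as np

def lowrank_complete(M, mask, rank=1, iters=200):
    """Fill missing entries (mask == False) of a matrix of stacked similar
    patches by alternating a rank-r SVD truncation with data consistency
    on the known entries."""
    X = np.where(mask, M, M[mask].mean())            # init holes with the mean
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U[:, :rank] * s[:rank]) @ Vt[:rank]     # project to rank r
        X = np.where(mask, M, X)                     # keep known depth values
    return X

# ten similar patches (rows) with rank-1 structure and missing depth samples
rng = np.random.default_rng(0)
base = np.linspace(1.0, 2.0, 16)
M = np.outer(rng.uniform(0.9, 1.1, 10), base)        # "similar block group"
mask = rng.random(M.shape) > 0.2                     # ~20% of samples missing
X = lowrank_complete(M, mask)
err0 = np.abs(M[mask].mean() - M)[~mask].max()       # mean-fill baseline error
err = np.abs(X - M)[~mask].max()
```

Because similar patches are highly redundant, the group matrix is close to low rank and the missing depth values are recovered far better than by naive filling.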

15 pages, 5335 KiB  
Article
High-Efficiency Wavelet Compressive Fusion for Improving MEMS Array Performance
by Siyuan Liang, Weilong Zhu, Feng Zhao and Congyi Wang
Sensors 2020, 20(6), 1662; https://doi.org/10.3390/s20061662 - 17 Mar 2020
Cited by 4 | Viewed by 2361
Abstract
With the rapid development of microelectromechanical systems (MEMS) technology, low-cost MEMS inertial devices have been widely used for inertial navigation. However, their application range is greatly limited in fields with high precision requirements because of their low precision and high noise. In this paper, to improve the performance of MEMS inertial devices, we propose a highly efficient optimal estimation algorithm for MEMS arrays based on wavelet compressive fusion (WCF). First, the algorithm uses the compression property of the multiscale wavelet transform to compress the original signals, fusing the compressed data based on their support. Second, threshold processing is performed on the fused wavelet coefficients. A simulation demonstrates that the proposed algorithm performs well on the output of an inertial sensor array. A ten-gyro array system was then designed for collecting practical data; the frequency of the embedded processor in our verification environment is 800 MHz. The experimental results show that, under the normal working conditions of the MEMS array system, 100 ms of input array data require approximately 75 ms of processing delay when employing the WCF algorithm, supporting real-time processing. Additionally, the zero-bias instability, angle random walk, and rate slope of the gyroscope are improved by 8.0, 8.0, and 9.5 dB, respectively, compared with the original device. These results demonstrate that the WCF algorithm has outstanding real-time performance and can effectively improve the accuracy of low-cost MEMS inertial devices.
(This article belongs to the Special Issue Data, Signal and Image Processing and Applications in Sensors)
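The fusion idea — transform each array output to the wavelet domain, fuse coefficients, threshold the details, reconstruct — can be sketched with a single-level Haar transform; the mean-based fusion rule and fixed threshold below are illustrative simplifications of the support-based fusion in the paper:

```python
import numpy as np

def haar(x):
    """One-level Haar decomposition into approximation and detail."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def ihaar(a, d):
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def wcf(signals, thresh=0.5):
    """Fuse sensor-array signals in the wavelet domain: average the
    approximations across sensors, then zero small (noise-like) detail
    coefficients -- a simplified wavelet compressive fusion sketch."""
    pairs = [haar(s) for s in signals]
    a = np.mean([p[0] for p in pairs], axis=0)
    d = np.mean([p[1] for p in pairs], axis=0)
    d[np.abs(d) < thresh] = 0.0        # hard threshold on the detail band
    return ihaar(a, d)

rng = np.random.default_rng(0)
truth = np.sin(np.linspace(0, 2 * np.pi, 256))       # slowly varying rate signal
array_out = truth + 0.3 * rng.normal(size=(10, 256)) # ten-gyro array outputs
fused = wcf(array_out)
err_single = np.abs(array_out[0] - truth).mean()
err_fused = np.abs(fused - truth).mean()
```

Averaging across the ten channels suppresses uncorrelated noise by roughly √10, and thresholding the detail band removes most of the remainder.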

22 pages, 10796 KiB  
Article
A Line Matching Method Based on Multiple Intensity Ordering with Uniformly Spaced Sampling
by Jing Xing, Zhenzhong Wei and Guangjun Zhang
Sensors 2020, 20(6), 1639; https://doi.org/10.3390/s20061639 - 15 Mar 2020
Cited by 8 | Viewed by 2982
Abstract
This paper presents a line matching method based on multiple intensity ordering with uniformly spaced sampling. Line segments are extracted from the image pyramid, with the aim of adapting to scale changes and addressing the fragmentation problem. The neighborhood of each line segment is divided into sub-regions adaptively according to intensity order, to overcome the difficulty caused by varying line lengths. An intensity-based local feature descriptor is introduced by constructing multiple concentric ring-shaped structures. The dimension of the descriptor is reduced significantly by uniformly spaced sampling and by dividing the sample points into several point sets, while the discriminability is improved. The performance of the proposed method was tested on public datasets covering various scenarios and compared with two other well-known line matching algorithms. The experimental results show that our method achieves superior performance under various image deformations, especially scale changes and large illumination changes, and provides many more reliable correspondences.
(This article belongs to the Special Issue Data, Signal and Image Processing and Applications in Sensors)

16 pages, 4125 KiB  
Article
Research on an Infrared Multi-Target Saliency Detection Algorithm under Sky Background Conditions
by Shaosheng Dai and Dongyang Li
Sensors 2020, 20(2), 459; https://doi.org/10.3390/s20020459 - 14 Jan 2020
Cited by 4 | Viewed by 2398
Abstract
Aiming to solve the problem of incomplete saliency detection and unclear boundaries in infrared multi-target images with different target sizes and low signal-to-noise ratio under sky background conditions, this paper proposes a multi-target saliency detection method that fuses multiple saliency maps. In such infrared images, the target areas are mainly bright and the background areas are dark. First, using the multi-scale top-hat transformation, the image is eroded and dilated to extract the difference between the bright and dark parts, and the image is reconstructed to reduce the interference of blurred sky background noise. The image obtained by the multi-scale top-hat transformation is then transformed from the time domain to the frequency domain, and the spectral residual and the phase spectrum are extracted to obtain two kinds of saliency maps through multi-scale Gaussian filtering reconstruction. In parallel, quaternion features are extracted, the phase spectrum is transformed and then reconstructed, and a third saliency map is obtained by Gaussian filtering. Finally, the three saliency maps are fused to complete the saliency detection of the infrared image. Test results on infrared video frames, evaluated with the Receiver Operating Characteristic (ROC) curve and the Area Under the Curve (AUC) index, show that the saliency maps generated by this method have clear target details and good background suppression, with AUC values above 99%. The method effectively improves multi-target saliency detection for infrared images under sky backgrounds and benefits subsequent target detection and tracking.
(This article belongs to the Special Issue Data, Signal and Image Processing and Applications in Sensors)

18 pages, 4886 KiB  
Article
Separation of Partial Discharge Sources Measured in the High-Frequency Range with HFCT Sensors Using PRPD-teff Patterns
by Ricardo Albarracín-Sánchez, Fernando Álvarez-Gómez, Carlos A. Vera-Romero and Johnatan M. Rodríguez-Serna
Sensors 2020, 20(2), 382; https://doi.org/10.3390/s20020382 - 9 Jan 2020
Cited by 17 | Viewed by 5328
Abstract
During the last two decades, on-line partial discharge (PD) measurement has proven to be a very efficient test for evaluating the insulation condition of high-voltage (HV) installations in service. Among the different PD-measuring techniques, the non-conventional electromagnetic methods are the most used, due to their effectiveness and versatility. However, there are two main difficulties to overcome when these methods are applied in on-line PD measurements: the ambient electric noise and the simultaneous presence of various types of PD or pulse-shaped signals in the HV facility under evaluation. A practical and effective method is presented to separate and identify PD sources acting simultaneously in HV systems under test. This method enables testers to carry out a first accurate diagnosis of the installation while performing the measurements in situ, with non-invasive high-frequency current transformers (HFCT) used as sensors. Real-time data acquisition reduces the time an expert must spend on postprocessing. The method was implemented in a Matlab application named PRPD-time tool, which analyzes the phase-resolved partial discharge (PRPD) pattern in combination with two types of interactive graphic representations. These graphical depictions are obtained by including a feature parameter, the effective time (teff), related to the duration of each measured pulse, as a third axis incorporated into a classical PRPD representation, yielding the PRPD-teff pattern. The resulting interactive diagrams are complementary and allow the separation and clustering of pulse sources. The effectiveness of the proposed method and the developed Matlab application for separating PD sources is demonstrated in a laboratory experiment in which various PD sources and pulse-type noise interferences were measured simultaneously.
(This article belongs to the Special Issue Data, Signal and Image Processing and Applications in Sensors)

23 pages, 8809 KiB  
Article
A Multidimensional Hyperjerk Oscillator: Dynamics Analysis, Analogue and Embedded Systems Implementation, and Its Application as a Cryptosystem
by Tsafack Nestor, Nkapkop Jean De Dieu, Kengne Jacques, Effa Joseph Yves, Abdullah M. Iliyasu and Ahmed A. Abd El-Latif
Sensors 2020, 20(1), 83; https://doi.org/10.3390/s20010083 - 21 Dec 2019
Cited by 83 | Viewed by 4170
Abstract
A lightweight image encryption algorithm is presented based on chaos induction via a 5-dimensional hyperjerk oscillator (5DHO) network. First, the dynamics of our 5DHO network are investigated and shown to exhibit up to five coexisting hidden attractors in the state space, depending exclusively on the system's initial values. A simple circuit implementation is used to validate the network's chaotic dynamical properties. Second, an Arduino UNO platform is used to confirm the usability of our oscillator in embedded system implementations. Finally, an efficient image encryption application is executed using the proposed chaotic networks, based on permutation-substitution sequences. The superior qualities of the proposed strategy are traced to the dynamic set of keys used in the substitution process, which heralds the generation of the final ciphered image. Based on the average results obtained from the entropy analysis (7.9976), NPCR values (99.62), UACI tests (33.69), and the encryption execution time for 512 × 512 images (0.1141 s), the proposed algorithm is adjudged to be fast and robust to differential and statistical attacks relative to similar approaches.
(This article belongs to the Special Issue Data, Signal and Image Processing and Applications in Sensors)
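The permutation-substitution structure can be sketched with a logistic map standing in for the 5DHO chaotic source: one chaotic sequence drives both a pixel permutation (via argsort) and an XOR keystream, and decryption regenerates both from the same key:

```python
import numpy as np

def chaos_stream(n, x0=0.7, r=3.99):
    """Logistic-map sequence -- a simple stand-in for the 5DHO chaotic source."""
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        xs[i] = x
    return xs

def encrypt(img, key=0.7):
    flat = img.ravel()
    s = chaos_stream(flat.size, x0=key)
    perm = np.argsort(s)                     # chaotic permutation (scrambling)
    keystream = (s * 256).astype(np.uint8)   # substitution bytes
    return flat[perm] ^ keystream

def decrypt(cipher, key=0.7):
    s = chaos_stream(cipher.size, x0=key)    # regenerate both from the key
    perm = np.argsort(s)
    keystream = (s * 256).astype(np.uint8)
    flat = np.empty_like(cipher)
    flat[perm] = cipher ^ keystream          # undo substitution, then scrambling
    return flat

img = np.random.default_rng(2).integers(0, 256, size=(16, 16), dtype=np.uint8)
cipher = encrypt(img)
recovered = decrypt(cipher).reshape(img.shape)
```

Note this toy scheme is for illustrating the structure only; unlike the paper's dynamic-key substitution, a fixed logistic map offers no serious security.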

22 pages, 7322 KiB  
Article
Fringe Phase-Shifting Field Based Fuzzy Quotient Space-Oriented Partial Differential Equations Filtering Method for Gaussian Noise-Induced Phase Error
by Changzhi Yu, Fang Ji, Junpeng Xue and Yajun Wang
Sensors 2019, 19(23), 5202; https://doi.org/10.3390/s19235202 - 27 Nov 2019
Cited by 5 | Viewed by 2689
Abstract
Traditional filtering methods have focused only on improving the peak signal-to-noise ratio of the single fringe pattern, ignoring the effect of filtering on phase extraction. A fringe phase-shifting field based, fuzzy quotient space-oriented partial differential equation filtering method is proposed to reduce the phase error caused by Gaussian noise while filtering. First, the phase error distribution caused by Gaussian noise is analyzed. Then, by introducing the fringe phase-shifting field and the theory of fuzzy quotient space, a modified filtering direction can be obtained adaptively, which transforms traditional single-image filtering into multi-image filtering. Finally, an improved fourth-order oriented partial differential equation filtering method with a fidelity term is established. Experiments demonstrated that the proposed method achieves a higher signal-to-noise ratio and a lower noise-induced phase error, while also retaining more edge details.
(This article belongs to the Special Issue Data, Signal and Image Processing and Applications in Sensors)
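A simplified, isotropic version of a fourth-order PDE filter with a fidelity term can be written as gradient descent on E(u) = Σ(Δu)² + λΣ(u − f)²; the paper's method additionally adapts the filtering direction via the fringe phase-shifting field and fuzzy quotient space, which this sketch omits:

```python
import numpy as np

def lap(u):
    """Five-point Laplacian with periodic boundaries."""
    return (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
            np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)

def fourth_order_filter(f, lam=0.1, dt=1e-3, steps=200):
    """Gradient descent on E(u) = sum (lap u)^2 + lam * sum (u - f)^2 --
    an isotropic fourth-order PDE with fidelity, standing in for the
    paper's direction-adaptive (oriented) version."""
    u = f.copy()
    for _ in range(steps):
        u -= dt * (2.0 * lap(lap(u)) + 2.0 * lam * (u - f))
    return u

def energy(u, f, lam=0.1):
    return (lap(u) ** 2).sum() + lam * ((u - f) ** 2).sum()

rng = np.random.default_rng(0)
x = np.linspace(0, 2 * np.pi, 64, endpoint=False)
fringe = np.sin(x)[None, :] * np.ones((64, 1))       # ideal fringe pattern
noisy = fringe + 0.2 * rng.normal(size=fringe.shape) # Gaussian noise
filtered = fourth_order_filter(noisy)
```

Because the sinusoidal fringe has a tiny Laplacian at this sampling, the smoothness term removes mostly noise; the fidelity term keeps the result anchored to the data.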

Other


12 pages, 3801 KiB  
Letter
Depth-Dependent High Distortion Lens Calibration
by Carlos Ricolfe-Viala and Alicia Esparza
Sensors 2020, 20(13), 3695; https://doi.org/10.3390/s20133695 - 1 Jul 2020
Cited by 3 | Viewed by 3022
Abstract
Accurate correction of highly distorted images is a very complex problem. Several lens distortion models exist and are adjusted using different techniques. Usually, regardless of the chosen model, a single distortion model is adjusted to undistort images, and the camera-to-calibration-template distance is not considered. Several authors have presented the depth dependency of lens distortion, but none of them has treated highly distorted images. This paper presents an analysis of the depth dependency of distortion in strongly distorted images. The division model, which can represent high distortion with only one parameter, is modified to obtain a depth-dependent high-distortion lens model. The proposed calibration method obtains more accurate results than existing calibration methods.
(This article belongs to the Special Issue Data, Signal and Image Processing and Applications in Sensors)
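The one-parameter division model, and a hypothetical linear depth law for its parameter, can be sketched as follows; the paper fits its own depth dependency, so the interpolation here is purely illustrative:

```python
import numpy as np

def undistort(points, k, center=(0.0, 0.0)):
    """One-parameter division model: p_u = c + (p_d - c) / (1 + k * r_d^2)."""
    c = np.asarray(center)
    d = points - c
    r2 = (d ** 2).sum(axis=1, keepdims=True)
    return c + d / (1.0 + k * r2)

def distort(points, k, center=(0.0, 0.0)):
    """Inverse mapping (for simulation): r_u = r_d / (1 + k r_d^2) gives the
    quadratic k r_u r_d^2 - r_d + r_u = 0; take the small-distortion root."""
    c = np.asarray(center)
    d = points - c
    ru = np.sqrt((d ** 2).sum(axis=1, keepdims=True))
    rd = np.where(ru > 0,
                  (1 - np.sqrt(1 - 4 * k * ru ** 2)) / (2 * k * ru + 1e-12),
                  0.0)
    scale = np.where(ru > 0, rd / (ru + 1e-12), 1.0)
    return c + d * scale

def depth_dependent_k(z, z1, k1, z2, k2):
    """Hypothetical linear interpolation of the division-model parameter
    with the camera-to-template depth z."""
    return k1 + (k2 - k1) * (z - z1) / (z2 - z1)

pts = np.array([[0.2, 0.1], [-0.3, 0.4], [0.0, 0.0]])
k = depth_dependent_k(z=0.5, z1=0.0, k1=0.1, z2=1.0, k2=0.3)
distorted = distort(pts, k)
recovered = undistort(distorted, k)
```

Applying the distortion and then the division-model correction with the same depth-dependent parameter recovers the original points, which is the consistency the calibration exploits.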
