
Special Issue "Recent Advances in Artificial Intelligence and Deep Learning for Sensor Information Fusion"

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensor Networks".

Deadline for manuscript submissions: closed (30 January 2019)

Special Issue Editors

Guest Editor
Prof. Dr. Han-Chieh Chao

Department of Electrical Engineering, National Dong Hwa University, Taiwan
Phone: +886-3-9357400 - 251
Fax: +886-3-9354238
Interests: wireless network; mobile computing; IoT; bacteria-inspired network
Guest Editor
Dr. Chi-Yuan Chen

Department of Computer Science and Information Engineering, National Ilan University, Taiwan
Interests: mobile communication; network security; quantum communication
Guest Editor
Dr. Fan-Hsun Tseng

Department of Technology Application and Human Resource Development, National Taiwan Normal University, Taiwan
Interests: cloud computing; IoT applications; 5G mobile networks

Special Issue Information

Dear Colleagues,

Artificial Intelligence (AI) has attracted the attention of researchers across computer science since AlphaGo Master defeated the then world-number-one Go player in three consecutive games. A few months later, DeepMind announced AlphaGo Zero, a new version of AlphaGo Master created without using any human knowledge. After 40 days of reinforcement learning of its neural network, AlphaGo Zero surpassed the Master. This brilliant performance and these outstanding breakthroughs in AI have brought machine learning and deep learning back into focus.

On the other hand, the number of sensors has increased rapidly with the development of the Internet of Things (IoT) over the past decade. These numerous sensors and Internet-connected devices generate vast quantities of data that must be processed. In the past few years, a great number of systems, algorithms, mechanisms, and methodologies have been proposed for sensor information fusion. However, sensor networks and information fusion still call for more intelligent and learning-based fusion techniques, system architectures, sensor chips, fusion processing, data analysis, message control algorithms, sensing methods, and so on.

This Special Issue will focus on AI and deep learning for sensor information fusion. It will also present a holistic view of research challenges and opportunities in the emerging area of deep learning and machine learning for sensor information fusion. Research papers that describe innovative ideas and solutions for intelligent and learning-based sensor information fusion are solicited.

Topics of interest include, but are not limited to:

  • AI and deep learning for sensor information fusion system
  • System architecture of AI sensors and multi-sensors
  • Learning model for sensor information fusion
  • Intelligent and learning-based fusion techniques for multi-sensor system
  • AI and deep learning for sensor fusion decision
  • AI and deep learning for sensor applications
  • Deep learning and machine learning for sensor message control
  • Intelligent and learning-based sensor communication technology
  • Learning-based fusion processing for sensor and multi-sensor system
  • Intelligent data analysis for sensor information fusion
  • AI and deep learning for data mining in IoT
  • AI chips for sensors, UAVs, home appliances and mobile devices

Prof. Dr. Han-Chieh Chao
Dr. Chi-Yuan Chen
Dr. Fan-Hsun Tseng
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Artificial Intelligence
  • Deep Learning
  • Sensor Networks
  • Information Fusion

Published Papers (16 papers)


Research

Open Access Article Sensors Information Fusion System with Fault Detection Based on Multi-Manifold Regularization Neighborhood Preserving Embedding
Sensors 2019, 19(6), 1440; https://doi.org/10.3390/s19061440
Received: 28 January 2019 / Revised: 8 March 2019 / Accepted: 21 March 2019 / Published: 23 March 2019
PDF Full-text (4760 KB) | HTML Full-text | XML Full-text
Abstract
Electrical drive systems play an increasingly important role in high-speed trains. The whole system is equipped with sensors that support complicated information fusion, which means the performance of the system ought to be monitored, especially during incipient changes. In such situations, it is crucial to distinguish a faulty state from the observed normal state because of the dire consequences closed-loop faults might bring. In this research, an optimal neighborhood preserving embedding (NPE) method called multi-manifold regularization NPE (MMRNPE) is proposed to detect various faults in an electrical drive sensor information fusion system. By taking locality preserving embedding into account, the proposed methodology combines the Euclidean distances of both designated points and paired points, which guarantees access to both local and global sensor information. Meanwhile, this structure fuses several manifolds to extract their own features. In addition, parameters are allocated to the diverse manifolds to seek an optimal combination of manifolds, while the information entropy of the parameters is used to avoid over-weighting any single manifold. Moreover, an experimental platform was built to validate the MMRNPE approach and demonstrate the effectiveness of the fault detection. Results and observations show that the proposed MMRNPE offers a better fault detection representation in comparison with NPE. Full article
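Neighborhood-based fault detection of this kind can be illustrated with a much simpler stand-in (not the authors' MMRNPE): a monitoring statistic built from each sample's mean distance to its nearest neighbors in normal training data, with a detection threshold set from that data. All data and parameters below are invented for illustration.

```python
# Illustrative neighborhood-based fault detection (not the authors' MMRNPE):
# a sample is flagged as faulty when its mean Euclidean distance to its
# k nearest training samples exceeds a threshold fitted on normal data.
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def knn_statistic(sample, train, k=3):
    """Mean distance from `sample` to its k nearest neighbours in `train`."""
    dists = sorted(euclidean(sample, t) for t in train)
    return sum(dists[:k]) / k

def fit_threshold(train, k=3, quantile=0.95):
    """Leave-one-out statistics on normal data give a detection threshold."""
    stats = sorted(
        knn_statistic(s, train[:i] + train[i + 1:], k)
        for i, s in enumerate(train)
    )
    return stats[min(int(quantile * len(stats)), len(stats) - 1)]

def is_faulty(sample, train, threshold, k=3):
    return knn_statistic(sample, train, k) > threshold

# Toy normal operating data (two sensor channels) and the fitted threshold.
normal = [(1.0 + 0.01 * i, 2.0 - 0.01 * i) for i in range(20)]
threshold = fit_threshold(normal)
```

A drifted sample far from the normal manifold then scores well above the threshold, while samples from the normal region stay below it.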

Open Access Article AI-Based Sensor Information Fusion for Supporting Deep Supervised Learning
Sensors 2019, 19(6), 1345; https://doi.org/10.3390/s19061345
Received: 30 January 2019 / Revised: 26 February 2019 / Accepted: 11 March 2019 / Published: 18 March 2019
PDF Full-text (696 KB) | HTML Full-text | XML Full-text
Abstract
In recent years, artificial intelligence (AI) and its subarea of deep learning have drawn the attention of many researchers. At the same time, advances in technologies enable the generation or collection of large amounts of valuable data (e.g., sensor data) from various sources in different applications, such as those for the Internet of Things (IoT), which in turn aims towards the development of smart cities. With the availability of sensor data from various sources, sensor information fusion is in demand for effective integration of big data. In this article, we present an AI-based sensor-information fusion system for supporting deep supervised learning of transportation data generated and collected from various types of sensors, including remote sensed imagery for the geographic information system (GIS), accelerometers, as well as sensors for the global navigation satellite system (GNSS) and global positioning system (GPS). The discovered knowledge and information returned from our system provides analysts with a clearer understanding of trajectories or mobility of citizens, which in turn helps to develop better transportation models to achieve the ultimate goal of smarter cities. Evaluation results show the effectiveness and practicality of our AI-based sensor information fusion system for supporting deep supervised learning of big transportation data. Full article
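Fusing heterogeneous sensor streams (GPS fixes, accelerometer readings, and so on) typically starts with aligning records in time. The sketch below is an assumed, minimal version of that step, not the system described above: each GPS fix is joined with the accelerometer reading closest in time, subject to a maximum allowed gap.

```python
# Minimal sketch of multi-sensor record fusion by timestamp alignment
# (an assumption for illustration; the paper's pipeline is more elaborate).
import bisect

def fuse_by_timestamp(gps, accel, max_gap=1.0):
    """gps/accel: lists of (t, value) sorted by t. Returns fused records."""
    times = [t for t, _ in accel]
    fused = []
    for t, pos in gps:
        i = bisect.bisect_left(times, t)
        # Candidates: the accelerometer readings just before and just after t.
        best = None
        for j in (i - 1, i):
            if 0 <= j < len(accel):
                gap = abs(accel[j][0] - t)
                if best is None or gap < best[0]:
                    best = (gap, accel[j][1])
        if best is not None and best[0] <= max_gap:
            fused.append({"t": t, "pos": pos, "accel": best[1]})
    return fused

# Invented readings: the last GPS fix has no accelerometer sample nearby.
gps = [(0.0, (10, 20)), (2.0, (11, 21)), (9.0, (12, 22))]
accel = [(0.1, 0.5), (1.9, 0.7), (5.0, 0.2)]
records = fuse_by_timestamp(gps, accel)
```

Records with no nearby reading are dropped rather than padded, one common design choice when downstream models cannot tolerate stale sensor values.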

Open Access Article Validating Deep Neural Networks for Online Decoding of Motor Imagery Movements from EEG Signals
Sensors 2019, 19(1), 210; https://doi.org/10.3390/s19010210
Received: 25 September 2018 / Revised: 18 December 2018 / Accepted: 26 December 2018 / Published: 8 January 2019
Cited by 1 | PDF Full-text (2931 KB) | HTML Full-text | XML Full-text
Abstract
Non-invasive, electroencephalography (EEG)-based brain-computer interfaces (BCIs) on motor imagery movements translate the subject’s motor intention into control signals by classifying the EEG patterns caused by different imagination tasks, e.g., hand movements. This type of BCI has been widely studied and used as an alternative mode of communication and environmental control for disabled patients, such as those suffering from a brainstem stroke or a spinal cord injury (SCI). Notwithstanding the success of traditional machine learning methods in classifying EEG signals, these methods still rely on hand-crafted features. The extraction of such features is a difficult task due to the high non-stationarity of EEG signals, which is a major cause of the stagnating progress in classification performance. Remarkable advances in deep learning methods allow end-to-end learning without any feature engineering, which could benefit BCI motor imagery applications. We developed three deep learning models: (1) a long short-term memory network (LSTM); (2) a spectrogram-based convolutional neural network model (CNN); and (3) a recurrent convolutional neural network (RCNN), for decoding motor imagery movements directly from raw EEG signals without any manual feature engineering. Results were evaluated on our own publicly available EEG data collected from 20 subjects and on an existing dataset known as the 2b EEG dataset from “BCI Competition IV”. Overall, better classification performance was achieved with deep learning models compared to state-of-the-art machine learning techniques, which could chart a route ahead for developing new robust techniques for EEG signal decoding. We underpin this point by demonstrating the successful real-time control of a robotic arm using our CNN-based BCI. Full article
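The spectrogram input used by a CNN model of this kind can be sketched with short-time Fourier analysis. The windowing parameters and toy signal below are invented, and a real pipeline would use an FFT library rather than this plain DFT; the point is only the structure: overlapping windows, one magnitude spectrum per window.

```python
# Hedged sketch of spectrogram features for a CNN-based EEG decoder
# (illustrative parameters, not the paper's actual preprocessing).
import cmath, math

def dft_magnitudes(frame):
    """Normalised magnitude spectrum of one window (bins 0..n/2-1)."""
    n = len(frame)
    return [
        abs(sum(x * cmath.exp(-2j * math.pi * k * i / n)
                for i, x in enumerate(frame))) / n
        for k in range(n // 2)
    ]

def spectrogram(signal, win=8, hop=4):
    """Rows = time frames, columns = frequency bins."""
    return [
        dft_magnitudes(signal[s:s + win])
        for s in range(0, len(signal) - win + 1, hop)
    ]

# A sinusoid with 2 cycles per window concentrates energy in bin 2.
sig = [math.sin(2 * math.pi * 2 * i / 8) for i in range(32)]
spec = spectrogram(sig)
```

The resulting time-frequency grid is what a 2D CNN would consume in place of the raw 1D signal.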

Open Access Article Road Surface Classification Using a Deep Ensemble Network with Sensor Feature Selection
Sensors 2018, 18(12), 4342; https://doi.org/10.3390/s18124342
Received: 29 November 2018 / Revised: 6 December 2018 / Accepted: 7 December 2018 / Published: 9 December 2018
PDF Full-text (1770 KB) | HTML Full-text | XML Full-text
Abstract
Deep learning is a fast-growing field of research, in particular for autonomous applications. In this study, a deep learning network based on various sensor data is proposed for identifying the road on which the vehicle is driving. A Long Short-Term Memory (LSTM) unit and ensemble learning are utilized for the network design, and a feature selection technique is applied so that unnecessary sensor data can be excluded without a loss of performance. Real vehicle experiments were carried out for the learning and verification of the proposed deep learning structure. The classification performance was verified on four different test roads. The proposed network achieves a classification accuracy of 94.6% on the test data. Full article
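The feature selection idea above (excluding unnecessary sensor data without a loss of performance) can be sketched as greedy backward elimination against a scoring function. The scorer below is a toy stand-in; the paper evaluates subsets with its LSTM ensemble instead, and the feature names are invented.

```python
# Greedy backward elimination: drop a feature whenever the score does not
# decrease. A stand-in scorer replaces the paper's LSTM-ensemble accuracy.
def backward_eliminate(features, score):
    kept = list(features)
    improved = True
    while improved and len(kept) > 1:
        improved = False
        base = score(kept)
        for f in list(kept):
            trial = [k for k in kept if k != f]
            if score(trial) >= base:   # removal did not hurt: accept it
                kept = trial
                improved = True
                break
    return kept

# Toy scorer: only "accel" and "gyro" carry information; extras add a
# small cost, mimicking sensors that contribute nothing but noise.
def toy_score(subset):
    useful = len({"accel", "gyro"} & set(subset))
    return useful - 0.01 * len(subset)

kept = backward_eliminate(["accel", "gyro", "mic", "temp"], toy_score)
```

With the toy scorer, the uninformative channels are eliminated and only the informative pair survives.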

Open Access Article Detection and Validation of Tow-Away Road Sign Licenses through Deep Learning Methods
Sensors 2018, 18(12), 4147; https://doi.org/10.3390/s18124147
Received: 1 October 2018 / Revised: 14 November 2018 / Accepted: 20 November 2018 / Published: 26 November 2018
PDF Full-text (2283 KB) | HTML Full-text | XML Full-text
Abstract
This work presents the practical design of a system that addresses the problem of identification and validation of private no-parking road signs. This issue is very important for public city administrations since many people, after receiving a code that identifies the sign at the entrance of their private car garage as valid, forget to renew the code's validity through the payment of a city tax, causing significant revenue losses for the public administration. The goal of the system is twofold since, after recognition of the official road sign pattern, its validity must be checked by extracting the code placed in a specific sub-region inside it. Although a lot of work on road signs has been carried out, a complete benchmark dataset that also considers the particular setting of Italian law is not yet available for comparison; thus, the second goal of this work is to provide experimental results, exploiting machine learning and deep learning techniques, that can be satisfactorily used in industrial applications. Full article

Open Access Article Roof Shape Classification from LiDAR and Satellite Image Data Fusion Using Supervised Learning
Sensors 2018, 18(11), 3960; https://doi.org/10.3390/s18113960
Received: 9 October 2018 / Revised: 3 November 2018 / Accepted: 13 November 2018 / Published: 15 November 2018
Cited by 1 | PDF Full-text (2579 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
Geographic information systems (GIS) provide accurate maps of terrain, roads, waterways, and building footprints and heights. Aircraft, particularly small unmanned aircraft systems (UAS), can exploit this and additional information such as building roof structure to improve navigation accuracy and safely perform contingency landings, particularly in urban regions. However, building roof structure is not fully provided in maps. This paper proposes a method to automatically label building roof shape from publicly available GIS data. Satellite imagery and airborne LiDAR data are processed and manually labeled to create a diverse annotated roof image dataset for small to large urban cities. Multiple convolutional neural network (CNN) architectures are trained and tested, with the best performing networks providing a condensed feature set for support vector machine and decision tree classifiers. Satellite image and LiDAR data fusion is shown to provide greater classification accuracy than using either data type alone. Model confidence thresholds are adjusted, leading to significant increases in model precision. Networks trained from roof data in Witten, Germany and Manhattan (New York City) are evaluated on independent data from these cities and Ann Arbor, Michigan. Full article
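Confidence thresholding of the kind mentioned above trades coverage for precision: predictions below a confidence cutoff are discarded, and precision is measured on what remains. A minimal sketch with invented prediction data:

```python
# Each prediction is (confidence, was it correct?). Raising the threshold
# keeps only confident predictions, typically raising precision while
# lowering coverage. All numbers below are made up for illustration.
def precision_at_threshold(preds, thr):
    kept = [ok for conf, ok in preds if conf >= thr]
    if not kept:
        return None, 0.0
    coverage = len(kept) / len(preds)
    return sum(kept) / len(kept), coverage

preds = [(0.95, True), (0.90, True), (0.80, False),
         (0.70, True), (0.60, False), (0.55, False)]

p_all, c_all = precision_at_threshold(preds, 0.0)   # keep everything
p_hi, c_hi = precision_at_threshold(preds, 0.85)    # confident only
```

Here the unthresholded precision is 0.5 at full coverage, while the 0.85 cutoff yields perfect precision on a third of the samples, the trade-off the abstract describes.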

Open Access Article Generative Adversarial Networks-Based Semi-Supervised Automatic Modulation Recognition for Cognitive Radio Networks
Sensors 2018, 18(11), 3913; https://doi.org/10.3390/s18113913
Received: 14 September 2018 / Revised: 7 November 2018 / Accepted: 9 November 2018 / Published: 13 November 2018
Cited by 1 | PDF Full-text (5701 KB) | HTML Full-text | XML Full-text
Abstract
With the recent explosive growth of deep learning, automatic modulation recognition has undergone rapid development. Most of the newly proposed methods depend on large numbers of labeled samples. We are committed to using fewer labeled samples to perform automatic modulation recognition in the cognitive radio domain. Here, a semi-supervised learning method based on adversarial training is proposed, called the signal classifier generative adversarial network. Most prior methods based on this technology involve computer vision applications. However, we improve the existing network structure of a generative adversarial network by adding an encoder network and a signal spatial transform module, allowing our framework to address radio signal processing tasks more efficiently. These two technical improvements effectively avoid the nonconvergence and mode collapse problems caused by the complexity of radio signals. Simulation results show that, compared with well-known deep learning methods, our method improves the classification accuracy on a synthetic radio frequency dataset by 0.1% to 12%. In addition, we verify the advantages of our method in a semi-supervised scenario and obtain a significant increase in accuracy compared with traditional semi-supervised learning methods. Full article

Open Access Article A Multisensor Fusion Method for Tool Condition Monitoring in Milling
Sensors 2018, 18(11), 3866; https://doi.org/10.3390/s18113866
Received: 29 September 2018 / Revised: 1 November 2018 / Accepted: 8 November 2018 / Published: 10 November 2018
Cited by 2 | PDF Full-text (8382 KB) | HTML Full-text | XML Full-text
Abstract
Tool fault diagnosis in numerical control (NC) machines plays a significant role in ensuring manufacturing quality. Tool condition monitoring (TCM) based on multiple sensors can provide more information related to tool condition, but it can also increase the risk that effective information is overwhelmed by redundant information. Thus, how to obtain the most effective feature information from multisensor signals is currently a hot topic. However, most current feature selection methods only consider the correlation between the feature parameters and the tool state and do not analyze the influence of the feature parameters on prediction accuracy. In this paper, a multisensor global feature extraction method for TCM in the milling process is investigated. Several statistical parameters in the time, frequency, and time–frequency (wavelet packet transform) domains of multiple sensors are selected as an alternative parameter set. The monitoring model is executed by a kernel-based extreme learning machine (KELM), and a modified genetic algorithm (GA) is applied to search for the optimal parameter combinations in a two-objective optimization model to achieve the highest prediction precision. The experimental results show that the proposed method outperforms the Pearson’s correlation coefficient (PCC)-based, minimal redundancy and maximal relevance (mRMR)-based, and principal component analysis (PCA)-based feature selection methods. Full article
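A GA-based search over feature combinations can be illustrated with a deliberately small, single-objective genetic algorithm over bitmask-style feature subsets. The fitness function below is a toy stand-in for the KELM prediction accuracy used in the paper, and all GA hyperparameters are invented.

```python
# Tiny GA over feature subsets (illustrative; the paper uses a modified
# two-objective GA with a KELM model). A subset is a frozenset of feature
# indices; fitness rewards informative features and penalises subset size.
import random

def fitness(mask, informative=frozenset({0, 3})):
    hits = sum(1 for i in mask if i in informative)
    return hits - 0.05 * len(mask)

def ga_select(n_features=6, pop_size=12, generations=40, seed=1):
    rng = random.Random(seed)
    pop = [frozenset(i for i in range(n_features) if rng.random() < 0.5)
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]          # elitist selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = rng.sample(survivors, 2)
            child = set(a & b)
            for i in a ^ b:                      # uniform crossover
                if rng.random() < 0.5:
                    child.add(i)
            if rng.random() < 0.2:               # point mutation
                child.symmetric_difference_update({rng.randrange(n_features)})
            children.append(frozenset(child))
        pop = survivors + children
    return max(pop, key=fitness)

best = ga_select()
```

With elitism and mutation, the search reliably converges on a subset containing both informative features.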

Open Access Article Sensor Information Fusion by Integrated AI to Control Public Emotion in a Cyber-Physical Environment
Sensors 2018, 18(11), 3767; https://doi.org/10.3390/s18113767
Received: 13 September 2018 / Revised: 22 October 2018 / Accepted: 29 October 2018 / Published: 4 November 2018
Cited by 1 | PDF Full-text (12140 KB) | HTML Full-text | XML Full-text
Abstract
The cyber-physical system (CPS) is a next-generation smart system that combines computing with physical space. It has been applied in various fields because the uncertainty of the physical world can be ideally controlled using cyber technology. In terms of environmental control, studies have been conducted to enhance the effectiveness of a service by inducing ideal emotions in the service space. This paper proposes a CPS control system for inducing emotion based on multiple sensors. The CPS can extend the constrained environmental sensors of the physical space in various ways by combining the virtual space with the physical space. The cyber space is constructed in a Unity 3D space that can be experienced through virtual reality devices. We collect the temperature, humidity, dust concentration, and current emotion in the physical space as environmental control elements, and control the illumination, color temperature, video, sound, and volume in the cyber space. The proposed system consists of an emotion prediction module using modular Bayesian networks and an optimal stimulus decision module for driving the predicted emotion to the target emotion based on utility theory and reinforcement learning. To verify the system, its performance is evaluated using data collected from real situations. Full article
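An optimal stimulus decision of the kind described above can be sketched as expected-utility maximization: given outcome distributions for each stimulus, pick the one whose expected emotion is closest to the target. The state names, stimuli, and transition table below are entirely invented; the paper's modules use modular Bayesian networks and reinforcement learning instead.

```python
# Expected-utility stimulus selection (illustrative stand-in for the
# paper's decision module). Emotions are scalar valences in [-1, 1].
def expected_emotion(current, stimulus, transitions):
    """Expected valence after applying `stimulus` in state `current`."""
    return sum(p * v for v, p in transitions[(current, stimulus)])

def best_stimulus(current, target, stimuli, transitions):
    return min(
        stimuli,
        key=lambda s: abs(expected_emotion(current, s, transitions) - target),
    )

# Invented (valence outcome, probability) pairs per (state, stimulus).
transitions = {
    ("stressed", "calm_light"): [(0.6, 0.7), (0.2, 0.3)],
    ("stressed", "loud_music"): [(0.9, 0.2), (-0.4, 0.8)],
}
choice = best_stimulus("stressed", 0.5,
                       ["calm_light", "loud_music"], transitions)
```

With these numbers the calm lighting stimulus is chosen, since its expected valence (0.48) is far closer to the 0.5 target than the risky loud-music option.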

Open Access Article An End-to-End Deep Reinforcement Learning-Based Intelligent Agent Capable of Autonomous Exploration in Unknown Environments
Sensors 2018, 18(10), 3575; https://doi.org/10.3390/s18103575
Received: 14 September 2018 / Revised: 14 September 2018 / Accepted: 15 October 2018 / Published: 22 October 2018
PDF Full-text (5412 KB) | HTML Full-text | XML Full-text
Abstract
In recent years, machine learning (and, as a result, artificial intelligence) has experienced considerable progress. As a result, robots in different shapes and with different purposes have found their way into our everyday life. These robots, which have been developed with the goal of human companionship, are here to help us in our everyday and routine life. They are different from the previous family of robots that were used in factories and static environments: these new robots are social robots that need to be able to adapt to our environment by themselves and to learn from their own experiences. In this paper, we contribute to the creation of robots with a high degree of autonomy, which is a must for social robots. We create an algorithm capable of autonomous exploration in and adaptation to unknown environments and implement it in a simulated robot. We then go further than simulation and implement our algorithm in a real robot, in which our sensor fusion method is able to overcome real-world noise and perform robust exploration. Full article

Open Access Article fPADnet: Small and Efficient Convolutional Neural Network for Presentation Attack Detection
Sensors 2018, 18(8), 2532; https://doi.org/10.3390/s18082532
Received: 20 June 2018 / Revised: 19 July 2018 / Accepted: 30 July 2018 / Published: 2 August 2018
PDF Full-text (3668 KB) | HTML Full-text | XML Full-text
Abstract
The rapid growth of fingerprint authentication-based applications makes presentation attack detection, which is the detection of fake fingerprints, become a crucial problem. There have been numerous attempts to deal with this problem; however, the existing algorithms have a significant trade-off between accuracy and computational complexity. This paper proposes a presentation attack detection method using Convolutional Neural Networks (CNN), named fPADnet (fingerprint Presentation Attack Detection network), which consists of Fire and Gram-K modules. Fire modules of fPADnet are designed following the structure of the SqueezeNet Fire module. Gram-K modules, which are derived from the Gram matrix, are used to extract texture information since texture can provide useful features in distinguishing between real and fake fingerprints. Combining Fire and Gram-K modules results in a compact and efficient network for fake fingerprint detection. Experimental results on three public databases, including LivDet 2011, 2013 and 2015, show that fPADnet can achieve an average detection error rate of 2.61%, which is comparable to the state-of-the-art accuracy, while the network size and processing time are significantly reduced. Full article
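The Gram-K modules above are derived from the Gram matrix, a classic texture descriptor: entry (i, j) is the inner product of feature channels i and j over all spatial positions, capturing which channels co-activate regardless of location. A minimal stdlib version (the channel values below are invented):

```python
# Gram matrix of a set of feature maps, each flattened to a 1-D list of
# activations. G[i][j] = <channel_i, channel_j>, a texture statistic.
def gram_matrix(feature_maps):
    n = len(feature_maps)
    return [
        [sum(a * b for a, b in zip(feature_maps[i], feature_maps[j]))
         for j in range(n)]
        for i in range(n)
    ]

channels = [[1.0, 2.0], [0.5, -1.0]]
g = gram_matrix(channels)
```

Because spatial order is summed out, the Gram matrix responds to texture statistics rather than layout, which is why it helps distinguish real from fake fingerprint surfaces.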

Open Access Article A Combined Deep Learning GRU-Autoencoder for the Early Detection of Respiratory Disease in Pigs Using Multiple Environmental Sensors
Sensors 2018, 18(8), 2521; https://doi.org/10.3390/s18082521
Received: 22 June 2018 / Revised: 25 July 2018 / Accepted: 31 July 2018 / Published: 2 August 2018
Cited by 1 | PDF Full-text (1898 KB) | HTML Full-text | XML Full-text
Abstract
We designed and evaluated an assumption-free, deep learning-based methodology for animal health monitoring, specifically for the early detection of respiratory disease in growing pigs based on environmental sensor data. Two recurrent neural networks (RNNs), each comprising gated recurrent units (GRUs), were used to create an autoencoder (GRU-AE) into which environmental data, collected from a variety of sensors, was processed to detect anomalies. An autoencoder is a type of network trained to reconstruct the patterns it is fed as input. By training the GRU-AE using environmental data that did not lead to an occurrence of respiratory disease, data that did not fit the pattern of “healthy environmental data” had a greater reconstruction error. All reconstruction errors were labelled as either normal or anomalous using threshold-based anomaly detection optimised with particle swarm optimisation (PSO), from which alerts are raised. The results from the GRU-AE method outperformed state-of-the-art techniques, raising alerts when such predictions deviated from the actual observations. The results show that a change in the environment can result in occurrences of pigs showing symptoms of respiratory disease within 1–7 days, meaning that there is a period of time during which their keepers can act to mitigate the negative effect of respiratory diseases, such as porcine reproductive and respiratory syndrome (PRRS), a common and destructive disease endemic in pigs. Full article
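The alerting step above (threshold-based anomaly detection on reconstruction errors) can be sketched as follows. A simple quantile threshold stands in for the PSO-optimised one, and all error values are invented.

```python
# Reconstruction-error anomaly detection: errors on healthy data set a
# threshold; later readings whose error exceeds it raise an alert.
def reconstruction_error(x, x_hat):
    """Mean squared error between an input and its reconstruction."""
    return sum((a - b) ** 2 for a, b in zip(x, x_hat)) / len(x)

def threshold_from_healthy(errors, quantile=0.95):
    s = sorted(errors)
    return s[min(int(quantile * len(s)), len(s) - 1)]

def alerts(errors, thr):
    """Indices of readings whose reconstruction error exceeds thr."""
    return [i for i, e in enumerate(errors) if e > thr]

healthy_errors = [0.01, 0.02, 0.015, 0.03, 0.012, 0.018]
thr = threshold_from_healthy(healthy_errors)
live = [0.02, 0.5, 0.019, 0.7]
```

Readings that the autoencoder reconstructs poorly (here the 0.5 and 0.7 errors) are exactly those that do not fit the learned pattern of healthy environmental data.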

Open Access Article Staged Incentive and Punishment Mechanism for Mobile Crowd Sensing
Sensors 2018, 18(7), 2391; https://doi.org/10.3390/s18072391
Received: 6 June 2018 / Revised: 11 July 2018 / Accepted: 16 July 2018 / Published: 23 July 2018
PDF Full-text (1281 KB) | HTML Full-text | XML Full-text
Abstract
Having an incentive mechanism is crucial for recruiting mobile users to participate in a sensing task and for ensuring that participants provide high-quality sensing data. In this paper, we investigate a staged incentive and punishment mechanism for mobile crowd sensing. We first divide the incentive process into two stages: the recruiting stage and the sensing stage. In the recruiting stage, we introduce a payment incentive coefficient and design a Stackelberg-based game method; participants are recruited via game interaction. In the sensing stage, we propose a sensing data utility algorithm for the interaction. After the sensing task, the winners can be filtered out using data utility, which is affected by time–space correlation. In particular, the participants’ reputation accumulation is carried out based on data utility, and a punishment mechanism is presented to reduce the waste of payment costs caused by malicious participants. Finally, we conduct an extensive study of our solution based on realistic data. Extensive experiments show that, compared to the existing positive auction incentive mechanism (PAIM) and reverse auction incentive mechanism (RAIM), our proposed staged incentive mechanism (SIM) effectively extends incentive behavior from the recruiting stage to the sensing stage. It not only provides a real-time incentive in both the recruiting and sensing stages but also improves the utility of the sensing data. Full article
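A Stackelberg-based recruiting stage can be illustrated with a toy leader-follower game: the platform (leader) announces a payment, each participant (follower) best-responds with a sensing effort, and the platform picks the payment that maximises its own utility. All payoff functions below are assumptions for illustration, not the paper's model.

```python
# Toy Stackelberg game for crowd-sensing recruitment. The follower's
# quadratic effort cost gives a closed-form best response; the leader
# then optimises over candidate payments by direct search.
def follower_effort(payment, cost=1.0):
    # Follower maximises payment * e - cost * e**2  ->  e* = payment / (2c).
    return payment / (2 * cost)

def leader_utility(payment, value=4.0):
    e = follower_effort(payment)
    return value * e - payment * e   # value of sensed data minus payment

def best_payment(candidates):
    return max(candidates, key=leader_utility)

p_star = best_payment([i / 10 for i in range(1, 41)])
```

With these assumed payoffs the leader's utility is 2p - p^2/2, so the optimal announced payment is 2.0, inducing one unit of sensing effort.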

Open Access Article Electroencephalography Based Fusion Two-Dimensional (2D)-Convolution Neural Networks (CNN) Model for Emotion Recognition System
Sensors 2018, 18(5), 1383; https://doi.org/10.3390/s18051383
Received: 20 March 2018 / Revised: 25 April 2018 / Accepted: 26 April 2018 / Published: 30 April 2018
Cited by 2 | PDF Full-text (3408 KB) | HTML Full-text | XML Full-text
Abstract
The purpose of this study is to improve human emotional classification accuracy using a convolutional neural network (CNN) model and to suggest an overall method to classify emotion based on multimodal data. We improved classification performance by combining electroencephalogram (EEG) and galvanic skin response (GSR) signals. GSR signals are preprocessed using the zero-crossing rate. Sufficient EEG feature extraction can be obtained through the CNN. Therefore, we propose a suitable CNN model for feature extraction by tuning the hyperparameters of the convolutional filters. The EEG signal is preprocessed prior to convolution by a wavelet transform, considering time and frequency simultaneously. We use the Database for Emotion Analysis Using Physiological Signals (DEAP) open dataset to verify the proposed process, achieving 73.4% accuracy and showing significant performance improvement over the current best practice models. Full article
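The two preprocessing steps named in the abstract can be illustrated in a few lines. This is a generic sketch, not the paper's pipeline: frame length, normalization, and the choice of a one-level Haar transform (the simplest wavelet) are illustrative assumptions.

```python
import math

def zero_crossing_rate(signal, frame_len=128):
    """Fraction of consecutive-sample sign changes per frame -- a simple
    preprocessing feature for GSR (the paper's exact settings may differ)."""
    rates = []
    for start in range(0, len(signal) - frame_len + 1, frame_len):
        frame = signal[start:start + frame_len]
        crossings = sum(
            1 for a, b in zip(frame, frame[1:]) if (a >= 0) != (b >= 0)
        )
        rates.append(crossings / (frame_len - 1))
    return rates

def haar_dwt(x):
    """One-level Haar transform: approximation (low-pass) and detail
    (high-pass) coefficients -- the simplest stand-in for wavelet
    preprocessing of EEG in both time and frequency."""
    approx = [(a + b) / math.sqrt(2) for a, b in zip(x[0::2], x[1::2])]
    detail = [(a - b) / math.sqrt(2) for a, b in zip(x[0::2], x[1::2])]
    return approx, detail
```

An alternating signal yields a zero-crossing rate of 1.0 per frame, while a smooth signal concentrates its energy in the Haar approximation coefficients.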

Open Access Article Weak Defect Identification for Centrifugal Compressor Blade Crack Based on Pressure Sensors and Genetic Algorithm
Sensors 2018, 18(4), 1264; https://doi.org/10.3390/s18041264
Received: 5 March 2018 / Revised: 13 April 2018 / Accepted: 15 April 2018 / Published: 19 April 2018
Cited by 3 | PDF Full-text (62848 KB) | HTML Full-text | XML Full-text
Abstract
The centrifugal compressor is a key piece of equipment in petrochemical plants. As the core components of a compressor, the blades suffer periodic vibration and flow-induced excitation, which can lead to crack defects. Moreover, a blade defect usually has a serious impact on the normal operation of the compressor and the safety of operators, so an effective blade crack identification method is particularly important for reliable operation. Conventional non-destructive testing and evaluation (NDT&E) methods can detect blade defects effectively; however, the compressor must be shut down during testing, which is time-consuming and costly, and these methods are unsuitable for long-term on-line condition monitoring and cannot identify blade defects in time. Therefore, effective on-line condition monitoring and weak defect identification methods merit further study. Because blade vibration is difficult to measure directly, in this paper pressure sensors mounted on the casing sample the airflow pressure pulsation signal on-line near the rotating impeller, monitoring the blade condition indirectly. A key difficulty is that the abnormal blade vibration amplitude induced by a crack is always small, and this feature is even weaker in the pressure signal, so the defect characteristic frequency embedded in the pressure pulsation signal is difficult to identify with general signal processing methods, owing to the weakness of the feature and the interference of strong noise. In this paper, continuous wavelet transform (CWT) is first used to pre-process the sampled signal. Then, bistable stochastic resonance (SR) based on a Woods-Saxon and Gaussian (WSG) potential is applied to enhance the weak characteristic frequency contained in the pressure pulsation signal, and a genetic algorithm (GA) is used to obtain optimal parameters for the SR system to improve its feature enhancement performance. The analysis of an experimental signal shows the validity of the proposed method for enhancing and identifying the weak defect characteristic. Finally, a strain test further verifies the accuracy and reliability of the result obtained from the pressure pulsation signal. Full article
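The core SR step can be sketched as an overdamped particle in a bistable potential driven by the input signal plus noise. This is an illustrative guess at the setup, not the paper's method: the exact form of the WSG potential, all parameter values, and the integration scheme are assumptions, and the GA parameter search is omitted.

```python
import math
import random

def wsg_potential_grad(x, V0=1.0, r=1.0, c=0.2, A=1.0, sigma=0.5):
    """Numerical gradient of an assumed Woods-Saxon well plus Gaussian
    barrier; the combination is bistable (two wells separated by the
    central barrier). Parameter values are illustrative only."""
    def U(y):
        ws = -V0 / (1.0 + math.exp((abs(y) - r) / c))      # Woods-Saxon well
        g = A * math.exp(-y * y / (2.0 * sigma * sigma))   # Gaussian barrier
        return ws + g
    h = 1e-6
    return (U(x + h) - U(x - h)) / (2.0 * h)

def stochastic_resonance(signal, dt=0.01, noise=0.3, seed=0):
    """Euler-Maruyama integration of dx/dt = -U'(x) + s(t) + noise: a weak
    periodic component in s(t) is amplified by noise-induced hopping
    between the two wells of the bistable potential."""
    rng = random.Random(seed)
    x, out = 0.0, []
    for s in signal:
        x += dt * (-wsg_potential_grad(x) + s) + noise * math.sqrt(dt) * rng.gauss(0, 1)
        out.append(x)
    return out
```

In the paper's setting, the input would be the CWT-preprocessed pressure pulsation signal, and a GA would tune the potential parameters (here fixed) to maximize the output signal-to-noise ratio at the suspected defect frequency.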

Open Access Article Research on Flow Field Perception Based on Artificial Lateral Line Sensor System
Sensors 2018, 18(3), 838; https://doi.org/10.3390/s18030838
Received: 7 February 2018 / Revised: 26 February 2018 / Accepted: 3 March 2018 / Published: 11 March 2018
Cited by 2 | PDF Full-text (14642 KB) | HTML Full-text | XML Full-text
Abstract
In nature, the lateral line of fish is a distinctive and important organ for sensing the surrounding hydrodynamic environment, preying, escaping from predators, and schooling. In this paper, imitating the mechanism of the fish lateral canal neuromasts, we developed an artificial lateral line system composed of micro-pressure sensors. Through hydrodynamic simulations, an optimized sensor structure was obtained, and pressure distribution models of the lateral surface were established in uniform flow and turbulent flow. In the corresponding underwater experiment, the validity of the numerical simulation method was verified by comparing the experimental data with the simulation results. In addition, several effective methods are proposed and validated for flow velocity estimation and attitude perception in turbulent flow, and the shape recognition of obstacles is realized by a neural network algorithm. Full article
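The simplest physical basis for estimating flow velocity from pressure readings, as a lateral-line system does, is Bernoulli's relation. This is a textbook sketch under idealized assumptions (incompressible, steady flow), not the paper's fitted pressure-distribution models; the function and parameter names are hypothetical.

```python
import math

def flow_velocity(p_stagnation, p_static, rho=1000.0):
    """Estimate flow speed from the difference between the stagnation
    pressure (sensor facing the flow) and the static pressure, via
    Bernoulli's relation v = sqrt(2 * (p0 - p) / rho), with rho the
    fluid density in kg/m^3 (water by default)."""
    dp = p_stagnation - p_static
    if dp < 0:
        raise ValueError("stagnation pressure must exceed static pressure")
    return math.sqrt(2.0 * dp / rho)

# A 500 Pa pressure rise at the stagnation point in water implies ~1 m/s.
v = flow_velocity(500.0, 0.0, rho=1000.0)
```

In turbulent flow the paper instead relies on empirically established pressure-distribution models and learned mappings, since a single-point Bernoulli estimate is no longer adequate.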

Sensors EISSN 1424-8220 Published by MDPI AG, Basel, Switzerland