Special Issue "Multi-Sensor Fusion and Data Analysis"

A special issue of Sensors (ISSN 1424-8220).

Deadline for manuscript submissions: 30 August 2019

Special Issue Editor

Guest Editor
Prof. Dr. Simon X. Yang

Advanced Robotics & Intelligent Systems (ARIS) Lab, School of Engineering, University of Guelph, Guelph, Ontario, N1G 2W1, Canada
Interests: electronic noses; smart sensors; sensor signal processing; multi-sensor fusion; sensor networks; robotics; intelligent systems; control systems; intelligent transportation; systems modeling and analysis

Special Issue Information

Dear Colleagues,

Research on multi-sensor fusion and sensor data analysis has made significant progress in both theoretical investigation and practical application in many fields, such as the monitoring, operation, planning, control, and decision making of various environmental, structural, agricultural, food processing, and manufacturing systems. Various intelligent and advanced multi-sensor fusion and data analysis algorithms and technologies have been developed for accurate information acquisition, effective monitoring, optimal decision making, and efficient operation.

This Special Issue is devoted to new advances and research results on multi-sensor fusion and data analysis for various systems, such as environmental and structural systems (e.g., lakes, rivers, dams, bridges, roads, tunnels, buildings), agricultural and food processing systems, and manufacturing systems. The topics in this issue include, but are not limited to, multi-sensor fusion for information acquisition, sensor signal processing and data analysis, machine learning for multi-sensor fusion, optimal sensor placement, intelligent multi-sensor fusion, multi-sensor-based monitoring and operation, multi-sensor-based control and decision making, modelling and analysis of multi-sensors, remote sensing and monitoring, and software and hardware development for multi-sensor fusion. Applications of various multi-sensor fusion technologies to various systems are also welcome.

Prof. Dr. Simon X. Yang
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for the submission of manuscripts are available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Multi-sensor fusion
  • Multi-sensor based information acquisition
  • Data analysis for multi-sensor fusion
  • Modeling and analysis of multi-sensors
  • Intelligent multi-sensor fusion
  • Optimal sensor placement
  • Multi-sensor-based planning and decision making
  • Multi-sensor-based monitoring and control
  • Applications of multi-sensor fusion

Published Papers (19 papers)


Research

Open Access Article: Intelligent Control of Bulk Tobacco Curing Schedule Using LS-SVM- and ANFIS-Based Multi-Sensor Data Fusion Approaches
Sensors 2019, 19(8), 1778; https://doi.org/10.3390/s19081778
Received: 7 March 2019 / Revised: 5 April 2019 / Accepted: 8 April 2019 / Published: 13 April 2019
Abstract
The bulk tobacco flue-curing process follows a bulk tobacco curing schedule, which is typically pre-set at the beginning and may be adjusted by the curer to accommodate the needs of the tobacco leaves during curing. In this study, the controlled parameters of a bulk tobacco curing schedule are presented, which is significant for the systematic modelling of an intelligent tobacco flue-curing process. To fully imitate the curer's control of the bulk tobacco curing schedule, three types of sensors were applied, namely a gas sensor, an image sensor, and a moisture sensor. Feature extraction methods were put forward to extract the odor, image, and moisture features of the tobacco leaves individually. Three multi-sensor data fusion schemes were applied, in which a least squares support vector machine (LS-SVM) regression model and an adaptive neuro-fuzzy inference system (ANFIS) decision model were used. Four experiments were conducted from July to September 2014, with a total of 603 measurement points, ensuring the robustness and validity of the results. The results demonstrate that a hybrid fusion scheme achieves superior prediction performance, with the coefficients of determination of the controlled parameters reaching 0.9991, 0.9589, and 0.9479, respectively. The high prediction accuracy makes the proposed hybrid fusion scheme a feasible, reliable, and effective method for intelligent control of the tobacco curing schedule.
(This article belongs to the Special Issue Multi-Sensor Fusion and Data Analysis)
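To make the LS-SVM regression component referred to in the abstract above concrete, the following minimal NumPy sketch fits a least squares support vector machine by solving its dual linear system. The RBF width, regularisation value, and toy feature data are assumptions for illustration; this is not the authors' implementation or data.

```python
# Minimal LS-SVM regression sketch (NumPy only); illustrative of the kind of
# regression model used in the paper, not the authors' implementation.
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    # Pairwise squared distances -> Gaussian (RBF) kernel matrix.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    # Solve the LS-SVM linear system:
    # [ 0   1^T         ] [ b     ]   [ 0 ]
    # [ 1   K + I/gamma ] [ alpha ] = [ y ]
    n = len(y)
    K = rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]          # bias b, dual weights alpha

def lssvm_predict(X_train, alpha, b, X_new, sigma=1.0):
    return rbf_kernel(X_new, X_train, sigma) @ alpha + b

# Toy usage: predict a controlled parameter from fused sensor features.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))                  # e.g. odor/image/moisture features (made up)
y = X @ np.array([0.5, -1.0, 2.0]) + 0.1 * rng.normal(size=50)
b, alpha = lssvm_fit(X, y)
print(lssvm_predict(X, alpha, b, X[:5]))
```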

Open Access Article: Measuring Acoustic Roughness of a Longitudinal Railhead Profile Using a Multi-Sensor Integration Technique
Sensors 2019, 19(7), 1610; https://doi.org/10.3390/s19071610
Received: 26 February 2019 / Revised: 27 March 2019 / Accepted: 2 April 2019 / Published: 3 April 2019
Abstract
It is necessary to accurately measure the rolling noise generated by the friction between wheels and rails in railway transport systems. Although many systems have recently been developed to measure the surface roughness of wheels and rails, there are large deviations between the measurements of systems whose measuring mechanism is based on a single sensor. To correct the structural problems in existing systems, we developed an automatic mobile measurement platform, named the Automatic Rail Checker (ARCer), which measures the acoustic roughness of a longitudinal railhead profile while maintaining a constant speed. In addition, a new chord offset synchronization algorithm has been developed. It uses three displacement sensors to improve the measuring accuracy of the acoustic roughness of a longitudinal railhead profile, thereby minimizing the limitations of mobile platform measurement systems and their measurement deviation. We then verified the accuracy of the measurement system and the algorithm through field tests on rails with different surface wear conditions.
(This article belongs to the Special Issue Multi-Sensor Fusion and Data Analysis)
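The chord offset synchronization algorithm itself is not reproduced here, but the measuring principle behind a three-displacement-sensor arrangement can be illustrated with the classic mid-chord offset (versine): the deviation of the middle sensor from the chord through the outer two. The sensor spacing and rail irregularity below are made-up example values.

```python
# Minimal sketch of a three-point mid-chord offset (versine) computation from
# three equally spaced displacement sensors; an illustration of the measuring
# principle only, not the ARCer chord offset synchronization algorithm.
import numpy as np

def mid_chord_offset(z_front, z_mid, z_rear):
    # Versine: deviation of the middle sensor from the chord joining the outer two.
    return z_mid - 0.5 * (z_front + z_rear)

# Toy usage: a sinusoidal rail irregularity sampled along the rail.
x = np.linspace(0.0, 10.0, 1000)          # position along the rail [m]
profile = 1e-3 * np.sin(2 * np.pi * x)    # 1 mm irregularity (made up)
spacing = 50                              # sensor spacing in samples (assumed)
v = mid_chord_offset(profile[:-2 * spacing],
                     profile[spacing:-spacing],
                     profile[2 * spacing:])
print(v[:5])
```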

Open Access Article: An Area Coverage and Energy Consumption Optimization Approach Based on Improved Adaptive Particle Swarm Optimization for Directional Sensor Networks
Sensors 2019, 19(5), 1192; https://doi.org/10.3390/s19051192
Received: 16 January 2019 / Revised: 10 February 2019 / Accepted: 1 March 2019 / Published: 8 March 2019
Abstract
Coverage is a vital indicator that reflects the performance of directional sensor networks (DSNs). The random deployment of directional sensor nodes leads to many coverage blind areas and overlapping areas. Besides, the premature death of nodes will also directly affect the service quality of the network due to limited energy. To address these problems, this paper proposes a new area coverage and energy consumption optimization approach based on improved adaptive particle swarm optimization (IAPSO). For the area coverage problem, we set up a multi-objective optimization model to improve the coverage ratio and reduce the redundancy ratio by rotating sensing directions. For energy consumption optimization, we distribute energy consumption evenly over the sensor nodes by clustering the network. We set up a cluster head selection optimization model that considers the total residual energy ratio and the energy consumption balance degree of cluster head candidates. We also propose a cluster formation algorithm in which member nodes choose their cluster heads by a weight function. We then utilize IAPSO to solve the two optimization models to achieve a high coverage ratio, a low redundancy ratio, and energy consumption balance. Extensive simulation results demonstrate that our proposed approach performs better than existing ones.
(This article belongs to the Special Issue Multi-Sensor Fusion and Data Analysis)
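As a rough illustration of the particle swarm machinery underlying the IAPSO approach described above, the sketch below runs a basic PSO with a linearly decreasing inertia weight on a toy direction-spreading objective. The swarm size, coefficients, and objective are assumptions; the paper's coverage and energy models are not reproduced.

```python
# Minimal particle swarm optimization sketch with a linearly decreasing inertia
# weight; a stand-in for the paper's improved adaptive PSO (IAPSO).
import numpy as np

def pso(objective, dim, n_particles=30, iters=100, bounds=(-np.pi, np.pi), seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_particles, dim))     # positions (e.g. sensing directions)
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([objective(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()
    c1 = c2 = 2.0
    for t in range(iters):
        w = 0.9 - 0.5 * t / iters                        # inertia weight decays from 0.9 to 0.4
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([objective(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# Toy usage: spread 5 sensing directions as evenly as possible (placeholder objective).
def spread_cost(angles):
    diffs = np.abs(np.subtract.outer(angles, angles))
    return -np.sum(np.minimum(diffs, 2 * np.pi - diffs))

best, cost = pso(spread_cost, dim=5)
print(best, cost)
```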

Open Access Article: Improved Convolutional Pose Machines for Human Pose Estimation Using Image Sensor Data
Sensors 2019, 19(3), 718; https://doi.org/10.3390/s19030718
Received: 5 January 2019 / Revised: 31 January 2019 / Accepted: 5 February 2019 / Published: 10 February 2019
Abstract
In recent years, an increasing amount of human data has come from image sensors. In this paper, a novel approach combining convolutional pose machines (CPMs) with GoogLeNet is proposed for human pose estimation using image sensor data. The first stage of the CPMs directly generates a response map of each key point of the human skeleton from images, into which we introduce some layers from GoogLeNet. On the one hand, the improved model uses deeper network layers and more complex network structures to enhance low-level feature extraction. On the other hand, the improved model applies a fine-tuning strategy, which benefits the estimation accuracy. Moreover, we introduce the inception structure to greatly reduce the parameters of the model, which significantly reduces the convergence time. Extensive experiments on several datasets show that the improved model outperforms most mainstream models in accuracy and training time. The prediction efficiency of the improved model is improved by a factor of 1.023 compared with the CPMs, while the training time is reduced by a factor of 3.414. This paper presents a new idea for future research.
(This article belongs to the Special Issue Multi-Sensor Fusion and Data Analysis)

Open Access Article: Automatic Recognition of Ripening Tomatoes by Combining Multi-Feature Fusion with a Bi-Layer Classification Strategy for Harvesting Robots
Sensors 2019, 19(3), 612; https://doi.org/10.3390/s19030612
Received: 8 January 2019 / Revised: 30 January 2019 / Accepted: 30 January 2019 / Published: 1 February 2019
Abstract
Automatic recognition of ripening tomatoes is a main hurdle precluding the replacement of manual labour by robotic harvesting. In this paper, we present a novel automatic algorithm for the recognition of ripening tomatoes using an improved method that combines multiple features, feature analysis and selection, a weighted relevance vector machine (RVM) classifier, and a bi-layer classification strategy. The algorithm operates using a two-layer strategy. The first-layer classification strategy aims to identify tomato-containing regions in images using colour difference information. The second classification strategy is based on a classifier that is trained on multi-medium features. In our proposed algorithm, to simplify the calculation and to improve the recognition efficiency, the processed images are divided into 9 × 9 pixel blocks, and these blocks, rather than single pixels, are considered as the basic units in the classification task. Six colour-related features, namely the Red (R), Green (G), Blue (B), Hue (H), Saturation (S), and Intensity (I) components, and five textural features (entropy, energy, correlation, inertial moment, and local smoothing) were extracted from the pixel blocks. Relevant features and their weights were analysed using the iterative RELIEF (I-RELIEF) algorithm. The image blocks were classified into different categories using a weighted RVM classifier based on the selected relevant features. The final results of tomato recognition were determined by combining the block classification results with the bi-layer classification strategy. The algorithm achieved a detection accuracy of 94.90% on 120 images, which suggests that the proposed algorithm is effective and suitable for tomato detection.
(This article belongs to the Special Issue Multi-Sensor Fusion and Data Analysis)

Open Access Article: Deep Belief Network for Spectral–Spatial Classification of Hyperspectral Remote Sensor Data
Sensors 2019, 19(1), 204; https://doi.org/10.3390/s19010204
Received: 30 November 2018 / Revised: 29 December 2018 / Accepted: 3 January 2019 / Published: 8 January 2019
Abstract
With the development of high-resolution optical sensors, the classification of ground objects combined with multivariate optical sensors is a hot topic at present. Deep learning methods, such as convolutional neural networks, are applied to feature extraction and classification. In this work, a novel deep belief network (DBN) hyperspectral image classification method, based on multivariate optical sensors and stacked from restricted Boltzmann machines, is proposed. We introduced the DBN framework to classify spatial hyperspectral sensor data. Then, the improved method (a combination of spectral and spatial information) was verified. After unsupervised pretraining and supervised fine-tuning, the DBN model could successfully learn features. Additionally, we added a logistic regression layer to classify the hyperspectral images. Moreover, the proposed training method, which fuses spectral and spatial information, was tested on the Indian Pines and Pavia University datasets. The advantages of this method over traditional methods are as follows: (1) the network has a deep structure, and its feature extraction ability is stronger than that of traditional classifiers; (2) experimental results indicate that our method outperforms traditional classification methods and other deep learning approaches.
(This article belongs to the Special Issue Multi-Sensor Fusion and Data Analysis)
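A DBN of the kind described above is stacked from restricted Boltzmann machines trained layer by layer. The sketch below trains a single RBM with one-step contrastive divergence (CD-1) on toy binary vectors; the hidden-layer size, learning rate, and data are assumed values, not the authors' spectral-spatial configuration.

```python
# Minimal sketch of one restricted Boltzmann machine trained with CD-1, the
# building block stacked into a DBN; toy data and assumed hyperparameters.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_rbm(V, n_hidden=64, lr=0.05, epochs=20, seed=0):
    rng = np.random.default_rng(seed)
    n_visible = V.shape[1]
    W = 0.01 * rng.normal(size=(n_visible, n_hidden))
    a = np.zeros(n_visible)        # visible bias
    b = np.zeros(n_hidden)         # hidden bias
    for _ in range(epochs):
        # Positive phase: hidden probabilities given the data.
        ph = sigmoid(V @ W + b)
        h = (rng.random(ph.shape) < ph).astype(float)
        # Negative phase: one Gibbs step (CD-1) reconstruction.
        pv = sigmoid(h @ W.T + a)
        ph2 = sigmoid(pv @ W + b)
        n = V.shape[0]
        W += lr * (V.T @ ph - pv.T @ ph2) / n
        a += lr * (V - pv).mean(axis=0)
        b += lr * (ph - ph2).mean(axis=0)
    return W, a, b

# Toy usage: binarised spectral vectors as visible units (made-up data).
V = (np.random.default_rng(1).random((200, 100)) > 0.5).astype(float)
W, a, b = train_rbm(V)
print(W.shape)
```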

Open Access Article: Fusion of High-Dynamic and Low-Drift Sensors Using Kalman Filters
Sensors 2019, 19(1), 186; https://doi.org/10.3390/s19010186
Received: 15 November 2018 / Revised: 3 January 2019 / Accepted: 3 January 2019 / Published: 7 January 2019
Abstract
In practice, a high-dynamic vibration sensor is often plagued by the problem of drift, which is caused by thermal effects. Conversely, low-drift sensors typically have a limited sample rate range. This paper presents a system that combines different types of sensors to address the general drift problems that occur in measurements of high-dynamic vibration signals. In this paper, the hardware structure and algorithms for fusing high-dynamic and low-drift sensors are described. The algorithms include a drift state estimation and a Kalman filter based on a linear prediction model. Key issues, such as the dimension of the drift state vector and the order of the linear prediction model, are analyzed in the design of the algorithm. The performance of the algorithm is illustrated by a simulation example and experiments. The simulation and experimental results show that the drift can be removed while the high-dynamic measuring capability is retained. A high-dynamic vibration measuring system with a frequency range starting from 0 Hz is achieved. Meanwhile, the measurement noise was improved by 9.3 dB using the linear-prediction-based Kalman filter.
(This article belongs to the Special Issue Multi-Sensor Fusion and Data Analysis)
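The following minimal sketch illustrates the general fusion idea, assuming the drift is modelled as a scalar random walk that is observed through the difference between the high-dynamic and low-drift channels. It is a simplified stand-in, not the paper's linear-prediction-based Kalman filter.

```python
# Minimal Kalman-filter sketch of the drift-fusion idea: a high-dynamic sensor
# sample is corrected by a slowly varying drift state observed through a
# low-drift reference sensor. The scalar random-walk drift model and toy noise
# levels are assumptions.
import numpy as np

def fuse_drift(high_dyn, low_drift, q=1e-6, r=1e-3):
    # State: drift d_k ~ random walk; measurement: high_dyn - low_drift = d_k + noise.
    d, P = 0.0, 1.0
    corrected = np.empty_like(high_dyn)
    for k, (y_hd, y_ld) in enumerate(zip(high_dyn, low_drift)):
        P += q                               # predict (random-walk drift)
        z = y_hd - y_ld                      # observed drift
        K = P / (P + r)                      # Kalman gain
        d += K * (z - d)                     # update drift estimate
        P *= (1 - K)
        corrected[k] = y_hd - d              # remove estimated drift
    return corrected

# Toy usage: a vibration signal with slow thermal drift on the high-dynamic channel.
t = np.linspace(0, 1, 2000)
signal = np.sin(2 * np.pi * 50 * t)
drift = 0.5 * t
high_dyn = signal + drift + 0.01 * np.random.default_rng(0).normal(size=t.size)
low_drift = signal + 0.05 * np.random.default_rng(1).normal(size=t.size)
print(np.abs(fuse_drift(high_dyn, low_drift) - signal).mean())
```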

Open Access Article: A Multi-Model Combined Filter with Dual Uncertainties for Data Fusion of MEMS Gyro Array
Sensors 2019, 19(1), 85; https://doi.org/10.3390/s19010085
Received: 22 November 2018 / Revised: 16 December 2018 / Accepted: 21 December 2018 / Published: 27 December 2018
Abstract
The gyro array is a useful technique for improving the accuracy of a micro-electro-mechanical system (MEMS) gyroscope, but the traditional estimation algorithm that plays an important role in this technique has two problems restricting its performance: the limitation of the stochastic assumption and the influence of the dynamic condition. To resolve these problems, a multi-model combined filter with dual uncertainties is proposed to integrate the outputs from numerous gyroscopes. First, to avoid the limitations of the stochastic and set-membership approaches and to better utilize the potential of both concepts, a dual-noise acceleration model was proposed to describe the angular rate. On this basis, a dual-uncertainties model of the gyro array was established. Then, the multiple model theory was used to improve dynamic performance, and a multi-model combined filter with dual uncertainties was designed. This algorithm can simultaneously deal with stochastic uncertainties and set-membership uncertainties by calculating the Minkowski sum of multiple ellipsoidal sets. The experimental results prove the effectiveness of the proposed filter in improving gyroscope accuracy and its adaptability to different kinds of uncertainties and different dynamic characteristics. Most of all, the method gives the boundary surrounding the true value, which is of great significance in attitude control and guidance applications.
(This article belongs to the Special Issue Multi-Sensor Fusion and Data Analysis)

Open Access Article: An Efficient Incremental Mining Algorithm for Discovering Sequential Pattern in Wireless Sensor Network Environments
Sensors 2019, 19(1), 29; https://doi.org/10.3390/s19010029
Received: 16 November 2018 / Revised: 17 December 2018 / Accepted: 18 December 2018 / Published: 21 December 2018
Abstract
Wireless sensor networks (WSNs) are an important type of network for sensing the environment and collecting information. They can be deployed in almost every type of environment in the real world, providing a reliable and low-cost solution for management. Huge amounts of data are produced by WSNs all the time, and it is important to process and analyze these data effectively to support intelligent decisions and management. However, the new characteristics of sensor data, such as rapid growth and frequent updates, bring new challenges to mining algorithms, especially given the time constraints of intelligent decision-making. In this work, an efficient incremental mining algorithm for discovering sequential patterns (novel incremental algorithm, NIA) is proposed in order to enhance the efficiency of the whole mining process. First, a reasoned proof is given to demonstrate how to update the frequent sequences incrementally, and the mining space is greatly narrowed based on this proof. Second, an improvement is made on PrefixSpan, a classic sequential pattern mining algorithm with a high-complexity recursive process. The improved algorithm, named PrefixSpan+, utilizes a mapping structure to extend prefixes to sequential patterns, making the mining step more efficient. Third, a fast support-number-counting algorithm is presented to choose frequent sequences from the potential frequent sequences. A reticular tree is constructed to store all the potential frequent sequences according to the subordinate relations between them, and then the support degree can be efficiently calculated without scanning the original database repeatedly. NIA is compared with various kinds of mining algorithms via intensive experiments on real monitoring datasets, benchmark datasets, and synthetic datasets, in terms of time cost, sensitivity to factors, and space cost. The results show that NIA performs better than existing methods.
(This article belongs to the Special Issue Multi-Sensor Fusion and Data Analysis)
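The core support test behind PrefixSpan-style sequential pattern mining can be illustrated in a few lines: a pattern is frequent if it occurs as a subsequence in enough database sequences. The sketch below shows only this support counting on toy event sequences; NIA's incremental update and reticular tree are not reproduced.

```python
# Minimal sketch of sequential-pattern support counting: checks whether a
# candidate pattern occurs as a subsequence of each sensor-event sequence.
def is_subsequence(pattern, sequence):
    it = iter(sequence)
    return all(item in it for item in pattern)   # consumes the iterator in order

def support(pattern, database):
    return sum(is_subsequence(pattern, seq) for seq in database)

# Toy usage: each sequence is an ordered list of sensor events (made-up data).
db = [["a", "b", "c", "d"],
      ["a", "c", "b", "d"],
      ["b", "a", "c", "d"]]
print(support(["a", "c", "d"], db))   # pattern frequency in the database -> 3
```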

Open Access Article: Hyperspectral Remote Sensing Image Classification Based on Maximum Overlap Pooling Convolutional Neural Network
Sensors 2018, 18(10), 3587; https://doi.org/10.3390/s18103587
Received: 27 September 2018 / Revised: 15 October 2018 / Accepted: 20 October 2018 / Published: 22 October 2018
Abstract
In a traditional convolutional neural network structure, pooling layers generally use an average pooling method: a non-overlapping pooling. However, this results in similar extracted image features, especially for hyperspectral images with a continuous spectrum, which makes it more difficult to extract distinguishing image features, and image detail features are easily lost. This seriously affects the accuracy of image classification. Thus, a new overlapping pooling method is proposed, in which maximum pooling is used in an improved convolutional neural network to avoid the fuzziness of average pooling. The step size used is smaller than the size of the pooling kernel to achieve overlap and coverage between the outputs of the pooling layer. The dataset selected for this experiment was the Indian Pines dataset, collected by the airborne visible/infrared imaging spectrometer (AVIRIS) sensor. Experimental results show that using the improved convolutional neural network for remote sensing image classification can effectively improve the detail of the image and obtain a high classification accuracy.
(This article belongs to the Special Issue Multi-Sensor Fusion and Data Analysis)
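The overlapping pooling idea is easy to state in code: the stride is made smaller than the pooling kernel so that neighbouring windows share pixels, and the maximum is taken in each window. The kernel size and stride below are assumed example values, not the paper's configuration.

```python
# Minimal NumPy sketch of overlapping max pooling: stride < kernel size, so
# neighbouring pooling windows share pixels.
import numpy as np

def max_pool2d(x, kernel=3, stride=2):
    h, w = x.shape
    out_h = (h - kernel) // stride + 1
    out_w = (w - kernel) // stride + 1
    out = np.empty((out_h, out_w), dtype=x.dtype)
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = x[i * stride:i * stride + kernel,
                          j * stride:j * stride + kernel].max()
    return out

# Toy usage on one band of a hyperspectral patch (made-up data).
band = np.random.default_rng(0).random((9, 9))
print(max_pool2d(band).shape)   # (4, 4) with overlapping 3x3 windows
```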

Open Access Article: Underwater Target Detection and 3D Reconstruction System Based on Binocular Vision
Sensors 2018, 18(10), 3570; https://doi.org/10.3390/s18103570
Received: 4 September 2018 / Revised: 16 October 2018 / Accepted: 18 October 2018 / Published: 21 October 2018
Abstract
To better solve the problem of target detection in the marine environment and to deal with the difficulty of 3D reconstruction of underwater targets, a binocular vision-based underwater target detection and 3D reconstruction system is proposed in this paper. Two optical sensors are used as the vision of the system. Firstly, denoising and color restoration are performed on the image sequence acquired by the vision of the system, and the underwater target is segmented and extracted according to image saliency using the super-pixel segmentation method. Secondly, aiming to reduce mismatches, we improve the semi-global stereo matching method by strictly constraining the matching to the valid target area and then optimizing the basic disparity map within each super-pixel area using least squares fitting interpolation. Finally, based on the optimized disparity map, the triangulation principle is used to calculate the three-dimensional data of the target, and the 3D structure and color information of the target can be visualized with MeshLab. The experimental results show that, for an underwater target of a specific size, the system can achieve high measurement accuracy and a good 3D reconstruction effect within a suitable distance.
(This article belongs to the Special Issue Multi-Sensor Fusion and Data Analysis)
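For the triangulation step mentioned above, depth in a rectified stereo pair follows from disparity as Z = f·B/d. The sketch below converts a disparity map into a point cloud; the focal length, baseline, and principal point are assumed example values, not the paper's underwater calibration.

```python
# Minimal sketch of the triangulation step for a rectified binocular rig:
# depth from disparity, then back-projection to 3D camera coordinates.
import numpy as np

def disparity_to_points(disparity, f=800.0, baseline=0.12, cx=320.0, cy=240.0):
    h, w = disparity.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = np.where(disparity > 0, f * baseline / np.maximum(disparity, 1e-6), 0.0)
    x = (u - cx) * z / f
    y = (v - cy) * z / f
    return np.dstack([x, y, z])      # (h, w, 3) point cloud in the camera frame

# Toy usage: a flat surface 2 m away gives a constant disparity of f*B/Z = 48 px.
d = np.full((480, 640), 48.0)
print(disparity_to_points(d)[240, 320])   # approx [0, 0, 2.0]
```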

Open Access Article: Partial Discharge Recognition with a Multi-Resolution Convolutional Neural Network
Sensors 2018, 18(10), 3512; https://doi.org/10.3390/s18103512
Received: 24 September 2018 / Revised: 11 October 2018 / Accepted: 16 October 2018 / Published: 18 October 2018
Abstract
Partial discharge (PD) is not only an important symptom for monitoring imperfections in the insulation system of a gas-insulated switchgear (GIS), but also a factor that accelerates degradation. At present, monitoring the ultra-high-frequency (UHF) signals induced by PDs is regarded as one of the most effective approaches for assessing insulation severity and classifying PDs. Therefore, in this paper, a deep learning-based PD classification algorithm is proposed and realized with a multi-column convolutional neural network (CNN) that incorporates UHF spectra of multiple resolutions. First, three subnetworks, characterized by their specially designed temporal filters, frequency filters, and texture filters, are organized and then integrated by a fully connected neural network. Then, a long short-term memory (LSTM) network is utilized for fusing the embedded multi-sensor information. Furthermore, to alleviate the risk of overfitting, a transfer learning approach inspired by manifold learning is also presented for model training. For demonstration, 13 modes of defects, considering both the defect types and their relative positions, were designed for a simulated GIS tank. A detailed analysis of the performance reveals the clear superiority of the proposed method compared to 18 typical baselines. Several advanced visualization techniques are also implemented to explore possible qualitative interpretations of the learned features. Finally, a unified framework based on matrix projection is discussed to provide a possible explanation for the effectiveness of the architecture.
(This article belongs to the Special Issue Multi-Sensor Fusion and Data Analysis)

Open Access Article: A Bimodal Model to Estimate Dynamic Metropolitan Population by Mobile Phone Data
Sensors 2018, 18(10), 3431; https://doi.org/10.3390/s18103431
Received: 17 July 2018 / Revised: 23 August 2018 / Accepted: 4 September 2018 / Published: 12 October 2018
Abstract
Accurate, real-time, and spatially fine-grained population distribution is crucial for urban planning, government management, and advertisement promotion. Limited by techniques and tools, in the past we had to rely on the census to obtain this information, which is coarse and costly. The popularity of mobile phones gives us a new opportunity to investigate population estimation. However, real-time and accurate population estimation is still a challenging problem because of coarse localization and complicated user behaviors. With the help of passively collected human mobility and location data from mobile networks, including call detail records and mobility management signals, we develop a bimodal model that goes beyond prior work to better estimate real-time population distribution at metropolitan scales. We discuss how the estimation interval, spatial granularity, and data type influence the estimation accuracy, and find that the data collected from the mobility management signals with a 30-min estimation interval performs best, reducing the population estimation error by 30% in terms of Root Mean Square Error (RMSE). These results show the great potential of using the bimodal model and mobile phone data to estimate real-time population distribution.
(This article belongs to the Special Issue Multi-Sensor Fusion and Data Analysis)
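Since the comparison above is reported in terms of RMSE, a minimal sketch of the grid-cell RMSE between an estimated and a reference population distribution is shown below, with made-up cell counts.

```python
# Minimal sketch of the grid-cell RMSE used to compare estimated and reference
# (e.g. census) population distributions; the cell counts are toy values.
import numpy as np

def rmse(estimated, reference):
    return float(np.sqrt(np.mean((np.asarray(estimated) - np.asarray(reference)) ** 2)))

est = [1200, 950, 400, 2100]     # estimated population per grid cell (made up)
ref = [1100, 1000, 450, 2000]    # reference population per grid cell (made up)
print(rmse(est, ref))
```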

Open Access Article: Domain Adaptation and Adaptive Information Fusion for Object Detection on Foggy Days
Sensors 2018, 18(10), 3286; https://doi.org/10.3390/s18103286
Received: 13 July 2018 / Revised: 20 September 2018 / Accepted: 28 September 2018 / Published: 30 September 2018
Abstract
Foggy days pose many difficulties for outdoor camera surveillance systems. On foggy days, the optical attenuation and scattering effects of the medium significantly distort and degrade the scene radiation, making it noisy and indistinguishable. Aiming to solve this problem, in this paper we propose a novel object detection method that is able to exploit information in both the color and depth domains. To prevent the error propagation problem, we clean the depth information before the training process and remove false samples from the database. A domain adaptation strategy is employed to adaptively fuse the decisions obtained in the color and depth domains. In the experiments, we evaluate the contribution of the depth information to object detection on foggy days. Moreover, the advantages of the multiple-domain adaptation strategy are experimentally demonstrated via comparison with other methods.
(This article belongs to the Special Issue Multi-Sensor Fusion and Data Analysis)

Open Access Article: Reconstructed Order Analysis-Based Vibration Monitoring under Variable Rotation Speed by Using Multiple Blade Tip-Timing Sensors
Sensors 2018, 18(10), 3235; https://doi.org/10.3390/s18103235
Received: 24 August 2018 / Revised: 14 September 2018 / Accepted: 17 September 2018 / Published: 26 September 2018
Abstract
On-line vibration monitoring is important for high-speed rotating blades, and blade tip-timing (BTT) is generally regarded as a promising solution. BTT methods must assume that rotating speeds are constant. This assumption is impractical, and blade damage is often formed and accumulated under variable operational conditions. Thus, how to carry out BTT vibration monitoring under variable rotation speed (VRS) is a major challenge. Angular-sampling-based order analyses have been widely used for vibration signals in rotating machinery with variable speeds. However, BTT vibration signals are severely under-sampled and do not satisfy Shannon's sampling theorem, so existing order analysis methods do not work well. To overcome this problem, a reconstructed order analysis-based BTT vibration monitoring method is proposed in this paper. First, the effects of VRS on BTT vibration monitoring are analyzed, and the basic structure of angular-sampling-based BTT vibration monitoring under VRS is presented. Then, a band-pass sampling-based engine order (EO) reconstruction algorithm is proposed for the uniform BTT sensor configuration, so that a few BTT sensors can be used to extract high EOs. In addition, a periodically non-uniform sampling-based EO reconstruction algorithm is proposed for the non-uniform BTT sensor configuration. Next, numerical simulations are performed to validate the two reconstruction algorithms. Finally, an experimental set-up is built; both uniform and non-uniform BTT vibration signals are collected, and reconstructed order analyses are carried out. The simulation and experimental results confirm that the proposed algorithms can accurately capture the characteristic high EOs of synchronous and asynchronous vibrations under VRS using only a few BTT sensors. The significance of this paper is that it overcomes the limitation of conventional BTT methods in dealing with variable blade rotating speeds.
(This article belongs to the Special Issue Multi-Sensor Fusion and Data Analysis)

Open Access Article: Polar Transversal Initial Alignment Algorithm for UUV with a Large Misalignment Angle
Sensors 2018, 18(10), 3231; https://doi.org/10.3390/s18103231
Received: 6 August 2018 / Revised: 22 September 2018 / Accepted: 23 September 2018 / Published: 25 September 2018
Abstract
Conventional initial alignment algorithms are invalid in the polar region. This is caused by the rapid convergence of the Earth's meridians in high-latitude areas. However, initial alignment algorithms are important for the accurate navigation of Unmanned Underwater Vehicles. The polar transversal initial alignment algorithm is proposed to overcome this problem. In the polar transversal initial alignment algorithm, the transversal geographic frame is chosen as the navigation frame. The polar region in the conventional frames is equivalent to the equatorial region in the transversal frames. Therefore, the polar transversal initial alignment can be effectively applied in the polar region. Owing to the complex environment in the polar region, a large misalignment angle is considered in this paper. Based on the large misalignment angle condition, non-linear dynamics models are established. In addition, a simplified unscented Kalman filter (UKF) is chosen to realize the data fusion. Two comparison simulations and an experiment are performed to verify the performance of the proposed algorithm. The simulation and experiment results indicate the validity of the proposed algorithm, especially when large misalignment angles occur.
(This article belongs to the Special Issue Multi-Sensor Fusion and Data Analysis)

Open Access Article: Support Vector Machine Optimized by Genetic Algorithm for Data Analysis of Near-Infrared Spectroscopy Sensors
Sensors 2018, 18(10), 3222; https://doi.org/10.3390/s18103222
Received: 20 August 2018 / Revised: 14 September 2018 / Accepted: 20 September 2018 / Published: 25 September 2018
Abstract
Near-infrared (NIR) spectral sensors deliver the spectral response of the light absorbed by materials for quantification, qualification, or identification. Spectral analysis technology based on NIR sensors has been a useful tool for complex information processing and high-precision identification in the tobacco industry. In this paper, a novel method based on the support vector machine (SVM) is proposed to discriminate the tobacco cultivation region using near-infrared (NIR) sensors, where the genetic algorithm (GA) is employed for input subset selection to identify the effective principal components (PCs) for the SVM model. With the same number of PCs as inputs to the SVM model, a number of comparative experiments were conducted between the effective PCs selected by the GA and the PCs taken in order starting from the first one. The model performance was evaluated in terms of prediction accuracy and four assessment criteria (true positive rate, true negative rate, positive predictive value, and F1 score). From the results, it is interesting to find that some PCs with less information may contribute more to the cultivation regions and can be considered more effective PCs, and that the SVM model with the effective PCs selected by the GA has a superior discrimination capacity. The proposed GA-SVM model can effectively learn the relationship between tobacco cultivation regions and tobacco NIR sensor data.
(This article belongs to the Special Issue Multi-Sensor Fusion and Data Analysis)
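In the spirit of the GA-SVM model described above, the sketch below uses a simple selection-and-mutation genetic algorithm to pick a subset of principal components feeding an SVM, scored by cross-validated accuracy. It runs on a public scikit-learn dataset with toy GA settings; the authors' NIR data, preprocessing, and GA operators are not reproduced.

```python
# Minimal sketch of GA-based selection of principal components for an SVM.
import numpy as np
from sklearn.datasets import load_wine
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_wine(return_X_y=True)                        # stand-in for NIR spectra
Z = PCA(n_components=10).fit_transform(StandardScaler().fit_transform(X))

def fitness(mask):
    # Cross-validated accuracy of an SVM using only the selected PCs.
    if not mask.any():
        return 0.0
    return cross_val_score(SVC(kernel="rbf", C=1.0), Z[:, mask], y, cv=5).mean()

rng = np.random.default_rng(0)
pop = rng.random((20, Z.shape[1])) < 0.5                 # population of binary PC masks
for gen in range(15):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]              # keep the 10 fittest masks
    children = parents[rng.integers(0, 10, 10)].copy()
    flips = rng.random(children.shape) < 0.1             # mutation: flip bits with prob 0.1
    children[flips] = ~children[flips]
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected PCs:", np.flatnonzero(best), "accuracy:", fitness(best))
```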

Open Access Article: Domain Correction Based on Kernel Transformation for Drift Compensation in the E-Nose System
Sensors 2018, 18(10), 3209; https://doi.org/10.3390/s18103209
Received: 13 August 2018 / Revised: 7 September 2018 / Accepted: 12 September 2018 / Published: 23 September 2018
Abstract
This paper proposes a method for drift compensation in electronic noses (e-noses), which often suffer from uncertain and unpredictable sensor drift. Traditional machine learning methods for odor recognition require a consistent data distribution, which makes a model trained with previous data less generalizable. In actual application scenarios, data collected earlier and data collected later may have different distributions due to sensor drift. If the dataset without sensor drift is treated as a source domain and the dataset with sensor drift as a target domain, a domain correction based on kernel transformation (DCKT) method is proposed to compensate for the sensor drift. The proposed method greatly improves the distribution consistency of the two domains by mapping them to a high-dimensional reproducing kernel space and reducing the domain distance. A public benchmark sensor drift dataset is used to verify the effectiveness and efficiency of the proposed DCKT method. The experimental results show that the proposed method yields the highest average accuracies compared to other considered methods.
(This article belongs to the Special Issue Multi-Sensor Fusion and Data Analysis)
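One way to quantify the source/target domain distance in a reproducing kernel space, as discussed above, is the maximum mean discrepancy (MMD). The sketch below computes a biased MMD estimate between drift-free and drifted toy data; the DCKT transformation that actually reduces this distance is not reproduced.

```python
# Minimal sketch of the RBF-kernel maximum mean discrepancy (MMD) between a
# source (drift-free) and target (drifted) domain; toy e-nose-like data.
import numpy as np

def rbf(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def mmd2(Xs, Xt, sigma=1.0):
    # Squared MMD between source and target samples in the kernel-induced space.
    return rbf(Xs, Xs, sigma).mean() + rbf(Xt, Xt, sigma).mean() - 2 * rbf(Xs, Xt, sigma).mean()

rng = np.random.default_rng(0)
Xs = rng.normal(0.0, 1.0, size=(100, 8))          # responses without drift (made up)
Xt = rng.normal(0.4, 1.2, size=(100, 8))          # later responses with drift (made up)
print(mmd2(Xs, Xt))
```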

Open Access Article: Grey Model Optimized by Particle Swarm Optimization for Data Analysis and Application of Multi-Sensors
Sensors 2018, 18(8), 2503; https://doi.org/10.3390/s18082503
Received: 6 July 2018 / Revised: 26 July 2018 / Accepted: 28 July 2018 / Published: 1 August 2018
Abstract
Data on the effective operation of new pumping stations are scarce, and the unit structure is complex, as the temperature changes of different parts of the unit are coupled with multiple factors. The multivariable grey system prediction model can effectively predict the changes of multiple parameters of a nonlinear system by using a small amount of data, but the values of its q parameters greatly influence the prediction accuracy of the model. Therefore, the particle swarm optimization algorithm is used to optimize the q parameters, and the multi-sensor temperature data of a pumping station unit are processed. Then, the change trends of the temperature data are analyzed and predicted. Comparing the results with those of the unoptimized multivariable grey model and a BP neural network prediction method trained under insufficient data conditions, it is shown that the relative error of the multivariable grey model with optimized q parameters is smaller.
(This article belongs to the Special Issue Multi-Sensor Fusion and Data Analysis)
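As a simplified stand-in for the multivariable grey model discussed above, the sketch below implements the classic univariate GM(1,1) prediction model on a toy temperature series; the multivariable extension and the PSO tuning of the q parameters are not reproduced.

```python
# Minimal sketch of the classic univariate GM(1,1) grey prediction model.
import numpy as np

def gm11_forecast(x0, steps=3):
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                                  # accumulated generating sequence
    z1 = 0.5 * (x1[1:] + x1[:-1])                       # mean generating sequence
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]    # grey development/control coefficients
    k = np.arange(len(x0) + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    return np.concatenate([[x0[0]], np.diff(x1_hat)])   # back to the original series

# Toy usage: a short temperature series from one pumping-station sensor (made up).
temps = [32.1, 32.8, 33.6, 34.5, 35.3]
print(gm11_forecast(temps, steps=2))
```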
