Special Issue "Emerging Algorithms and Applications in Vision Sensors System based on Artificial Intelligence"

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: 30 September 2018

Special Issue Editors

Guest Editor
Dr. Jiachen Yang

School of Electrical and Information Engineering, Tianjin University, Tianjin, China
Interests: Image Processing, Vision Sensors, Machine Learning and Deep Learning, Pattern Recognition based on Artificial Intelligence, Cybersecurity and Privacy, Internet of Things (IoT)
Guest Editor
Dr. Qinggang Meng

Department of Computer Science, Loughborough University, Loughborough, UK
Interests: Robotics, Unmanned Aerial Vehicles, Driverless Vehicles, Networked Systems, Ambient Assisted Living, Computer Vision, AI and Pattern Recognition, Machine Learning and Deep Learning
Guest Editor
Prof. Dr. Houbing Song

Department of Electrical, Computer, Software, and Systems Engineering, Embry-Riddle Aeronautical University, Daytona Beach, FL 32114, USA
Interests: Cyber-Physical Systems; Signal Processing for Communications and Networking; Cloud Computing/Edge Computing
Guest Editor
Dr. Burak Kantarci

School of Electrical Engineering and Computer Science, University of Ottawa, Ottawa, ON, K1N 6N5, Canada
Interests: Internet of Things; Big Data in the Network; Crowdsensing and Social Networks; Cloud Networking; Digital Health (D-Health); Green ICT and Sustainable Communications

Special Issue Information

Dear Colleagues,

Artificial Intelligence (AI) is an emerging topic in research on vision sensor systems, aiming to simulate the processing abilities of human intelligence. With the development of computer science, AI is gradually being applied to all aspects of human life. AI-related technologies have now moved out of the laboratory and achieved breakthroughs in many fields of application. Among them, machine vision is a rapidly developing branch of AI that is based on vision sensor systems. Machine vision is at the forefront of interdisciplinary research; unlike the study of human or animal vision, it uses geometric, physical and learning techniques to build models, and statistical methods to process data. Vision sensor systems enable a computer to achieve the human visual functions of perceiving, recognizing and understanding three-dimensional scenes in the objective world. In short, vision sensor systems aim to use machines instead of human eyes for measurement and judgment, e.g., 3D reconstruction, facial recognition, and image retrieval. Applying AI in vision sensor systems can relax a system's requirements on the environment, such as background, occlusion, and location, and can enhance its adaptability to complex environments and its processing speed. Therefore, vision sensor systems must be combined with AI to develop further.

This special issue is dedicated to the development, challenges, and current status of AI for vision sensor systems. Submitted research articles should relate to AI technologies that support machine vision, and to AI-inspired aspects of target detection, tracking and recognition systems, and of image analysis and processing techniques. Manuscripts must not be submitted simultaneously for publication elsewhere. Submissions of high-quality manuscripts describing future potential or ongoing work are sought.

Topics of interest include, but are not limited to:

  • AI technologies for supporting vision sensor systems
  • Target detection, tracking and recognition in real-time dynamic systems based on vision sensors
  • Face and iris recognition in vision sensor systems based on AI
  • License plate detection and recognition based on vision sensor systems
  • AI-inspired deep learning algorithms, especially unsupervised and semi-supervised learning
  • AI-inspired image/video retrieval and classification in vision sensor systems
  • AI-inspired image/video quality evaluation based on features in vision sensor systems
  • Research and application of AI-inspired visual feature extraction based on vision sensor systems
  • Intelligent information search systems based on vision sensors
  • Intelligent processing of visual information based on vision sensor systems

Dr. Jiachen Yang
Dr. Qinggang Meng
Prof. Dr. Houbing Song
Dr. Burak Kantarci
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (11 papers)


Research

Open Access Article: Design and Implementation of a Stereo Vision System on an Innovative 6DOF Single-Edge Machining Device for Tool Tip Localization and Path Correction
Sensors 2018, 18(9), 3132; https://doi.org/10.3390/s18093132
Received: 26 July 2018 / Revised: 13 September 2018 / Accepted: 13 September 2018 / Published: 17 September 2018
Abstract
In the current meso cutting technology industry, the demand for more advanced, accurate and cheaper devices capable of creating a wide range of surfaces and geometries is rising. To fulfill this demand, an alternative single-point cutting device with 6 degrees of freedom (6DOF) was developed. Its main advantage over milling is that it requires simpler cutting tools that are easier to develop. To obtain accurate and precise geometries, the tool tip must be monitored to compensate for its position and make the proper corrections on the computer numerical control (CNC). For this, a stereo vision system was implemented as an alternative to the technologies currently available in the industry. In this paper, the artificial intelligence technologies required for implementing such a vision system are explored and discussed. The vision system was compared with the commercial measurement software Dino Capture and a dedicated metrological microscope system, the TESA V-200GL. Experimental analyses were carried out and results were measured in terms of accuracy. The proposed vision system yielded a measurement error of ±3 µm.
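The abstract describes tool-tip localization by stereo vision but includes no code. As a rough, hypothetical illustration of how a single tracked point can be triangulated from two calibrated views with standard tools (not the authors' system), the sketch below uses OpenCV's cv2.triangulatePoints; the intrinsics, baseline and pixel detections are placeholder values.

```python
# Minimal stereo-triangulation sketch (illustrative only; not the paper's pipeline).
# Assumes the tool tip has already been detected in both images and that the
# projection matrices come from a prior stereo calibration (values are placeholders).
import numpy as np
import cv2

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])                            # hypothetical intrinsics

P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])          # left camera at the origin
P2 = K @ np.hstack([np.eye(3), [[-0.06], [0.0], [0.0]]])   # right camera, 60 mm baseline

# Tool-tip detections in each view, shape (2, N), pixel coordinates.
pts_left = np.array([[320.4], [241.7]])
pts_right = np.array([[298.9], [241.5]])

X_h = cv2.triangulatePoints(P1, P2, pts_left, pts_right)   # homogeneous, shape (4, N)
X = (X_h[:3] / X_h[3]).T                                   # Cartesian 3-D points (metres)
print("Estimated tool-tip position:", X[0])
```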

Open Access Article: Hand Tracking and Gesture Recognition Using Lensless Smart Sensors
Sensors 2018, 18(9), 2834; https://doi.org/10.3390/s18092834
Received: 7 June 2018 / Revised: 15 August 2018 / Accepted: 18 August 2018 / Published: 28 August 2018
Abstract
The Lensless Smart Sensor (LSS) developed by Rambus, Inc. is a low-power, low-cost visual sensing technology that captures information-rich optical data in a tiny form factor using a novel approach to optical sensing. The spiral diffractive gratings of the LSS, coupled with sophisticated computational algorithms, allow point tracking down to millimeter-level accuracy. This work focuses on developing novel algorithms for the detection of multiple points, thereby enabling hand tracking and gesture recognition using the LSS. The algorithms are formulated based on geometrical and mathematical constraints around the placement of infrared light-emitting diodes (LEDs) on the hand. The developed techniques dynamically adapt to the recognition and orientation of the hand and the associated gestures. A detailed accuracy analysis for both hand tracking and gesture classification as a function of LED positions is conducted to validate the performance of the system. Our results indicate that the technology is a promising approach, as the current state of the art focuses on human motion tracking that requires highly complex and expensive systems. A wearable, low-power, low-cost system could make a significant impact in this field, as it does not require complex hardware or additional sensors on the tracked segments.

Open Access Article: Superpixel Segmentation Based Synthetic Classifications with Clear Boundary Information for a Legged Robot
Sensors 2018, 18(9), 2808; https://doi.org/10.3390/s18092808
Received: 25 July 2018 / Revised: 21 August 2018 / Accepted: 22 August 2018 / Published: 25 August 2018
Abstract
For terrain classification by autonomous multi-legged walking robots, two synthetic classification methods are proposed: Simple Linear Iterative Clustering based Support Vector Machine (SLIC-SVM) and Simple Linear Iterative Clustering based SegNet (SLIC-SegNet). SLIC-SVM solves the problem that an SVM can only output a single terrain label and fails to identify mixed terrain. The SLIC-SegNet single-input multi-output terrain classification model is derived to improve the applicability of the terrain classifier. Since high-quality terrain classification results for legged robot use are hard to obtain, SLIC-SegNet delivers satisfactory information without excessive effort. A series of experiments on regular, irregular and mixed terrain shows that both superpixel-segmentation-based synthetic classification methods supply reliable mixed-terrain classification results with clear boundary information and bring terrain-dependent gait selection and path planning of multi-legged robots closer to practice.
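As a loose, hypothetical illustration of the SLIC-plus-classifier idea (superpixel segmentation followed by a per-superpixel label), the sketch below combines scikit-image's SLIC with a scikit-learn SVM using simple mean-colour features; the paper's actual features, training data and SegNet variant are not reproduced.

```python
# Rough SLIC + SVM sketch: one terrain label per superpixel, painted back into a
# label image so mixed terrain keeps its boundaries. Illustrative only; the
# image, features and labels here are random placeholders, not the paper's model.
import numpy as np
from skimage.segmentation import slic
from sklearn.svm import SVC

rng = np.random.default_rng(0)
image = rng.random((120, 160, 3))          # stand-in for a frame from the robot camera

segments = slic(image, n_segments=150, compactness=10, start_label=0)
ids = np.unique(segments)

# Mean RGB of each superpixel as a (deliberately simple) feature vector.
features = np.array([image[segments == i].mean(axis=0) for i in ids])

# Placeholder training labels (in practice: terrain classes from annotated images).
labels = rng.integers(0, 3, size=len(ids))
clf = SVC(kernel="rbf").fit(features, labels)

# Per-superpixel predictions give a single-input multi-output terrain map.
pred = clf.predict(features)
label_image = np.zeros(segments.shape, dtype=int)
for i, p in zip(ids, pred):
    label_image[segments == i] = p
```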

Open Access Article: Generative vs. Discriminative Recognition Models for Off-Line Arabic Handwriting
Sensors 2018, 18(9), 2786; https://doi.org/10.3390/s18092786
Received: 27 April 2018 / Revised: 13 July 2018 / Accepted: 17 July 2018 / Published: 24 August 2018
Abstract
The majority of handwritten word recognition strategies are built on generative frameworks learned from letter or word training samples. Theoretically, constructing recognition models through discriminative learning should be the more effective alternative. The primary goal of this research is to compare the performance of discriminative and generative recognition strategies, represented by generatively trained hidden Markov models (HMM), discriminatively trained conditional random fields (CRF) and discriminatively trained hidden-state CRF (HCRF). With learning samples obtained from two dissimilar databases, we initially trained and applied an HMM classification scheme. To enable the HMM classifiers to effectively reject incorrect and out-of-vocabulary segmentations, we enhanced the models with adaptive threshold schemes. Aside from proposing such schemes for HMM classifiers, this research introduces CRF and HCRF classifiers for the recognition of offline Arabic handwritten words. Furthermore, the efficiencies of all three strategies are fully assessed using the two dissimilar databases. Recognition outcomes for both words and letters are presented, with the pros and cons of each strategy emphasized.
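As a toy, hypothetical illustration of generatively trained per-class HMMs with a rejection threshold (one flavour of the thresholding idea mentioned above, not the paper's models or features), the following sketch uses the hmmlearn package; the sequences, classes and threshold value are placeholders.

```python
# Toy per-class HMMs with a crude rejection threshold; illustrative only.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(1)

def make_sequences(offset, n_seq=20, length=15, dim=4):
    """Placeholder 'letter feature' sequences for one word class."""
    return [offset + rng.normal(size=(length, dim)) for _ in range(n_seq)]

classes = {"word_a": make_sequences(0.0), "word_b": make_sequences(2.0)}

# One generatively trained HMM per word class.
models = {}
for name, seqs in classes.items():
    X, lengths = np.vstack(seqs), [len(s) for s in seqs]
    models[name] = GaussianHMM(n_components=3, covariance_type="diag",
                               n_iter=50, random_state=0).fit(X, lengths)

def recognize(seq, reject_threshold=-120.0):
    """Best-scoring class, or None when even the best log-likelihood is too low
    (a stand-in for rejecting incorrect / out-of-vocabulary segmentations)."""
    scores = {name: m.score(seq) for name, m in models.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > reject_threshold else None

print(recognize(classes["word_a"][0]))             # expected: "word_a"
print(recognize(rng.normal(10.0, 1.0, (15, 4))))   # far from training data: likely None
```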

Open Access Article: Towards Low-Cost Yet High-Performance Sensor Networks by Deploying a Few Ultra-fast Charging Battery Powered Sensors
Sensors 2018, 18(9), 2771; https://doi.org/10.3390/s18092771
Received: 6 July 2018 / Revised: 7 August 2018 / Accepted: 17 August 2018 / Published: 23 August 2018
Abstract
The employment of mobile vehicles to charge sensors via wireless energy transfer is a promising technology for maintaining the perpetual operation of wireless sensor networks (WSNs). Most existing studies assumed that sensors are powered by off-the-shelf batteries, e.g., Lithium batteries, which are cheap but take non-trivial time to charge fully (e.g., 30–80 min). The long charging time may incur long sensor dead durations, especially when many lifetime-critical sensors must be charged. Other studies assumed that every sensor is powered by an ultra-fast charging battery, which takes only trivial time to replenish, e.g., 1 min, but adopting many ultra-fast sensors brings a high purchasing cost. In this paper, we propose a novel heterogeneous sensor network model in which there are only a few ultra-fast sensors and many low-cost off-the-shelf sensors. The deployment cost of the network in this model is low, as the number of ultra-fast sensors is limited. We also observe that sensor dead durations can be significantly shortened by enabling the ultra-fast sensors to relay more data for lifetime-critical off-the-shelf sensors. We then propose a joint charging scheduling and routing allocation algorithm that minimizes the longest sensor dead duration. Finally, we evaluate the performance of the proposed algorithm through extensive simulation experiments. Experimental results show that the proposed algorithm is very promising: the longest sensor dead duration it yields is only about 10% of those produced by existing algorithms.

Open Access Article: A Novel Semi-Supervised Feature Extraction Method and Its Application in Automotive Assembly Fault Diagnosis Based on Vision Sensor Data
Sensors 2018, 18(8), 2545; https://doi.org/10.3390/s18082545
Received: 3 May 2018 / Revised: 27 July 2018 / Accepted: 30 July 2018 / Published: 3 August 2018
Abstract
The fault diagnosis of dimensional variation plays an essential role in the production of an automotive body. However, it is difficult to identify faults from small labeled sample sets using traditional supervised learning methods. The present study proposes a novel feature extraction method, semi-supervised complete kernel Fisher discriminant (SS-CKFDA), and introduces a new fault diagnosis flow for automotive assembly based on this method. SS-CKFDA combines traditional complete kernel Fisher discriminant (CKFDA) with semi-supervised learning: it adjusts the Fisher criterion with the global data structure extracted from a large set of unlabeled samples. When the number of labeled samples is small, the global structure present in the measured data can effectively improve the quality of the projection vectors. Experimental results on Tennessee Eastman Process (TEP) data demonstrate that the proposed method improves diagnostic performance compared to other Fisher discriminant algorithms. Finally, experimental results on optical coordinate data prove that the method can be applied to the automotive assembly process and achieves better performance.
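The kernel method itself is not spelled out in the abstract; as a much-simplified, hypothetical linear analogue of the underlying idea (blending the labeled within-class scatter with the global scatter of all samples before solving the Fisher eigenproblem), the following sketch is offered. It is not SS-CKFDA; the blending weight alpha and the toy data are placeholders.

```python
# Simplified *linear* semi-supervised Fisher sketch; NOT the SS-CKFDA algorithm.
import numpy as np
from scipy.linalg import eigh

def semi_supervised_fisher(X_lab, y_lab, X_unlab, alpha=0.5, eps=1e-6):
    """Projection directions from a Fisher criterion whose denominator mixes the
    labeled within-class scatter with the total scatter of all samples."""
    X_all = np.vstack([X_lab, X_unlab])
    mean_all = X_all.mean(axis=0)
    d = X_lab.shape[1]

    Sb = np.zeros((d, d))              # between-class scatter (labeled data)
    Sw = np.zeros((d, d))              # within-class scatter (labeled data)
    for c in np.unique(y_lab):
        Xc = X_lab[y_lab == c]
        mc = Xc.mean(axis=0)
        Sb += len(Xc) * np.outer(mc - mean_all, mc - mean_all)
        Sw += (Xc - mc).T @ (Xc - mc)

    St = (X_all - mean_all).T @ (X_all - mean_all)   # global structure of all samples

    denom = (1 - alpha) * Sw + alpha * St + eps * np.eye(d)
    vals, vecs = eigh(Sb, denom)       # generalized eigenproblem Sb w = lambda denom w
    return vecs[:, ::-1]               # columns sorted by decreasing eigenvalue

# Toy usage: few labeled samples, many unlabeled samples.
rng = np.random.default_rng(0)
X_lab = np.vstack([rng.normal(0, 1, (5, 2)), rng.normal(3, 1, (5, 2))])
y_lab = np.array([0] * 5 + [1] * 5)
X_unlab = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(3, 1, (100, 2))])
W = semi_supervised_fisher(X_lab, y_lab, X_unlab)
```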

Open Access Article: Features of X-Band Radar Backscattering Simulation Based on the Ocean Environmental Parameters in China Offshore Seas
Sensors 2018, 18(8), 2450; https://doi.org/10.3390/s18082450
Received: 22 June 2018 / Revised: 26 July 2018 / Accepted: 26 July 2018 / Published: 28 July 2018
Abstract
X-band marine radar has been employed as a remote sensing tool for sea-state monitoring. However, there is little literature on sea spectra that considers both the wave parameters and the short wind-wave spectra in China's offshore seas, which are of theoretical and practical significance. Based on wave parameters acquired from European Centre for Medium-Range Weather Forecasts reanalysis data (ERA-Interim) over 36 months from 2015 to 2017, a finite-depth sea spectrum considering both wind speeds and ocean environmental parameters is established in this study. The wave spectrum is then built into a modified two-scale model, which can be related to the ocean environmental parameters (wind speeds and wave parameters). The final results are the mean backscattering coefficients over a variety of sea states at a given wind speed. As the model predicts, the monthly maximum backscattering coefficients in different seas change slowly (within 4 dB). In addition, the differences in the backscattering coefficients between different seas are quite small over azimuthal angles of 0° to 90° and 270° to 360°, with a relative error within 1.5 dB at low wind speed (5 m/s) and 2 dB at high wind speed (10 m/s). With the method in this paper, experimental results can be corrected based on the relative error analysis under different conditions.

Open Access Article: Generalized Vision-Based Detection, Identification and Pose Estimation of Lamps for BIM Integration
Sensors 2018, 18(7), 2364; https://doi.org/10.3390/s18072364
Received: 22 June 2018 / Revised: 16 July 2018 / Accepted: 18 July 2018 / Published: 20 July 2018
Abstract
This paper introduces a comprehensive approach based on computer vision for the automatic detection, identification and pose estimation of lamps in a building using image and location data from low-cost sensors, allowing their incorporation into building information modelling (BIM). The procedure is based on our previous work, but the algorithms are substantially improved by generalizing the detection to any light surface type, including polygonal and circular shapes, and by refining the BIM integration. We validate the complete methodology with a case study at the Mining and Energy Engineering School and achieve reliable results, increasing the rate of successful real-time detections while using low computational resources, leading to an accurate, cost-effective and advanced method. The suitability and adequacy of the method are demonstrated.
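Pose estimation of a detected lamp can be illustrated, very roughly, with OpenCV's solvePnP once the lamp's physical dimensions and its image-plane corners are known; the sketch below is a hypothetical stand-alone example (placeholder intrinsics, dimensions and detections), not the paper's pipeline.

```python
# Rough pose-estimation sketch for a rectangular ceiling lamp via cv2.solvePnP;
# illustrative only, with placeholder values throughout.
import numpy as np
import cv2

# Hypothetical lamp model: a 0.6 m x 0.6 m rectangular luminaire, corners in metres.
object_pts = np.array([[-0.3, -0.3, 0.0],
                       [ 0.3, -0.3, 0.0],
                       [ 0.3,  0.3, 0.0],
                       [-0.3,  0.3, 0.0]], dtype=np.float64)

# Corresponding corner detections in the image (pixel coordinates, assumed given).
image_pts = np.array([[210.0, 120.0],
                      [430.0, 118.0],
                      [436.0, 330.0],
                      [205.0, 334.0]], dtype=np.float64)

# Camera intrinsics from a prior calibration (placeholder values).
K = np.array([[900.0, 0.0, 320.0],
              [0.0, 900.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist)
if ok:
    R, _ = cv2.Rodrigues(rvec)          # rotation matrix, lamp frame -> camera frame
    print("Lamp position in camera frame (m):", tvec.ravel())
```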

Open Access Article: Robust Visual Tracking Based on Adaptive Convolutional Features and Offline Siamese Tracker
Sensors 2018, 18(7), 2359; https://doi.org/10.3390/s18072359
Received: 15 May 2018 / Revised: 17 July 2018 / Accepted: 18 July 2018 / Published: 20 July 2018
Abstract
Robust and accurate visual tracking is one of the most challenging computer vision problems. Due to the inherent lack of training data, a robust approach to constructing a target appearance model is crucial. The existing spatially regularized discriminative correlation filter (SRDCF) method learns partial-target or background information when the target undergoes rotation, moves out of view, or is heavily occluded. To reduce computational complexity and enhance tracking ability, we first introduce an adaptive dimensionality reduction technique to extract features from the image based on the pre-trained VGG-Net. We then propose an adaptive model update that assigns weights during the update procedure depending on the peak-to-sidelobe ratio. Finally, we combine the online SRDCF-based tracker with an offline Siamese tracker to accomplish long-term tracking. Experimental results demonstrate that the proposed tracker performs satisfactorily in a wide range of challenging tracking scenarios.
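The peak-to-sidelobe ratio (PSR) used to weight the model update is a standard confidence measure for correlation response maps; a small, self-contained illustration (with a synthetic response map, not the paper's tracker) is sketched below.

```python
# Peak-to-sidelobe ratio (PSR) of a correlation response map; values are illustrative.
import numpy as np

def peak_to_sidelobe_ratio(response, exclude=5):
    """PSR = (peak - mean(sidelobe)) / std(sidelobe), where the sidelobe is the
    response map with a small window around the peak excluded."""
    r0, c0 = np.unravel_index(np.argmax(response), response.shape)
    peak = response[r0, c0]

    mask = np.ones_like(response, dtype=bool)
    mask[max(r0 - exclude, 0):r0 + exclude + 1,
         max(c0 - exclude, 0):c0 + exclude + 1] = False
    sidelobe = response[mask]
    return (peak - sidelobe.mean()) / (sidelobe.std() + 1e-12)

# A synthetic response with one sharp peak gives a high PSR, which would favour
# a larger weight during the adaptive model update.
resp = np.random.default_rng(0).normal(0, 0.01, (64, 64))
resp[30, 40] = 1.0
print(peak_to_sidelobe_ratio(resp))
```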

Open Access Article: Multi-Focus Image Fusion Method for Vision Sensor Systems via Dictionary Learning with Guided Filter
Sensors 2018, 18(7), 2143; https://doi.org/10.3390/s18072143
Received: 25 May 2018 / Revised: 26 June 2018 / Accepted: 30 June 2018 / Published: 3 July 2018
Abstract
Vision sensor systems (VSS) are widely deployed in surveillance, traffic and industrial contexts, and a large number of images can be obtained via VSS. Because of the limitations of vision sensors, it is difficult to obtain an all-focused image, which makes analyzing and understanding the image difficult. In this paper, a novel multi-focus image fusion method (SRGF) is proposed. The proposed method uses sparse coding to classify focused and defocused regions and obtain focus feature maps. Then, a guided filter (GF) is used to calculate score maps. An initial decision map is obtained by comparing the score maps. After that, consistency verification is performed, and the initial decision map is further refined by the guided filter to obtain the final decision map. Experiments show that our method obtains satisfying fusion results, demonstrating that the proposed method is competitive with existing state-of-the-art fusion methods.
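As a loose, hypothetical illustration of guided-filter-refined decision maps for multi-focus fusion (the sparse-coding focus classification of SRGF is not reproduced; a Laplacian-energy focus measure stands in for it), the sketch below uses cv2.ximgproc.guidedFilter, which requires the opencv-contrib-python package.

```python
# Simplified multi-focus fusion sketch: focus measure -> hard decision map ->
# guided-filter refinement -> weighted blend. Illustrative only; not the SRGF method.
import numpy as np
import cv2

def focus_measure(gray, ksize=7):
    """Local energy of the Laplacian as a simple focus measure."""
    lap = cv2.Laplacian(gray.astype(np.float32), cv2.CV_32F)
    return cv2.boxFilter(lap * lap, -1, (ksize, ksize))

def fuse(img_a, img_b, radius=8, eps=1e-3):
    gray_a = cv2.cvtColor(img_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(img_b, cv2.COLOR_BGR2GRAY)

    # Initial decision map: 1 where image A is more in focus.
    decision = (focus_measure(gray_a) > focus_measure(gray_b)).astype(np.float32)

    # Refine the decision map with a guided filter, using one source as the guide.
    guide = gray_a.astype(np.float32) / 255.0
    weight = cv2.ximgproc.guidedFilter(guide, decision, radius, eps)
    weight = np.clip(weight, 0.0, 1.0)[..., None]

    return (weight * img_a + (1.0 - weight) * img_b).astype(np.uint8)

# Example usage with two differently focused source images:
# fused = fuse(cv2.imread("near_focus.png"), cv2.imread("far_focus.png"))
```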

Open Access Article: LightDenseYOLO: A Fast and Accurate Marker Tracker for Autonomous UAV Landing by Visible Light Camera Sensor on Drone
Sensors 2018, 18(6), 1703; https://doi.org/10.3390/s18061703
Received: 3 May 2018 / Revised: 17 May 2018 / Accepted: 22 May 2018 / Published: 24 May 2018
Abstract
Autonomous landing of an unmanned aerial vehicle, or drone, is a challenging problem for the robotics research community. Previous researchers have attempted to solve this problem by combining multiple sensors such as global positioning system (GPS) receivers, inertial measurement units, and multiple camera systems. Although these approaches successfully estimate an unmanned aerial vehicle's location during landing, many calibration processes are required to achieve good detection accuracy. In addition, cases where drones operate in heterogeneous areas with no GPS signal should be considered. To overcome these problems, we determined how to safely land a drone in a GPS-denied environment using our remote-marker-based tracking algorithm based on a single visible-light camera sensor. Instead of using hand-crafted features, our algorithm includes a convolutional neural network named lightDenseYOLO that extracts trained features from an input image to predict a marker's location from the visible-light camera sensor on the drone. Experimental results show that our method significantly outperforms state-of-the-art object trackers, both with and without convolutional neural networks, in terms of both accuracy and processing time.
