Special Issue "Emerging Algorithms and Applications in Vision Sensors System based on Artificial Intelligence"

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: 31 August 2018

Special Issue Editors

Guest Editor
Dr. Jiachen Yang

School of Electrical and Information Engineering, Tianjin University, Tianjin, China
Interests: Image Processing; Vision Sensors; Machine Learning and Deep Learning; Pattern Recognition based on Artificial Intelligence; Cybersecurity and Privacy; Internet of Things (IoT)
Guest Editor
Dr. Qinggang Meng

Department of Computer Science, Loughborough University, Loughborough, UK
Interests: Robotics; Unmanned Aerial Vehicles; Driverless Vehicles; Networked Systems; Ambient Assisted Living; Computer Vision; AI and Pattern Recognition; Machine Learning and Deep Learning
Guest Editor
Prof. Dr. Houbing Song

Department of Electrical, Computer, Software, and Systems Engineering, Embry-Riddle Aeronautical University, Daytona Beach, FL 32114, USA
Interests: Cyber-Physical Systems; Signal Processing for Communications and Networking; Cloud Computing/Edge Computing
Guest Editor
Dr. Burak Kantarci

School of Electrical Engineering and Computer Science, University of Ottawa, Ottawa, ON, K1N 6N5, Canada
Interests: Internet of Things; Big Data in the Network; Crowdsensing and Social Networks; Cloud Networking; Digital Health (D-Health); Green ICT and Sustainable Communications

Special Issue Information

Dear Colleagues,

Artificial Intelligence (AI) is an emerging topic in the research of vision sensor systems, as it can emulate the processing abilities of human intelligence. With the development of computer science, AI is gradually being applied to all aspects of human life. AI-related technologies have now moved out of the laboratory and achieved breakthroughs in many fields of application. Among them, machine vision, which is built on vision sensor systems, is one of the most rapidly developing branches of AI. Machine vision stands at the forefront of interdisciplinary research; unlike the study of human or animal vision, it builds models using geometric, physical, and learning techniques, and processes data with statistical methods. Vision sensor systems enable a computer to achieve the human visual functions of perceiving, recognizing, and understanding three-dimensional scenes in the objective world. In short, vision sensor systems aim to use machines instead of human eyes for measurement and judgment, e.g., 3D reconstruction, facial recognition, and image retrieval. Applying AI in vision sensor systems can relax a system's requirements on the environment, such as background, occlusion, and location, and can enhance the system's adaptability and processing speed in complex environments. Vision sensor systems should therefore be combined with AI to achieve better development.

This Special Issue is dedicated to the development, challenges, and current status of AI for vision sensor systems. Research articles should relate to AI technologies that support machine vision, and to AI-inspired aspects of target detection, tracking, and recognition systems, as well as image analysis and processing techniques. Manuscripts should not be submitted simultaneously for publication elsewhere. Submissions of high-quality manuscripts describing future potential or ongoing work are sought.

Topics of interest include, but are not limited to:

  • AI technologies for supporting vision sensor systems
  • Target detection, tracking, and recognition in real-time dynamic systems based on vision sensors
  • Face recognition and iris recognition in vision sensor systems based on AI
  • License plate detection and recognition based on vision sensor systems
  • AI-inspired deep learning algorithms, especially unsupervised and semi-supervised learning
  • AI-inspired image/video retrieval and classification in vision sensor systems
  • AI-inspired image/video quality evaluation based on features in vision sensor systems
  • Research and application of AI-inspired visual feature extraction based on vision sensor systems
  • Intelligent information search systems based on vision sensors
  • Intelligent processing of visual information based on vision sensor systems

Dr. Jiachen Yang
Dr. Qinggang Meng
Prof. Dr. Houbing Song
Dr. Burak Kantarci
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (6 papers)


Research

Open Access Article: A Novel Semi-Supervised Feature Extraction Method and Its Application in Automotive Assembly Fault Diagnosis Based on Vision Sensor Data
Sensors 2018, 18(8), 2545; https://doi.org/10.3390/s18082545
Received: 3 May 2018 / Revised: 27 July 2018 / Accepted: 30 July 2018 / Published: 3 August 2018
Abstract
The fault diagnosis of dimensional variation plays an essential role in the production of an automotive body. However, it is difficult to identify faults from small labeled sample sets using traditional supervised learning methods. The present study proposed a novel feature extraction method named semi-supervised complete kernel Fisher discriminant (SS-CKFDA), and a new fault diagnosis flow for automotive assembly was introduced based on this method. SS-CKFDA is a combination of traditional complete kernel Fisher discriminant (CKFDA) and semi-supervised learning. It adjusts the Fisher criterion with the global data structure extracted from large numbers of unlabeled samples. When the number of labeled samples is small, the global structure that exists in the measured data can effectively improve the quality of the extracted projection vectors. The experimental results on Tennessee Eastman Process (TEP) data demonstrated that the proposed method can improve diagnostic performance when compared to other Fisher discriminant algorithms. Finally, the experimental results on the optical coordinate data prove that the method can be applied in the automotive assembly process and achieve better performance.
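
To make the core idea concrete, a minimal, illustrative sketch of a semi-supervised Fisher discriminant in the spirit of SS-CKFDA is given below. It is a simplified linear version under our own assumptions (the paper's method is the kernelized, "complete" variant), and all names and parameters are illustrative rather than the authors' implementation.

```python
# Illustrative semi-supervised Fisher discriminant (linear, simplified).
# The within-class scatter of the few labeled samples is blended with the
# total scatter of a large unlabeled pool, so the learned projection also
# respects the global data structure -- the key idea behind SS-CKFDA.
import numpy as np
from scipy.linalg import eigh

def semi_supervised_fda(X_lab, y_lab, X_unlab, alpha=0.5, reg=1e-6):
    classes = np.unique(y_lab)
    d = X_lab.shape[1]
    mean_lab = X_lab.mean(axis=0)
    S_b = np.zeros((d, d))  # between-class scatter (labeled data)
    S_w = np.zeros((d, d))  # within-class scatter (labeled data)
    for c in classes:
        Xc = X_lab[y_lab == c]
        mc = Xc.mean(axis=0)
        S_b += len(Xc) * np.outer(mc - mean_lab, mc - mean_lab)
        S_w += (Xc - mc).T @ (Xc - mc)
    # Global (total) scatter estimated from labeled + unlabeled samples.
    X_all = np.vstack([X_lab, X_unlab])
    X_ctr = X_all - X_all.mean(axis=0)
    S_t = X_ctr.T @ X_ctr / len(X_all)
    # Blend local and global structure; regularize for invertibility.
    S_mix = (1.0 - alpha) * S_w + alpha * S_t + reg * np.eye(d)
    # Maximize w'S_b w / w'S_mix w via a generalized eigenproblem.
    evals, evecs = eigh(S_b, S_mix)
    order = np.argsort(evals)[::-1]
    return evecs[:, order[: len(classes) - 1]]  # projection matrix
```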

Open Access Article: Features of X-Band Radar Backscattering Simulation Based on the Ocean Environmental Parameters in China Offshore Seas
Sensors 2018, 18(8), 2450; https://doi.org/10.3390/s18082450
Received: 22 June 2018 / Revised: 26 July 2018 / Accepted: 26 July 2018 / Published: 28 July 2018
Abstract
The X-band marine radar has been employed as a remote sensing tool for sea state monitoring. However, few studies have addressed sea spectra that consider both the wave parameters and the short wind-wave spectra in China Offshore Seas, which are of theoretical and practical significance. Based on wave parameters acquired from 36 months (2015 to 2017) of European Centre for Medium-Range Weather Forecasts reanalysis data (ERA-Interim), a finite-depth sea spectrum considering both wind speeds and ocean environmental parameters is established in this study. The wave spectrum is then built into a modified two-scale model, which can be related to the ocean environmental parameters (wind speeds and wave parameters). The final results are the mean backscattering coefficients over a variety of sea states at a given wind speed. As the model predicts, the monthly maximum backscattering coefficients in different seas change slowly (within 4 dB). In addition, the differences between the backscattering coefficients in different seas are quite small at azimuthal angles of 0° to 90° and 270° to 360°, with a relative error within 1.5 dB at low wind speed (5 m/s) and 2 dB at high wind speed (10 m/s). With the method presented in this paper, a corrected result can be obtained from experimental data based on the relative error analysis under different conditions.
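
To illustrate one step the abstract describes, the sketch below shows how mean backscattering coefficients over a set of sea states at a given wind speed might be computed. The sigma0_db() function is a hypothetical placeholder standing in for the modified two-scale model, with invented coefficients; the one substantive point is that dB values must be averaged in linear power space.

```python
# Illustrative averaging of backscattering coefficients over sea states at a
# fixed wind speed. sigma0_db() is a hypothetical placeholder for the paper's
# modified two-scale model; its coefficients are invented for demonstration.
import numpy as np

def sigma0_db(wind_speed, wave_height, azimuth_deg):
    """Placeholder backscatter model returning sigma0 in dB (not physical)."""
    upwind = np.cos(np.radians(azimuth_deg))
    return -25.0 + 1.2 * wind_speed - 0.8 * wave_height + 2.0 * upwind

def mean_sigma0_db(wind_speed, wave_heights, weights, azimuth_deg):
    """Weighted mean of sigma0 over sea states, averaged in linear space."""
    vals_db = np.array([sigma0_db(wind_speed, h, azimuth_deg)
                        for h in wave_heights])
    linear = 10.0 ** (vals_db / 10.0)        # dB -> linear power
    mean_linear = np.average(linear, weights=weights)
    return 10.0 * np.log10(mean_linear)      # linear power -> dB

# Example: sea-state occurrence frequencies (as could be derived from
# ERA-Interim statistics) for significant wave heights in meters.
heights = np.array([0.5, 1.0, 1.5, 2.0, 3.0])
freqs = np.array([0.15, 0.35, 0.25, 0.15, 0.10])
print(mean_sigma0_db(5.0, heights, freqs, azimuth_deg=45.0))
```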

Open Access Article: Generalized Vision-Based Detection, Identification and Pose Estimation of Lamps for BIM Integration
Sensors 2018, 18(7), 2364; https://doi.org/10.3390/s18072364
Received: 22 June 2018 / Revised: 16 July 2018 / Accepted: 18 July 2018 / Published: 20 July 2018
Abstract
This paper introduces a comprehensive approach based on computer vision for the automatic detection, identification, and pose estimation of lamps in a building, using image and location data from low-cost sensors and allowing their incorporation into the building information modelling (BIM). The procedure builds on our previous work, but the algorithms are substantially improved by generalizing detection to any light surface type, including polygonal and circular shapes, and by refining the BIM integration. We validate the complete methodology with a case study at the Mining and Energy Engineering School and achieve reliable results, increasing the rate of successful real-time detections while using low computational resources, leading to an accurate, cost-effective, and advanced method. The suitability and adequacy of the method are thereby demonstrated.
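
As a rough illustration of generalized light-surface detection, the following OpenCV sketch segments bright regions and classifies each contour as circular or polygonal. It is a simplified stand-in under our own assumptions, not the authors' pipeline, and the thresholds are illustrative.

```python
# Illustrative lamp detection: segment bright regions, then classify each
# contour as circular or polygonal. A simplified stand-in, not the authors'
# pipeline; thresholds are arbitrary. Requires OpenCV >= 4.
import cv2
import numpy as np

def detect_lamps(image_bgr, brightness_thresh=240, min_area=200.0):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, brightness_thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    lamps = []
    for cnt in contours:
        area = cv2.contourArea(cnt)
        if area < min_area:
            continue  # skip small specular highlights
        perimeter = cv2.arcLength(cnt, True)
        approx = cv2.approxPolyDP(cnt, 0.02 * perimeter, True)
        # Circularity is ~1 for circles and lower for polygons.
        circularity = 4.0 * np.pi * area / (perimeter * perimeter)
        shape = "circular" if circularity > 0.85 else f"polygon({len(approx)})"
        lamps.append({"shape": shape, "bbox": cv2.boundingRect(cnt)})
    return lamps
```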

Open Access Article: Robust Visual Tracking Based on Adaptive Convolutional Features and Offline Siamese Tracker
Sensors 2018, 18(7), 2359; https://doi.org/10.3390/s18072359
Received: 15 May 2018 / Revised: 17 July 2018 / Accepted: 18 July 2018 / Published: 20 July 2018
Abstract
Robust and accurate visual tracking is one of the most challenging computer vision problems. Due to the inherent lack of training data, a robust approach for constructing a target appearance model is crucial. The existing spatially regularized discriminative correlation filter (SRDCF) method learns only partial target information or background information when experiencing rotation, out-of-view motion, or heavy occlusion. To reduce computational complexity while enhancing tracking ability, we first introduce an adaptive dimensionality reduction technique that extracts features from the image based on the pre-trained VGG-Net. We then propose an adaptive model update that assigns weights during the update procedure depending on the peak-to-sidelobe ratio. Finally, we combine the online SRDCF-based tracker with an offline Siamese tracker to accomplish long-term tracking. Experimental results demonstrate that the proposed tracker performs satisfactorily in a wide range of challenging tracking scenarios.
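
The peak-to-sidelobe ratio (PSR) mentioned in the abstract is a standard confidence measure for correlation-filter trackers. The sketch below computes the PSR of a response map and derives an update weight from it; the exclusion window and thresholds are assumptions for illustration, not the authors' exact scheme.

```python
# Peak-to-sidelobe ratio (PSR) of a correlation response map, and an update
# weight derived from it. Window size and thresholds are illustrative.
import numpy as np

def peak_to_sidelobe_ratio(response, exclude=5):
    """PSR = (peak - sidelobe mean) / sidelobe std."""
    pr, pc = np.unravel_index(np.argmax(response), response.shape)
    peak = response[pr, pc]
    mask = np.ones_like(response, dtype=bool)
    mask[max(pr - exclude, 0):pr + exclude + 1,
         max(pc - exclude, 0):pc + exclude + 1] = False
    sidelobe = response[mask]
    return (peak - sidelobe.mean()) / (sidelobe.std() + 1e-12)

def adaptive_learning_rate(psr, base_rate=0.02, psr_low=4.0, psr_high=10.0):
    """Scale the filter update by confidence: low PSR -> little or no update."""
    confidence = np.clip((psr - psr_low) / (psr_high - psr_low), 0.0, 1.0)
    return base_rate * confidence

# The model update then becomes:
#   model = (1 - lr) * model + lr * new_estimate, with lr from the PSR.
```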

Open Access Article: Multi-Focus Image Fusion Method for Vision Sensor Systems via Dictionary Learning with Guided Filter
Sensors 2018, 18(7), 2143; https://doi.org/10.3390/s18072143
Received: 25 May 2018 / Revised: 26 June 2018 / Accepted: 30 June 2018 / Published: 3 July 2018
Abstract
Vision sensor systems (VSS) are widely deployed in surveillance, traffic, and industrial contexts, and a large number of images can be obtained via VSS. Because of the limitations of vision sensors, it is difficult to obtain an all-in-focus image, which complicates analyzing and understanding the image content. In this paper, a novel multi-focus image fusion method (SRGF) is proposed. The method uses sparse coding to classify focused and defocused regions and obtain focus feature maps. A guided filter (GF) is then used to calculate score maps, and an initial decision map is obtained by comparing them. After consistency verification, the initial decision map is further refined by the guided filter to obtain the final decision map. Experiments show that our method obtains satisfying fusion results and is competitive with existing state-of-the-art fusion methods.
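
The guided filter (He et al.) used to refine the decision map is a well-known edge-preserving filter. A compact reference implementation is sketched below; it is a generic grayscale version under our own assumptions, not the authors' code.

```python
# Compact guided filter (He et al.) for refining a decision map with the
# source image as guide. Both inputs are float arrays scaled to [0, 1].
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=8, eps=1e-4):
    size = 2 * radius + 1
    mean_I = uniform_filter(guide, size)
    mean_p = uniform_filter(src, size)
    corr_Ip = uniform_filter(guide * src, size)
    corr_II = uniform_filter(guide * guide, size)
    var_I = corr_II - mean_I * mean_I
    cov_Ip = corr_Ip - mean_I * mean_p
    a = cov_Ip / (var_I + eps)   # local linear model: q = a * I + b
    b = mean_p - a * mean_I
    return uniform_filter(a, size) * guide + uniform_filter(b, size)

# Usage: refined = guided_filter(gray_image, initial_decision_map), then
# fused = refined * image_A + (1.0 - refined) * image_B.
```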

Open Access Article: LightDenseYOLO: A Fast and Accurate Marker Tracker for Autonomous UAV Landing by Visible Light Camera Sensor on Drone
Sensors 2018, 18(6), 1703; https://doi.org/10.3390/s18061703
Received: 3 May 2018 / Revised: 17 May 2018 / Accepted: 22 May 2018 / Published: 24 May 2018
Abstract
Autonomous landing of an unmanned aerial vehicle (UAV), or drone, is a challenging problem for the robotics research community. Previous researchers have attempted to solve it by combining multiple sensors, such as global positioning system (GPS) receivers, inertial measurement units, and multiple camera systems. Although these approaches successfully estimate the vehicle's location during landing, many calibration processes are required to achieve good detection accuracy. In addition, cases where drones operate in heterogeneous areas with no GPS signal should be considered. To overcome these problems, we show how to safely land a drone in a GPS-denied environment using our remote-marker-based tracking algorithm, which relies on a single visible-light camera sensor. Instead of using hand-crafted features, our algorithm employs a convolutional neural network, named lightDenseYOLO, to extract trained features from an input image and predict a marker's location from the drone's visible-light camera. Experimental results show that our method significantly outperforms state-of-the-art object trackers, both with and without convolutional neural networks, in terms of both accuracy and processing time.
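
As a simple illustration of marker-based landing guidance in a GPS-denied setting, the sketch below converts a predicted marker center (such as the output of a detector like lightDenseYOLO) into an approximate metric offset on the ground. The camera field of view and all numbers are assumed for the example.

```python
# Illustrative landing guidance from a predicted marker center. Assumes a
# downward-facing camera; the field of view and all numbers are invented.
import numpy as np

def landing_offset(marker_center_px, image_size, altitude_m, hfov_deg=60.0):
    """Convert the marker's pixel offset from the image center into an
    approximate metric offset on the ground."""
    w, h = image_size
    dx_px = marker_center_px[0] - w / 2.0
    dy_px = marker_center_px[1] - h / 2.0
    # Ground footprint width from altitude and horizontal field of view.
    ground_width = 2.0 * altitude_m * np.tan(np.radians(hfov_deg) / 2.0)
    m_per_px = ground_width / w
    return dx_px * m_per_px, dy_px * m_per_px

# Example: marker detected 100 px right of center at 10 m altitude.
print(landing_offset((740, 360), (1280, 720), altitude_m=10.0))
```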
