Special Issue "Emerging Algorithms and Applications in Vision Sensors System based on Artificial Intelligence"

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: 31 August 2018

Special Issue Editors

Guest Editor
Dr. Jiachen Yang

School of Electrical and Information Engineering, Tianjin University, Tianjin, China
Interests: Image Processing; Vision Sensors; Machine Learning and Deep Learning; Pattern Recognition based on Artificial Intelligence; Cybersecurity and Privacy; Internet of Things (IoT)
Guest Editor
Dr. Qinggang Meng

Department of Computer Science, Loughborough University, Loughborough, UK
Interests: Robotics; Unmanned Aerial Vehicles; Driverless Vehicles; Networked Systems; Ambient Assisted Living; Computer Vision; AI and Pattern Recognition; Machine Learning and Deep Learning
Guest Editor
Prof. Dr. Houbing Song

Department of Electrical, Computer, Software, and Systems Engineering, Embry-Riddle Aeronautical University, Daytona Beach, FL 32114, USA
Interests: Cyber-Physical Systems; Signal Processing for Communications and Networking; Cloud Computing/Edge Computing
Guest Editor
Dr. Burak Kantarci

School of Electrical Engineering and Computer Science, University of Ottawa, Ottawa, ON, K1N 6N5, Canada
Interests: Internet of Things; Big Data in the Network; Crowdsensing and Social Networks; Cloud Networking; Digital Health (D-Health); Green ICT and Sustainable Communications

Special Issue Information

Dear Colleagues,

Artificial Intelligence (AI) is an emerging topic in research on vision sensor systems, which aim to simulate the processing abilities of human intelligence. With the development of computer science, AI is gradually being applied to every aspect of human life. AI-related technologies have now moved out of the laboratory and achieved breakthroughs in many application fields. Among them, machine vision, which is built on vision sensor systems, is one of the most rapidly developing branches of AI. Machine vision is at the forefront of interdisciplinary research; unlike the study of human or animal vision, it uses geometric, physical and learning techniques to build models and statistical methods to process data. Vision sensor systems enable a computer to achieve the human visual functions of perceiving, recognizing and understanding three-dimensional scenes in the objective world. In short, vision sensor systems aim to use machines instead of human eyes for measurement and judgment, e.g., 3D reconstruction, facial recognition and image retrieval. Applying AI in vision sensor systems can relax a system's requirements on the environment, such as background, occlusion and location, and can improve the system's adaptability and processing speed in complex environments. Therefore, vision sensor systems must be combined with AI to achieve better development.

This Special Issue is dedicated to the development, challenges, and current status of AI for vision sensor systems. Submitted research articles should address AI technologies that support machine vision, as well as AI-inspired aspects of target detection, tracking and recognition systems, and image analysis and processing techniques. Manuscripts should not be submitted simultaneously for publication elsewhere. Submissions of high-quality manuscripts describing future potential or ongoing work are sought.

Topics of interest include, but are not limited to:

  • AI technologies for supporting vision sensor systems
  • Target detection, tracking and recognition in real-time dynamic systems based on vision sensors
  • Face recognition and iris recognition in vision sensor systems based on AI
  • License plate detection and recognition based on vision sensor systems
  • AI-inspired deep learning algorithms, especially unsupervised and semi-supervised learning
  • AI-inspired image/video retrieval and classification in vision sensor systems
  • AI-inspired image/video quality evaluation based on features in vision sensor systems
  • Research and application of AI-inspired visual feature extraction based on vision sensor systems
  • Intelligent information search systems based on vision sensors
  • Intelligent processing of visual information based on vision sensor systems

Dr. Jiachen Yang
Dr. Qinggang Meng
Prof. Dr. Houbing Song
Dr. Burak Kantarci
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (2 papers)


Research

Open Access Article: Multi-Focus Image Fusion Method for Vision Sensor Systems via Dictionary Learning with Guided Filter
Sensors 2018, 18(7), 2143; https://doi.org/10.3390/s18072143
Received: 25 May 2018 / Revised: 26 June 2018 / Accepted: 30 June 2018 / Published: 3 July 2018
Abstract
Vision sensor systems (VSS) are widely deployed in surveillance, traffic and industrial contexts, and a large number of images can be obtained via VSS. Because of the limitations of vision sensors, it is difficult to obtain an all-focused image, which makes it harder to analyze and understand the image. In this paper, a novel multi-focus image fusion method (SRGF) is proposed. The proposed method uses sparse coding to classify focused and defocused regions and obtain focus feature maps. A guided filter (GF) is then used to calculate score maps, and an initial decision map is obtained by comparing the score maps. After consistency verification, the initial decision map is further refined by the guided filter to obtain the final decision map. Experiments show that our method obtains satisfying fusion results, demonstrating that the proposed method is competitive with existing state-of-the-art fusion methods.
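For readers who want a feel for the decision-map pipeline described in this abstract, the following Python sketch outlines the general idea under simplifying assumptions: the paper's sparse-coding focus classifier is replaced by a plain Laplacian focus measure, and a textbook guided filter refines the decision map. All function names and parameters are illustrative and are not the authors' implementation.

```python
# Minimal multi-focus fusion sketch (assumed stand-in, not the SRGF method):
# focus feature maps -> initial decision map -> guided-filter refinement -> fusion.
import numpy as np
from scipy.ndimage import uniform_filter, laplace

def box_mean(x, r):
    """Local mean over a (2r+1) x (2r+1) window."""
    return uniform_filter(x, size=2 * r + 1, mode="reflect")

def guided_filter(guide, src, r=8, eps=1e-3):
    """Edge-preserving smoothing of `src` guided by `guide` (He et al.-style)."""
    mean_g, mean_s = box_mean(guide, r), box_mean(src, r)
    cov_gs = box_mean(guide * src, r) - mean_g * mean_s
    var_g = box_mean(guide * guide, r) - mean_g * mean_g
    a = cov_gs / (var_g + eps)
    b = mean_s - a * mean_g
    return box_mean(a, r) * guide + box_mean(b, r)

def fuse_multifocus(img_a, img_b, r=8, eps=1e-3):
    """Fuse two grayscale float images focused on different depth planes."""
    # Focus feature maps: local energy of the Laplacian (stand-in for sparse coding).
    score_a = box_mean(np.abs(laplace(img_a)), r)
    score_b = box_mean(np.abs(laplace(img_b)), r)
    decision = (score_a >= score_b).astype(np.float64)      # initial binary decision map
    weight = np.clip(guided_filter(img_a, decision, r, eps), 0.0, 1.0)  # refined map
    return weight * img_a + (1.0 - weight) * img_b          # final fused image
```

The guided filter is what keeps the refined decision map aligned with object boundaries, which is the property the abstract relies on when turning score maps into a final fusion weight.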

Open Access Article: LightDenseYOLO: A Fast and Accurate Marker Tracker for Autonomous UAV Landing by Visible Light Camera Sensor on Drone
Sensors 2018, 18(6), 1703; https://doi.org/10.3390/s18061703
Received: 3 May 2018 / Revised: 17 May 2018 / Accepted: 22 May 2018 / Published: 24 May 2018
Abstract
Autonomous landing of an unmanned aerial vehicle, or drone, is a challenging problem for the robotics research community. Previous researchers have attempted to solve this problem by combining multiple sensors, such as global positioning system (GPS) receivers, inertial measurement units, and multiple camera systems. Although these approaches successfully estimate the unmanned aerial vehicle's location during landing, many calibration processes are required to achieve good detection accuracy. In addition, cases where drones operate in heterogeneous areas with no GPS signal should be considered. To overcome these problems, we determined how to safely land a drone in a GPS-denied environment using our remote-marker-based tracking algorithm based on a single visible-light camera sensor. Instead of using hand-crafted features, our algorithm includes a convolutional neural network named lightDenseYOLO that extracts trained features from an input image to predict the marker's location from the visible-light camera sensor on the drone. Experimental results show that our method significantly outperforms state-of-the-art object trackers, both with and without convolutional neural networks, in terms of both accuracy and processing time.
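As a rough illustration of the "trained features predict the marker's location" idea (and not the authors' lightDenseYOLO network), the Python sketch below shows a tiny convolutional network regressing a marker centre from a single camera frame. The architecture, input size and names are assumptions made purely for demonstration.

```python
# Illustrative sketch only (assumed toy model, not lightDenseYOLO): a small CNN
# that regresses the normalized (x, y) centre of a landing marker from a frame.
import torch
import torch.nn as nn

class TinyMarkerNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                      # global feature vector
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(64, 2), nn.Sigmoid())

    def forward(self, x):
        return self.head(self.features(x))                # (batch, 2) in [0, 1]^2

# Usage: predict the marker centre for one 320x240 frame and convert it to pixel
# coordinates, which a landing controller could then use to steer the descent.
model = TinyMarkerNet().eval()
frame = torch.rand(1, 3, 240, 320)                        # placeholder camera frame
with torch.no_grad():
    cx, cy = model(frame)[0]
px, py = float(cx) * 320, float(cy) * 240
```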