Special Issue "Computer Vision & Intelligent Transportation Systems"

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: 31 March 2021.

Special Issue Editors

Prof. Dr. Javier Alonso Ruiz
Website
Guest Editor
Computer Engineering Department, INVETT Research Group, Universidad de Alcalá, Alcalá de Henares, Madrid, Spain
Interests: intelligent transportation systems; autonomous vehicles; control systems; driver assistance systems; artificial vision
Special Issues and Collections in MDPI journals
Dr.ir. Jeroen Ploeg
Website
Guest Editor
(1) Lead Cooperative Driving, 2getthere B.V., Utrecht, The Netherlands;
(2) Associate Professor (part-time), Mechanical Engineering Department, Dynamics and Control group, Eindhoven University of Technology, Eindhoven, The Netherlands
Interests: networked control; string stability; agent-based control; vehicle automation; platooning
Special Issues and Collections in MDPI journals
Dr. Martin Lauer
Website
Guest Editor
Institute of Measurement and Control Systems, Karlsruhe Institute of Technology, Karlsruhe, Germany
Interests: autonomous vehicles; machine vision; machine learning
Special Issues and Collections in MDPI journals
Dr. Angel Llamazares Llamazares
Website
Guest Editor
Postdoctoral Researcher, INVETT Research Group, Computer Engineering Department, Universidad de Alcalá, Alcalá de Henares, Spain
Interests: Robotics; Intelligent Transportation Systems
Special Issues and Collections in MDPI journals
Prof. Dr. Noelia Hernández Parra
Website
Guest Editor
Assistant Professor, Computer Engineering Department, INVETT Research Group, Universidad de Alcalá, Alcalá de Henares, Madrid, Spain
Interests: Accurate Indoor and Outdoor Global Positioning; Vehicle Localization; Autonomous Vehicles; Driver Assistance Systems; Imaging and Image Analysis
Special Issues and Collections in MDPI journals
Dr. Carlota Salinas

Guest Editor
Hired Researcher, Computer Engineering Department, INVETT Research Group, Universidad de Alcalá, Alcalá de Henares, Madrid, Spain
Interests: Computer vision; multi-sensory systems; 3D sensing; mapping and localization; autonomous vehicles and robotics

Special Issue Information

Dear Colleagues,

Perception systems play a key role in intelligent transportation systems (ITS) applications. A wide variety of sensors fall under this topic, with cameras, radars, and lidars being the most common. Radars and cameras are the preferred option in industry, as they avoid unaesthetic effects on the car's appearance. Cameras, in particular, have undergone a small revolution thanks to the application of convolutional neural networks to image processing.

These sensors, cameras, radars, and lidars, are used in several ITS applications, such as intelligent traffic light control, automatic number plate recognition, traffic flow detection, vehicle speed detection, asphalt pavement crack detection, and so on. Inside the vehicle, several advanced driver assistance systems (ADAS) rely on perception systems, such as the collision mitigation brake system, the driver monitoring system, and the parking assistance system. In addition, in the self-driving car, these sensors are used for localization (visual odometry, lidar odometry, 3D maps, etc.), perception and navigation (trajectory planning, scene understanding, traffic sign detection, drivable space detection, obstacle avoidance, etc.), and so on.

The aim of this Special Issue is to present the latest works in these fields and to give the reader a clear picture of the advances to come. Welcome topics include, but are not strictly limited to, the following:

  • Computer vision and image processing;
  • Lidar and 3D sensors;
  • Radar and other proximity sensors;
  • Infrastructure ITS applications;
  • Advanced driver assistance systems onboard the vehicles;
  • Self-driving car perception and navigation systems.

Prof. Dr. Javier Alonso Ruiz
Dr. Jeroen Ploeg
Dr. Martin Lauer
Dr. Angel Llamazares Llamazares
Prof. Dr. Noelia Hernández Parra
Dr. Carlota Salinas
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2000 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Computer vision
  • Lidar
  • Radar
  • 3D perception systems
  • Convolutional neural networks
  • Intelligent traffic light control
  • Automatic number plate recognition
  • Traffic flow detection
  • Vehicle speed detection
  • Asphalt pavement crack detection
  • Collision mitigation brake system
  • Driver monitoring system
  • Parking assistance system
  • Visual odometry
  • Lidar odometry
  • 3D maps construction and localization
  • Scene understanding
  • Traffic sign detection
  • Drivable space detection
  • Obstacle detection…

Published Papers (4 papers)

Research

Open Access Article
An Efficiency Enhancing Methodology for Multiple Autonomous Vehicles in an Urban Network Adopting Deep Reinforcement Learning
Appl. Sci. 2021, 11(4), 1514; https://doi.org/10.3390/app11041514 - 08 Feb 2021
Abstract
To reduce the impact of congestion, it is necessary to improve our overall understanding of the influence of autonomous vehicles. Recently, deep reinforcement learning has become an effective means of solving complex control tasks. Accordingly, we present an advanced deep reinforcement learning approach that investigates how leading autonomous vehicles affect an urban network in a mixed-traffic environment. We also suggest a set of hyperparameters for achieving better performance. Firstly, we feed a set of hyperparameters into our deep reinforcement learning agents. Secondly, we investigate the leading autonomous vehicle experiment in the urban network with different autonomous vehicle penetration rates. Thirdly, the advantage of leading autonomous vehicles is evaluated using entire-manual-vehicle and leading-manual-vehicle experiments. Finally, proximal policy optimization with a clipped objective is compared to proximal policy optimization with an adaptive Kullback–Leibler penalty to verify the superiority of the proposed hyperparameters. We demonstrate that full-automation traffic increased the average speed by a factor of 1.27 compared with the entire-manual-vehicle experiment. Our proposed method becomes significantly more effective at higher autonomous vehicle penetration rates. Furthermore, leading autonomous vehicles can help to mitigate traffic congestion.
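The two surrogate objectives the abstract compares are standard in the reinforcement learning literature. The following minimal sketch (illustrative only, not the paper's implementation; function names and the fixed `beta` are assumptions) contrasts the clipped objective with the KL-penalty variant for a single state-action sample:

```python
import numpy as np

def ppo_clip_objective(ratio, advantage, eps=0.2):
    """Clipped surrogate objective: ratio = pi_new(a|s) / pi_old(a|s)."""
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    # Take the pessimistic (lower) value so large policy updates gain nothing.
    return np.minimum(unclipped, clipped)

def ppo_kl_objective(ratio, advantage, kl, beta=1.0):
    """KL-penalty surrogate; in the adaptive variant, beta is raised or
    lowered after each update depending on the observed KL divergence."""
    return ratio * advantage - beta * kl
```

The clipped form removes the incentive to move the policy ratio outside `[1 - eps, 1 + eps]`, whereas the penalty form discourages, but never hard-limits, divergence from the old policy.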
(This article belongs to the Special Issue Computer Vision & Intelligent Transportation Systems)

Open Access Article
Fast Planar Detection System Using a GPU-Based 3D Hough Transform for LiDAR Point Clouds
Appl. Sci. 2020, 10(5), 1744; https://doi.org/10.3390/app10051744 - 04 Mar 2020
Cited by 2
Abstract
Plane extraction is regarded as a necessary function that supports judgment in many applications, including semantic digital map reconstruction and path planning for unmanned ground vehicles. Owing to the heterogeneous density and unstructured spatial distribution of three-dimensional (3D) point clouds collected by light detection and ranging (LiDAR), plane extraction from them remains a significant challenge. This paper proposes a parallel 3D Hough transform algorithm to realize rapid and precise plane detection from 3D LiDAR point clouds. After transforming all the 3D points from a Cartesian coordinate system to a pre-defined 3D Hough space, the generated Hough space is rasterised into a series of arranged cells that store the count of points residing in each cell. A 3D connected component labeling algorithm is developed to cluster the high-valued cells in Hough space into several clusters. The peaks of these clusters are extracted so that the target planar surfaces are obtained in polar coordinates. Because the laser beams emitted by the LiDAR sensor hold several fixed angles, the collected 3D point clouds are distributed as several horizontal, parallel circles on plane surfaces. These circles can mislead plane detection, turning horizontal wall surfaces into parallel planes. To detect accurate plane parameters, this paper adopts a fraction-to-fraction method that gradually transforms the raw point clouds into a series of sub-Hough-space buffers. In the proposed planar detection algorithm, graphics processing unit (GPU) programming is applied to speed up the calculation of 3D Hough space updating and peak searching.
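The core voting step of a 3D Hough transform for planes can be sketched as follows. Each point votes, for every discretised plane orientation (theta, phi), for the distance rho at which a plane of that orientation would pass through it; accumulator peaks then correspond to dominant planes. This is an illustrative CPU sketch under assumed parameter names; the paper's contribution is running the per-cell updates and peak search on a GPU:

```python
import numpy as np

def hough_plane_votes(points, n_theta=18, n_phi=9, rho_max=5.0, n_rho=50):
    """Accumulate plane votes in a discretised 3D Hough space (theta, phi, rho).
    Plane model: x*cos(t)*sin(p) + y*sin(t)*sin(p) + z*cos(p) = rho."""
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    phis = np.linspace(0.0, np.pi, n_phi, endpoint=False)
    acc = np.zeros((n_theta, n_phi, n_rho), dtype=np.int32)
    for x, y, z in points:
        for i, th in enumerate(thetas):
            for j, ph in enumerate(phis):
                # Distance of the plane with this orientation through the point.
                rho = (x * np.cos(th) * np.sin(ph)
                       + y * np.sin(th) * np.sin(ph)
                       + z * np.cos(ph))
                k = int((rho + rho_max) / (2.0 * rho_max) * n_rho)
                if 0 <= k < n_rho:
                    acc[i, j, k] += 1  # one vote per orientation per point
    return acc, thetas, phis
```

The triple loop is exactly the part that parallelises naturally on a GPU, since each point's votes are independent.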
(This article belongs to the Special Issue Computer Vision & Intelligent Transportation Systems)

Open Access Article
Adaptive Cruise Control Based on Model Predictive Control with Constraints Softening
Appl. Sci. 2020, 10(5), 1635; https://doi.org/10.3390/app10051635 - 29 Feb 2020
Cited by 1
Abstract
In this paper, with the aim of meeting the car-following, safety, comfort, and economy requirements of an adaptive cruise control (ACC) system, an ACC algorithm based on model predictive control (MPC) with constraint softening is proposed. A higher-order kinematics model is established based on the mutual longitudinal kinematics between the host vehicle and the preceding vehicle, considering the changing characteristics of the inter-vehicle distance, relative velocity, acceleration, and jerk of the host vehicle. Performance indexes are adopted to represent the multi-objective demands and constraints of the ACC system. To avoid the solution becoming infeasible because of an overlarge feedback correction, the constraint softening method is introduced to improve robustness. Finally, the proposed ACC method is verified in typical car-following scenarios. Through comparisons and case studies, the proposed method improves the robustness and control precision of the ACC system while satisfying the demands of safety, comfort, and economy.
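The idea behind constraint softening is to replace a hard constraint with a slack variable that is heavily penalised in the cost, so the optimisation stays feasible even when the hard bound cannot be met. A scalar toy sketch (illustrative assumptions only, not the paper's full MPC formulation): minimise (u - u_des)^2 + weight * s^2 subject to u <= u_max + s, which admits a closed-form solution:

```python
def softened_control(u_des, u_max, weight=99.0):
    """Scalar constraint-softening illustration.
    Hard version:  min (u - u_des)^2  s.t.  u <= u_max   (infeasible to satisfy
    exactly when the unconstrained optimum u_des exceeds u_max and the bound
    must be violated elsewhere in the problem).
    Soft version:  min (u - u_des)^2 + weight * s^2  s.t.  u <= u_max + s."""
    if u_des <= u_max:
        return u_des, 0.0  # constraint inactive: no slack needed
    # Active constraint: u = u_max + s.  Setting d/ds of
    # (u_max + s - u_des)^2 + weight * s^2 to zero gives:
    s = (u_des - u_max) / (1.0 + weight)
    return u_max + s, s
```

A large `weight` keeps the violation tiny in normal operation, while guaranteeing a solution exists whenever the feedback correction pushes the desired input past its bound.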
(This article belongs to the Special Issue Computer Vision & Intelligent Transportation Systems)

Open Access Article
Prediction of Driver’s Attention Points Based on Attention Model
Appl. Sci. 2020, 10(3), 1083; https://doi.org/10.3390/app10031083 - 06 Feb 2020
Cited by 2
Abstract
The current intelligent driving system does not consider the selective attention mechanism of drivers and cannot completely replace drivers in extracting effective road information. To solve this problem, a Driver Visual Attention Network (DVAN), based on a deep learning attention model, is proposed in our paper. The DVAN aims to extract the key information affecting the driver's operation by predicting the driver's attention points. It performs fast localization and extraction of the road information most interesting to drivers by merging local apparent features and contextual visual information. Meanwhile, a Cross Convolutional Neural Network (C-CNN) is proposed to ensure the integrity of the extracted information. We verify the network on the KITTI dataset, the world's largest computer vision algorithm evaluation dataset for autonomous driving scenarios. Our results show that the DVAN can quickly locate and identify the target that the driver is most interested in within an image, with an average prediction accuracy of 96.3%. This will provide a useful theoretical basis and technical methods related to visual perception for intelligent driving vehicles, driving training, and assisted driving systems in the future.
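Predicting an attention point by fusing a local appearance map with a contextual map can be sketched in a saliency-style form. This is a toy illustration only; the function name, the linear fusion rule, and `alpha` are assumptions and not the DVAN or C-CNN architecture:

```python
import numpy as np

def attention_point(local_feat, context_feat, alpha=0.5):
    """Fuse a local appearance map with a contextual map, normalise to a
    saliency distribution with a softmax, and return the peak location
    as the predicted attention point."""
    fused = alpha * local_feat + (1.0 - alpha) * context_feat
    flat = fused.ravel()
    weights = np.exp(flat - flat.max())       # numerically stable softmax
    saliency = (weights / weights.sum()).reshape(fused.shape)
    point = np.unravel_index(saliency.argmax(), saliency.shape)
    return point, saliency
```

In a learned system, the two maps would come from convolutional feature extractors and the fusion weights would be trained against recorded driver fixations.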
(This article belongs to the Special Issue Computer Vision & Intelligent Transportation Systems)
