Article

Architecture and Potential of Connected and Autonomous Vehicles

by Michele Pipicelli 1,2,3,*, Alfredo Gimelli 2, Bernardo Sessa 3, Francesco De Nola 3, Gianluca Toscano 3 and Gabriele Di Blasio 1,*
1 Institute of Sciences and Technologies for Sustainable Energy and Mobility (STEMS), National Research Council, Viale Marconi 4, 80125 Naples, Italy
2 Department of Industrial Engineering, University of Naples Federico II, 80125 Naples, Italy
3 Teoresi S.p.A., 10152 Torino, Italy
* Authors to whom correspondence should be addressed.
Vehicles 2024, 6(1), 275-304; https://doi.org/10.3390/vehicles6010012
Submission received: 12 December 2023 / Revised: 13 January 2024 / Accepted: 26 January 2024 / Published: 29 January 2024

Abstract:
The transport sector is undergoing an intensive renovation process. Innovative concepts such as shared and intermodal mobility, mobility as a service, and connected and autonomous vehicles (CAVs) will contribute to the transition toward carbon neutrality and are foreseen as crucial parts of future mobility systems, as demonstrated by worldwide research and industry efforts. The main driver of CAV development is road safety, but other benefits, such as comfort and energy saving, are not to be neglected. CAV analysis and development usually focus on Information and Communication Technology (ICT) research themes and less on the entire vehicle system. Many studies on specific aspects of CAVs are available in the literature, including advanced powertrain control strategies and their effects on vehicle efficiency. However, most studies neglect the additional power consumption due to the autonomous driving system. This work aims to assess the uncertain efficiency improvements of CAVs and offers an overview of their architecture. In particular, a combination of a literature survey and proper statistical methods is proposed to provide a comprehensive overview of CAVs. The CAV layout, data processing, and data management to be used in energy management strategies are discussed. The data gathered are used to define statistical distributions for the efficiency improvement, the number of sensors and computing units, and their power requirements. Those distributions have been employed within a Monte Carlo simulation to evaluate the effect on vehicle energy consumption and energy saving, using optimal driving behaviour and considering the power consumption of the additional CAV hardware. The results show that the assumption that CAV technologies will reduce energy consumption compared to the reference vehicle should not be taken for granted. In 75% of the simulated scenarios, light-duty CAVs worsen energy efficiency, while the results are more promising for heavy-duty vehicles.

1. Introduction

The road transport sector is currently evolving towards new emerging vehicular solutions driven by goals of air quality improvement, climate change mitigation, and vehicle safety enhancement. From the vehicle architecture perspective, the first two points are strictly related to the powertrain architecture adopted. Electrification plays a crucial role, with copious investments in developing technology and infrastructure [1]. Electrified light-duty vehicles have the advantage of high overall efficiency at the expense of battery costs, weight, and lifetime, which can be partially improved by adopting hybrid powertrains and hybrid energy storage systems [2]. In some use cases, zero-carbon tailpipe emission powertrains based on fuel cells or hydrogen-fuelled internal combustion engines can be viable solutions [3]. In the pursuit of enhancing road safety, many efforts are being directed at the development of driving automation systems (DASs), with positive repercussions in social, economic, and efficiency areas [4]. DASs can range from assisted to fully autonomous driving systems. Different classifications have recently been proposed to provide a standardised framework, such as the National Highway Traffic Safety Administration’s (NHTSA) 5-level system or the BASt 5 degrees of automation [5]. However, the most common classification is SAE J3016, which defines six classes from L0 to L5, representing vehicles without any assistance and fully autonomous vehicles, respectively [6]. Mercedes-Benz recently introduced to the market their first production car equipped with an L3 system [7]. Many pilot projects at L4 have already been deployed, while L5 vehicles are expected to become available only in a few years. L4 pilot vehicles are able to drive autonomously, but with some limitations on vehicle speed, take-over manoeuvres, and operation in the absence of a Global Navigation Satellite System (GNSS) signal or under severe weather conditions [8]. The importance of those pilot projects lies in understanding the main obstacles limiting the diffusion of autonomous solutions and in defining possible remedies [9]. The social aspects are crucial for wide acceptance of self-driving cars and are strictly related to the technical level reached by autonomous vehicles (AVs). In a recent statistical study on a simulator, manual drivers in scenarios with autonomous vehicles experienced safety concerns, average speed reduction, and more safety issues during specific manoeuvres [10]. Most AV designs include vehicle-to-vehicle (V2V), vehicle-to-infrastructure (V2I), or, in general, vehicle-to-everything (V2x) communication systems. Such vehicles are referred to either as AVs or as connected and autonomous vehicles (CAVs), since they will all likely be connected in the future. Therefore, in this manuscript, the term CAV also includes AVs; the distinction is made only where needed.
In general, connectivity allows for improving the safety, performance, and reliability of CAVs [11,12]. The communication of the ego–vehicle position and future actions to surrounding vehicles provides fundamental information for crash-avoidance strategies [11]. The information can be used by a network or distributed controller for optimal fleet management in terms of traffic and energy consumption [12]. However, connectivity raises additional concerns about vehicle cybersecurity, which should be considered to avoid cyberattacks [13]. On the other hand, advanced sensors, high computational power, and novel technology can improve energy efficiency and comfort [14]. However, studies often neglect the increase in vehicle power consumption due to the DASs. A critical aspect of AVs is their interaction with human-driven vehicles and the surrounding environment [15]. The interaction among vehicles, pedestrians, cyclists, and other actors can result in frequent stops [16], and deploying autonomous vehicles in real-life scenarios while achieving safe, reliable, and comfortable operation is challenging [17]. Further development requires different competencies to cope with this ambitious goal.
This work aims to define critical points in the development of CAV platforms, providing additional information on this specific research theme and offering more insight into CAVs’ energy efficiency. In this regard, the data gathered during the literature survey enabled us to perform a Monte Carlo simulation to statistically assess the impact of the driving automation system on overall vehicle power consumption. To the authors’ knowledge, these aspects and information are rarely provided in the literature, as most works neglect the additional power required by the driving automation system. The manuscript is structured as follows. The research methods adopted for the literature review and the CAV energy assessment are reported in Section 2. An overview of the CAV architecture is given in Section 3, highlighting the key points and the working principles. Section 4 reports the sensors adopted on CAVs. A brief discussion on the treatment and processing of sensor data is reported in Section 5, including some considerations on processing units and how CAVs can use those data to improve energy efficiency. Then, in Section 6, an energy analysis is carried out based on a Monte Carlo simulation that quantitatively addresses the DAS impact on vehicle energy consumption. The conclusion highlights the different trends emerging in this new developing field.

2. Materials and Methods

In this section, the methodologies adopted are presented. First, the methodology adopted for the literature review is discussed. Then, the statistical methods used to assess the influence of CAV hardware on energy consumption are described.
Regarding the literature survey, an extensive scientific and technical literature study has been carried out using a keyword-based search method on the Elsevier Scopus, Google Scholar, IEEE Xplore, and Web of Science databases. “CAV”, “AV”, “Self driving”, “Autonomous vehicles”, “Connected and autonomous vehicle”, “sensors”, “Lidar”, “4D radar”, “automotive”, “communication protocols”, “energy efficiency”, and “ADAS” are the main keywords adopted, with some variations and combinations. Additional manuscripts on specific topics have been selected by searching through the citations of the articles thus found. A further selection of the most relevant and scientifically sound manuscripts was made: about 130 manuscripts were selected from about 350.
The literature survey made it possible to gather significant data to create a dataset, which should be analysed with proper techniques to extract the greatest possible information content from the data. It is interesting to use a data-driven approach to answer the following research question: “Which variant of the same vehicle, differing only by the presence or absence of a driving automation system, has the lowest energy consumption?” The simplest approach to analysing the dataset is to adopt descriptive statistics, but it was found useful to use a more sophisticated method such as the Monte Carlo one. The Monte Carlo calculation, which is particularly suited for statistical exploration, uses random-number generators to recreate the inherent uncertainty of the input parameters and study their influence on the model outputs [18]. A scheme of the workflow followed is shown in Figure 1. First, the database has been generated by gathering data on battery electric vehicle (BEV) energy consumption, possible efficiency improvements with a DAS, number and type of sensors, and sensor power consumption.
Second, from the data gathered, the distributions were chosen based on the population characteristics (i.e., discrete values and strong asymmetry), and the fittings of the data have been made in MATLAB R2022b. The fittings have been generated based on maximum likelihood estimation techniques, except for normal distributions, for which the unbiased variance estimator has been used.
The energy consumption can be approximated with a Burr distribution, $EC_{EV}(\alpha_{EC,EV}; c_{EC,EV}; k_{EC,EV})$, with $\alpha_{EC,EV}$, $c_{EC,EV}$, and $k_{EC,EV}$ as the scale and the first and second shape parameters, respectively. The vehicle data collected from vehicle manufacturers, about 200 specifications, belong to different classes of light-duty vehicles. The consumption data relate to the worldwide harmonised light-duty vehicle test cycle (WLTC) commonly adopted for homologation purposes.
The power consumption of the CAV technology requires preliminary modelling of the number of sensors and computing units, made through Poisson distributions $n_i(\lambda_i)$, and of the corresponding power consumption, made through normal distributions $P_i(\mu_i; \sigma_i)$. Thus, the total hardware electrical consumption $P_{CAV,HW}$ is evaluated according to Equation (1).
$$P_{CAV,HW} = \sum_{i} n_i(\lambda_i) \cdot P_i(\mu_i; \sigma_i) \qquad \text{with } i \in \{\text{lidar, radar, ultrasonic, camera, computing}\} \tag{1}$$
Additionally, the vehicle energy saving due to the adoption of driving automation systems can be represented using the normal distribution $\Delta EC_{CAV}(\mu_{\Delta EC,CAV}; \sigma_{\Delta EC,CAV})$.
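As an illustration of this fitting step, the short sketch below shows how the three distribution families could be fitted with Python/SciPy rather than with the MATLAB workflow actually used in the paper; the sample data and resulting parameter values are placeholders, not the paper's dataset.

```python
# Illustrative sketch (not the authors' MATLAB workflow): maximum likelihood
# fitting of the three distribution families used in the methodology.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Placeholder samples standing in for the gathered literature data.
ev_consumption = rng.gamma(shape=9.0, scale=18.0, size=200)  # EC_EV [Wh/km]
lidar_counts = rng.poisson(lam=2.0, size=40)                 # LiDARs per vehicle
lidar_power = rng.normal(loc=15.0, scale=5.0, size=40)       # power per LiDAR [W]

# Burr Type XII fit (maximum likelihood) for the EV energy consumption;
# SciPy's second shape parameter d plays the role of the paper's k.
c_ec, k_ec, loc_ec, alpha_ec = stats.burr12.fit(ev_consumption, floc=0)

# Poisson fit: the maximum likelihood estimate of lambda is the sample mean.
lam_lidar = lidar_counts.mean()

# Normal fit with the unbiased variance estimator (ddof=1), as in the paper.
mu_p, sigma_p = lidar_power.mean(), lidar_power.std(ddof=1)

print(f"Burr12: c={c_ec:.2f}, k={k_ec:.2f}, scale={alpha_ec:.1f}")
print(f"Poisson lambda={lam_lidar:.2f}; Normal mu={mu_p:.1f} W, sigma={sigma_p:.1f} W")
```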
Third, the previous distributions have been used for feeding the random-number generators for the Monte Carlo simulations. The analysed model output is the energy consumption of the vehicle including the DAS’s power requirements. In detail, random samples generated using the given distributions have been fed to the deterministic energy model (Equation (2)).
$$EC_{CAV} = \frac{100 - \Delta EC_{CAV}(\mu_{\Delta EC,CAV}; \sigma_{\Delta EC,CAV})}{100} \cdot EC_{EV}(\alpha_{EC,EV}; c_{EC,EV}; k_{EC,EV}) + \frac{P_{CAV,HW} \cdot \tau_{DC}}{d_{DC}} \tag{2}$$
$\tau_{DC}$ and $d_{DC}$ are the duration (in hours) and the distance (in km) of the test driving cycle, respectively. The first term relates to the statistical estimation of the vehicle energy consumption including the DAS-enabled saving, while the second term represents the CAV hardware consumption. The results obtained are analysed and discussed in Section 6.
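A minimal Monte Carlo sketch of Equations (1) and (2) is given below, written in Python for illustration. The sensor-count and power parameters, as well as the WLTC-like cycle duration and distance, are assumptions and not the fitted values reported later in the paper; only the DAS energy-saving distribution uses the values of Section 5.3.

```python
# Minimal Monte Carlo sketch of Equations (1) and (2).
# Hardware distribution parameters below are placeholders, not the paper's fitted values.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
N = 1_000_000

# Baseline BEV consumption EC_EV [Wh/km], Burr Type XII (placeholder parameters).
ec_ev = stats.burr12.rvs(c=6.0, d=1.2, scale=160.0, size=N, random_state=rng)

# Equation (1): CAV hardware power as a sum over sensor types and computing units.
hardware = {  # (Poisson lambda for the count, Normal mu/sigma for unit power [W])
    "lidar": (2.0, 15.0, 5.0),
    "radar": (5.0, 8.0, 3.0),
    "ultrasonic": (8.0, 2.0, 1.0),
    "camera": (8.0, 5.0, 2.0),
    "computing": (2.0, 300.0, 150.0),
}
p_cav_hw = np.zeros(N)
for lam, mu, sigma in hardware.values():
    n = rng.poisson(lam, size=N)
    p_cav_hw += n * np.clip(rng.normal(mu, sigma, size=N), 0.0, None)

# DAS-enabled energy saving [%], normal distribution (values from Section 5.3).
d_ec = rng.normal(4.8, 3.9, size=N)

# Equation (2): net CAV consumption over a WLTC-like cycle
# (tau_DC ~ 0.5 h and d_DC ~ 23.3 km are assumptions).
tau_dc, d_dc = 0.5, 23.3
ec_cav = (100.0 - d_ec) / 100.0 * ec_ev + p_cav_hw * tau_dc / d_dc

worse = np.mean(ec_cav > ec_ev)
print(f"Share of samples where the CAV consumes more than the baseline BEV: {worse:.1%}")
```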
Having defined the methodology, it is helpful for the reader to define the operating boundaries of the manuscript. Figure 2 shows a schematic of the main CAV subsystems, highlighting the investigated topics with colours. In particular, the CAV architecture (Section 3) and the relative sensors (Section 4) are analysed.
Additionally, a brief overview of the sensors’ data management, processing, and their application is presented in Section 5, with the final objective of assessing the CAV energy efficiency in Section 6.

3. CAV Architecture

In this section, the CAV system is discussed. It is a complex system surrounded by a mutable environment. The definition of the primary tasks of the CAV and formalising its functions are valuable starting points for understanding the layout. Essentially, it accomplishes the mission of moving from point “A” to “B”, defining a trajectory to be followed and generating the correct commands to the powertrain, steering, and braking systems. Meanwhile, it interacts with the environment, perceiving the vehicle surroundings to guarantee a safe, comfortable, and law-respecting operation.
One possible abstraction of a CAV can be described using the Observe Orient Decide and Act (OODA) loop [19]. This is one of the predominant design paradigms for CAVs, which is graphically reported in Figure 3 [20]. According to this loop, the following loop-steps can be defined:
(i).
Observe: the data are gathered from the sensors and, possibly, received from the infrastructure and other vehicles through V2x connectivity;
(ii).
Orient: the data are used to reconstruct the surrounding environment and localise the vehicle;
(iii).
Decide: a decision-making algorithm defines the best trajectory to follow to fulfil the mission goal, respecting the constraints;
(iv).
Act: the command for the actuators to follow the desired trajectory is generated and injected into the physical layer (i.e., electronic control units and actuators).
The OODA loop is not the only abstraction method to describe CAVs. Another common abstraction splits the CAV architecture into four layers: sensor, perception, planning, and control [21]. The two schematic architectures differ more in taxonomy than in functionality. From these CAV abstractions, the main hardware components are outlined.
The observe phase requires sensors to make the CAV system capable of understanding its state and the surrounding environment. Numerous sensors are adopted to cope with this aim. An overview of common sensors with their high-level classification is reported in Table 1.
Proprioceptive sensors, such as odometry, are not exclusive to CAVs, although they are needed for autonomous driving. They can be found in numerous non-autonomous vehicle applications, as they often ensure the functionality of powertrain, braking, and vehicle safety systems. The global navigation satellite system (GNSS), the inertial measurement unit (IMU), and wheel encoders are the most relevant for CAV applications. Other proprioceptive sensors, such as temperature, pressure, and position sensors, guarantee the operation of all auxiliary vehicular systems. The exteroceptive sensors are a prerogative of SAE J3016 L1 and higher-level vehicles, as they are able to sense the environment. They are mainly light detection and ranging (LiDAR), cameras, ultrasonic, and radio detection and ranging (RADAR) sensors. These sensors respond to the requirements of object detection, environment recognition, and ego–vehicle localization [22]. The term simultaneous localization and mapping (SLAM) is adopted when the last two are achieved in synergy.
Usually, the CAV design foresees numerous sensors to ensure a 360 degree view around the vehicle. Proper data processing techniques and sensor fusion algorithms are required as intermediate steps between sensor acquisition and perception algorithms. Data for decision tasks can be provided by V2x communication or from databases such as high-definition maps. The V2x data can include sensor data from infrastructure, pedestrians, and other vehicles, and possibly also information on their future trajectories. These data can be used for localization, control optimization, cooperative perception, intention–awareness, and improved operation safety [23]. Then, all the acquired information is elaborated according to the vehicle mission by the decision process. This includes global, local, and behavioural planning to ensure safe and regulation-compliant operations [24]. The output of the decision task is usually a trajectory to follow, which is provided to the control/act layer. The latter generates the actuator signals for path tracking, adopting the usual control techniques such as proportional integral derivative (PID) control, linear quadratic regulator (LQR) control, and model predictive control (MPC) [25].
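As a minimal sketch of the simplest of the control techniques cited above, the following PID path-tracking loop computes a steering command from the lateral (cross-track) error provided by the planner; the gains, time step, and saturation limits are illustrative assumptions, not values from the paper.

```python
# Minimal PID path-tracking sketch: steering command from the cross-track error.
from dataclasses import dataclass

@dataclass
class PID:
    kp: float
    ki: float
    kd: float
    integral: float = 0.0
    prev_error: float = 0.0

    def step(self, error: float, dt: float) -> float:
        """Return the control action for the current error sample."""
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# At each control step, feed the cross-track error [m] from the local planner
# and send the saturated steering angle [rad] to the actuation layer.
controller = PID(kp=0.8, ki=0.05, kd=0.2)
steering = max(-0.5, min(0.5, controller.step(error=0.35, dt=0.02)))
print(f"Steering command: {steering:.3f} rad")
```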

Sample Configurations

CAVs typically require a set of different and redundant sensors to match their peculiar characteristics and ensure reliable and safe operations. As the automation level increases to L4 and L5, LiDARs are required to sense the vehicle surroundings and gather reliable data [26]. In some specific applications, as in the case of Mobileye (Jerusalem, Israel) True Redundancy™, the design choice is to have two independent sensor systems, one based on cameras and the other on LiDAR and RADAR, each able to ensure autonomous operation on its own [27]. In general, although single independent-perception systems can help to keep the platform costs within acceptable limits, redundancy should be ensured by adopting different sensors covering the same surrounding areas [28]. A schematic layout of a possible sensor configuration is reported in Figure 4.
For example, the University of Technology of Belfort-Montbéliard autonomous car is equipped with a GNSS, an IMU, a RADAR, two 360° LiDARs, one solid-state LiDAR, one 2D LiDAR, two fisheye lateral cameras, and two stereo cameras, one front- and one rear-looking [29]. The NAVYA Autonom® Shuttle Evo has two 360° and eight 2D LiDARs, two cameras, GNSS, IMU, and V2x connectivity [30]. The Mobileye (Jerusalem, Israel) DRIVE™ proposed architecture features 11 cameras, 6 RADARs (4 short and 2 medium range), and 9 LiDARs (6 short and 3 long range) [31].
Mobileye SuperVision [32] provides advanced driving assistance system (ADAS) features, such as automated parking capability and hands-free highway driving, and uses 7 long-range and 4 short-range cameras to perceive the surrounding environment. The NVIDIA (Santa Clara, CA, USA) DRIVE Hyperion™ autonomous driving vehicle platform has 12 external and 3 internal cameras, 9 radars, 1 LiDAR, and 12 ultrasound sensors [33]. The more complex Mobileye (Jerusalem, Israel) Drive™ [34], which can be classified as an L4 or L5 autonomous system according to SAE J3016, has 13 cameras, 3 long-range LiDARs, 6 short-range LiDARs, and 6 radars. This solution uses two independent redundant perception systems, namely cameras on one side and RADARs with LiDARs on the other, to achieve a high level of safety. The two solutions are based on two and six Mobileye EyeQ5 High System-on-Chips (SoCs), respectively, each capable of 16 trillion deep learning operations per second (TOPS). The redundancy can also be applied to the processing units, as in the case of the Tesla “third hardware version (HW3)”, with a duplex configuration of two processing units able to operate the vehicle independently. Due to reliability constraints to ensure safe operation, the hardware for automotive applications is subject to rigorous testing and design constraints. The Automotive Electronics Council (AEC) provides the requirements for integrated circuits (AEC-Q100) and passive components (AEC-Q200) to be qualified as automotive-grade components. Those standards pose stringent operative temperature conditions, usually between −40 °C and +125 °C. Additionally, automotive electrical and electronic parts should comply with ISO 16750, which defines severe test conditions. Moreover, all critical vehicle systems should comply with ISO 26262 to guarantee functional safety; it defines four Automotive Safety Integrity Levels (ASIL), from A to D, with D being the highest safety level [35].

4. Sensors

In this section, the sensors needed for the autonomous navigation of the vehicle are analysed more deeply, discussing their working principles, major constraints, and margins of improvement. The main sensors of CAVs, as described in Section 3, are camera, LiDAR, RADAR, and ultrasonic. The need to adopt different kinds of sensors arises from their peculiar characteristics (sensitivity, reliability, etc.). Sensors and their processing algorithms for automotive applications are tested in adverse weather conditions to assess their reliability and operational limits. Testing relies on the following approaches:
  • On-road vehicle, true world, real scenario testing;
  • Sensor testing, true world condition reproduction;
  • Sensor testing, tailored simulation test bench, laboratory.
The first approach is the most significant for testing CAVs but requires a huge effort in both time and cost. Only a few datasets with true sensor data recorded by vehicles in different weather conditions during on-road testing are available for the offline development and testing of processing algorithms. One example dataset is RADIATE, which includes radar, lidar, camera, and odometry data under normal, rainy, snowy, and foggy conditions [36].
Regarding the sensors and their development, testing at the laboratory scale is usually preferred. Ad hoc test rooms are adopted to ensure high reproducibility of the results. Meteorological visibility, fog, rain and snow particle size distribution, and rain intensity are measured and reproduced according to the desired test conditions. Rasshofer et al. have successfully developed an electro-optical laser radar target simulator system to reproduce the LiDAR response in a snowy environment [37]. A virtual environment can also be adopted to test sensors. For example, Espineira et al. developed a 3D virtual environment with a noise model depending on rain distribution to generate synthetic LiDAR data in adverse conditions [38]. Usually, a combination of geometrical environment reconstruction, sensor physics, and stochastic methods to account for noise and environmental conditions is adopted to virtually test a sensor [39]. A similar approach can be adopted for hardware-in-the-loop testing of visual systems: the camera system under test is pointed toward an image rendered using a 3D graphic engine, and the vision algorithm is tested in different driving conditions and scenarios [40]. Each sensor interacts differently with the weather conditions. Cameras are sensitive to light conditions and adverse weather. A camera pedestrian recognition algorithm has been shown to reduce the detection rate from 90% to 70% when passing from daylight to night conditions at the same detection accuracy [41]. Fog is another typical problem for a camera system. However, different post-processing techniques are being developed to mitigate this problem. These can be pre-processing techniques applied before the elaboration of a single image [42] or can be directly integrated into object-detection algorithms [43].
LiDARs are sensitive to weather conditions, especially fog. In tests in a fog chamber with two different LiDARs with a 905 nm wavelength and a target with 90% reflectivity, the maximum viewing distance is around half of the meteorological one [44]. Moreover, even in extreme conditions (33 mm/h rainfall), rain does not significantly influence LiDAR detection, while fog creates measurement and detection errors [45].
Radar is usually less affected by adverse weather conditions compared to LiDARs. In particular, under various conditions, detection precision testing has registered only a 5% loss for radar systems and up to 25% for LiDARs [46]. Particular attention should be given to water film deposition on the radome, which can strongly attenuate the radar signal, while ice formation has negligible effects [47].
Generally, adverse weather conditions affect the sensors by reducing their effective range, limiting the safe operation of CAVs [48]. The different sensitivity of each sensor to external conditions motivates the adoption of sensor fusion algorithms to partially overcome their limitations. Radar is less influenced by the weather, but alone it is insufficient for CAV operation. Cameras are required for signal, traffic, and visual recognition but are the most sensitive to light, snow, and rain conditions. LiDAR is somewhat influenced by the weather, especially fog, though not as strongly as the camera, and it alone is not sufficient for CAV operation either. In general, adopting a variety of sensors, with sensor fusion techniques, seems the most promising architecture to achieve reliability and accuracy [49]. Sensor fusion algorithms are briefly discussed in Section 5.
Besides perception, achieving high-accuracy localization of the CAV is of primary concern for vehicle functionality, and the GNSS is the main solution to achieve this aim. Due to its relevance, it is included in the following discussion.
In the following, the sensors are described in terms of their working principles, main characteristics and specifications, pros and cons, and applications.

4.1. Camera

Cameras, or imaging sensors, detect the light coming from the environment on a photosensitive surface. These sensors are mainly based on charge-coupled device (CCD) and complementary metal oxide semiconductor (CMOS) technologies [50]. At the beginning of the digital image era, the CCD was preferred due to its superior image quality. Currently, CMOS imaging sensors are commonly adopted since they offer lower cost, high image quality, and high framerates [51]. The interest in depth cameras (also sometimes called 3D flash LiDARs) is growing. They incorporate a light-emitting source and measure the distance from the object using the Time-of-Flight (ToF) principle [52]. An RGB vision camera and a depth camera often coexist in a single module, offering high-resolution colour and depth information, which is helpful in SLAM and in object detection and tracking [53].
Table A1 reports a comparison of some commercial camera systems. Typically, resolutions of about 2 Megapixels with a framerate of at least 30 fps are adopted. Power consumption is usually below 10 W and tends to increase for systems that include processing units. Automotive Ethernet and SerDes links, such as the gigabit multimedia serial link (GMSL) or FPD-Link, are commonly adopted as communication protocols due to the high-bandwidth requirements.
Vision systems can detect, track, and predict the behaviour of other vehicles, although they face some issues arising from unpredictable working conditions and varying environments [54]. Additionally, the vision system can recognise traffic signs and information on displays, which is impossible with other types of exteroceptive sensors [55]. For those reasons, CAV configurations should include cameras.
For automotive applications, different manufacturers offer camera modules with image-processing units capable of real-time processing, object detection, and SLAM. The cameras are characterised mainly by their resolution, framerate, and field-of-view (FoV). In Figure 5, resolution, FoV, and minimum pixel levels are related to the maximum distance at which an object can be correctly detected. Object detection requires approximately between 100 and 600 pixels [56]; thus, squares with sides of 10, 20, and 30 pixels have been considered. Changing the focal length (i.e., the optical lens) improves the maximum detection distance but reduces the FoV. Increasing the resolution can effectively improve the detection distance for all focal lengths, but more data have to be transferred and elaborated, with a higher computational power demand, and this has to be considered in a CAV design.
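As a rough illustration of the geometry behind Figure 5, the sketch below uses a simple pinhole-camera model to estimate the distance at which an object still spans a given number of pixels; the object size, resolution, and FoV values are example assumptions, not data from the figure.

```python
# Pinhole-camera estimate of the maximum detection distance for a given pixel count.
import math

def max_detection_distance(object_size_m: float, n_pixels: float,
                           h_resolution_px: int, h_fov_deg: float) -> float:
    """Distance at which an object of the given width still covers n_pixels."""
    focal_px = (h_resolution_px / 2.0) / math.tan(math.radians(h_fov_deg) / 2.0)
    return object_size_m * focal_px / n_pixels

# A 1.8 m-wide vehicle seen by a 2 MP camera (1920 px wide), requiring 20 px across it.
for fov in (120.0, 60.0, 30.0):   # wide-angle, standard, and telephoto lenses
    d = max_detection_distance(1.8, 20.0, 1920, fov)
    print(f"hFoV {fov:5.1f} deg -> ~{d:4.0f} m")
```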
Common vision-system architecture adopts a fisheye or wide angle lens to monitor the nearfield vehicle surroundings at 360 degrees with a reduced number of cameras [57], and a telephoto lens with narrow FoV, to catch distant objects, at the vehicle front [58].

4.2. LiDAR

LiDAR is an integrated system able to scan the FoV, using laser beams, to evaluate object distance and, in some cases, relative speed. As output, it generates a list of points, named point cloud data (PCD), each identified by its coordinates relative to the system reference frame and, in some cases, by intensity or point speed [59]. The LiDAR system is composed of one or more single detectors. Each generates a laser beam, and a control unit computes the distance from the reflected signal. A beam steering or scanning system is used to explore larger areas of the environment, changing the directionality of the beams. These systems can be mechanical or solid state, where the latter is preferred in the automotive field for its higher reliability due to the lack of moving parts [60]. However, solid-state models are limited in the horizontal FoV, which usually requires the adoption of more than one LiDAR to sense the vehicle surroundings [61]. The processing unit is responsible for the system management and the processing of data for output generation. In some cases, as for the IBEO Lux or Microvision MAVIN™, the embedded processing unit is capable of object detection, classification, and tracking, while usually these tasks are delegated to the CAV processing units. A typical LiDAR schematic is shown in Figure 6.
A LiDAR can be classified based on the measurement method: ToF or frequency modulated continuous wave (FMCW) [60]. Direct ToF systems use a high-power laser pulse to sense the environment and use the speed of light to estimate the distance. FMCW systems use the signal-frequency variation due to the Doppler effect to measure both distance and relative speed [62]. In Table A2, a comparison of the main technical specifications of a group of available LiDAR systems is presented. The maximum range of a LiDAR is usually expressed as the maximum distance at which, with a 90% probability, a Lambertian target with 10% reflectance is detected [63]. Automotive LiDARs have range capabilities of at least 100 m and can reach up to 250 m. Furthermore, 3D LiDARs usually offer measurement frequencies of 10 to 25 Hz, with 2D LiDARs potentially offering higher rates of up to 100 Hz. Power consumption is usually between 10 and 20 W; however, there are more power-demanding devices, which typically include an advanced computing unit inside, reaching up to 50 W. Generally, the vFoV is around 20° or 30°, while a solid-state system can offer up to 100–120° of hFoV.
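The two ranging principles reduce to simple relations, which the following back-of-the-envelope sketch illustrates; the round-trip time and Doppler shift values are example inputs, not data from any specific device.

```python
# Back-of-the-envelope sketch of the two LiDAR ranging principles described above.
C = 299_792_458.0  # speed of light [m/s]

def tof_distance(round_trip_time_s: float) -> float:
    """Direct ToF: distance from the round-trip time of a laser pulse."""
    return C * round_trip_time_s / 2.0

def fmcw_relative_speed(doppler_shift_hz: float, wavelength_m: float) -> float:
    """FMCW: relative speed from the Doppler frequency shift (f_d = 2 v / lambda)."""
    return doppler_shift_hz * wavelength_m / 2.0

print(f"{tof_distance(1.0e-6):.1f} m")                    # 1 us round trip -> ~150 m
print(f"{fmcw_relative_speed(37.0e6, 1550e-9):.1f} m/s")  # ~37 MHz shift at 1550 nm -> ~28.7 m/s
```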
It is worth pointing out that these systems suffer from mutual interference when more than one LiDAR is used, such as in the case of a high penetration of CAVs [64]. The interference, which is of greatest concern in ToF LiDARs, can reduce the signal-to-noise ratio (SNR), leading to a reduced operative range and, in some cases, introducing ghost-object detections [65]. Different methods are analysed in the literature to overcome these problems, such as optical code division multiple access (OCDMA) modulation [66] and true-random signals [67]. Another problem is the multiple return of signals (or echoes), which occurs when the beam passes through semi-transparent surfaces [68], in scenes with a high dynamic range [69], or in adverse weather conditions [70]. Many available systems can acquire more than one return, providing as output either all the returns or one selected by an internal algorithm.
The single detector is usually made using a laser source operating in the near-infrared (NIR) spectrum from 850 to 950 nm or in the short-wave infrared (SWIR) at 1550 nm. The 905 nm wavelength detectors are widely adopted due to the well-established CMOS-based technology, low cost, and market availability [71], while the 1550 nm technology requires more exotic and expensive materials such as indium phosphide [72]. Due to their unconstrained operation in a public environment, the eye safety of laser sources is of primary concern [26]. The laser sources should be Class 1 according to IEC 60825-2. Since the beam effect on the eye depends on the wavelength, the power limit changes with the laser wavelength. IEC 60825-2 allows for higher power for SWIR systems, which can be helpful in achieving longer ranges, since this radiation is largely absorbed by the cornea, lens, and humours before reaching the retina.
Different beam steering solutions, with different characteristics, are applied to LiDAR systems. Typical applications use a rotating mirror, which assures a 360 degree horizontal field of view (hFoV) but with a lower mean time between failures (MTBF) due to the mechanical rotating part. Due to automotive-sector requirements, in recent years, solid-state systems with a micro electro-mechanical system (MEMS) mirror have been gaining significant attention. Although there are micro motions of the MEMS mirrors, the MTBF is higher and is compatible with automotive applications. In some applications, a flash LiDAR configuration is used, in which diffuse laser light and a two-dimensional photodiode array are employed; this is sometimes also referred to as a ToF camera [73]. However, due to eye safety, the emitted power is limited and, consequently, the range is unsuitable for vehicle purposes. A recent development is the optical phased array (OPA), in which the beam is steered by appropriately modulating multiple sources [74]. OPA technology has shown promising MTBF values (about 10⁵ h, i.e., roughly 12 years), due to the absence of mechanical and MEMS mirrors [75], in line with automotive requirements. Many forecasts show that OPA LiDARs will gain market share in the next few years. The MTBFs of other LiDARs are not publicly available, so a quantitative comparison is not possible.

4.3. Radar

Radio detection and ranging (RADAR) is an active sensor system that uses the ToF principle to measure distances. Radars for automotive applications can be classified according to their operating frequency or maximum range. Typical radar frequencies are 24 GHz and 77 GHz (the latter more attractive for the industry) [76]. Research on higher frequencies, above 100 GHz, is pursued to increase bandwidth and range accuracy and to downsize antennas and packaging, but with the significant drawback of higher atmospheric attenuation losses reducing the range capability [77]. These systems are often referred to as “mmWave” radars due to their wavelength. Short range radars (SRRs), medium range radars (MRRs), and long range radars (LRRs) can be distinguished. A typical automotive radar layout is reported in Figure 7. A comparison of the main characteristics of automotive RADAR systems is reported in Table A3. It can be noted that the maximum range is less than 300 m, with an inverse relationship with the hFoV. They adopt Ethernet or CAN as communication protocols and typically require low power, less than 20 W.
Automotive radars commonly adopt the monolithic microwave integrated circuit (MMIC) to lower the cost [78]. MMIC technology enables the development of highly integrated systems, reducing their volume and power requirements [79]. Modern automotive radars integrate the MMIC, transmitters, receivers, microcontroller, and other signal processing units into one integrated unit [80]. Generally, automotive radars are FMCW and are able to provide information on range, Doppler (speed), and azimuth [81]. An increase in the chirp frequency (i.e., the frequency of the function used to modulate the carrier) is being investigated to improve the range measurement resolution [82]. Regarding the antenna layout, most systems adopt a multi-input multi-output (MIMO) technology, allowing the synthesis of virtual arrays with larger apertures from only a limited number of physical antennae, offering higher angular resolution and a smaller packaging size [83]. MIMO radar also allows measurement of the direction of arrival (DoA) of the radar return [84]. Numerical methods are adopted to optimise the antenna pattern in a MIMO system, improve the DoA estimation, and avoid signal ambiguity [85]. Recent developments are directed at a novel generation of MIMO radars in which elevation is also measured (4D radar) [86]. The interest in 4D-imaging radar relies on its capability to generate dense point clouds as output, allowing object recognition similarly to LiDAR [87]. Multicarrier modulation radar can improve radar detection capabilities, add additional data communications, and mitigate interference among radar systems [88].
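For illustration, the basic FMCW relations mentioned above, range from the beat frequency of a linear chirp and relative speed from the Doppler shift, are sketched below; the chirp parameters are generic 77 GHz examples, not the settings of any specific product.

```python
# Sketch of the basic FMCW radar relations: range from the beat frequency of a
# linear chirp, and relative speed from the Doppler shift of the carrier.
C = 299_792_458.0  # speed of light [m/s]

def fmcw_range(beat_freq_hz: float, chirp_duration_s: float, bandwidth_hz: float) -> float:
    return C * beat_freq_hz * chirp_duration_s / (2.0 * bandwidth_hz)

def doppler_speed(doppler_hz: float, carrier_hz: float) -> float:
    return doppler_hz * (C / carrier_hz) / 2.0

# 300 MHz bandwidth, 50 us chirp: a 2 MHz beat frequency corresponds to ~50 m.
print(f"Range: {fmcw_range(2.0e6, 50e-6, 300e6):.1f} m")
# At 77 GHz, a 10 kHz Doppler shift corresponds to ~19.5 m/s closing speed.
print(f"Speed: {doppler_speed(10e3, 77e9):.1f} m/s")
```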

4.4. Ultrasonic

The ultrasonic distance sensor, or sound detection and ranging (SONAR), exploits the ToF principle by adopting sound waves in the ultrasonic band. This kind of sensor is usually adopted to sense the area around and close to the vehicle, mostly during parking and low-speed manoeuvring, since its cost is low, but its operating range is limited [89] and its angular discrimination is low and sensitive to environmental conditions [90]. The measurement frequency is also limited. A typical system layout is reported in Figure 8. The monostatic configuration, characterised by a single element used to both transmit and receive the sound waves, is preferred to the bistatic one, in which independent elements are used, due to the lower mounting space requirement and reduced costs [91]. A comparison of ultrasonic sensors is reported in Table A4. The minimum range is in the order of centimetres, while the maximum is usually below 5 m. The measurement rate is usually less than 20 Hz, with a low power consumption of less than 5 W.
The automotive ultrasonic sensor integrates an application-specific integrated circuit (ASIC) for generating and receiving sound pulses, reliability checks, distance calculation, and digital communications with the vehicle electronics [92].
The speed of sound value adopted in the ToF equation can induce a measurement uncertainty. A model-based range estimation employing the external air temperature is usually adopted to reduce the error related to the speed of sound uncertainty [93]. The operating frequency is chosen based on the minimum and maximum distance to be measured and the spatial resolution [94]. An increase in frequency results in greater attenuation by the air, limiting the maximum distance. The ringing-decay time is the time required for the piezoelectric membrane to stop oscillating after the start of the pulse generation, usually about 700 µs, before which it is impossible to receive the echo [95]. For the monostatic configuration, the adoption of a higher-frequency signal can reduce the ringing-decay time, improving the minimum measurable distance [96]. Ultrasonic sensor arrays have been proven capable of tracking static and dynamic objects at standstill and low vehicle speeds, adopting Kalman family filtering techniques [97]. It is possible to measure the relative speed using the Doppler effect, as done with LiDAR and RADAR [98]. The ultrasonic sensor may be subject to denial-of-service (jamming), spoofing, and acoustic cancellation attacks, which can be counteracted using techniques such as frequency hopping, background-noise analysis, and intelligent signal processing [99]. Even without knowledge of the sensor characteristics, a spoofing attack can succeed, resulting in the detection of a fake object [100]. Those problems limit the usage of ultrasonic measurements to only part of CAV operations.
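A minimal sketch of the temperature-compensated ToF range estimation mentioned above is given below; the simple speed-of-sound model and the echo time are illustrative assumptions.

```python
# Temperature-compensated ultrasonic ToF range estimation (minimal sketch).
import math

def speed_of_sound(air_temp_c: float) -> float:
    """Approximate speed of sound in air [m/s] as a function of temperature [deg C]."""
    return 331.3 * math.sqrt(1.0 + air_temp_c / 273.15)

def ultrasonic_distance(round_trip_time_s: float, air_temp_c: float) -> float:
    return speed_of_sound(air_temp_c) * round_trip_time_s / 2.0

# The same 10 ms echo gives noticeably different ranges at -20 C and +40 C.
for t in (-20.0, 20.0, 40.0):
    print(f"{t:+5.1f} C -> {ultrasonic_distance(10e-3, t):.3f} m")
```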

4.5. GNSS

GNSSs are systems capable of using radio signals, transmitted by a constellation of artificial satellites, to determine the coordinates of a receiver. Different GNSSs are globally available, such as the United States’ Global Positioning System (GPS), the European Galileo system, the Russian GLONASS, and the Chinese BeiDou. An onboard GNSS is required as it provides consistent and precise measurements of the CAV position and time [101]. Modern receivers can receive signals from different constellations to improve accuracy, reliability, and coverage. In the following, the discussion focuses on the GPS only, since it is the most used worldwide.
The signal of each satellite allows calculation of the distance from the receiver to the satellite, which is affected by the clock error of the receiver. In fact, as the evaluated distance includes this error source, the term pseudorange is adopted. To make a fix (i.e., to solve the GPS equations and find the position), at least four pseudoranges (i.e., four satellites in view) are required due to the four unknown variables: three spatial coordinates and one time error. Standard GNSS receivers typically offer a position accuracy on the order of meters.
Different techniques can be employed to further increase the accuracy of the position measurements, adopting differential GPS, multiple-frequency receivers, and wide-area augmentation systems, among many others. Generally, those methods eliminate most of the positioning uncertainty due to common-mode errors. Recently, the GPS constellation has introduced two additional civilian transmission bands (L2C and L5, in addition to the standard L1) to achieve this goal with multi-frequency receivers [102]. A major problem is maintaining accurate position data in GPS-denied conditions (e.g., tunnels, forests, etc.). A typical solution is sensor fusion, integrating data from other sources. Typically, an IMU is used to provide high-frequency acceleration and angular speed data that can be integrated to find the velocity, position, and attitude of the vehicle [103]. Kalman filters are usually adopted to combine GPS and IMU data to improve the state estimation accuracy while avoiding error drift [104]. In the case of CAVs, additional sensors can be adopted for sensor fusion. For example, LiDAR can effectively reduce the position error in GPS-denied conditions by tracking target objects detected in the point cloud [105]. GPS receivers can output data with a frequency from 1 Hz to 20 Hz depending on the complexity and cost of the device. The position accuracy of a standard receiver is of the order of magnitude of a few meters in nominal conditions. Advanced receivers, exploiting multi-frequency capabilities or augmentation systems, can achieve accuracies in the order of centimetres with the GPS or Galileo constellations. Low-cost and reliable receivers with accuracy in the order of decimetres are of primary interest for CAV applications [106]. Usually, the GNSS receiver power consumption is limited to a few watts, and as the receiver is typically unique in an AV layout, its power consumption can be neglected.
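To illustrate the GPS/IMU fusion idea, the following toy one-dimensional Kalman filter propagates the state with high-rate IMU accelerations and corrects it with low-rate GPS position fixes; the 1D simplification, noise figures, and rates are assumptions chosen for clarity, not a production localization stack.

```python
# Toy 1D Kalman filter fusing low-rate GPS positions with high-rate IMU accelerations.
import numpy as np

dt = 0.01                                 # IMU period [s] (100 Hz)
F = np.array([[1.0, dt], [0.0, 1.0]])     # state transition for [position, velocity]
B = np.array([0.5 * dt**2, dt])           # how the measured acceleration enters the state
H = np.array([[1.0, 0.0]])                # GPS measures the position only
Q = np.diag([1e-4, 1e-3])                 # process noise (assumed)
R = np.array([[4.0]])                     # GPS noise: ~2 m standard deviation (assumed)

x = np.zeros(2)                           # initial state [m, m/s]
P = np.eye(2)

def predict(accel: float) -> None:
    """IMU step: propagate the state with the measured acceleration."""
    global x, P
    x = F @ x + B * accel
    P = F @ P @ F.T + Q

def update(gps_position: float) -> None:
    """GPS step: correct the predicted state with a position fix."""
    global x, P
    y = gps_position - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P

# 1 s of driving at a constant 1 m/s^2, with one GPS fix every 10 IMU samples.
for k in range(100):
    predict(accel=1.0)
    if k % 10 == 9:
        true_pos = 0.5 * 1.0 * ((k + 1) * dt) ** 2
        update(gps_position=true_pos + np.random.normal(0.0, 2.0))
print(f"Estimated position: {x[0]:.2f} m, velocity: {x[1]:.2f} m/s")
```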

5. An Overview of Data Processing and Management

In this section, sensor data generation, processing, and management are discussed. Indeed, the nature and quantity of the available data raise new challenges regarding transmission, processing, and storage technologies. This section aims to summarise the main aspects and the most relevant problems from a global perspective without entering into the details of the algorithms, for brevity, as they are outside the scope of the manuscript. Firstly, a quantitative analysis of the amount of data generated by a CAV is carried out, reporting some considerations on the communication protocols and the storage requirements. Secondly, the main techniques adopted to exploit the data for autonomous driving are discussed, reporting a typical workflow and assessing the requirements in terms of computational power. Finally, a brief discussion is presented on how the elaborated data can be exploited to improve vehicle energy efficiency.

5.1. Data Generation and Management

Sensor data, especially LiDAR and camera data, require a large amount of memory, high bandwidth, and computing power, raising problems for their storage. For processing the data, artificial intelligence techniques, such as machine learning, deep learning, and reinforcement learning algorithms, are adopted [107]. These techniques require extensive datasets for training, which should be properly stored, to achieve adequate performance and reliability. In this context, data format standardisation and compression are of fundamental importance [108].
For example, for LiDAR data, a standard file format is the LASer (LAS) format, which includes headers, time, and other metadata in a binary form [109]. Some compression algorithms, such as LASzip, have been developed to offer high data compression (by a factor of 10) while ensuring direct access to compressed data without prior global decompression [110]. Databases classified by criteria such as spatial location, vehicle class, etc., and recorded on a distributed cluster architecture, are the most effective solution for storing huge amounts of LiDAR data [111].
Hardware accelerators become essential as the compression should ideally be performed in real time. In this context, field programmable gate array (FPGA) hardware compression has been demonstrated to be up to 250 times faster than software implementations [112]. Table 2 reports the data rate values for the sensors and the base assumptions adopted for the calculations. Ultrasonic, radar, and 2D lidar sensor rates are significantly lower than those of cameras and 3D lidars.
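The order of magnitude of such data rates can be reproduced with simple products of resolution, bit depth, and frame or point rates, as in the sketch below; the per-sensor assumptions (resolution, bits per pixel, points per second, bytes per point) are illustrative only and do not reproduce Table 2 exactly.

```python
# Rough uncompressed data-rate estimates for a camera and a 3D LiDAR.
def camera_rate_mbps(width: int, height: int, bits_per_pixel: int, fps: int) -> float:
    return width * height * bits_per_pixel * fps / 1e6

def lidar_rate_mbps(points_per_second: float, bytes_per_point: int) -> float:
    return points_per_second * bytes_per_point * 8 / 1e6

# A 2 MP, 24-bit, 30 fps camera vs. a 3D LiDAR producing ~1.2 M points/s
# with ~16 bytes per point (x, y, z, intensity, timestamp).
print(f"Camera: {camera_rate_mbps(1920, 1080, 24, 30):.0f} Mbit/s (uncompressed)")
print(f"LiDAR : {lidar_rate_mbps(1.2e6, 16):.0f} Mbit/s")
```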
The different sensor data rates suggest that adopting different file formats and onboard communication protocols optimises performance and reduces the total architecture costs. Table 3 reports a global overview of different automotive communication protocols. For CAVs, two main classes of communication protocols can be distinguished: network and point-to-point protocols. Due to the inclusion of new HMI interfaces, ADAS systems, and other advanced vehicular systems, different communication protocols have been established in recent years. In this context, alongside the de facto automotive standard controller area network (CAN), various communication protocols such as FlexRay, LIN, and Automotive Ethernet have been developed. Adopting high-bandwidth sensors such as cameras and LiDARs requires high-bandwidth protocols, for which Automotive Ethernet and SerDes can be suitable solutions. These two standards offer different pros and cons, and it is unclear which of them will be the future de facto standard [113]. LiDARs usually adopt Ethernet, while cameras use Serializer–Deserializer (SerDes) links cascaded to a processing unit and then connected to the vehicular networks through Ethernet.

5.2. Data Processing

Each data source needs to be elaborated using proper techniques that take into account the peculiarities of each sensor. The detailed data workflow is generally hard to reconstruct, as few details are available, and the workflows differ among platforms, being strictly related to the specific hardware adopted. For the sake of clarity, a possible flow is discussed hereafter. The distance of surrounding objects relative to the CAV is essential for its safe operation. It can be measured by adopting radar, LiDAR, depth camera, and ultrasonic sensors, or estimated using standard vision cameras through the cooperation of two or more cameras (stereo vision) exploiting trigonometry [114]. Recently, interest in monocular depth estimation has grown due to the development of artificial intelligence techniques, which represent an enabling technology, providing great improvements with respect to previous algorithm families [115]. Thus, the same information, but with different accuracy and reliability, can be estimated in many different ways. A general example workflow of a CAV with the main processing steps for each sensor type is presented in Figure 9.
All the data are pre-processed before further elaboration steps. The preprocessing involves filtering, such as lowpass filtering and noise rejection [116], and data transformation, such as coordinate frame roto-translation [117] or image colour-space conversion [118]. Then, according to the considered sensor, a specific elaboration follows. V2x connectivity, proprioceptive and ultrasonic sensors, and HD maps require simple data elaboration compared to LiDAR, radar, and camera applications.
For LiDAR, the cloud points are filtered based on regions of interest. Then, the ground plane is estimated to remove the corresponding points. The remaining points are then clustered (i.e., grouped based on similarity using machine learning [119]). Then, feature extraction is carried out for each cluster, which is preparatory for the subsequent object classification [120].
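A heavily simplified version of this LiDAR pipeline is sketched below: region-of-interest filtering, naive ground removal by a height threshold, and Euclidean clustering. The thresholds and the synthetic point cloud are assumptions; real pipelines use more robust steps such as RANSAC plane fitting and purpose-built clustering.

```python
# Simplified LiDAR point cloud pipeline: ROI filter, ground removal, clustering.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def process_point_cloud(points: np.ndarray) -> list:
    """points: (N, 3) array of x, y, z coordinates in the vehicle frame [m]."""
    # 1. Region of interest: keep points within 50 m ahead and 10 m laterally.
    roi = points[(points[:, 0] > 0) & (points[:, 0] < 50) & (np.abs(points[:, 1]) < 10)]
    # 2. Naive ground removal: drop points close to the lowest measured height.
    non_ground = roi[roi[:, 2] > roi[:, 2].min() + 0.3]
    if len(non_ground) < 2:
        return [non_ground]
    # 3. Euclidean clustering: group points closer than 1 m (single linkage).
    labels = fcluster(linkage(non_ground, method="single"), t=1.0, criterion="distance")
    return [non_ground[labels == lbl] for lbl in np.unique(labels)]

# Synthetic scene: scattered ground points plus two obstacles (a car and a pole).
rng = np.random.default_rng(1)
ground = np.column_stack([rng.uniform(0, 50, 500), rng.uniform(-10, 10, 500),
                          rng.normal(0, 0.05, 500)])
car = np.array([20.0, 2.0, 0.8]) + rng.normal(0, 0.2, (80, 3))
pole = np.array([35.0, -4.0, 1.5]) + rng.normal(0, 0.1, (30, 3))
clusters = process_point_cloud(np.vstack([ground, car, pole]))
print(f"Detected {len(clusters)} object clusters")
```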
In a similar way, radar data are clustered in groups and then merged. The position and relative speed are obtained by assuming the target shape [121]. The 4D radar, in particular, has an elaboration pipeline quite similar to the LiDAR’s.
Different step sequences can be used to extract feature data, group it, and classify the objects using camera data after proper preprocessing, such as conversion to the HSL colour space and bounding box creation [122]. Many proposed techniques, for all sensors, rely on convolutional neural networks, deep neural networks, autoencoders, and classifiers such as support vector machines [123].
The data obtained from all the considered sources should be joined using proper sensor fusion techniques, which strongly influence the effectiveness of the sensor layout. Recently, interest in multimodal data fusion algorithms has been growing due to the need to perceive the environment through different sensors, such as vision, radar, and LiDAR systems [124]. In general, more reliable and accurate data can be obtained through sensor fusion. Wang et al., through a multi-modal multi-scale fusion algorithm for LiDAR and 4D radar, achieved about 5–10% higher accuracy with respect to the LiDAR alone [125]. Similar improvements have been found in object detection by adopting a multimodal VoxelNet to fuse vision and LiDAR data [46]. Multimodal sensor fusion can effectively improve the detection performance under various adverse weather conditions such as fog, rain, and snow [126]. Neural networks are also emerging to cope with the sensor fusion problem, in which the processed data of each sensor are provided as input to another neural network for data fusion [127]. Moreover, CAVs also offer the possibility of cooperative sensor fusion, exploiting data from different vehicles with a proper processing pipeline, as described in [128].
The specific hardware platform defines the way in which the processing pipeline is executed. Both centralised and distributed computing on different processing units can be adopted. Often, some processing is completed at the sensor level: a dedicated processing unit, integrated in the sensor or as its companion, sometimes offers object detection and tracking as well. Indeed, multi-camera systems exist in which multiple cameras are linked through GMSL to a processing unit, which sends the elaborated data, including object-detection results such as position and speed, to the CAV’s central unit for sensor fusion.
Similar to the processing algorithms, the processing units (PUs) used vary and depend on specific system, architectural, and design choices. The main processing units can be classified into central processing units (CPUs), graphical processing units (GPUs), digital signal processors (DSPs), and ASIC and FPGA solutions [129]. The choice between special-purpose and general-purpose PUs is guided by a tradeoff between the development complexity of the hardware and the computational efficiency. It is also worth pointing out that the choice is linked to the development stage of the hardware architecture. ASICs offer higher performance and energy efficiency, but their high development costs can be sustainable only for production-series units. FPGAs show slightly lower performance, but their flexibility makes them feasible for hardware development and testing, and for applications with an expected low number of products [130]. General-purpose units such as CPUs and GPUs can be useful in development, but recently these have also been employed on prototype CAVs. Usually, for autonomous vehicles, the cooperation of different processing units is employed at different levels. DSPs and ASICs are often integrated in the sensors, while CPUs, GPUs, and sometimes FPGAs are employed in the central processing units, responsible for sensor fusion and decision making. A comparison of the deep learning capabilities of some automotive hardware platforms is reported in Figure 10.
Grey bands are reported in the figure, highlighting the minimum requirements for the various automation levels according to the technical literature. A need for more powerful PUs can be identified in the graph: to reach the L5 targets, either more PUs or high-performance computing (HPC) should be employed. However, the latter solution suffers from low energy efficiency.

5.3. Data Exploitation for Energy Management and Efficiency Improvements

The processed data can be used to optimise the vehicle’s energy efficiency. Sensor data and connectivity can help to provide complete information to the ego vehicle regarding road slope, future speed limits, and the future speed of the preceding vehicle [131]. In this way, it is possible to optimise the vehicle speed profile, avoiding unnecessary braking and acceleration phases and choosing optimal powertrain operating points. Generally, knowing the external conditions makes it possible to solve an optimization problem to reduce the vehicle energy consumption while maintaining safe and comfortable operation [132]. V2x and sensor data can also be used to optimise the vehicle behaviour at signalised intersections to optimise the speed profile and maximise the braking energy recovery [133]. Generally, a properly developed controller can achieve a better driving style than an average human driver, therefore obtaining lower energy consumption [134]. However, while the strategy of anticipating the behaviour of the preceding car is, on the one hand, effective in reducing energy consumption, on the other hand, it could lead to an increase in travel time [135]. There are many works in the literature developing several control strategies with various techniques. Most of them, from the control point of view, can be classified as MPC, differing in the formulation of the optimal control problem. The algorithms have been tested in various scenarios and on different vehicles. Data regarding the possible improvement due to optimal control strategies exploiting the DAS and V2x connectivity have been gathered from the literature (data sources: [134,135,136,137,138,139,140,141]). With this dataset, the distribution of the expected DAS energy saving, $\Delta EC_{CAV}(\mu_{\Delta EC,CAV}; \sigma_{\Delta EC,CAV})$, has been estimated. The results show that the data can be fitted with a normal distribution with mean $\mu_{\Delta EC,CAV} = 4.8$ and standard deviation $\sigma_{\Delta EC,CAV} = 3.9$. This distribution has been employed for the Monte Carlo simulation (see Section 6).

6. Energy Consumption and Impact on Vehicle Usage

In this section, exploiting the data gathered and discussed in the previous sections, a statistical analysis is presented to assess the net energy efficiency of CAVs. The distributions obtained from the data analysed in the literature, according to the methodology described in Section 2, are reported in Table 4. In particular, all the values generating the distributions adopted for the Monte Carlo simulation are reported in order to provide details for the reproducibility of the results.
Among the sensors, the LiDAR is the most demanding from the energy point of view, while the processing unit has the greatest power consumption overall. Moreover, the processing units, due to the number of different hardware solutions (i.e., ASIC, FPGA, CPU), as discussed in Section 5, are characterised by the highest standard deviation. Once the power consumption distributions are estimated, combining them with the number of sensors and computing units provided by the Poisson distributions, the overall CAV hardware power consumption $P_{CAV,HW}$ can be statistically evaluated. The resulting distribution is presented in Figure 11. The obtained data can be approximated with a lognormal distribution with µ = 6.52 and σ = 0.46.
With those distributions, the Monte Carlo method has been employed to assess the net energy consumption of the CAVs, considering 10^6 samples (Figure 12). Panel A reports the energy-consumption distributions of the EVs and of the corresponding CAV versions. The CAV shows about 5% (7.4 Wh/km) higher energy consumption, with a slightly higher standard deviation. Panel B reports a scatter plot of the consumption of each EV and of its CAV version. Points below the bisector line indicate that the CAV version has higher energy consumption than the EV, because the DAS energy saving does not compensate for the additional hardware consumption. About 76% of the samples fall in this area, which is therefore the most likely scenario. For higher baseline energy consumption (>200 Wh/km), the CAV hardware consumption becomes less influential, reducing the spread of the points around the bisector line.
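The net balance can be sketched as follows (Python). The baseline consumption, the DAS saving, and the hardware power are drawn from the distributions discussed above; the trip-average speed used to convert the hardware power into a per-kilometre consumption, the mapping of the Burr parameters onto SciPy's burr12, and the form of the energy balance are assumptions made here for illustration only, not the paper's exact formulation.

```python
# Minimal sketch: Monte Carlo estimate of EV vs CAV net energy consumption.
# The balance EC_CAV = EC_EV*(1 - saving) + P_HW/v_avg and the values marked
# "assumed" are illustrative choices, not the paper's exact formulation.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
N = 1_000_000

# Baseline EV consumption [Wh/km]; Burr distribution of Table 4 (assumed burr12 mapping)
ec_ev = stats.burr12.rvs(c=18.22, d=0.39, scale=135.05, size=N, random_state=rng)
# Relative DAS energy saving, Normal(4.8%, 3.9%) as quoted in Section 5.3
saving = rng.normal(4.8, 3.9, N) / 100.0
# CAV hardware power draw [W], Lognormal(6.52, 0.46) as in Figure 11
p_hw = rng.lognormal(mean=6.52, sigma=0.46, size=N)

v_avg = 70.0  # [km/h] assumed trip-average speed (illustrative)
ec_cav = ec_ev * (1.0 - saving) + p_hw / v_avg  # [Wh/km]

print(f"Mean EV consumption:  {ec_ev.mean():.0f} Wh/km")
print(f"Mean CAV consumption: {ec_cav.mean():.0f} Wh/km")
print(f"Share of samples where the CAV consumes more: {np.mean(ec_cav > ec_ev):.0%}")
```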
A further analysis has been conducted to evaluate the influence of the DAS energy saving and of the CAV hardware power consumption. The Monte Carlo samples have been grouped into two clusters based on the EV energy consumption. The two clusters are representative of compact cars (segment C, EV energy consumption $EC_{EV} \in (130, 150)$ Wh/km) and of luxury or sport-utility vehicles (segment F, $EC_{EV} \in (180, 200)$ Wh/km). The differences between CAV and EV energy consumption for the two vehicle classes are reported in Figure 13. The unclear cluster boundaries arise from the statistical nature of the Monte Carlo data. For compact cars, the difference ranges between −20% and 70%, while for luxury cars it ranges between −10% and 40%. Both centroids indicate about 6% higher consumption for the CAV. The EV energy consumption is therefore one of the main drivers of the CAV efficiency impact. For a fixed vehicle, the graphs also allow the maximum CAV hardware power that avoids worsening the energy efficiency to be identified. For example, for a compact car achieving a 10% energy saving thanks to DAS control strategies, the breakeven point (equal EV and CAV energy consumption) is around 1000 W.
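This breakeven reading can be expressed compactly. Assuming that the hardware power draw is spread over the travelled distance at a trip-average speed $\bar{v}$ (an assumption introduced here, not stated explicitly above), the net balance and the maximum admissible hardware power read:

$$
EC_{CAV} = EC_{EV}\,(1 - \Delta EC_{rel}) + \frac{P_{CAV,HW}}{\bar{v}},
\qquad
P_{HW,\max} = \Delta EC_{rel}\, EC_{EV}\, \bar{v}.
$$

For instance, with $\Delta EC_{rel} = 10\%$, $EC_{EV} = 140$ Wh/km and an assumed $\bar{v} \approx 70$ km/h, $P_{HW,\max} \approx 1$ kW, of the same order as the breakeven value read from Figure 13.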
Although the CAV power-consumption analysis focuses on light-duty vehicles, the data can be used to derive some considerations on medium-duty and heavy-duty vehicles. For heavy-duty vehicles, an increase of about 10–30% in CAV hardware power consumption can be expected, due to the higher number of sensors required to cover a wider surrounding area. At the same time, higher energy savings than for light-duty vehicles are expected, especially for long-haul heavy-duty trucks. This stems from the typical mission profile (i.e., long range, near-constant speed), which can benefit from the drag reduction achievable in truck platoons; platooning requires short inter-vehicular distances that can be maintained safely only with DAS systems. Optimisation studies show that a 9% fuel-consumption reduction can be achieved by jointly optimising the inter-vehicular distance and the lateral position in a truck platoon [142]. Moreover, given the much higher baseline energy consumption of trucks (700–1700 Wh/km) [143], the CAV hardware consumption is comparatively less important. Figure 14 shows the Monte Carlo simulation results with the mean and standard deviation of clusters characterised by different EV energy consumption. The simulation data have been extrapolated through curve fitting to estimate results for more energy-demanding vehicle applications; notwithstanding the high R² obtained, such extrapolated values should be interpreted with some caution. Two EV consumption thresholds identify three regions: the first represents light-duty vehicles, the second light-commercial and medium-duty vehicles, and the third heavy-duty vehicles.
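A minimal sketch of such an extrapolation step is given below (Python with SciPy); both the functional form and the cluster values are placeholders introduced for illustration only, not the fitted model or the data behind Figure 14.

```python
# Minimal sketch: extrapolating the relative net CAV effect vs. baseline consumption.
# Functional form and data points are illustrative placeholders only.
import numpy as np
from scipy.optimize import curve_fit

ec_ev = np.array([140.0, 190.0, 300.0, 450.0])   # hypothetical cluster means [Wh/km]
d_net = np.array([6.0, 5.0, 1.5, -1.0])          # hypothetical net CAV effect [%]

def model(x, a, b):
    # net effect shrinking as the baseline consumption grows
    return a / x + b

popt, _ = curve_fit(model, ec_ev, d_net)
print(f"Extrapolated net effect at 1200 Wh/km: {model(1200.0, *popt):+.1f}%")
```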
Light-duty vehicles are therefore likely to see their net energy consumption worsen when adopting autonomous driving technology. In the heavy-duty consumption region, the introduction of a fully autonomous system will likely be advantageous from an energy perspective, with approximate energy reductions (obtained by extrapolation) of up to 8%. For the medium-duty sector, the effect is expected to be nearly neutral, with a variation in the order of ±3% compared with the reference EV.

7. Conclusions

A comprehensive analysis and discussion of CAV architectures in terms of layout, sensors, and data processing has been conducted. The data gathered have been used to assess the net energy consumption of CAVs, considering both the DAS energy saving and the additional hardware power consumption. The main outcomes of the manuscript are summarised in the following points:
  • CAV architecture likely requires multiple sensors to achieve fully autonomous operation. A proper combination of different sensors can effectively simplify vehicle operations and improve safety and reliability under different and severe operating conditions.
  • Regarding sensors, LiDAR can effectively measure and reconstruct the surrounding environment, but its high cost and weather sensitivity limit its application. Automotive radars suffer from low angular accuracy but are often used since they can operate in adverse weather. In this scenario, 4D radar has the potential to overcome this limit and help reduce the number of required sensors.
  • Data management is a crucial point. Sensor data storage and processing should be carefully addressed as they strongly influence vehicle performance. In particular, data elaboration and sensor fusion are key pillars which should be further developed to achieve fully operational CAVs. The improvement of the algorithms should be adequately supported by the production of more powerful and energy-efficient processing units.
  • The energy analysis has shown that, in analogy with Maxwell’s demon paradigm, attention should be paid to the energy and environmental cost of the information used by CAVs. If, as in many studies, the information is considered free, it can be wrongly deduced that the DAS straightforwardly reduces the CAV energy consumption. In reality, information has a price from both the economic and the energetic perspectives: if, on one hand, data can be leveraged to improve the vehicle energy efficiency, on the other hand, producing and processing those data is energy consuming. Considering both contributions, the hypothetical advantages are not obvious, as the net result depends on their balance.
  • The Monte Carlo analysis has shown that, based on actual data, in about 75% of simulated scenarios light-duty CAVs consume more energy than the corresponding EVs; on average, 6% higher energy consumption can be expected for CAVs. Regarding the other vehicle classes, the extrapolation of the simulation data suggests a negligible effect for medium-duty vehicles, whereas heavy-duty vehicles will probably benefit from autonomous driving systems, since their higher energy demand reduces the relative impact of the DAS hardware consumption.
Future work by the authors will address a specific use case, further deepening the analyses conducted here with the support of properly designed experimental and numerical activities to gather the data required for the assessments discussed in this study.

Author Contributions

Conceptualization, M.P. and G.D.B.; methodology, M.P.; formal analysis, M.P.; investigation, M.P.; data curation, M.P.; writing—original draft preparation, M.P.; writing—review and editing, B.S., F.D.N., G.T., A.G. and G.D.B.; visualization, M.P.; supervision, G.T., F.D.N., A.G. and G.D.B.; project administration, G.D.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data is contained within the article.

Acknowledgments

The authors would like to thank Teoresi Group for providing technical details and mindful discussion regarding paper topics.

Conflicts of Interest

Authors Michele Pipicelli, Bernardo Sessa, Francesco De Nola and Gianluca Toscano were employed by the company Teoresi S.P.A. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Abbreviations

ADAS: Advanced Driving Assistance System
AEC: Automotive Electronics Council
ASIC: Application-Specific Integrated Circuit
ASIL: Automotive Safety Integrity Levels
AV: Autonomous Vehicle
BEV: Battery Electric Vehicle
CAN: Controller Area Network
CAV: Connected and Autonomous Vehicle
CCD: Charge-Coupled Device
CMOS: Complementary Metal-Oxide Semiconductor
DAS: Driving Automation System
DOA: Direction of Arrival
DSP: Digital Signal Processor
EV: Electric Vehicle
FMCW: Frequency-Modulated Continuous Wave
FoV: Field of View
FPGA: Field-Programmable Gate Array
GMSL: Gigabit Multimedia Serial Link
GNSS: Global Navigation Satellite System
GPS: Global Positioning System
hFoV: Horizontal FoV
HPC: High-Performance Computing
ICT: Information and Communication Technologies
IMU: Inertial Measurement Unit
LAS: LASer (file format)
LiDAR: Light Detection and Ranging
LIN: Local Interconnect Network
LQR: Linear Quadratic Regulator
LRR: Long-Range Radar
MEMS: Micro Electro-Mechanical Systems
MTBF: Mean Time Between Failures
MIMO: Multi-Input Multi-Output
MMIC: Monolithic Microwave Integrated Circuit
MPC: Model Predictive Control
MRR: Medium-Range Radar
NHTSA: National Highway Traffic Safety Administration
OCDMA: Optical Code Division Multiple Access
OODA: Observe, Orient, Decide and Act
PCD: Point Cloud Data
PID: Proportional-Integral-Derivative
RADAR: Radio Detection and Ranging
SLAM: Simultaneous Localization and Mapping
SONAR: Sound Detection and Ranging
SNR: Signal-to-Noise Ratio
SRR: Short-Range Radar
ToF: Time of Flight
TOPS: Trillion of Operations per Second
V2I: Vehicle-to-Infrastructure
V2V: Vehicle-to-Vehicle
V2x: Vehicle-to-Everything
WLTC: Worldwide Harmonised Light Vehicles Test Cycle

Appendix A

Table A1. A comparison of the main characteristics of different Camera systems.
ManufacturerModelResolution [MP]Frame Rate [fps]hFoV × vFOV [°]InterfacePower
Consumption [W]
Note
BoschMulti-camera
system plus
230190 × 140CAN-FD, Flexray, Ethernet30
BoschMulti-purpose
camera
2.645100 × 48
MCNEXLVDS camera130192 × 120LVDS
MCNEX 230114
MCNEX 3.630114
Leopard
imaging
LI-AR0820-
GMSL3-120H
8.340140 × 67GMSL31
IntelD4571 / 0.9330 / 9090 × 65/87 × 58GMSL, USB3 visual/depth
IntelD4551 / 0.9330 / 9090 × 65/87 × 58USB3 visual/depth
LuxonisOAK-D PRO PoE12 / 1.060 / 12095 × 70/127 × 80ethernet8visual/depth
LuxonisOAK-D S2 PoE12 / 1.060 / 12080 × 55/66 × 54ethernet8visual/depth
ValeoSmart Front
Camera
1.7 100 × n/a
ValeoSmart Front
Camera
8n/a120 × n/a
Valeofisheye230195 × 155GMSL2, Ethernet fisheye
TITIDA-05001.360 FPD-Link III1
e-conNileCAM25_CUXVR265 @ 2Mp/120 @ 1Mp105 × 62GMSL211
FLIRLadybug 67215 @ 72 Mp/30 @ 36 Mp360 × 120USB313Spherical cam
Table A2. A comparison of the main characteristics of different LiDAR systems.
ManufacturerModelYearTechnologyWavelength [nm]hFOV [deg]H. Resolution [deg]vFOV [deg]V. Resolution [deg]Frequency [Hz]Max Range [m] @ ReflectityTypical Power [W]
ValeoSCALA 3° gen2023 1200.05260.05 190 @ 10%
SCALA 2° gen2021Rot. mirror9051330.125/0.25100.62520010
RobosenseRS-LIDAR-M12021Solid state9051200.2250.21020015
RS-Ruby2021mechanical9053600.2/0.4400.110/2020045
CeptonVista®-P602019Solid state905600.25 (10 Hz) 0.2722 10200 @ 30%10
Vista®-P90 Solid state905900.38400.3810/20200 @ 30%10
Vista®-X902020Solid state 900.13250.1340200 @ 10%12
Vista®-x120 Solid state 1200.1318/200.13 200 @ 10%
Nova2021 1200.3900.3 30 @ 10%
LuminarIris2022 1200.050–260.051–30250 @ 10%25
InnovizInnovizOne2016 1150.1250.15–20250
InnovizTwo2020 1200.05400.0510–20300
Innoviz3602022Rotating9053600.05640.050.5–2530025
IbeoNext Long range Solid state88511.20.095.60.0725 7–10
Next Short range Solid state885600.47300.3825 7–10
Next Near range Solid state8851200.94600.7525 7–10
LUX 4L 9051100.253.20.82550 @10%7
LUX 8L 9051100.256.40.82550 @10%8
LUX HD 9051100.253.20.82530 @10%7
InnovusionJaguar Prime2020 1550650.09–0.17/0.19–0.33400.136–2030048
VelodyneVelarray H8002020solid-state9051200.26160.2–0.510–25200–170 @10%13
BarajaSpectrum HD252022Spectrum-Scan TM15501200.04250.01254–30250 @ 10%20
QuanergyM8-Core2018 9053600.033–0.132200.033–0.1325–2035 @ 10%/100 @ 80%16
M8-Ultra2018 9053600.033–0.132200.033–0.1325–2070 @ 10%/200 @ 80%16
Continental
SRL121 90527 11 100101.8
HRL1312022 15501280.05280.07510300
3D Flash Flash1064120 30 25
SickMultiscan100 8503600.1256512012 @ 10%/30 @ 90%22
LMS1000 2D8502750.75 15016@ 10%/36 @ 90%18
MRS1000 8502750.257.51.8755016@ 10%/30 @ 90%13
LMS511 2D9051900.1667 10026 @ 10%22
MRS1000P 8502700.257.5 12.5/5016@ 10%/30 @ 90%13
MRS6000 8701200.13150.625 30 @ 10%/75 @ 90%20
LRS4000 2D360250.02 40 @ 10%/130 @ 90%13
MicrovisionMavinTM 9050.086 0.04 30220
Movia 1500.95810.76 14
Movia 6075 40
Movia 12037.5 60
Table A3. A comparison of the main characteristics of different RADAR systems.
ManufacturerModelTypeFrequency Range [Ghz]Detect. Range [m]Range acc. [m]Update Rate [Hz]Velocity Precision [m/s]hFOV
[deg]
h. Angle res. [deg]vFOV [deg]v. Angle res. [deg]Power
Consumption [W]
Interface
BoschFront radar 76–772100.1 0.051200.1300.24CAN-FD, Flexray, Ethernet
Front radar Premium 76–773020.1 0.041200.1240.115CAN-FD, Ethernet
Corner
Radar
76–771600.1 0.041500.1300.24CAN-FD, Flexray, Ethernet
ContinentalSRR600Surround76–81180 200.03150 CAN-FD, Ethernet
ARS540LRR76–77300 ~17 1200.1 23
ARS510LRR76–77210 ~20 100 4.8
ARS441LRR76–77250 ~17 18 (250 m)
90 (70 m)
150 (20 m)
8
SRR520SRR76–77100 200.0215
ARS620LRR76–77280 200.02–0.160 CAN-FD, Ethernet
ARS640LRR76–77300 ~17 600.1 0.123
APTIVSSR74D 160 0.1150615 CAN-FD, Ethernet
SSR7+4D 200 0.031503152 CAN-FD, Ethernet
FLR74D 290 0.051202154 CAN-FD, Ethernet
MRR 77160 33 9015
ESR 77174(60) 20 20(90)
SmartMicroUMR-97 77–81120/55/190.5/0.3/0.15200.15100/130/13011525
UMRR-11 76–77175/640.5/0.25200.132/1000.25150.56
DRVEGRD 152 76–77180/660.45/0.16200.07100.25200.56
DRVEGRD 171 76–77240/100/400.6/0.25/0.1200.07/0.07/0.14100.5200.57
LintechCAR30SRR77–81300.18200.251200.120 2CAN
Table A4. Characteristic comparison of ultrasonic sensors.
ManufacturerModelDetection Rage [m]hFoV × vFOV [°]Frequency [kHz]Measurement Rate [Hz]Power Consumption [W]Interface
Bosch/0.15–5.5/43–604/8//
Valeo/0.15–4.175 × 45 // 6CAN
ContinentalCUS320//////
MAGNA/0.1–5.5/////
SICKUC400.065–6/400101.5digital
UC300.035–530 × 3012051.2digital
UM120.04–0.3530 × 30500301.2analog
UC400.013–0.2520 × 20380200.9digital
UM180.12–1.335 × 35200121.2analog
UM300.6–837 × 378032.4digital
UC120.02–0.3530 × 30500251.2digital

References

  1. Szumska, E.M. Electric Vehicle Charging Infrastructure along Highways in the EU. Energies 2023, 16, 895. [Google Scholar] [CrossRef]
  2. Pipicelli, M.; Sessa, B.; De Nola, F.; Gimelli, A.; Di Blasio, G. Assessment of Battery–Supercapacitor Topologies of an Electric Vehicle under Real Driving Conditions. Vehicles 2023, 5, 424–445. [Google Scholar] [CrossRef]
  3. Pipicelli, M.; Sedarsky, D.; Koopmans, L.; Gimelli, A.; Di Blasio, G. Comparative Assessment of Zero CO2 Powertrain for Light Commercial Vehicles; SAE Technical Paper: Warrendale, PA, USA, 2023; p. 2023-24-0150. [Google Scholar] [CrossRef]
  4. Nastjuk, I.; Herrenkind, B.; Marrone, M.; Brendel, A.B.; Kolbe, L.M. What drives the acceptance of autonomous driving? An investigation of acceptance factors from an end-user’s perspective. Technol. Forecast. Soc. Chang. 2020, 161, 120319. [Google Scholar] [CrossRef]
  5. Payre, W.; Birrell, S.; Parkes, A.M. Although autonomous cars are not yet manufactured, their acceptance already is. Theor. Issues Ergon. Sci. 2021, 22, 567–580. [Google Scholar] [CrossRef]
  6. J3016C: Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles—SAE International. (n.d.). Available online: https://www.sae.org/standards/content/j3016_202104/ (accessed on 28 June 2021).
  7. Mercedes-Benz Group. Mercedes-Benz—The Front Runner in Automated Driving and Safety Technologies, Mercedes-Benz Group. 2022. Available online: https://group.mercedes-benz.com/innovation/case/autonomous/drive-pilot-2.html (accessed on 6 June 2022).
  8. Faxér, A.; Jansson, J.; Wilhelmsson, J.; Faleke, M.; Paijkull, M.; Sarasini, S.; Fabricius, V. Shared Shuttle Services S3–Phase 2. 2021. Available online: https://www.drivesweden.net/sites/default/files/2022-10/final-report-for-drive-sweden-projects-s3-092963-2.pdf (accessed on 15 November 2023).
  9. Nemoto, E.H.; Jaroudi, I.; Fournier, G. Introducing Automated Shuttles in the Public Transport of European Cities: The Case of the AVENUE Project. In Advances in Mobility-as-a-Service Systems; Nathanail, E.G., Adamos, G., Karakikes, I., Eds.; Springer International Publishing: Cham, Switzerland, 2021; pp. 272–285. [Google Scholar] [CrossRef]
  10. Stange, V.; Kühn, M.; Vollrath, M. Manual drivers’ experience and driving behavior in repeated interactions with automated Level 3 vehicles in mixed traffic on the highway. Transp. Res. Part F Traffic Psychol. Behav. 2022, 87, 426–443. [Google Scholar] [CrossRef]
  11. Shetty, A.; Yu, M.; Kurzhanskiy, A.; Grembek, O.; Tavafoghi, H.; Varaiya, P. Safety challenges for autonomous vehicles in the absence of connectivity. Transp. Res. Part C Emerg. Technol. 2021, 128, 103133. [Google Scholar] [CrossRef]
  12. Zhao, L.; Malikopoulos, A.A. Enhanced Mobility With Connectivity and Automation: A Review of Shared Autonomous Vehicle Systems. IEEE Intell. Transp. Syst. Mag. 2022, 14, 87–102. [Google Scholar] [CrossRef]
  13. Jafarnejad, S.; Codeca, L.; Bronzi, W.; Frank, R.; Engel, T. A Car Hacking Experiment: When Connectivity Meets Vulnerability. In Proceedings of the 2015 IEEE Globecom Workshops (GC Wkshps), San Diego, CA, USA, 6–10 December 2015; pp. 1–6. [Google Scholar] [CrossRef]
  14. Musa, A.; Pipicelli, M.; Spano, M.; Tufano, F.; De Nola, F.; Di Blasio, G.; Gimelli, A.; Misul, D.A.; Toscano, G. A review of model predictive controls applied to advanced driver-assistance systems. Energies 2021, 14, 7974. [Google Scholar] [CrossRef]
  15. Papantoniou, P.; Kalliga, V.; Antoniou, C. How autonomous vehicles may affect vehicle emissions on motorways. In Proceedings of the Advances in Mobility-as-a-Service Systems: Proceedings of 5th Conference on Sustainable Urban Mobility, Virtual CSUM2020, Home, Greece, 17–19 June 2020; Springer: Berlin/Heidelberg, Germany, 2021; pp. 296–304. [Google Scholar]
  16. Pelikan, H.R.M. Why Autonomous Driving Is So Hard: The Social Dimension of Traffic. In Proceedings of the Companion of the 2021 ACM/IEEE International Conference on Human-Robot Interaction, Association for Computing Machinery, New York, NY, USA, 8–11 March 2021; pp. 81–85. [Google Scholar] [CrossRef]
  17. Khan, M.A.; El Sayed, H.; Malik, S.; Zia, M.T.; Alkaabi, N.; Khan, J. A journey towards fully autonomous driving—Fueled by a smart communication system. Veh. Commun. 2022, 36, 100476. [Google Scholar] [CrossRef]
  18. Kroese, D.P.; Brereton, T.; Taimre, T.; Botev, Z.I. Why the Monte Carlo method is so important today. WIREs Comput. Stat. 2014, 6, 386–392. [Google Scholar] [CrossRef]
  19. Torkjazi, M.; Raz, A.K. Taxonomy for System of Autonomous Systems. In Proceedings of the 2022 17th Annual System of Systems Engineering Conference (SOSE), Rochester, NY, USA, 7–11 June 2022; pp. 198–203. [Google Scholar] [CrossRef]
  20. Mallozzi, P.; Pelliccione, P.; Knauss, A.; Berger, C.; Mohammadiha, N. Autonomous Vehicles: State of the Art, Future Trends, and Challenges. In Automotive Systems and Software Engineering: State of the Art and Future Trends; Dajsuren, Y., van den Brand, M., Eds.; Springer International Publishing: Cham, Switzerland, 2019; pp. 347–367. [Google Scholar] [CrossRef]
  21. Azam, S.; Munir, F.; Sheri, A.M.; Kim, J.; Jeon, M. System, Design and Experimental Validation of Autonomous Vehicle in an Unconstrained Environment. Sensors 2020, 20, 5999. [Google Scholar] [CrossRef] [PubMed]
  22. Gonzalez, D.; Perez, J.; Milanes, V.; Nashashibi, F. A Review of Motion Planning Techniques for Automated Vehicles. IEEE Trans. Intell. Transp. Syst. 2016, 17, 1135–1145. [Google Scholar] [CrossRef]
  23. Lee, S.; Jung, Y.; Park, Y.-H.; Kim, S.-W. Design of V2X-based vehicular contents centric networks for autonomous driving. IEEE Trans. Intell. Transp. Syst. 2021, 23, 13526–13537. [Google Scholar] [CrossRef]
  24. Marin-Plaza, P.; Hussein, A.; Martin, D.; de la Escalera, A. Global and Local Path Planning Study in a ROS-Based Research Platform for Autonomous Vehicles. J. Adv. Transp. 2018, 2018, e6392697. [Google Scholar] [CrossRef]
  25. Song, X.; Gao, H.; Ding, T.; Gu, Y.; Liu, J.; Tian, K. A Review of the Motion Planning and Control Methods for Automated Vehicles. Sensors 2023, 23, 6140. [Google Scholar] [CrossRef]
  26. Warren, M.E. Automotive LIDAR Technology. In 2019 Symposium on VLSI Circuits; IEEE: New York, NY, USA, 2019; pp. C254–C255. [Google Scholar] [CrossRef]
  27. True Redundancy, Mobileye. (n.d.). Available online: https://www.mobileye.com/technology/true-redundancy/ (accessed on 22 September 2023).
  28. Reschka, A. Safety Concept for Autonomous Vehicles. In Autonomous Driving: Technical, Legal and Social Aspects; Maurer, M., Gerdes, J.C., Lenz, B., Winner, H., Eds.; Springer: Berlin/Heidelberg, Germany, 2016; pp. 473–496. [Google Scholar] [CrossRef]
  29. Yan, Z.; Sun, L.; Krajnik, T.; Ruichek, Y. EU Long-term Dataset with Multiple Sensors for Autonomous Driving. In Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 24 October 2020–24 January 2021; pp. 10697–10704. [Google Scholar] [CrossRef]
  30. Everything You Need to Know about Autonom® Shuttle Evo, NAVYA. (n.d.). Available online: https://www.navya.tech/en/everything-you-need-to-know-about-autonom-shuttle-evo/ (accessed on 21 September 2023).
  31. Mobileye DriveTM|Self-Driving System for Autonomous MaaS, Mobileye. (n.d.). Available online: https://www.mobileye.com/newsletter-sign-up/ (accessed on 22 September 2023).
  32. Mobileye SuperVisionTM for Hands-Free ADAS, Mobileye. (n.d.). Available online: https://www.mobileye.com/super-vision/ (accessed on 4 June 2022).
  33. Hardware NVIDIA Drive per Auto a Guida Autonoma, NVIDIA. (n.d.). Available online: https://www.nvidia.com/it-it/self-driving-cars/drive-platform/hardware/ (accessed on 27 September 2023).
  34. Mobileye Self-Driving Mobility Services, Mobileye. (n.d.). Available online: https://www.mobileye.com/mobility-as-a-service/ (accessed on 4 June 2022).
  35. van Dijk, L.; Sporer, G. Functional safety for automotive ethernet networks. J. Traffic Transp. Eng. 2018, 6, 176–182. [Google Scholar] [CrossRef]
  36. Sheeny, M.; De Pellegrin, E.; Mukherjee, S.; Ahrabian, A.; Wang, S.; Wallace, A. RADIATE: A Radar Dataset for Automotive Perception in Bad Weather. In Proceedings of the 2021 IEEE International Conference on Robotics and Automation (ICRA), Xi’an, China, 30 May–5 June 2021; pp. 1–7. [Google Scholar] [CrossRef]
  37. Rasshofer, R.H.; Spies, M.; Spies, H. Influences of weather phenomena on automotive laser radar systems. Adv. Radio Sci. 2011, 9, 49–60. [Google Scholar] [CrossRef]
  38. Espineira, J.P.; Robinson, J.; Groenewald, J.; Chan, P.H.; Donzella, V. Realistic LiDAR With Noise Model for Real-Time Testing of Automated Vehicles in a Virtual Environment. IEEE Sensors J. 2021, 21, 9919–9926. [Google Scholar] [CrossRef]
  39. Zhao, J.; Li, Y.; Zhu, B.; Deng, W.; Sun, B. Method and Applications of Lidar Modeling for Virtual Testing of Intelligent Vehicles. IEEE Trans. Intell. Transp. Syst. 2021, 22, 2990–3000. [Google Scholar] [CrossRef]
  40. Reway, F.; Huber, W.; Ribeiro, E.P. Test Methodology for Vision-Based ADAS Algorithms with an Automotive Camera-in-the-Loop. In Proceedings of the 2018 IEEE International Conference on Vehicular Electronics and Safety (ICVES), Madrid, Spain, 12–14 September 2018; pp. 1–7. [Google Scholar] [CrossRef]
  41. Jegham, I.; Ben Khalifa, A. Pedestrian detection in poor weather conditions using moving camera. In Proceedings of the 2017 IEEE/ACS 14th International Conference on Computer Systems and Applications (AICCSA), Hammamet, Tunisia, 30 October 2017–3 November 2017; pp. 358–362. [Google Scholar] [CrossRef]
  42. Anwar, I.; Khosla, A. Vision enhancement through single image fog removal. Eng. Sci. Technol. Int. J. 2017, 20, 1075–1083. [Google Scholar] [CrossRef]
  43. Meng, X.; Liu, Y.; Fan, L.; Fan, J. YOLOv5s-Fog: An Improved Model Based on YOLOv5s for Object Detection in Foggy Weather Scenarios. Sensors 2023, 23, 5321. [Google Scholar] [CrossRef] [PubMed]
  44. Bijelic, M.; Gruber, T.; Ritter, W. A Benchmark for Lidar Sensors in Fog: Is Detection Breaking Down? In Proceedings of the 2018 IEEE Intelligent Vehicles Sympo-sium (IV), Changshu, China, 26–30 June 2018; pp. 760–767. [Google Scholar] [CrossRef]
  45. Kutila, M.; Pyykönen, P.; Holzhüter, H.; Colomb, M.; Duthon, P. Automotive LiDAR performance verification in fog and rain. In Proceedings of the 2018 21st International Conference on Intelligent Transportation Systems (ITSC), Maui, HI, USA, 4–7 November 2018; pp. 1695–1701. [Google Scholar] [CrossRef]
  46. Sindagi, V.A.; Zhou, Y.; Tuzel, O. MVX-Net: Multimodal VoxelNet for 3D Object Detection. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019; pp. 7276–7282. [Google Scholar] [CrossRef]
  47. Arage, A.; Steffens, W.M.; Kuehnle, G.; Jakoby, R. Effects of water and ice layer on automotive radar. In Proceedings of the German Microwave Conference, Citeseer, Taiwan, China, 8–11 October 2006. [Google Scholar]
  48. Vargas, J.; Alsweiss, S.; Toker, O.; Razdan, R.; Santos, J. An Overview of Autonomous Vehicles Sensors and Their Vulnerability to Weather Conditions. Sensors 2021, 21, 5397. [Google Scholar] [CrossRef] [PubMed]
  49. Herpel, T.; Lauer, C.; German, R.; Salzberger, J. Trade-off between coverage and robustness of automotive environment sensor systems. In Proceedings of the 2008 International Conference on Intelligent Sensors, Sensor Networks and Information Processing, Sydney, NSW, Australia, 15–18 December 2008; pp. 551–556. [Google Scholar] [CrossRef]
  50. Liu, Z.; Jiang, H.; Tan, H.; Zhao, F. An Overview of the Latest Progress and Core Challenge of Autonomous Vehicle Technologies. MATEC Web Conf. 2020, 308, 06002. [Google Scholar] [CrossRef]
  51. Do, T.H.; Yoo, M. Visible light communication based vehicle positioning using LED street light and rolling shutter CMOS sensors. Opt. Commun. 2018, 407, 112–126. [Google Scholar] [CrossRef]
  52. Horaud, R.; Hansard, M.; Evangelidis, G.; Ménier, C. An overview of depth cameras and range scanners based on time-of-flight technologies. Mach. Vis. Appl. 2016, 27, 1005–1020. [Google Scholar] [CrossRef]
  53. Endres, F.; Hess, J.; Sturm, J.; Cremers, D.; Burgard, W. 3-D Mapping With an RGB-D Camera. IEEE Trans. Robot. 2014, 30, 177–187. [Google Scholar] [CrossRef]
  54. Sivaraman, S.; Trivedi, M.M. Looking at Vehicles on the Road: A Survey of Vision-Based Vehicle Detection, Tracking, and Behavior Analysis. IEEE Trans. Intell. Transp. Syst. 2013, 14, 1773–1795. [Google Scholar] [CrossRef]
  55. Stallkamp, J.; Schlipsing, M.; Salmen, J.; Igel, C. Man vs. computer: Benchmarking machine learning algorithms for traffic sign recognition. Neural Netw. 2012, 32, 323–332. [Google Scholar] [CrossRef]
  56. Ciberlin, J.; Grbic, R.; Teslić, N.; Pilipović, M. Object detection and object tracking in front of the vehicle using front view camera. In Proceedings of the 2019 Zooming Innovation in Consumer Technologies Conference (ZINC), Novi Sad, Serbia, 29–30 May 2019; pp. 27–32. [Google Scholar] [CrossRef]
  57. Drulea, M.; Szakats, I.; Vatavu, A.; Nedevschi, S. Omnidirectional stereo vision using fisheye lenses. In Proceedings of the 2014 IEEE 10th International Conference on Intelligent Computer Communication and Processing (ICCP), Cluj-Napoca, Cluj, Romania, 4–6 September 2014; pp. 251–258. [Google Scholar] [CrossRef]
  58. Florea, H.; Petrovai, A.; Giosan, I.; Oniga, F.; Varga, R.; Nedevschi, S. Enhanced Perception for Autonomous Driving Using Semantic and Geometric Data Fusion. Sensors 2022, 22, 5061. [Google Scholar] [CrossRef]
  59. Ignatious, H.A.; Sayed, H.-E.; Khan, M. An overview of sensors in Autonomous Vehicles. Procedia Comput. Sci. 2022, 198, 736–741. [Google Scholar] [CrossRef]
  60. Li, Y.; Ibanez-Guzman, J. Lidar for Autonomous Driving: The Principles, Challenges, and Trends for Automotive Lidar and Perception Systems. IEEE Signal Process. Mag. 2020, 37, 50–61. [Google Scholar] [CrossRef]
  61. Raj, T.; Hashim, F.H.; Huddin, A.B.; Ibrahim, M.F.; Hussain, A. A Survey on LiDAR Scanning Mechanisms. Electronics 2020, 9, 741. [Google Scholar] [CrossRef]
  62. Zhang, F.; Yi, L.; Qu, X. Simultaneous measurements of velocity and distance via a dual-path FMCW lidar system. Opt. Commun. 2020, 474, 126066. [Google Scholar] [CrossRef]
  63. Muckenhuber, S.; Holzer, H.; Bockaj, Z. Automotive Lidar Modelling Approach Based on Material Properties and Lidar Capabilities. Sensors 2020, 20, 3309. [Google Scholar] [CrossRef]
  64. Kim, G.; Eom, J.; Park, Y. Investigation on the occurrence of mutual interference between pulsed terrestrial LIDAR scanners. In Proceedings of the 2015 IEEE Intelligent Vehicles Symposium (IV), Seoul, Korea, 28 June–1 July 2015; pp. 437–442. [Google Scholar] [CrossRef]
  65. Hwang, I.-P.; Yun, S.-J.; Lee, C.-H. Study on the Frequency-Modulated Continuous-Wave LiDAR Mutual Interference. In Proceedings of the 2019 IEEE 19th Interna-tional Conference on Communication Technology (ICCT), Xi’an, China, 16–19 October 2019; pp. 1053–1056. [Google Scholar] [CrossRef]
  66. Fersch, T.; Weigel, R.; Koelpin, A. A CDMA Modulation Technique for Automotive Time-of-Flight LiDAR Systems. IEEE Sensors J. 2017, 17, 3507–3516. [Google Scholar] [CrossRef]
  67. Hwang, I.-P.; Lee, C.-H. Mutual Interferences of a True-Random LiDAR With Other LiDAR Signals. IEEE Access 2020, 8, 124123–124133. [Google Scholar] [CrossRef]
  68. Yin, W.; He, W.; Gu, G.; Chen, Q. Approach for LIDAR signals with multiple returns. Appl. Opt. 2014, 53, 6963–6969. [Google Scholar] [CrossRef]
  69. Asmann, A.; Stewart, B.; Wallace, A.M. Deep Learning for LiDAR Waveforms with Multiple Returns. In Proceedings of the 2020 28th European Signal Processing Conference (EUSIPCO), Amsterdam, The Netherlands, 18–21 January 2021; pp. 1571–1575. [Google Scholar] [CrossRef]
  70. Heinzler, R.; Schindler, P.; Seekircher, J.; Ritter, W.; Stork, W. Weather Influence and Classification with Automotive Lidar Sensors. In Proceedings of the 2019 IEEE Intelligent Vehicles Sympo-sium (IV), Paris, France, 9–12 June 2019; pp. 1527–1534. [Google Scholar] [CrossRef]
  71. Hecht, J. Lidar for Self-Driving Cars. Opt. Photon- News 2018, 29, 26–33. [Google Scholar] [CrossRef]
  72. Velodyne’s Guide to Lidar Wavelengths, Velodyne Lidar. 2018. Available online: https://velodynelidar.com/blog/guide-to-lidar-wavelengths/ (accessed on 7 June 2022).
  73. McManamon, P.F.; Banks, P.S.; Beck, J.D.; Fried, D.G.; Huntington, A.S.; Watson, E.A. Comparison of flash lidar detector options. Opt. Eng. 2017, 56, 031223. [Google Scholar] [CrossRef]
  74. Li, N.; Ho, C.P.; Xue, J.; Lim, L.W.; Chen, G.; Fu, Y.H.; Lee, L.Y.T. A Progress Review on Solid-State LiDAR and Nanophotonics-Based LiDAR Sensors. Laser Photon- Rev. 2022, 16. [Google Scholar] [CrossRef]
  75. S3 Series, Quanergy. (n.d.). Available online: https://quanergy.com/products/s3/ (accessed on 14 June 2022).
  76. Waldschmidt, C.; Hasch, J.; Menzel, W. Automotive Radar—From First Efforts to Future Systems. IEEE J. Microw. 2021, 1, 135–148. [Google Scholar] [CrossRef]
  77. Norouzian, F.; Hoare, E.G.; Marchetti, E.; Cherniakov, M.; Gashinova, M. Next Generation, Low-THz Automotive Radar—The potential for frequencies above 100 GHz. In Proceedings of the 2019 20th In-ternational Radar Symposium (IRS), Ulm, Germany, 26–28 June 2019; pp. 1–7. [Google Scholar] [CrossRef]
  78. Martinez-Vazquez, M. Overview of design challenges for automotive radar MMICs. In Proceedings of the 2021 IEEE International Electron Devices Meeting (IEDM), San Francisco, CA, USA, 11–16 December 2021; pp. 4.1.1–4.1.3. [Google Scholar] [CrossRef]
  79. Yamano, S.; Higashida, H.; Shono, M.; Matsui, S.; Tamaki, T.; Yagi, H.; Asanuma, H. 76GHz millimeter wave automobile radar using single chip MMIC. Fujitsu Ten Technol. J. 2004, 23, 12–19. [Google Scholar]
  80. Ritter, P. Toward a fully integrated automotive radar system-on-chip in 22 nm FD-SOI CMOS. Int. J. Microw. Wirel. Technol. 2021, 13, 523–531. [Google Scholar] [CrossRef]
  81. Zhang, A.; Nowruzi, F.E.; Laganiere, R. RADDet: Range-Azimuth-Doppler based Radar Object Detection for Dynamic Road Users. In Proceedings of the IEEE Computer Society, Burnaby, BC, Canada, 26–28 May 2021; pp. 95–102. [Google Scholar] [CrossRef]
  82. Tong, Z.T.Z.; Reuter, R.; Fujimoto, M. Fast chirp FMCW radar in automotive applications. In Proceedings of the IET International Radar Conference 2015, Washington, DC, USA, 11–15 May 2015; pp. 1–4. [Google Scholar] [CrossRef]
  83. Bilik, I.; Bialer, O.; Villeval, S.; Sharifi, H.; Kona, K.; Pan, M.; Persechini, D.; Musni, M.; Geary, K. Automotive MIMO radar for urban environments. In Proceedings of the 2016 IEEE Radar Conference (RadarConf), Philadelphia, PA, USA, 2–6 May 2016; pp. 1–6. [Google Scholar] [CrossRef]
  84. Sit, Y.L.; Nguyen, T.T.; Sturm, C.; Zwick, T. 2D radar imaging with velocity estimation using a MIMO OFDM-based radar for automotive applications. In Proceedings of the 2013 European Radar Conference, Nuremberg, Germany, 5 October 2013; pp. 145–148. Available online: https://ieeexplore.ieee.org/abstract/document/6689134 (accessed on 19 October 2023).
  85. Vasanelli, C.; Batra, R.; Waldschmidt, C. Optimization of a MIMO radar antenna system for automotive applications. In Proceedings of the 2017 11th European Conference on Antennas and Propagation (EUCAP), Paris, France, 19–24 March 2017; pp. 1113–1117. [Google Scholar] [CrossRef]
  86. Stolz, M.; Wolf, M.; Meinl, F.; Kunert, M.; Menzel, W. A New Antenna Array and Signal Processing Concept for an Automotive 4D Radar. In Proceedings of the 2018 15th European Ra-dar Conference (EuRAD), Madrid, Spain, 26–28 September 2018; pp. 63–66. [Google Scholar] [CrossRef]
  87. Sun, S.; Zhang, Y.D. 4D Automotive Radar Sensing for Autonomous Vehicles: A Sparsity-Oriented Approach. IEEE J. Sel. Top. Signal Process. 2021, 15, 879–891. [Google Scholar] [CrossRef]
  88. Hakobyan, G.; Yang, B. High-Performance Automotive Radar: A Review of Signal Processing Algorithms and Modulation Schemes. IEEE Signal Process. Mag. 2019, 36, 32–44. [Google Scholar] [CrossRef]
  89. Hosur, P.; Shettar, R.B.; Potdar, M. Environmental awareness around vehicle using ultrasonic sensors. In Proceedings of the 2016 International Conference on Ad-vances in Computing, Communications and Informatics (ICACCI), Jaipur, India, 21–24 September 2016; pp. 1154–1159. [Google Scholar] [CrossRef]
  90. Rasshofer, R.H.; Gresser, K. Automotive Radar and Lidar Systems for Next Generation Driver Assistance Functions. Adv. Radio Sci. 2005, 3, 205–209. [Google Scholar] [CrossRef]
  91. Balasubramanian, A.B.; Sastry, K.V.; Magee, D.P.; Taylor, D.G. Transmitter and Receiver Enhancements for Ultrasonic Distance Sensing Systems. IEEE Sensors J. 2022, 22, 10692–10698. [Google Scholar] [CrossRef]
  92. Khan, J. Using ADAS sensors in implementation of novel automotive features for increased safety and guidance. In Proceedings of the 2016 3rd International Conference on Signal Processing and Integrated Networks (SPIN), Noida, India, 11–12 February 2016; IEEE: New York, NY, USA, 2016; pp. 753–758. [Google Scholar] [CrossRef]
  93. Tong, F.; Tso, S.; Xu, T. A high precision ultrasonic docking system used for automatic guided vehicle. Sensors Actuators A Phys. 2005, 118, 183–189. [Google Scholar] [CrossRef]
  94. Canali, C.; De Cicco, G.; Morten, B.; Prudenziati, M.; Taroni, A. A Temperature Compensated Ultrasonic Sensor Operating in Air for Distance and Proximity Measurements. IEEE Trans. Ind. Electron. 1982, IE-29, 336–341. [Google Scholar] [CrossRef]
  95. Xu, W.; Yan, C.; Jia, W.; Ji, X.; Liu, J. Analyzing and Enhancing the Security of Ultrasonic Sensors for Autonomous Vehicles. IEEE Internet Things J. 2018, 5, 5015–5029. [Google Scholar] [CrossRef]
  96. Toa, M.; Whitehead, A. Ultrasonic Sensing Basics; Texas Instruments: Dallas, TX, USA, 2020; pp. 53–75. [Google Scholar]
  97. Li, S.E.; Li, G.; Yu, J.; Liu, C.; Cheng, B.; Wang, J.; Li, K. Kalman filter-based tracking of moving objects using linear ultrasonic sensor array for road vehicles. Mech. Syst. Signal Process. 2018, 98, 173–189. [Google Scholar] [CrossRef]
  98. Imou, K.; Kaizu, Y.; Yokoyama, S.; Nakamura, T. Ultrasonic Doppler Speed Sensor for Agricultural Vehicles: Effects of Pitch Angle and Measurements of Velocity Vector Components. Agric. Eng. Int. CIGR J. 2008. Available online: https://cigrjournal.org/index.php/Ejounral/article/view/1232 (accessed on 24 September 2023).
  99. Gluck, T.; Kravchik, M.; Chocron, S.; Elovici, Y.; Shabtai, A. Spoofing Attack on Ultrasonic Distance Sensors Using a Continuous Signal. Sensors 2020, 20, 6157. [Google Scholar] [CrossRef]
  100. Lou, J.; Yan, Q.; Hui, Q.; Zeng, H. SoundFence: Securing Ultrasonic Sensors in Vehicles Using Physical-Layer Defense. In Proceedings of the 2021 18th Annual IEEE International Conference on Sensing, Communication, and Networking (SECON), Rome, Italy, 6–9 July 2021; pp. 1–9. [Google Scholar] [CrossRef]
  101. Joubert, N.; Reid, T.G.R.; Noble, F. Developments in Modern GNSS and Its Impact on Autonomous Vehicle Architectures. In Proceedings of the 2020 IEEE Intelligent Vehicles Symposium (IV), Las Vegas, NV, USA, 19 October–13 November 2020; pp. 2029–2036. [Google Scholar] [CrossRef]
  102. Bolla, P.; Borre, K. Performance analysis of dual-frequency receiver using combinations of GPS L1, L5, and L2 civil signals. J. Geodesy 2019, 93, 437–447. [Google Scholar] [CrossRef]
  103. Zhi, Z.; Liu, D.; Liu, L. A performance compensation method for GPS/INS integrated navigation system based on CNN–LSTM during GPS outages. Measurement 2022, 188, 110516. [Google Scholar] [CrossRef]
  104. Xiong, L.; Xia, X.; Lu, Y.; Liu, W.; Gao, L.; Song, S.; Yu, Z. IMU-Based Automated Vehicle Body Sideslip Angle and Attitude Estimation Aided by GNSS Using Parallel Adaptive Kalman Filters. IEEE Trans. Veh. Technol. 2020, 69, 10668–10680. [Google Scholar] [CrossRef]
  105. Liu, W.; Li, Z.; Sun, S.; Gupta, M.K.; Du, H.; Malekian, R.; Sotelo, M.A.; Li, W. Design a Novel Target to Improve Positioning Accuracy of Autonomous Vehicular Navigation System in GPS Denied Environments. IEEE Trans. Ind. Informatics 2021, 17, 7575–7588. [Google Scholar] [CrossRef]
  106. Chen, L.; Zheng, F.; Gong, X.; Jiang, X. GNSS High-Precision Augmentation for Autonomous Vehicles: Requirements, Solution, and Technical Challenges. Remote. Sens. 2023, 15, 1623. [Google Scholar] [CrossRef]
  107. Ma, Y.; Wang, Z.; Yang, H.; Yang, L. Artificial intelligence applications in the development of autonomous vehicles: A survey. IEEE/CAA J. Autom. Sin. 2020, 7, 315–329. [Google Scholar] [CrossRef]
  108. Pillmann, J.; Wietfeld, C.; Zarcula, A.; Raugust, T.; Alonso, D.C. Novel common vehicle information model (CVIM) for future automotive vehicle big data marketplaces. In Proceedings of the 2017 IEEE Intelligent Vehicles Symposium (IV), Los Angeles, CA, USA, 11–14 June 2017; pp. 1910–1915. [Google Scholar] [CrossRef]
  109. Chen, Q. Airborne lidar data processing and information extraction. Photogramm. Eng. Remote Sens. 2007, 73, 109. [Google Scholar]
  110. Isenburg, M. LASzip: Lossless compression of LiDAR data. Photogramm. Eng. Remote Sens. 2013, 79, 209–217. [Google Scholar] [CrossRef]
  111. Béjar-Martos, J.A.; Rueda-Ruiz, A.J.; Ogayar-Anguita, C.J.; Segura-Sánchez, R.J.; López-Ruiz, A. Strategies for the Storage of Large LiDAR Datasets—A Performance Comparison. Remote. Sens. 2022, 14, 2623. [Google Scholar] [CrossRef]
  112. Biasizzo, A.; Novak, F. Hardware Accelerated Compression of LIDAR Data Using FPGA Devices. Sensors 2013, 13, 6405–6422. [Google Scholar] [CrossRef] [PubMed]
  113. Zinner, H. Automotive Ethernet and SerDes in Competition. ATZelectronics Worldw. 2020, 15, 40–43. [Google Scholar] [CrossRef]
  114. Zaarane, A.; Slimani, I.; Al Okaishi, W.; Atouf, I.; Hamdoun, A. Distance measurement system for autonomous vehicles using stereo camera. Array 2020, 5, 100016. [Google Scholar] [CrossRef]
  115. Masoumian, A.; Rashwan, H.A.; Cristiano, J.; Asif, M.S.; Puig, D. Monocular Depth Estimation Using Deep Learning: A Review. Sensors 2022, 22, 5353. [Google Scholar] [CrossRef] [PubMed]
  116. Wu, W.; Zhang, L.; Wang, Y. A PVT-Robust Analog Baseband With DC Offset Cancellation for FMCW Automotive Radar. IEEE Access 2019, 7, 43249–43257. [Google Scholar] [CrossRef]
  117. He, H.; Li, Y.; Tan, J. Rotational Coordinate Transformation for Visual-Inertial Sensor Fusion. In Social Robotics; Agah, A., Cabibihan, J.-J., Howard, A.M., Salichs, M.A., He, H., Eds.; Springer International Publishing: Cham, Switzerland, 2016; pp. 431–440. [Google Scholar]
  118. Cao, J.; Song, C.; Song, S.; Xiao, F.; Peng, S. Lane Detection Algorithm for Intelligent Vehicles in Complex Road Conditions and Dynamic Environments. Sensors 2019, 19, 3166. [Google Scholar] [CrossRef]
  119. Saxena, A.; Prasad, M.; Gupta, A.; Bharill, N.; Patel, O.P.; Tiwari, A.; Er, M.J.; Ding, W.; Lin, C.-T. A review of clustering techniques and developments. Neurocomputing 2017, 267, 664–681. [Google Scholar] [CrossRef]
  120. Beltran, J.; Guindel, C.; Moreno, F.M.; Cruzado, D.; Garcia, F.; De La Escalera, A. BirdNet: A 3D Object Detection Framework from LiDAR Information. In Proceedings of the 2018 21st International Conference on Intelligent Transportation Systems (ITSC), Maui, HI, USA, 4–7 November 2018; pp. 3517–3523. [Google Scholar] [CrossRef]
  121. Cao, X.; Lan, J.; Li, X.R.; Liu, Y. Extended Object Tracking Using Automotive Radar. In Proceedings of the 2018 21st International Conference on Information Fu-sion (FUSION), Cambridge, UK, 10–13 July 2018; pp. 1–5. [Google Scholar] [CrossRef]
  122. Masmoudi, M.; Ghazzai, H.; Frikha, M.; Massoud, Y. Object Detection Learning Techniques for Autonomous Vehicle Applications. In Proceedings of the 2019 IEEE International Conference on Vehicular Electronics and Safety (ICVES), Cairo, Egypt, 4–6 September 2019; pp. 1–5. [Google Scholar] [CrossRef]
  123. Jahromi, B.S.; Tulabandhula, T.; Cetin, S. Real-Time Hybrid Multi-Sensor Fusion Framework for Perception in Autonomous Vehicles. Sensors 2019, 19, 4357. [Google Scholar] [CrossRef]
  124. Duan, S.; Shi, Q.; Wu, J. Multimodal Sensors and ML-Based Data Fusion for Advanced Robots. Adv. Intell. Syst. 2022, 4, 2200213. [Google Scholar] [CrossRef]
  125. Wang, L.; Zhang, X.; Li, J.; Xv, B.; Fu, R.; Chen, H.; Yang, L.; Jin, D.; Zhao, L. Multi-Modal and Multi-Scale Fusion 3D Object Detection of 4D Radar and LiDAR for Autonomous Driving. IEEE Trans. Veh. Technol. 2022, 72, 5628–5641. [Google Scholar] [CrossRef]
  126. Bijelic, M.; Gruber, T.; Mannan, F.; Kraus, F.; Ritter, W.; Dietmayer, K.; Heide, F. Seeing Through Fog Without Seeing Fog: Deep Multimodal Sensor Fusion in Unseen Adverse Weather. 2020, pp. 11682–11692. Available online: https://openaccess.thecvf.com/content_CVPR_2020/html/Bijelic_Seeing_Through_Fog_Without_Seeing_Fog_Deep_Multimodal_Sensor_Fusion_CVPR_2020_paper.html (accessed on 19 October 2023).
  127. Kocić, J.; Jovičić, N.; Drndarević, V. Sensors and Sensor Fusion in Autonomous Vehicles. In Proceedings of the 2018 26th Telecommunications Forum (TELFOR), Belgrade, Serbia, 20–21 November 2018; pp. 420–425. [Google Scholar] [CrossRef]
  128. Xia, X.; Meng, Z.; Han, X.; Li, H.; Tsukiji, T.; Xu, R.; Zheng, Z.; Ma, J. An automated driving systems data acquisition and analytics platform. Transp. Res. Part C Emerg. Technol. 2023, 151, 104120. [Google Scholar] [CrossRef]
  129. Liu, S.; Tang, J.; Zhang, Z.; Gaudiot, J.-L. Computer Architectures for Autonomous Driving. Computer 2017, 50, 18–25. [Google Scholar] [CrossRef]
  130. Castells-Rufas, D.; Ngo, V.; Borrego-Carazo, J.; Codina, M.; Sanchez, C.; Gil, D.; Carrabina, J. A Survey of FPGA-Based Vision Systems for Autonomous Cars. IEEE Access 2022, 10, 132525–132563. [Google Scholar] [CrossRef]
  131. Sajadi-Alamdari, S.A.; Voos, H.; Darouach, M. Ecological Advanced Driver Assistance System for Optimal Energy Management in Electric Vehicles. IEEE Intell. Transp. Syst. Mag. 2020, 12, 92–109. [Google Scholar] [CrossRef]
  132. Fleming, J.; Yan, X.; Allison, C.; Stanton, N.; Lot, R. Driver Modeling and Implementation of a Fuel-Saving ADAS. In Proceedings of the 2018 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Miyazaki, Japan, 7–10 October 2018; pp. 1233–1238. [Google Scholar] [CrossRef]
  133. Bautista-Montesano, R.; Galluzzi, R.; Mo, Z.; Fu, Y.; Bustamante-Bello, R.; Di, X. Longitudinal Control Strategy for Connected Electric Vehicle with Regenerative Braking in Eco-Approach and Departure. Appl. Sci. 2023, 13, 5089. [Google Scholar] [CrossRef]
  134. Taoudi, A.; Haque, M.S.; Luo, C.; Strzelec, A.; Follett, R.F. Design and Optimization of a Mild Hybrid Electric Vehicle with Energy-Efficient Longitudinal Control. SAE Int. J. Electrified Veh. 2021, 10, 55–78. [Google Scholar] [CrossRef]
  135. Eichenlaub, T.; Rinderknecht, S. Anticipatory Longitudinal Vehicle Control using a LSTM Prediction Model. In Proceedings of the 2021 IEEE International Intelligent Transportation Systems Conference (ITSC), Indianapolis, IN, USA, 19–22 September 2021; pp. 447–452. [Google Scholar] [CrossRef]
  136. Jin, Q.; Wu, G.; Boriboonsomsin, K.; Barth, M.J. Power-Based Optimal Longitudinal Control for a Connected Eco-Driving System. IEEE Trans. Intell. Transp. Syst. 2016, 17, 2900–2910. [Google Scholar] [CrossRef]
  137. Huang, Y.; Ng, E.C.; Zhou, J.L.; Surawski, N.C.; Chan, E.F.; Hong, G. Eco-driving technology for sustainable road transport: A review. Renew. Sustain. Energy Rev. 2018, 93, 596–609. [Google Scholar] [CrossRef]
  138. Fleming, J.; Midgley, W.J. Energy-efficient automated driving: Effect of a naturalistic eco-ACC on a following vehicle. In Proceedings of the 2023 IEEE International Conference on Mechatronics (ICM), Loughborough, UK, 15–17 March 2023; pp. 1–6. [Google Scholar] [CrossRef]
  139. Zhang, Y.; Zhang, Y.; Ai, Z.; Murphey, Y.L.; Zhang, J. Energy Optimal Control of Motor Drive System for Extending Ranges of Electric Vehicles. IEEE Trans. Ind. Electron. 2021, 68, 1728–1738. [Google Scholar] [CrossRef]
  140. Schmied, R.; Waschl, H.; Quirynen, R.; Diehl, M.; del Re, L. Nonlinear MPC for Emission Efficient Cooperative Adaptive Cruise Control. IFAC-PapersOnLine 2015, 48, 160–165. [Google Scholar] [CrossRef]
  141. Themann, P.; Zlocki, A.; Eckstein, L. Energy Efficient Control of Vehicle’s Longitudinal Dynamics Using V2X Communication. ATZ Worldw. 2014, 116, 36–41. [Google Scholar] [CrossRef]
  142. Gungor, O.E.; She, R.; Al-Qadi, I.L.; Ouyang, Y. One for all: Decentralized optimization of lateral position of autonomous trucks in a platoon to improve roadway infrastructure sustainability. Transp. Res. Part C Emerg. Technol. 2020, 120, 102783. [Google Scholar] [CrossRef]
  143. Liimatainen, H.; van Vliet, O.; Aplyn, D. The potential of electric trucks—An international commodity-level analysis. Appl. Energy 2019, 236, 804–814. [Google Scholar] [CrossRef]
Figure 1. Workflow of the statistical assessment of CAV energy consumption.
Figure 2. Scheme of CAV.
Figure 3. CAV architecture according to OODA loop.
Figure 4. AV and CAV sensors’ architecture.
Figure 5. Sensitivity of the camera specifics for object detection.
Figure 6. Example schematic of a LiDAR.
Figure 7. Example schematic of a FMCW radar layout.
Figure 8. Ultrasonic distance sensor schematic layout.
Figure 9. Example of CAV data-processing pipeline.
Figure 10. Comparison of different hardware platform performances in DL computation in int8 TOPS.
Figure 11. CAV hardware power consumption distribution.
Figure 12. (A) Distribution of energy consumption for EV and CAV. (B) Comparison of the consumption of EV and CAV.
Figure 13. Influence of the DAS energy saving and CAV hardware power consumption on CAV and EV energy consumption for two light-duty-vehicle classes.
Figure 14. Relative net CAV energy consumption for various vehicle classes.
Table 1. Main CAV sensors.
                 Active                      Passive
Exteroceptive    LiDAR, Ultrasonic, RADAR    Camera
Proprioceptive   -                           GNSS, IMU, Wheel encoders
Table 2. Example sensor data rate.
Sensor        Raw Data Rate [Mb/s]   Note
3D Lidar      ~1700                  14 M points/s, 16 bytes/point
2D Lidar      ~20                    165 k points/s, 16 bytes/point
Ultrasonic    ~3 × 10⁻⁴              20 Hz, 2 bytes/point
Radar         ~3 × 10⁻²              32 points/cycle, 20 Hz, 48 bit/point
4D Radar      ~0.3                   256 points/cycle, 20 Hz, 64 bit/point
Camera        ~3750                  2.6 MP, 45 fps, 32-bit, raw
Camera        ~960                   2.0 MP, 30 fps, 16-bit, raw
Table 3. Main specifications of various automotive communication protocols.
Protocol              Wires   Bandwidth        Max Length   Safety Critical   Application Examples
Automotive Ethernet   2       up to 10 Gbps    10–15 m      No                LiDAR, Radar
CAN                   2/4     up to 1 Mbps     40 m         Yes               Wide applications
CAN-FD                2       up to 5 Mbps     25 m         Yes               Electronic Control Units
LIN                   3       20 kbps          40 m         No                Body, Sensor, Mirrors
FlexRay               2/4     10 Mbps          22 m         Yes               x-by-wire, ADAS
PSI5                  2       189 kbps         12 m         Yes               Airbags, Ultrasonic
GMSL                  2       up to 12 Gbps    15 m         Yes               Camera
FPD-Link              2       4.16 Gbps        15 m         Yes               Camera
MOST                  2       up to 150 Mbps   -            No                Multimedia, infotainment
SENT                  3       333 kbps         5 m          Yes               Powertrain
Table 4. Distribution parameters found using the data acquired from the literature and adopted for the Monte Carlo simulation.
Distribution    Type      Parameters
P_camera        Normal    μ_camera = 4.25, σ_camera = 1.03
P_lidar         Normal    μ_lidar = 16.00, σ_lidar = 10.36
P_radar         Normal    μ_radar = 8.98, σ_radar = 7.28
P_ultrasonic    Normal    μ_ultrasonic = 1.95, σ_ultrasonic = 1.70
P_computing     Normal    μ_computing = 362, σ_computing = 267
n_camera        Poisson   λ_camera = 6
n_lidar         Poisson   λ_lidar = 4
n_radar         Poisson   λ_radar = 6
n_ultrasonic    Poisson   λ_ultrasonic = 1.1
n_computing     Poisson   λ_computing = 7
EC_EV           Burr      α_EC,EV = 135.05, c_EC,EV = 18.22, k_EC,EV = 0.39
ΔEC_CAV         Normal    μ_ΔEC,CAV = 15.82, σ_ΔEC,CAV = 3.94
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
