Article

Cyberattack Resilience of Autonomous Vehicle Sensor Systems: Evaluating RGB vs. Dynamic Vision Sensors in CARLA

1 Faculty of Computer Science, Electronics and Telecommunications, AGH University of Krakow, 30-059 Krakow, Poland
2 Academic Computer Centre AGH, AGH University of Krakow, 30-950 Krakow, Poland
* Authors to whom correspondence should be addressed.
These authors contributed equally to this work.
Appl. Sci. 2025, 15(13), 7493; https://doi.org/10.3390/app15137493
Submission received: 24 May 2025 / Revised: 23 June 2025 / Accepted: 30 June 2025 / Published: 3 July 2025
(This article belongs to the Special Issue Intelligent Autonomous Vehicles: Development and Challenges)

Abstract

Autonomous vehicles (AVs) rely on a heterogeneous sensor suite of RGB cameras, LiDAR, GPS/IMU, and emerging event-based dynamic vision sensors (DVS) to perceive and navigate complex environments. However, these sensors can be deceived by realistic cyberattacks, undermining safety. In this work, we systematically implement seven attack vectors in the CARLA simulator—salt-and-pepper noise, event flooding, depth map tampering, LiDAR phantom injection, GPS spoofing, denial of service, and steering bias control—and measure their impact on a state-of-the-art end-to-end driving agent. We then equip each sensor with tailored defenses (e.g., adaptive median filtering for RGB and spatial clustering for DVS) and integrate an unsupervised anomaly detector (EfficientAD from anomalib) trained exclusively on benign data. Our detector achieves clear separation between normal and attacked conditions (mean RGB anomaly scores of 0.00 vs. 0.38; DVS: 0.61 vs. 0.76), yielding over 95% detection accuracy with fewer than 5% false positives. Defense evaluations reveal that GPS spoofing is fully mitigated, whereas RGB- and depth-based attacks still induce 30–45% trajectory drift despite filtering. Notably, our research-focused evaluation of DVS sensors suggests potential intrinsic resilience advantages in high-dynamic-range scenarios, though their asynchronous output necessitates carefully tuned thresholds. These findings underscore the critical role of multi-modal anomaly detection and suggest that DVS sensors may offer greater intrinsic resilience in high-dynamic-range scenarios, indicating their potential to enhance AV cybersecurity when integrated with conventional sensors.

1. Introduction

Autonomous vehicles (AVs) have the potential to revolutionize transportation systems worldwide, promising to reduce traffic accidents by up to 90% and improve mobility for disabled individuals [1,2]. These vehicles rely on an array of sensors such as RGB cameras, LiDAR, radar, GPS, and the emerging event-based dynamic vision sensor (DVS) to perceive their surroundings and navigate autonomously [3,4,5,6]. However, the increasing complexity and connectivity of AV systems introduce significant cybersecurity risks that could be exploited by malicious actors [7,8,9,10,11]. The security of sensor systems is paramount, as they form the critical interface between the vehicle and its environment [12]. Recent research has demonstrated that adversaries can manipulate sensor inputs through various attack vectors, including adversarial machine learning [13,14], sensor spoofing [9,15], and denial-of-service attacks [8,16], posing serious safety threats to autonomous navigation systems.
Comprehensive security research has revealed multiple attack vectors targeting AV sensor systems [17,18]. Camera-based perception systems are vulnerable to adversarial examples that can cause misidentification of traffic signs or obstacles [13,19,20], with studies demonstrating that carefully crafted perturbations can achieve attack success rates exceeding 85% under real-world conditions [21]. LiDAR sensors face threats from spoofing attacks that inject false obstacles, leading to unnecessary emergency braking or collision avoidance maneuvers [9,22,23]. GPS systems are susceptible to spoofing attacks that can cause vehicles to deviate from their intended routes by manipulating position data [9,24,25]. Additionally, denial-of-service attacks targeting sensor communication channels can disrupt critical perception capabilities [8,26]. These vulnerabilities highlight the pressing need for comprehensive security analysis and robust defense strategies across all AV sensor modalities [8,9].
While traditional sensors such as RGB cameras and LiDAR have received significant attention regarding their security vulnerabilities [15,27], the integration of a dynamic vision sensor (DVS) into AVs introduces both new opportunities and challenges that remain largely unexplored from a cybersecurity perspective [28]. Unlike conventional frame-based cameras that capture images at fixed intervals, DVSs operate by asynchronously detecting pixel-level brightness changes with microsecond temporal resolution [29,30]. This event-driven approach provides significant advantages, including high dynamic range (> 120 dB), reduced motion blur, and lower data redundancy [31,32,33], making DVSs particularly suitable for high-speed driving scenarios and challenging weather conditions where traditional cameras fail [34,35]. However, their unique operating principles and asynchronous data streams raise critical security questions that have not been systematically addressed: How resilient are DVSs to adversarial attacks compared to traditional cameras? Can their event-based nature be exploited through novel attack vectors such as event flooding or temporal manipulation? Which defense mechanisms are most effective for securing event-based perception systems? Furthermore, how does the integration of a DVS alongside traditional sensors affect the overall security posture of autonomous vehicles?
This paper conducts a comprehensive analysis of cyberattacks targeting AV sensors, placing particular emphasis on evaluating the security implications of incorporating DVS technology into multi-modal sensor suites [36]. We systematically implement and evaluate seven distinct attack vectors: salt-and-pepper noise attacks on RGB cameras [37], event flooding attacks on DVSs, depth map tampering [38], LiDAR phantom object injection [22], GPS spoofing [24,39], denial-of-service attacks [26], and steering bias manipulation [11]. Our experimental methodology leverages the CARLA simulator (version 0.9.15) and the PCLA (Pretrained CARLA Leaderboard Agent) framework [40] to simulate and assess these attacks within photorealistic autonomous driving environments [41]. Building upon recent advances in DVS-based perception [42], we implement realistic attack parameters derived from existing literature and real-world attack scenarios [13,22]. The CARLA simulator provides validated sensor models and physics simulations that enable rigorous evaluation of attack impacts on autonomous driving performance [36].
Our study systematically examines autonomous agent behavior and sensor responses under various attack conditions, providing quantitative insights into the vulnerabilities and resilience characteristics of modern autonomous driving systems [43]. We implement tailored defense mechanisms for each sensor modality and develop a comprehensive anomaly detection system using the EfficientAD model from the anomalib library [44,45]. Our evaluation encompasses over 160 CARLA episodes across diverse driving scenarios, providing statistical analysis of attack impacts and defense effectiveness [8].
Important Note on DVS Integration: It is crucial to clarify that in our experimental setup, DVS cameras are used exclusively for research purposes and cybersecurity analysis. The DVS sensors are not integrated into the vehicle’s driving decision model, which relies primarily on RGB cameras and other traditional sensors. Our DVS security analysis provides insights into potential vulnerabilities and defense mechanisms for future integration scenarios, but it does not claim that DVS technology currently enhances the cybersecurity of the autonomous driving system under test.
The main contributions of this work are as follows:
  • We provide a systematic analysis of sensor vulnerabilities in autonomous vehicles, with particular focus on the emerging DVS technology and its unique security characteristics in research contexts.
  • We implement and evaluate realistic attack scenarios in the CARLA simulator with quantified parameters derived from existing literature, demonstrating measurable impacts on autonomous agent behavior and perception accuracy.
  • We develop an effective anomaly detection system using the EfficientAD model from the anomalib library that achieves >95% detection accuracy with <5% false positives when identifying sophisticated attacks on both RGB and DVS camera sensors.
  • We provide quantitative analysis of defense mechanism effectiveness, revealing that while GPS spoofing defenses achieve near-perfect mitigation, RGB and depth sensor defenses show limited effectiveness with 30–45% trajectory drift remaining.
  • We propose a comprehensive multi-layered security framework that combines sensor-specific defenses, anomaly detection, and graceful degradation strategies for resilient autonomous operation.

2. Related Work

The cybersecurity of autonomous vehicles (AVs) is a rapidly evolving field, with extensive research focused on identifying vulnerabilities and developing robust defense mechanisms. This section situates our work within the existing literature, focusing on three key areas: security for multi-sensor fusion systems, recent advances in anomaly detection [46], and the nascent field of dynamic vision sensor (DVS) security. By comparing our approach to recent studies, we highlight the novelty and specific contributions of our research.
Recent studies have increasingly focused on the vulnerabilities of multi-sensor fusion algorithms, which are critical for robust perception in AVs. For instance, recent work by Zhu et al. [47] investigates malicious attacks that specifically target the fusion mechanism, creating inconsistencies between LiDAR and camera data to deceive the perception system. Similarly, other research has explored how to generate “invisible” adversarial attacks that can simultaneously fool both camera and LiDAR systems [48]. While these studies provide crucial insights into the security of integrated systems, they primarily focus on attacking the central fusion logic. In contrast, our work provides a more granular analysis by evaluating the resilience of individual sensors before their data is fed into a fusion model. We systematically implement seven distinct attack vectors on a sensor-by-sensor basis, establishing a foundational understanding of how specific sensor modalities degrade. This approach is complementary and essential, as a resilient fusion system still depends on the integrity of its input data streams.
In parallel with security-focused research, recent work has explored the use of end-to-end reinforcement learning for developing AV control policies in simulation environments [49]. Sakhai and Wielgosz [50] demonstrated that escape maneuvers in urban driving scenarios can be learned using neural policy architectures trained entirely in CARLA. While their work focused on control under dynamic conditions, our study shifts attention to the sensor-level cybersecurity of perception inputs that feed into such learning-based agents.
The development of holistic security architectures, often leveraging artificial intelligence (AI) and blockchain, represents another significant research trend. Bendiab et al. [51] propose a comprehensive framework that uses AI for threat detection and blockchain for secure data exchange. Surveys on AI-based intrusion detection systems further highlight the popularity of machine learning for identifying anomalous vehicle behavior [52]. Our research builds upon this trend by implementing and evaluating a specific, state-of-the-art unsupervised anomaly detection model, EfficientAD, from the anomalib library [44]. A key differentiator of our study is that we train and test this model on both conventional RGB data and, uniquely, on DVS event streams. By demonstrating over 95% detection accuracy, we not only validate the model’s effectiveness but also provide a direct, quantitative comparison of its performance on these two distinct sensor types, which, to our knowledge, has not been systematically explored in this context.
Research into emerging sensors like DVS has gained significant traction, primarily for their performance advantages in challenging conditions. The work by Gehrig and Scaramuzza [32] highlights the benefits of event cameras for low-latency automotive vision, while other studies focus on their high dynamic range and reduced motion blur [28]. Sakhai et al. [53] further demonstrated the effectiveness of DVS in real-time pedestrian detection under adverse weather, reinforcing its practical value in safety-critical AV scenarios. However, the vast majority of DVS research is centered on perception algorithms and performance benchmarks rather than cybersecurity resilience. Our paper addresses this critical gap. We conduct one of the first systematic evaluations of DVS security within an AV context, implementing novel attack vectors like event flooding and directly comparing the sensor’s resilience to that of a traditional RGB camera under attack. Although our DVS integration is for research and analysis purposes and not part of the active driving model, this investigation provides crucial, forward-looking insights into the security profile of this promising technology.
In addressing the limitations inherent in simulation-based studies, we position our work as a necessary and responsible precursor to real-world implementation. The complexity, cost, and safety risks associated with testing cyberattacks on physical vehicles necessitate thorough validation in a controlled environment. By using the leading open-source CARLA simulator [36], we provide a replicable and quantifiable baseline for guiding future hardware-in-the-loop and on-road experiments, which must build on exactly this kind of foundational work. Our findings, particularly the limited effectiveness of RGB defenses (30–45% trajectory drift remaining) and the complete mitigation of GPS spoofing, offer a clear roadmap for prioritizing the development of real-world defense mechanisms. This work therefore serves as a critical stepping stone, providing the initial validation and security analysis required before these advanced sensor systems can be safely deployed on public roads.
In summary, this paper distinguishes itself from prior work through three main contributions. First, it provides the first systematic, comparative analysis of RGB and DVS sensor resilience in a unified cybersecurity context using the CARLA simulator. Second, it introduces and validates a tailored, high-performance anomaly detection system for these specific sensor streams, demonstrating its practical effectiveness. Finally, by quantifying the limitations of existing defenses, it provides a crucial, foundational dataset and analysis that directly informs the next stages of research and paves the way for the development of robust, real-world security frameworks.

3. Materials and Methods

This section describes the experimental setup, methodology, and implementation details used to evaluate the resilience of autonomous vehicles against cyberattacks, with a particular focus on Dynamic Vision Sensors (DVS) and RGB cameras. We provide comprehensive information to enable reproducibility of our experiments and findings.

3.1. Experimental Framework

3.1.1. CARLA Simulator

All experiments were conducted using the CARLA simulator (version 0.9.15), an open-source simulator for autonomous driving research that provides a realistic urban environment with detailed physics and sensor models. CARLA was chosen for its ability to accurately simulate various sensor modalities including RGB cameras, LiDAR, radar, and event-based sensors. The simulator was run on a Linux Ubuntu 22.04 system with CUDA-enabled GPUs to support the computational requirements of the autonomous driving agents and sensor processing pipelines [36].

3.1.2. PCLA Framework

We utilized the Pretrained CARLA Leaderboard Agent (PCLA) framework, which enables the deployment of autonomous driving agents from the CARLA Leaderboard onto vehicles within the simulator [40]. This framework provides several advantages for our experimental setup:
  • Ability to deploy multiple autonomous driving agents with different architectures and training paradigms
  • Independence from the Leaderboard codebase, allowing compatibility with the latest CARLA version
  • Support for multiple vehicles with different autonomous agents operating simultaneously
The PCLA framework includes nine different high-performing autonomous driving agents trained with 17 distinct training seeds, allowing us to evaluate the impact of cyberattacks across a diverse set of perception and control algorithms.

3.1.3. Autonomous Driving Agent

For our experiments, we focused exclusively on the following autonomous driving agent:
  • NEAT AIM-MT-2D: A neural attention-based end-to-end autonomous driving agent that processes RGB images and depth information to predict vehicle controls directly. We used the NEAT variant that incorporates depth information (neat_aim2ddepth), which enhances the agent’s ability to perceive the 3D structure of the environment and improves its performance in complex driving scenarios.
This agent represents an end-to-end learning approach to autonomous driving that directly maps sensor inputs to control commands, allowing us to assess the impact of attacks on a unified perception-control architecture.

3.2. Sensor Configuration

Our experimental setup incorporated a comprehensive sensor suite to evaluate the resilience of autonomous vehicles against various attack vectors:
  • RGB Cameras: Front-facing RGB camera with 800 × 600 resolution and a 100° field of view, mounted at position (1.3, 0.2, 2.3) relative to the vehicle center, providing visual input for the agent’s perception system.
  • Dynamic Vision Sensor (DVS): Event-based camera with 800 × 600 resolution and a 100° field of view, mounted at position (1.3, 0.0, 2.3), with positive and negative thresholds set to 0.3 to capture brightness changes in the scene with microsecond temporal resolution.
  • Depth Camera: Depth-sensing camera with 800 × 600 resolution and a 100° field of view, aligned with the RGB camera position, providing per-pixel distance measurements for 3D scene understanding.
  • LiDAR: A 64-channel LiDAR sensor with an 85 m range, 600,000 points per second, and a 10 Hz rotation frequency, mounted at position (0.0, 0.0, 2.5), providing detailed 3D point cloud data of the surrounding environment.
  • GPS and IMU: For localization and vehicle state estimation, enabling the agent to maintain awareness of its position and orientation within the environment.
It is important to note that in this experimental setup, the DVS cameras and LiDARs were primarily used for research purposes and data collection and were not directly integrated into the vehicle’s driving decision model.
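For concreteness, the following minimal sketch shows how such a suite can be attached through CARLA's Python API. It assumes an already spawned vehicle actor (`vehicle`) and a running server; blueprint and attribute names follow the CARLA 0.9.15 blueprint library.

```python
import carla

client = carla.Client("localhost", 2000)
world = client.get_world()
bp_lib = world.get_blueprint_library()

def camera_bp(sensor_type):
    # Shared 800x600 resolution and 100 degree FOV for all camera-type sensors
    bp = bp_lib.find(sensor_type)
    bp.set_attribute("image_size_x", "800")
    bp.set_attribute("image_size_y", "600")
    bp.set_attribute("fov", "100")
    return bp

rgb_bp = camera_bp("sensor.camera.rgb")
depth_bp = camera_bp("sensor.camera.depth")

dvs_bp = camera_bp("sensor.camera.dvs")
dvs_bp.set_attribute("positive_threshold", "0.3")  # brightness-change thresholds
dvs_bp.set_attribute("negative_threshold", "0.3")

lidar_bp = bp_lib.find("sensor.lidar.ray_cast")
lidar_bp.set_attribute("channels", "64")
lidar_bp.set_attribute("range", "85")
lidar_bp.set_attribute("points_per_second", "600000")
lidar_bp.set_attribute("rotation_frequency", "10")

# Mounting positions (meters, relative to the vehicle origin), as in Section 3.2
rgb_cam = world.spawn_actor(rgb_bp, carla.Transform(carla.Location(x=1.3, y=0.2, z=2.3)), attach_to=vehicle)
dvs_cam = world.spawn_actor(dvs_bp, carla.Transform(carla.Location(x=1.3, y=0.0, z=2.3)), attach_to=vehicle)
depth_cam = world.spawn_actor(depth_bp, carla.Transform(carla.Location(x=1.3, y=0.2, z=2.3)), attach_to=vehicle)
lidar = world.spawn_actor(lidar_bp, carla.Transform(carla.Location(x=0.0, y=0.0, z=2.5)), attach_to=vehicle)

# Synchronous mode at the agent's 10 Hz control frequency
settings = world.get_settings()
settings.synchronous_mode = True
settings.fixed_delta_seconds = 0.1
world.apply_settings(settings)
```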
To better illustrate the RGB sensor integration and data flow within our experimental framework, Figure 1 presents a comprehensive workflow diagram that demonstrates the sensor attachment process, data acquisition pipeline, and the subsequent processing stages. This workflow visualization provides crucial insight into how RGB sensor data is captured, processed, and utilized within the CARLA simulation environment, forming the foundation for our cybersecurity evaluation methodology.
Similarly, the dynamic vision sensor (DVS) follows a comparable workflow architecture to the RGB sensor system, with several key distinctions that reflect the unique characteristics of event-based vision technology. Figure 2 illustrates the DVS sensor attachment and data processing pipeline within our experimental framework.
The DVS data processing pipeline differs from the RGB pipeline in several critical aspects:
  • Event-based data capture generates asynchronous streams of brightness change events rather than synchronized frame sequences;
  • Event-to-image conversion transforms sparse temporal events into dense spatial representations for compatibility with existing vision processing frameworks (a code sketch follows below);
  • Specialized defense mechanisms employ spatial clustering algorithms to detect and filter noise events; and
  • Importantly, DVS data streams are utilized primarily for research analysis and anomaly detection rather than direct integration into the PCLA (Pretrained CARLA Leaderboard Agent) and vehicle control model.
This architectural decision allows us to evaluate the intrinsic security characteristics of event-based vision.
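For illustration, the sketch below shows one way to decode a CARLA DVS event packet and rasterize it into a dense polarity image, following the structured event dtype used in CARLA's sensor examples; the per-frame event count it returns is the quantity monitored by the flooding detector described in Section 3.4.1.

```python
import numpy as np

def dvs_callback(dvs_event_array):
    """Convert an asynchronous CARLA DVS event packet into a dense image."""
    events = np.frombuffer(
        dvs_event_array.raw_data,
        dtype=np.dtype([("x", np.uint16), ("y", np.uint16),
                        ("t", np.int64), ("pol", np.bool_)]),
    )
    img = np.zeros((dvs_event_array.height, dvs_event_array.width, 3), dtype=np.uint8)
    # Visualization convention: negative polarity marks channel 0, positive channel 2
    img[events["y"], events["x"], events["pol"].astype(int) * 2] = 255
    return img, len(events)  # the event count feeds the flooding detector
```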
Sensor data was collected at 10 Hz to match the control frequency of the autonomous driving agent. All sensor data was synchronized using the CARLA simulator’s internal clock to ensure temporal consistency across modalities. This comprehensive sensor configuration allowed us to evaluate the impact of attacks on individual sensors as well as the effectiveness of multi-sensor fusion for attack detection and mitigation.

3.3. Attack Implementation

We implemented and evaluated seven distinct cyberattacks targeting the sensor inputs of autonomous vehicles, with attack parameters carefully selected based on existing cybersecurity literature and real-world attack feasibility studies. Each attack represents a realistic threat vector that could be executed by adversaries with moderate technical capabilities:
  • RGB Camera Attacks, Salt-and-Pepper Noise (80% intensity): We implemented a high-density salt-and-pepper noise attack where random pixels were replaced with either black (0) or white (255) values with equal probability [37,54]. The 80% intensity parameter represents an extreme noise level commonly used in image denoising research to test algorithm robustness under severe corruption conditions. This attack simulates severe sensor interference from electromagnetic attacks or physical sensor tampering [15,27].
  • Dynamic Vision Sensor Attacks, Event Flooding (60% spurious events): We implemented an event flooding attack that injected spurious events with random positions and polarities into the DVS output stream [30,33]. The 60% flooding rate was selected to represent a significant degradation scenario that would challenge event-based processing algorithms while remaining within realistic attack capabilities [28,29,34]. This attack exploits the asynchronous nature of DVS devices and could be implemented through electromagnetic interference or direct sensor manipulation [35].
  • Depth Camera Attacks, Depth Map Tampering (5 patches, 50 × 50 pixels): We implemented a patch-based depth tampering attack that modified depth values in random regions of the depth map following established adversarial patch methodologies [38,55]. The attack parameters (5 patches of 50 × 50 pixels with depth offsets of 5–15 units) were selected to balance attack effectiveness with practical implementation constraints [22,56]. This creates false distance measurements that could cause collision avoidance systems to malfunction [9].
  • LiDAR Attacks, Phantom Object Injection (5 clusters, 100–500 points each): We implemented a phantom object injection attack that added false point clusters to the LiDAR point cloud at positions 5–15 m ahead of the vehicle [9,22]. The attack parameters (5 clusters with 100–500 points each) were designed to create detectable objects while remaining within the spoofing capabilities demonstrated in recent LiDAR security research [23,27,57]. This attack could be implemented using laser devices or reflective materials [15,22].
  • GPS Spoofing Attacks, Position Manipulation (±50 m deviation): We implemented a GPS spoofing attack that created discrepancies between actual and perceived vehicle positions [24,25]. The ±50 m deviation threshold represents a significant but realistic spoofing scenario that would noticeably impact navigation while remaining within documented GPS attack capabilities [9,58,59]. The attack maintains separate records of actual and spoofed positions to enable quantitative analysis of navigation impact.
  • Denial of Service (DoS) Attacks, Sensor Rate Limiting (50% packet loss): We implemented a DoS attack that restricted sensor update frequency to simulate network-based attacks on sensor communication channels [11]. The 50% packet loss rate represents severe network conditions that significantly impact autonomous vehicle performance, as demonstrated in recent automotive cybersecurity research [16]. This attack simulates jamming or network flooding scenarios [17,18].
  • Steering Bias Attacks, Control Manipulation (±15° systematic offset): We implemented a steering bias attack that introduced systematic errors into steering commands [11,16]. The ±15° bias parameter was selected to represent a significant control deviation that would be detectable by monitoring systems while demonstrating clear attack impact [17]. This attack simulates compromised control systems or actuator manipulation [18].
All attacks were implemented by intercepting and modifying the sensor data in the callback functions before it was processed by the autonomous driving agent. This approach simulates a realistic attack scenario where an adversary has gained access to the sensor data stream but not necessarily to the internal processing algorithms of the autonomous system.
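As a minimal sketch of this interception pattern, the code below injects the salt-and-pepper attack inside an RGB camera callback before the frame reaches the agent; `attack_active` and `forward_to_agent` are hypothetical hooks standing in for the experiment controller.

```python
import numpy as np

def salt_and_pepper(frame, intensity=0.8):
    """Replace `intensity` fraction of pixels with 0 or 255, with equal probability."""
    corrupted = frame.copy()
    mask = np.random.rand(*frame.shape[:2]) < intensity
    salt = np.random.rand(*frame.shape[:2]) < 0.5
    corrupted[mask & salt] = 255   # "salt" pixels
    corrupted[mask & ~salt] = 0    # "pepper" pixels
    return corrupted

def rgb_callback(image):
    # carla.Image delivers BGRA bytes; keep the first three channels
    frame = np.frombuffer(image.raw_data, dtype=np.uint8).reshape(
        (image.height, image.width, 4))[:, :, :3].copy()
    if attack_active:                       # toggled by the experiment controller
        frame = salt_and_pepper(frame, intensity=0.8)
    forward_to_agent(frame)                 # the agent only ever sees this stream
```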

3.4. Defense Integration

To protect against the implemented attacks, we developed and integrated a multi-layered defense system that combines sensor-specific filtering techniques with machine learning-based anomaly detection. Our defense architecture implements both traditional signal processing methods and deep learning-based anomaly detection to provide comprehensive protection against sensor attacks.

3.4.1. Traditional Defense Mechanisms

We implemented sensor-specific defense mechanisms in the sensor callback functions to provide real-time protection:
  • RGB Camera Defenses: We implemented a decision-based adaptive median filter that dynamically adjusted its kernel size (3 × 3 to 7 × 7) based on the detected noise level. The filter specifically targets salt-and-pepper noise by
    Identifying potential noise pixels (values of 0 or 255),
    Growing the filter window until valid median values are found, and
    Preserving edge details by only replacing confirmed noise pixels.
  • DVS Defense: We introduced a system to track event counts, setting a threshold of 5000 events per frame to identify flooding attacks, along with a dual-phase defense strategy (sketched in code after this list) composed of
    Spatial clustering analysis using KD-tree data structures to identify dense noise patterns and
    Morphological operations (dilation) for standard noise smoothing when clustering is not detected.
  • Depth Camera Defense: A multi-stage validation approach including the following:
    Range clipping to valid depth values (0–100 m),
    Gradient-based tampering detection using Sobel operators, and
    Adaptive Gaussian smoothing with intensity based on detected gradient magnitude.
  • LiDAR Defense: A point cloud filtering pipeline with
    Distance-based filtering (maximum range 50 m),
    Density-based clustering using KD-tree analysis to detect phantom objects, and
    Removal of points in clusters exceeding density thresholds.
  • GPS Spoofing Defense: Route consistency validation and velocity checks:
    Position validation against planned route (deviation threshold: 5 m),
    Velocity consistency checks (maximum realistic speed: 20 m/s), and
    Automatic reversion to true GPS coordinates when anomalies are detected.
  • Steering Bias Defense: Statistical anomaly detection for control commands:
    Maintains 50-tick steering command history,
    Z-score analysis for outlier detection (threshold: |Z| > 3), and
    Automatic correction to historical mean when a bias is detected.
  • DoS Defense: Sensor update monitoring and rate limiting:
    Timestamp tracking for all sensor updates,
    Rate limiting enforcement (maximum 20 Hz), and
    Buffering mechanisms for delayed sensor readings.
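The following is a minimal sketch of the clustering phase of the DVS defense referenced in the list above. It assumes events are given as an (N, 2) array of pixel coordinates per frame window; the radius and neighbour-count values are illustrative, and the morphological smoothing phase is omitted. The underlying heuristic is that injected events are spatially uniform, while genuine events cluster along scene edges and moving objects.

```python
import numpy as np
from scipy.spatial import cKDTree

EVENT_FLOOD_THRESHOLD = 5000  # events per frame, as in Section 3.4.1

def defend_dvs(events_xy):
    """events_xy: (N, 2) float array of event pixel coordinates for one frame window."""
    if len(events_xy) <= EVENT_FLOOD_THRESHOLD:
        return events_xy                      # no flooding suspected
    # Phase 1: spatial clustering via a KD-tree neighbour count
    tree = cKDTree(events_xy)
    neighbours = tree.query_ball_point(events_xy, r=2.0)
    counts = np.array([len(n) for n in neighbours])
    # Keep events with enough close neighbours (edges, moving objects);
    # drop isolated events, which are characteristic of random injection.
    return events_xy[counts >= 3]
```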

3.4.2. Machine Learning-Based Anomaly Detection

In addition to traditional filtering methods, we implemented a comprehensive anomaly detection system using the EfficientAD model from the anomalib library. This system provides an additional layer of security by learning normal sensor behavior patterns and detecting deviations that may indicate attacks.
The anomaly detection system operates on RGB and DVS sensor data, providing anomaly scores that can be used to identify potential attacks even when traditional filtering methods may not be sufficient. The system is trained exclusively on normal sensor data, allowing it to detect novel attack patterns without requiring prior knowledge of specific attack types.
These defense mechanisms work in conjunction to provide comprehensive protection against the implemented attack vectors while maintaining real-time performance requirements for autonomous driving applications.

3.5. Anomaly Detection System

We implemented a comprehensive anomaly detection system using the anomalib library to identify potential sensor attacks through machine learning-based analysis. The system uses the EfficientAD model, which demonstrated superior performance for detecting anomalies in autonomous driving sensor data.

3.5.1. EfficientAD Model Architecture

The EfficientAD model was selected as our primary anomaly detection approach due to its effectiveness in detecting sensor attacks without requiring prior knowledge of attack patterns. EfficientAD employs a student–teacher framework that identifies anomalies based on the discrepancy between teacher and student network predictions. The architecture consists of the following:
  • A teacher network that learns robust feature representations from normal sensor data,
  • A student network that attempts to replicate the teacher’s feature extraction capabilities, and
  • A discrepancy measurement module that quantifies differences between teacher and student outputs to generate anomaly scores.
This architecture is particularly suitable for sensor attack detection, as it can identify subtle deviations from normal sensor behavior patterns without requiring examples of attacks during the training phase.
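In simplified form, omitting EfficientAD's additional autoencoder branch and normalization details, the image-level anomaly score for an input $\mathbf{x}$ can be written as the mean squared discrepancy between teacher and student feature maps:

$$ s(\mathbf{x}) \;=\; \frac{1}{HW} \sum_{h=1}^{H} \sum_{w=1}^{W} \big\lVert T(\mathbf{x})_{h,w} - S(\mathbf{x})_{h,w} \big\rVert_2^2, $$

where $T(\mathbf{x})$ and $S(\mathbf{x})$ denote the teacher and student feature maps of spatial size $H \times W$. Because the student learns to mimic the teacher only on benign data, the discrepancy stays near zero for normal frames and grows under attack, which is the property exploited for detection.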

3.5.2. Training and Implementation

The anomaly detection system was implemented with the following methodology:
  • Data Collection and Organization: Sensor data was collected from multiple driving scenarios under normal conditions and organized into training and testing datasets with separate directories for normal and anomalous data.
  • Data Preprocessing: All sensor data underwent standardized preprocessing:
    Images resized to 128 × 128 pixels for computational efficiency,
    Normalization using ImageNet statistics (mean = [0.485, 0.456, 0.406], std = [0.229, 0.224, 0.225]), and
    LiDAR point clouds converted to 2D bird’s-eye-view images for compatibility with the image-based model.
  • Model Training Configuration:
    Training performed using PyTorch 1.13.1 with CUDA 11.7 acceleration
    Batch size of 1 with gradient accumulation over 4 batches for memory efficiency
    Training duration of 30 epochs with validation every epoch
    Separate models trained for each sensor type (RGB, DVS, depth, and LiDAR). The detailed training process and loss behavior for the DVS anomaly detection model are illustrated in Figure A1 in Appendix A.
  • Real-time Analysis: The trained models are used to analyze sensor data during autonomous driving episodes, generating anomaly scores that indicate the likelihood of sensor attacks. The system can process sensor data in real-time and provide continuous monitoring of sensor integrity.
The anomaly detection system operates independently of the traditional filtering defenses, providing an additional layer of security that can detect novel attack patterns and complement the rule-based defense mechanisms.
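For reference, a minimal training sketch using the anomalib Python API is given below. Class and argument names vary across anomalib releases; this sketch assumes the v1.x `Folder` datamodule and `Engine` interface, and the dataset paths are hypothetical. An analogous model is trained per sensor modality, with LiDAR point clouds first rasterized to bird's-eye-view images.

```python
from anomalib.data import Folder
from anomalib.models import EfficientAd
from anomalib.engine import Engine

# Benign-only frames in the normal directory; attacked frames appear only in the test split
datamodule = Folder(
    name="rgb_front",
    root="datasets/rgb_front",
    normal_dir="train/normal",
    abnormal_dir="test/attacked",
    train_batch_size=1,          # EfficientAD is conventionally trained with batch size 1
)

model = EfficientAd()
engine = Engine(max_epochs=30)   # 30 epochs, validated every epoch
engine.fit(model=model, datamodule=datamodule)

# Per-frame anomaly scores for held-out (possibly attacked) data
predictions = engine.predict(model=model, datamodule=datamodule)
```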

3.6. Evaluation Methodology

We evaluated both the impact of attacks on autonomous driving performance and the effectiveness of our defense mechanisms using a comprehensive set of metrics and visualization tools:

3.6.1. Attack Impact Assessment

The impact of attacks was measured using the following metrics:
  • Trajectory Analysis: We track and compare the vehicle’s actual trajectory against the planned route, calculating point-to-segment distances to measure route deviation (a minimal implementation sketch follows this list). This analysis is particularly important for GPS spoofing attacks where perceived and actual positions may differ significantly.
  • Control Stability: We analyze steering, throttle, and brake commands through detailed time series analysis. For steering bias attacks, we perform statistical analysis of steering angle distributions to detect anomalous patterns.
  • Sensor Performance Metrics:
    RGB Camera: Noise percentage measurements during salt-and-pepper attacks
    DVS: Event count tracking to detect abnormal spikes in event generation
    Depth Camera: Mean depth measurements to identify tampering
    LiDAR: Point cloud density analysis to detect phantom objects
  • Defense Effectiveness: Comparison of performance metrics with and without defensive measures enabled, including
    Adaptive median filtering for RGB noise,
    Spatial clustering analysis for DVS event flooding,
    Gradient-based analysis for depth tampering,
    Density-based filtering for LiDAR attacks, and
    Rate limiting for DoS attacks.
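The route-deviation metric referenced in the first item reduces to a standard point-to-polyline distance. A minimal sketch, assuming the planned route is available as a list of 2D numpy waypoints:

```python
import numpy as np

def point_to_segment(p, a, b):
    """Euclidean distance from point p to the segment a-b (2D numpy arrays)."""
    ab = b - a
    denom = float(np.dot(ab, ab))
    if denom == 0.0:
        return float(np.linalg.norm(p - a))          # degenerate segment
    t = np.clip(np.dot(p - a, ab) / denom, 0.0, 1.0)
    return float(np.linalg.norm(p - (a + t * ab)))

def route_deviation(position, route):
    """Minimum distance from the vehicle position to the planned polyline route."""
    return min(point_to_segment(position, route[i], route[i + 1])
               for i in range(len(route) - 1))
```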

3.6.2. Visualization and Analysis Tools

We developed several visualization tools to analyze attack impacts:
  • Combined Video Generation: Synchronized display of multiple sensor feeds (RGB, DVS, depth, and LiDAR) with attack state indicators
  • Trajectory Plots: Visualization of actual vs. perceived trajectories during GPS spoofing
  • Statistical Analysis: Histograms of steering distributions and sensor metrics under different attack conditions

3.6.3. Experimental Scenarios

We evaluated performance across diverse driving scenarios:
  • Urban Navigation: Complex urban environments with intersections, traffic lights, and other vehicles.
  • Highway Driving: High-speed scenarios with lane changes and overtaking maneuvers.
  • Dynamic Obstacles: Scenarios with pedestrians and other vehicles executing unpredictable maneuvers.
Each scenario was tested with and without attacks to establish baseline performance and measure the impact of different attack types and intensities.
All experiments were conducted using Python 3.7 with PyTorch 1.13.1 and the anomalib library. The CARLA simulator was run on a system with an Intel Core i9-10900K CPU @ 3.70 GHz, 62 GB RAM, and an NVIDIA GeForce RTX 3080 GPU with 10 GB VRAM. All equipment was sourced from x-kom.pl, Krakow, Poland.

4. Results

This section presents the results of our comprehensive evaluation of cyberattacks targeting autonomous vehicle sensors and the effectiveness of our proposed defense mechanisms, based on over 160 experimental episodes conducted in the CARLA simulator, with each episode lasting approximately 6 min across diverse scenarios. Our experimental design evaluated seven distinct attack vectors (salt-and-pepper noise, event flooding, depth map tampering, LiDAR phantom injection, GPS spoofing, denial of service, and steering bias control) as described in the Materials and Methods section. We organize our findings into several key areas: impact analysis of cyberattacks on autonomous driving performance, effectiveness evaluation of defense mechanisms, anomaly detection performance assessment, comparative analysis of RGB and DVS device security, and implications for autonomous vehicle security. Key performance indicators include route completion rates, which exceeded 90% under baseline conditions and remained above 50% under attack scenarios, along with trajectory deviation analysis.

4.1. Impact of Cyberattacks on Autonomous Driving Performance

We evaluated the resilience of autonomous vehicles to various attack vectors by analyzing their impact on driving performance, trajectory adherence, and sensor reliability. Our experiments revealed two distinct outcomes: scenarios where the vehicle completed the route despite attacks (albeit with performance degradation) and scenarios where attacks caused route abandonment or crashes.

4.1.1. Completed Routes Under Attack

In some scenarios, the autonomous vehicle demonstrated resilience by completing the designated route despite being subjected to cyberattacks. Episode 7 provides a notable example of this behavior, as illustrated in Figure 3.
While the vehicle successfully completed the route in Episode 7, the trajectory analysis reveals significant deviations from the optimal path during attacks. RGB camera attacks caused the most pronounced lateral deviations, with the vehicle weaving across lanes before correcting its course. Depth sensor attacks similarly disrupted the vehicle’s ability to maintain a smooth trajectory, though to a lesser extent than RGB attacks. These deviations highlight the vehicle’s compromised perception capabilities even when it manages to reach its destination.
Figure 4 shows the corresponding sensor outputs during normal operation and under different attack conditions for Episode 7. The RGB camera feed exhibits significant salt-and-pepper noise during attacks, severely degrading visual perception. Similarly, the depth sensor distorted distance measurements, potentially causing the autonomous system to misinterpret obstacle distances and road boundaries.
Episode 26 provides another example of route completion despite attacks, as shown in Figure 5. In this case, the vehicle deviated visibly from its intended route during RGB camera attacks but still successfully reached its destination. This highlights a nuanced vulnerability where the vehicle can complete its mission despite taking a potentially dangerous path along a simple pathway.

4.1.2. Route Failures and Crashes

In contrast to the resilience observed in Episodes 7 and 26, several scenarios demonstrated significant failures where attacks caused the vehicle to crash.
Episode 5 demonstrates an even more severe failure mode, where RGB camera attacks caused the vehicle to move erratically before ultimately crashing into a roadside fence. Figure 6 shows the vehicle’s trajectory during this episode, with the abrupt termination point indicating the crash location. The red box in the figure highlights the area where the significant deviation from the planned route occurred, illustrating the severity of the attack’s impact on vehicle navigation.
The sensor data from Episode 5 (Figure 7) reveals the severe degradation of perception capabilities that led to the crash due to the RGB salt-and-pepper noise attack that started in the second column. The RGB camera feed shows extreme noise corruption, while the fence that the vehicle ultimately collided with is visible in the sensor outputs. The red box in the figure highlights the fence structure after the collision, demonstrating the physical impact of the RGB attack on vehicle navigation.
The control inputs during Episode 5 (Figure 8) further illustrate the vehicle’s instability under attack. During RGB attacks, the control system exhibits erratic behavior, alternating between acceleration and full braking as it struggles to interpret the corrupted sensor data, especially during RGB salt-and-pepper noise attacks.
Similarly, Episode 27 demonstrates a multi-stage failure where the vehicle crashed into a traffic light in the intersection during the initial attack phase, mainly during the RGB salt-and-pepper noise attack. Although it temporarily regained its route, it crashed again during a subsequent attack even with defense mechanisms active, as shown in Figure 9 and Figure 10.

4.2. Effectiveness of Defense Mechanisms

Our proposed defense mechanisms aim to mitigate the impact of cyberattacks on the sensors of autonomous vehicles. We evaluated their effectiveness by comparing vehicle performance with and without defenses under identical attack conditions. Figure 11 illustrates the impact of our defense mechanisms on sensor data quality during attacks with defense mechanisms in Episode 7.
Figure 12 and Figure 13 provide a visual comparison of sensor outputs during attacks, demonstrating the effectiveness and limitations of our defense mechanisms. These comparisons show the sensor data quality with and without defenses enabled during identical attack scenarios.
The defense mechanisms demonstrated varying degrees of effectiveness depending on the attack type and intensity:
  • RGB Camera Attacks: Our anomaly detection model for RGB cameras successfully identified attacks on the camera feed, but the subsequent filtering techniques showed limited effectiveness against sophisticated attacks. While some noise reduction was achieved, the filtered images often remained significantly compromised, resulting in continued trajectory deviations even with defenses active.
  • Depth Sensor Attacks: The depth sensor anomaly detection system effectively detected anomalies in depth data, but defense mechanisms for depth sensor attacks demonstrated limited effectiveness against targeted attacks, with filtered depth maps still containing significant distortions that affected the vehicle’s perception capabilities.
  • GPS Spoofing: Our GPS defense mechanism successfully detected and mitigated GPS spoofing attacks by implementing plausibility checks based on route deviation and velocity consistency (a code sketch follows this list). The defense mechanism checks if the reported GPS position deviates more than 5.0 m from the planned route or if the calculated speed exceeds 20.0 m/s, reverting to the true transform when these thresholds are exceeded. This was our most effective defense, as it clearly identified spoofed GPS coordinates and prevented the vehicle from making unwarranted course corrections based on falsified location data.
  • Steering Bias: The steering defense mechanism uses statistical anomaly detection with z-score analysis on a rolling window of the last 50 steering commands. When a steering command’s z-score exceeds 3 (indicating a statistical outlier), the system reverts to the mean steering value from the recent history. This approach showed partial effectiveness in identifying malicious steering commands, but sophisticated attacks could still influence vehicle control in certain scenarios.
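The following compact sketch illustrates the GPS plausibility check referenced above, using the 5 m route-deviation and 20 m/s speed thresholds from Section 3.4.1; the nearest-waypoint distance here is a simplification of the full point-to-segment computation used in our trajectory analysis.

```python
import numpy as np

MAX_ROUTE_DEVIATION_M = 5.0
MAX_REALISTIC_SPEED_MS = 20.0

def gps_is_plausible(reported_pos, prev_trusted_pos, dt, route_waypoints):
    """Return False when a GPS fix strays from the route or implies an unrealistic speed."""
    deviation = min(np.linalg.norm(reported_pos - w) for w in route_waypoints)
    if deviation > MAX_ROUTE_DEVIATION_M:
        return False
    speed = np.linalg.norm(reported_pos - prev_trusted_pos) / max(dt, 1e-6)
    return speed <= MAX_REALISTIC_SPEED_MS

# When the check fails, the defense reverts to the last trusted transform
# instead of the spoofed coordinates.
```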
Our experiments revealed significant limitations in our current defense mechanisms for most sensor types except GPS. In Episode 27, Figure 9, the vehicle crashed during an attack even with defenses active, and across multiple episodes, the defenses for RGB and depth sensors failed to adequately protect the vehicle’s perception system. These findings highlight a critical vulnerability in current autonomous vehicle security approaches.
To address these limitations, we propose an enhanced anomaly detection system that can serve as a fallback mechanism when primary defenses fail. This system would
  • Continuously monitor all sensor inputs for anomalies using our trained detection models;
  • Reduce reliance on AI models for driving decisions when anomalies are detected;
  • Transfer driving control to human operators for a safer experience when sensor integrity is compromised; and
  • Alternatively, selectively shut down compromised sensors and cross-check with remaining functional sensors to maintain autonomous operation.
This approach acknowledges the reality that perfect defense against all attack vectors may not be achievable and instead focuses on rapid detection and appropriate fallback mechanisms to ensure safety. The implementation of such a system would be a promising direction for future research.

4.3. Anomaly Detection Performance

A critical component of our proposed enhanced defense framework is the anomaly detection system, which aims to identify malicious sensor manipulations in real-time and trigger appropriate fallback mechanisms. We evaluated the performance of our anomaly detection models for both RGB and DVS camera sensors using the anomalib library to assess their potential as a foundation for this enhanced security approach.

4.3.1. RGB Camera Anomaly Detection

The RGB camera anomaly detection model demonstrated excellent performance in distinguishing between normal operation and attack conditions. Figure 14 shows the anomaly scores for different operational states in Episode 1.
Statistical analysis of the RGB anomaly detection results revealed the following:
  • Normal Operation: Mean anomaly score of 0.000000 with a standard deviation of 0.000000, establishing a clear baseline for normal behavior.
  • RGB Noise Attacks: Mean anomaly scores of 0.378064 (with defense) and 0.389462 (without defense), both significantly higher than normal operation. This demonstrates the model’s ability to reliably detect RGB camera attacks regardless of whether defenses are active.
  • Our RGB camera anomaly detection model is specifically designed to detect anomalies in RGB camera data only and does not process data from other sensors.
These results demonstrate that our RGB anomaly detection model achieves high sensitivity to relevant attacks while maintaining specificity against unrelated attack vectors.

4.3.2. DVS Camera Anomaly Detection

The DVS camera anomaly detection model showed different characteristics compared to the RGB model, reflecting the unique properties of event-based vision. Figure 15 illustrates the anomaly scores for different operational states in Episode 5.
Statistical analysis of the DVS anomaly detection results revealed:
  • Normal Operation: Mean anomaly score of 0.608308 with a standard deviation of 0.121466, establishing a baseline that reflects the inherent variability of event-based vision data.
  • DVS Noise Attacks: Mean anomaly scores of 0.55 (with defense) and 0.75 (without defense). Interestingly, the score with defense is slightly lower than normal operation, while the score without defense is significantly higher.
  • It is important to note that our DVS anomaly detection model is specifically designed to detect anomalies in DVS data only and does not process data from other sensors. Additionally, DVS cameras are currently used for research purposes only and are not integrated into the vehicle’s driving decision model.
These results highlight the importance of sensor-specific anomaly detection in autonomous vehicles. The DVS anomaly detection model successfully identifies abnormal patterns in DVS data, while other sensor-specific models (such as the RGB anomaly detection model) are responsible for detecting attacks on their respective sensors. Each detection system is specialized and optimized for its particular sensor type, emphasizing the need for a multi-layered security approach.

4.4. Comparative Analysis of DVS and RGB Sensor Security

Our experiments provide valuable insights into the relative security characteristics of traditional RGB cameras and emerging DVS technology:
  • Research Context: It is important to emphasize that the DVS cameras in our study were used exclusively for research purposes and were not integrated into the vehicle’s driving decision model. The data collected from DVS cameras was analyzed separately from the main autonomous driving pipeline.
  • Defense Effectiveness: Our defense mechanisms were evaluated separately for each sensor type. RGB cameras, which are actively used in the driving model, required more aggressive filtering that sometimes reduced image quality. DVS defenses were studied in isolation as a research component.
  • Anomaly Detection: The anomaly detection models for both sensors were designed to work exclusively with their respective sensor data. The RGB model showed clearer separation between normal and attack conditions and directly impacted vehicle safety, while the DVS model’s patterns were analyzed purely for research insights.
  • Future Potential: While not currently used for driving decisions, the complementary nature of RGB and DVS cameras suggests that sensor fusion approaches could potentially enhance security in future implementations.
As shown in Table 1, these findings suggest that incorporating DVS technology alongside traditional sensors could enhance the security posture of autonomous vehicles, particularly in high-dynamic-range scenarios where RGB cameras are more likely to be compromised by environmental factors or deliberate attacks.

5. Discussion

Our comprehensive analysis of cyberattacks targeting autonomous vehicle sensors reveals critical insights into the security vulnerabilities and resilience mechanisms of these systems. In this section, we interpret our findings in the context of existing research and discuss their broader implications for autonomous vehicle security.

5.1. Implications for Autonomous Vehicle Security

The results of our experiments demonstrate that autonomous vehicles remain vulnerable to a range of sensor-based attacks despite advances in perception technologies. The successful completion of routes under attack conditions in Episodes 7 and 26, albeit with significant trajectory deviations, suggests that current autonomous driving systems possess a degree of inherent resilience. However, the catastrophic failures observed in Episodes 5 and 27, where vehicles crashed due to sensor manipulation, highlight the critical nature of these vulnerabilities [60]. An illustrative example of such a critical failure under an RGB attack is presented in Figure A2 in Appendix B.
Particularly concerning is our observation that defense mechanisms showed limited effectiveness against sophisticated attacks on RGB and depth sensors. While our GPS spoofing defenses performed well, the continued vulnerability of visual perception systems represents a significant security gap that malicious actors could exploit [47]. This suggests that the current approach of hardening individual sensors against attacks may be insufficient, and a more holistic security architecture is needed [51].

5.2. Significance of Anomaly Detection Approach

Our anomaly detection system demonstrates promising capabilities for identifying malicious sensor manipulations in real time. The RGB camera anomaly detection model achieved excellent discrimination between normal operation and attack conditions, with attack scenarios consistently producing anomaly scores significantly higher than baseline. This clear separation suggests that anomaly detection could serve as a reliable trigger for fallback mechanisms when primary defenses fail [7].
Interestingly, the DVS camera anomaly detection model exhibited different characteristics, reflecting the unique properties of event-based vision. The higher baseline variability in DVS anomaly scores indicates that event-based sensors may require more sophisticated detection algorithms that account for their asynchronous nature [28]. Nevertheless, the model successfully identified abnormal patterns during attacks, particularly when defenses were not active. Further details on the training process and the observed loss plateau indicating optimal understanding of the training data can be found in Appendix A.
These findings support the potential of anomaly detection as a foundational component of a multi-layered security approach [44]. By continuously monitoring sensor inputs for deviations from expected patterns, autonomous vehicles could rapidly identify potential attacks and implement appropriate countermeasures before safety is compromised.

5.3. Comparative Security of DVS and RGB Sensors

Our comparative analysis of DVS and RGB sensor security reveals important distinctions between these technologies. Traditional RGB cameras, while providing rich visual information, demonstrated significant vulnerability to noise-based attacks that severely degraded perception capabilities. In contrast, the event-based nature of DVS technology offers potential security advantages through its high dynamic range and reduced data redundancy [28].
The complementary characteristics of these sensor types suggest that a fusion approach could enhance overall system security [61]. RGB cameras excel in static scene understanding and color perception, while DVS sensors offer superior performance in high-dynamic-range scenarios and rapid motion detection. By integrating data from both sensor types and implementing cross-validation mechanisms, autonomous vehicles could potentially detect inconsistencies that indicate sensor manipulation [47].
However, it is important to note that our study used DVS cameras exclusively for research purposes, without integration into the vehicle’s driving decision model. Future work should explore the practical challenges of incorporating DVS technology into production autonomous vehicles, including calibration requirements, computational overhead, and integration with existing perception pipelines [62].

5.4. Limitations and Challenges

Our research has several limitations that should be acknowledged. First, all experiments were conducted in simulation using the CARLA environment, which may not fully capture the complexity and unpredictability of real-world driving scenarios. While CARLA provides realistic physics and sensor models, real-world implementation would face additional challenges such as sensor noise, environmental variability, and hardware constraints. To address the inherent limitations of simulation-based studies and bridge the gap towards real-world validation, we have procured an AWS DeepRacer agent and iniVation Dynamic Vision Sensors (DVS), detailed specifications of which are provided in Appendix C.
Second, our attack models represent a subset of possible attack vectors. More sophisticated attacks, such as adversarial machine learning techniques that target specific neural network vulnerabilities, were not fully explored in this study [48]. Such attacks could potentially bypass our anomaly detection systems by generating perturbations that appear normal while causing misclassification.
Third, our defense mechanisms showed limited effectiveness against certain attack types, particularly for RGB and depth sensors. This highlights the challenge of developing robust defenses that can maintain perception quality while mitigating attacks [62]. The trade-off between security and performance remains a significant challenge for autonomous vehicle designers [60].
Finally, the computational requirements of our anomaly detection system may present challenges for real-time implementation on resource-constrained vehicle platforms [52]. While our experiments used high-performance GPUs, production vehicles may have more limited computational capabilities, necessitating the optimization of detection algorithms.

5.5. Future Research Directions

Based on our findings, we identify several promising directions for future research:
  • Enhanced Anomaly Detection: Developing more sophisticated anomaly detection models that can identify subtle attacks while maintaining low false positive rates. This could include exploring deep learning architectures specifically designed for time series sensor data and incorporating contextual information from multiple sensors [45].
  • Sensor Fusion for Security: Investigating how data from complementary sensors (RGB, DVS, LiDAR, and radar) can be fused not only for improved perception but also as a security mechanism [61]. Inconsistencies between different sensor modalities could serve as indicators of potential attacks [47].
  • Graceful Degradation: Designing autonomous systems that can gracefully degrade performance when attacks are detected rather than failing catastrophically. This could involve developing fallback driving policies that rely on uncompromised sensors or implementing safe stop procedures [7].
  • Human-in-the-Loop Security: Exploring how human operators could be effectively integrated into the security framework, particularly for remote monitoring and intervention when anomalies are detected [60]. This raises important questions about interface design, situation awareness, and response time.
  • DVS Integration: Further investigating the potential of DVS technology for enhancing autonomous vehicle security, including developing specialized perception algorithms that leverage the unique properties of event-based vision [28] and exploring hybrid RGB–DVS architectures.
  • Standardized Security Evaluation: Developing standardized benchmarks and evaluation methodologies for assessing the security of autonomous vehicle sensor systems, enabling more systematic comparison of different defense approaches [62].
Addressing these research directions will require interdisciplinary collaboration between experts in computer vision, cybersecurity, autonomous systems, and human factors [51]. By advancing our understanding of sensor vulnerabilities and developing more robust defense mechanisms, we can enhance the security and trustworthiness of autonomous vehicles as they transition from research laboratories to public roads.

Author Contributions

Conceptualization, M.S.; methodology, M.S., K.S., and M.K.S.O.; software, M.S., K.S., and M.K.S.O.; validation, M.S., K.S., and M.K.S.O.; formal analysis, M.S., K.S., and M.K.S.O.; investigation, M.S. and K.S.; resources, M.S.; data curation, K.S. and M.K.S.O.; writing—original draft preparation, M.S., K.S., and M.K.S.O.; writing—review and editing, M.S. and M.W.; visualization, M.S. and K.S.; supervision, M.S. and M.W.; project administration, M.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The code and data presented in this study are openly available in the Robust-Sensor-Cybersecurity-for-Autonomous-Vehicles repository at https://github.com/MustafaSakhai/Robust-Sensor-Cybersecurity-for-Autonomous-Vehicles (accessed on 29 June 2025).

Acknowledgments

The authors would like to thank all individuals and institutions who supported this work indirectly.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AV	Autonomous Vehicle
CUDA	Compute Unified Device Architecture
DoS	Denial of Service
DVS	Dynamic Vision Sensors
EfficientAd	Efficient Anomaly Detection
GPU	Graphics Processing Unit
GPS	Global Positioning System
IMU	Inertial Measurement Unit
LiDAR	Light Detection and Ranging
NEAT	Neural Attention
PCLA	Pretrained CARLA Leaderboard Agent
RGB	Red Green Blue
VRAM	Video Random Access Memory

Appendix A. Training Process for DVS Anomaly Detection Model

The unsupervised anomaly detection model for the dynamic vision sensor (DVS), based on the EfficientAD architecture from the anomalib library [44], was trained exclusively on benign data collected from normal driving scenarios within the CARLA simulator. The training objective is for the model to learn a compact, normative representation of event-based data streams under non-attack conditions. Deviations from this learned representation are then flagged as anomalies.
The initial model was trained for 40 epochs, as illustrated in Figure A1. A key observation from the training process is the behavior of the loss function over time. The training loss rapidly decreases during the initial epochs, indicating that the model is effectively learning the underlying patterns of normal DVS event data. However, after approximately 30 epochs, the loss begins to plateau and then slightly increase. This trend suggests that the model has reached its optimal understanding of the training data and continuing the training process further could lead to overfitting, where the model starts to memorize the specific training examples rather than generalizing the patterns of normal behavior.
This observation is critical for tuning the model effectively. To prevent overfitting and ensure the model remains generalized for detecting a wide range of unforeseen anomalies, training was concluded after this optimal point was identified. This ensures that the resulting model is both sensitive to genuine attacks and robust against the inherent variability of normal DVS data streams, thereby minimizing false positives while maintaining high detection accuracy.
Figure A1. Training loss for the initial DVS front sensor anomaly detection model over 40 epochs. The loss decreases significantly as the model learns normative data patterns, but begins to rise after approximately 30 epochs, indicating the onset of overfitting.

Appendix B. Illustrative Example of Critical Failure Under RGB Attack

To underscore the potential severity of sensor-based cyberattacks, we present a critical failure instance from one of our simulation runs. Figure A2 captures a third-person perspective of the ego-vehicle moments before a collision during a high-intensity (80%) salt-and-pepper noise attack on its front-facing RGB camera. In this scenario, the perception system of the autonomous agent is completely compromised by the noise, rendering it unable to process the visual information from the intersection ahead. As a result, the agent proceeds erratically into the intersection, veering into the lane of oncoming vehicles waiting on the opposite side and making a collision imminent.
This frame vividly illustrates the catastrophic consequences that can arise from a targeted sensor attack and highlights a worst-case scenario where the vehicle not only fails its navigation task but becomes a significant safety hazard. This example reinforces the central argument for the necessity of comprehensive, simulation-based security evaluations. Identifying such critical failure modes in a controlled, virtual environment is an indispensable step before any system can be considered for testing in real-world scenarios, where the safety stakes are immeasurably higher.
Figure A2. A critical failure moment captured during an RGB salt-and-pepper noise attack. The ego-vehicle (blue) enters an intersection and veers into the opposing lane of traffic due to compromised visual sensors, leading to an imminent collision with vehicles on the opposite side. This highlights the severe safety risks of sensor attacks.

Appendix C. DeepRacer Agent Specifications for Hardware-in-the-Loop Validation

To facilitate the crucial transition from simulation-based validation to real-world testing and ultimately to full-scale autonomous vehicles, our research initiative has acquired an AWS DeepRacer agent. This acquisition marks a significant step towards a phased validation approach, ensuring that our findings regarding cyberattack resilience, operational integrity, and safety protocols can be rigorously tested on a tangible, miniature autonomous platform. The DeepRacer will serve as an intermediate hardware-in-the-loop stage in future research, allowing us to evaluate the practical implications of our defense mechanisms and anomaly detection systems in a controlled physical environment before deployment in full-scale vehicles. Furthermore, to enhance our real-world experimentation capabilities, we are also acquiring iniVation DAVIS event cameras (DAVIS 240, 240 × 180 resolution; DAVIS 346, 346 × 260 resolution), which will be mounted on the DeepRacer agent.
Figure A3. Real-life photographs of the AWS DeepRacer agent acquired for hardware-in-the-loop validation, showcasing its compact design and integrated sensors.
The detailed specifications of the AWS DeepRacer agent are as follows:
  • Chassis: 1/18th scale 4WD monster truck chassis
  • CPU: Intel Atom Processor
  • Memory: 4 GB RAM
  • Storage: 32 GB (expandable via USB)
  • Wi-Fi: 802.11ac
  • Camera: 2 × 4 MP cameras with MJPEG encoding
  • LiDAR: 360-degree 12-meter scanning radius LiDAR sensor
  • Software: Ubuntu OS 16.04.3 LTS, Intel® OpenVINO toolkit, ROS Kinetic
  • Drive Battery: 7.4 V/1100 mAh lithium polymer
  • Compute Battery: 13,600 mAh USB-C PD
  • Ports: 4× USB-A, 1× USB-C, 1× Micro-USB, 1× HDMI
  • Integrated Sensors: Accelerometer and Gyroscope
This platform provides a robust environment to bridge the gap between our extensive CARLA simulator experiments and future real-world deployments, allowing for a comprehensive assessment of our proposed solutions’ efficacy under physical conditions. All equipment was sourced from Amazon.com, USA (https://aws.amazon.com/deepracer/, accessed on 29 June 2025).

References

  1. Litman, T. Autonomous Vehicle Implementation Predictions: Implications for Transport Planning; Victoria Transport Policy Institute: Victoria, BC, Canada, 2020. [Google Scholar]
  2. Fagnant, D.J.; Kockelman, K.M. Preparing a Nation for Autonomous Vehicles: Opportunities, Barriers and Policy Recommendations. Transp. Res. Part A Policy Pract. 2015, 77, 167–181. [Google Scholar] [CrossRef]
  3. Sakhai, M.; Sithu, K.; Oke, M.K.S.; Mazurek, S.; Wielgosz, M. Evaluating Synthetic vs. Real Dynamic Vision Sensor Data for SNN-Based Object Classification. In Proceedings of the KU KDM 2025 Conference, Zakopane, Poland, 2–5 April 2025. [Google Scholar]
  4. Yurtsever, E.; Lambert, J.; Carballo, A.; Takeda, K. A Survey of Autonomous Driving: Common Practices and Emerging Technologies. IEEE Access 2020, 8, 58443–58469. [Google Scholar] [CrossRef]
  5. Janai, J.; Güney, F.; Behl, A.; Geiger, A. Computer Vision for Autonomous Vehicles: Problems, Datasets and State of the Art. Found. Trends Comput. Graph. Vis. 2017, 12, 1–308. [Google Scholar] [CrossRef]
  6. Rajamani, R. Vehicle Dynamics and Control, 2nd ed.; Springer: New York, NY, USA, 2011. [Google Scholar]
  7. Giannaros, A.; Karras, A.; Theodorakopoulos, L.; Karras, C.; Kranias, P.; Schizas, N.; Kalogeratos, G.; Tsolis, D. Autonomous Vehicles: Sophisticated Attacks, Safety Issues, Challenges, Open Topics, Blockchain, and Future Directions. J. Cybersecur. Priv. 2023, 3, 493–543. [Google Scholar] [CrossRef]
  8. Hussain, M.; Hong, J.-E. Reconstruction-Based Adversarial Attack Detection in Vision-Based Autonomous Driving Systems. Mach. Learn. Knowl. Extr. 2023, 5, 1589–1611. [Google Scholar] [CrossRef]
  9. Petit, J.; Shladover, S.E. Potential Cyberattacks on Automated Vehicles. IEEE Trans. Intell. Transp. Syst. 2014, 16, 546–556. [Google Scholar] [CrossRef]
  10. Parkinson, S.; Ward, P.; Wilson, K.; Miller, J. Cyber Threats Facing Autonomous and Connected Vehicles: Future Challenges. IEEE Trans. Intell. Transp. Syst. 2017, 18, 2898–2915. [Google Scholar] [CrossRef]
  11. Checkoway, S.; McCoy, D.; Kantor, B.; Anderson, D.; Shacham, H.; Savage, S.; Koscher, K.; Czeskis, A.; Roesner, F.; Kohno, T. Comprehensive Experimental Analyses of Automotive Attack Surfaces. In Proceedings of the 20th USENIX Security Symposium, San Francisco, CA, USA, 8–12 August 2011. [Google Scholar]
  12. Cui, J.; Liew, L.S.; Sabaliauskaite, G.; Zhou, F. A Review on Safety Failures, Security Attacks, and Available Countermeasures for Autonomous Vehicles. Ad Hoc Netw. 2019, 90, 101823. [Google Scholar] [CrossRef]
  13. Eykholt, K.; Evtimov, I.; Fernandes, E.; Li, B.; Rahmati, A.; Xiao, C.; Prakash, A.; Kohno, T.; Song, D. Robust Physical-World Attacks on Deep Learning Visual Classification. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 1625–1634. [Google Scholar] [CrossRef]
  14. Goodfellow, I.J.; Shlens, J.; Szegedy, C. Explaining and Harnessing Adversarial Examples. arXiv 2014, arXiv:1412.6572. [Google Scholar]
  15. Yan, C.; Xu, W.; Liu, J. Can You Trust Autonomous Vehicles: Contactless Attacks against Sensors of Self-Driving Vehicle. In Proceedings of the 24th USENIX Security Symposium, Austin, TX, USA, 13 August 2016. [Google Scholar]
  16. Koscher, K.; Czeskis, A.; Roesner, F.; Patel, S.; Kohno, T.; Checkoway, S.; McCoy, D.; Kantor, B.; Anderson, D.; Shacham, H.; et al. Experimental Security Analysis of a Modern Automobile. In Proceedings of the 2010 IEEE Symposium on Security and Privacy, Oakland, CA, USA, 16–19 May 2010; pp. 447–462. [Google Scholar] [CrossRef]
  17. Miller, C.; Valasek, C. Remote Exploitation of an Unaltered Passenger Vehicle. In Proceedings of the Black Hat USA, Las Vegas, NV, USA, 1–6 August 2015. [Google Scholar]
  18. Woo, S.; Jo, H.J.; Lee, D.H. A Practical Wireless Attack on the Connected Car and Security Protocol for In-Vehicle CAN. IEEE Trans. Intell. Transp. Syst. 2015, 16, 993–1006. [Google Scholar] [CrossRef]
  19. Szegedy, C.; Zaremba, W.; Sutskever, I.; Bruna, J.; Erhan, D.; Goodfellow, I.; Fergus, R. Intriguing Properties of Neural Networks. arXiv 2013, arXiv:1312.6199. [Google Scholar]
  20. Lin, P.; Javanmardi, E.; Nakazato, J.; Tsukada, M. Potential Field-based Path Planning with Interactive Speed Optimization for Autonomous Vehicles. arXiv 2023, arXiv:2306.06987. [Google Scholar] [CrossRef]
  21. Kurakin, A.; Goodfellow, I.; Bengio, S. Adversarial Examples in the Physical World. arXiv 2016, arXiv:1607.02533. [Google Scholar]
  22. Cao, Y.; Jia, J.; Cong, G.; Na, M.; Xu, W. Adversarial Sensor Attack on LiDAR-Based Perception in Autonomous Driving. In Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security, London, UK, 11–15 November 2019; pp. 2267–2281. [Google Scholar] [CrossRef]
  23. Sun, J.; Cao, Y.; Chen, Q.; Mao, Z.M. Towards Robust LiDAR-Based Perception in Autonomous Driving: General Black-Box Adversarial Sensor Attack and Countermeasures. In Proceedings of the 29th USENIX Security Symposium, Virtual, 12–14 August 2020; pp. 877–894. [Google Scholar]
  24. Humphreys, T.E.; Ledvina, B.M.; Psiaki, M.L.; O’Hanlon, B.W.; Kintner, P.M., Jr. Assessing the Spoofing Threat: Development of a Portable GPS Civilian Spoofer. In Proceedings of the 21st International Technical Meeting of the Satellite Division of The Institute of Navigation, Savannah, GA, USA, 16–19 September 2008; pp. 2314–2325. [Google Scholar]
  25. Tippenhauer, N.O.; Pöpper, C.; Rasmussen, K.B.; Capkun, S. On the Requirements for Successful GPS Spoofing Attacks. In Proceedings of the 18th ACM Conference on Computer and Communications Security, Chicago, IL, USA, 17–21 October 2011; pp. 75–86. [Google Scholar] [CrossRef]
  26. Studnia, I.; Nicomette, V.; Alata, E.; Deswarte, Y.; Kaâniche, M.; Laarouchi, Y. Survey on Security Threats and Protection Mechanisms in Embedded Automotive Networks. In Proceedings of the 43rd Annual IEEE/IFIP Conference on Dependable Systems and Networks Workshop, Budapest, Hungary, 24–27 June 2013; pp. 1–8. [Google Scholar] [CrossRef]
  27. Nassi, B.; Nassi, D.; Ben-Netanel, R.; Mirsky, Y.; Drokin, O.; Elovici, Y. Phantom of the ADAS: Securing Advanced Driver-Assistance Systems from Split-Second Phantom Attacks. In Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security, London, UK, 11–15 November 2019; pp. 293–308. [Google Scholar] [CrossRef]
  28. Gallego, G.; Delbruck, T.; Orchard, G.; Bartolozzi, C.; Taba, B.; Censi, A.; Leutenegger, S.; Davison, A.J.; Conradt, J.; Daniilidis, K.; et al. Event-Based Vision: A Survey. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 44, 154–180. [Google Scholar] [CrossRef]
  29. Wang, Y.; Du, B.; Shen, Y.; Wu, K.; Zhao, G.; Sun, J.; Wen, H. EV-Gait: Event-Based Robust Gait Recognition Using Dynamic Vision Sensors. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 6351–6360. [Google Scholar] [CrossRef]
  30. Lichtsteiner, P.; Posch, C.; Delbruck, T. A 128×128 120 dB 15 μs Latency Asynchronous Temporal Contrast Vision Sensor. IEEE J. Solid-State Circuits 2008, 43, 566–576. [Google Scholar] [CrossRef]
  31. Suh, Y.; Choi, S.; Kim, J.; Kim, H.; Lee, J.; Kim, S.; Lee, J.; Kim, J. A 1280 × 960 Dynamic Vision Sensor with a 4.95-μm Pixel Pitch and Motion Artifact Minimization. In Proceedings of the 2020 IEEE International Symposium on Circuits and Systems, Seville, Spain, 10–21 October 2020; pp. 1–5. [Google Scholar] [CrossRef]
  32. Gehrig, D.; Scaramuzza, D. Low-Latency Automotive Vision with Event Cameras. Nature 2024, 629, 1034–1040. [Google Scholar] [CrossRef]
  33. Brandli, C.; Berner, R.; Yang, M.; Liu, S.-C.; Delbruck, T. A 240 × 180 130 dB 3 μs Latency Global Shutter Spatiotemporal Vision Sensor. IEEE J. Solid-State Circuits 2014, 49, 2333–2341. [Google Scholar] [CrossRef]
  34. Rebecq, H.; Ranftl, R.; Koltun, V.; Scaramuzza, D. Events-to-Video: Bringing Modern Computer Vision to Event Cameras. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 3857–3866. [Google Scholar] [CrossRef]
  35. Mueggler, E.; Rebecq, H.; Gallego, G.; Delbruck, T.; Scaramuzza, D. The Event-Camera Dataset and Simulator: Event-Based Data for Pose Estimation, Visual Odometry, and SLAM. Int. J. Robot. Res. 2017, 36, 142–149. [Google Scholar] [CrossRef]
  36. Dosovitskiy, A.; Ros, G.; Codevilla, F.; Lopez, A.; Koltun, V. CARLA: An Open Urban Driving Simulator. In Proceedings of the 1st Annual Conference on Robot Learning, Mountain View, CA, USA, 13–15 November 2017; pp. 1–16. [Google Scholar]
  37. Gonzalez, R.C.; Woods, R.E. Digital Image Processing, 3rd ed.; Prentice Hall: Upper Saddle River, NJ, USA, 2008. [Google Scholar]
  38. Carlini, N.; Wagner, D. Towards Evaluating the Robustness of Neural Networks. In Proceedings of the 2017 IEEE Symposium on Security and Privacy, San Jose, CA, USA, 22–24 May 2017; pp. 39–57. [Google Scholar] [CrossRef]
  39. Jafarnia-Jahromi, A.; Broumandan, A.; Nielsen, J.; Lachapelle, G. GPS Vulnerability to Spoofing Threats and a Review of Antispoofing Techniques. Int. J. Navig. Obs. 2012, 2012, 127072. [Google Scholar] [CrossRef]
  40. Tehrani, M.J.; Kim, J.; Tonella, P. PCLA: A Framework for Testing Autonomous Agents in the CARLA Simulator. arXiv 2025, arXiv:2503.09385. [Google Scholar]
  41. Codevilla, F.; Müller, M.; López, A.; Koltun, V.; Dosovitskiy, A. End-to-End Driving via Conditional Imitation Learning. In Proceedings of the IEEE International Conference on Robotics and Automation, Brisbane, QLD, Australia, 21–25 May 2018; pp. 4693–4700. [Google Scholar] [CrossRef]
  42. Zhu, A.Z.; Yuan, L.; Chaney, K.; Daniilidis, K. Unsupervised Event-Based Learning of Optical Flow, Depth, and Egomotion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 989–997. [Google Scholar] [CrossRef]
  43. Boloor, A.; He, X.; Gill, C.; Vorobeychik, Y.; Zhang, X. Simple Physical Adversarial Examples against End-to-End Autonomous Driving Models. In Proceedings of the IEEE International Conference on Embedded Software and Systems, Shanghai, China, 2–3 June 2019; pp. 1–8. [Google Scholar] [CrossRef]
  44. Akcay, S.; Ameln, D.; Vaidya, A.; Lakshmanan, B.; Ahuja, N.; Genc, U. Anomalib: A Deep Learning Library for Anomaly Detection. In Proceedings of the IEEE International Conference on Image Processing, Bordeaux, France, 16–19 October 2022. [Google Scholar] [CrossRef]
  45. Roth, K.; Pemula, L.; Zepeda, J.; Schölkopf, B.; Brox, T.; Gehler, P. Towards Total Recall in Industrial Anomaly Detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 14298–14308. [Google Scholar] [CrossRef]
  46. Bogdoll, D.; Hendl, J.; Zöllner, J.M. Anomaly Detection in Autonomous Driving: A Survey. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, New Orleans, LA, USA, 19–20 June 2022; pp. 4488–4499. [Google Scholar] [CrossRef]
  47. Zhu, Y.; Miao, C.; Xue, H.; Yu, Y.; Su, L.; Qiao, C. Malicious Attacks against Multi-Sensor Fusion in Autonomous Driving. In Proceedings of the 30th Annual International Conference on Mobile Computing and Networking, Washington, DC, USA, 18–22 November 2024; Association for Computing Machinery: New York, NY, USA, 2024. [Google Scholar] [CrossRef]
  48. Cao, Y.; Wang, N.; Xiao, C.; Yang, D.; Fang, J.; Yang, R.; Chen, Q.A.; Liu, M.; Li, B. Invisible for Both Camera and LiDAR: Security of Multi-Sensor Fusion Based Perception in Autonomous Driving Under Physical-World Attacks. In Proceedings of the IEEE Symposium on Security and Privacy (SP), San Francisco, CA, USA, 24–27 May 2021; pp. 176–194. [Google Scholar] [CrossRef]
  49. Kołomański, M.; Sakhai, M.; Nowak, J.; Wielgosz, M. Towards End-to-End Chase in Urban Autonomous Driving Using Reinforcement Learning. In Intelligent Systems and Applications, IntelliSys 2022; Arai, K., Ed.; Lecture Notes in Networks and Systems; Springer: Cham, Switzerland, 2022; Volume 544. [Google Scholar] [CrossRef]
  50. Sakhai, M.; Wielgosz, M. Towards End-to-End Escape in Urban Autonomous Driving Using Reinforcement Learning. In Intelligent Systems and Applications. IntelliSys 2023; Arai, K., Ed.; Lecture Notes in Networks and Systems; Springer: Cham, Switzerland, 2023; Volume 823. [Google Scholar] [CrossRef]
  51. Bendiab, G.; Hameurlaine, A.; Germanos, G.; Kolokotronis, N.; Shiaeles, S. Autonomous Vehicles Security: Challenges and Solutions Using Blockchain and Artificial Intelligence. IEEE Trans. Intell. Transp. Syst. 2023, 24, 3614–3637. [Google Scholar] [CrossRef]
  52. Rajapaksha, S.; Kalutarage, H.; Al-Kadri, M.O. AI-Based Intrusion Detection Systems for In-Vehicle Networks: A Survey. ACM Comput. Surv. 2023, 55, 237. [Google Scholar] [CrossRef]
  53. Sakhai, M.; Mazurek, S.; Caputa, J.; Argasiński, J.K.; Wielgosz, M. Spiking Neural Networks for Real-Time Pedestrian Street-Crossing Detection Using Dynamic Vision Sensors in Simulated Adverse Weather Conditions. Electronics 2024, 13, 4280. [Google Scholar] [CrossRef]
  54. Pratt, W.K. Digital Image Processing: PIKS Scientific Inside, 4th ed.; John Wiley & Sons: Hoboken, NJ, USA, 2007. [Google Scholar]
  55. Brown, T.B.; Mané, D.; Roy, A.; Abadi, M.; Gilmer, J. Adversarial Patch. arXiv 2017, arXiv:1712.09665. [Google Scholar]
  56. Thys, S.; Van Ranst, W.; Goedemé, T. Fooling Automated Surveillance Cameras: Adversarial Patches to Attack Person Detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Long Beach, CA, USA, 16–17 June 2019; pp. 49–55. [Google Scholar] [CrossRef]
  57. Shin, H.; Kim, D.; Kwon, Y.; Kim, Y. Illusion and Dazzle: Adversarial Optical Channel Exploits against Lidars for Automotive Applications. In Proceedings of the International Conference on Cryptographic Hardware and Embedded Systems, Taipei, Taiwan, 25–28 September 2017; pp. 445–467. [Google Scholar] [CrossRef]
  58. Kerns, A.J.; Shepard, D.P.; Bhatti, J.A.; Humphreys, T.E. Unmanned Aircraft Capture and Control via GPS Spoofing. J. Field Robot. 2014, 31, 617–636. [Google Scholar] [CrossRef]
  59. Psiaki, M.L.; Humphreys, T.E. GNSS Spoofing and Detection. Proc. IEEE 2016, 104, 1258–1270. [Google Scholar] [CrossRef]
  60. Kim, K.; Kim, J.S.; Jeong, S.; Park, J.H.; Kim, H.K. Cybersecurity for autonomous vehicles: Review of attacks and defense. Comput. Secur. 2021, 103, 102150. [Google Scholar] [CrossRef]
  61. Cai, Y.; Qin, T.; Ou, Y.; Wei, R. Intelligent Systems in Motion: A Comprehensive Review on Multi-Sensor Fusion and Information Processing From Sensing to Navigation in Path Planning. Int. J. Semant. Web Inf. Syst. 2023, 19, 1–35. [Google Scholar] [CrossRef]
  62. Yeong, D.J.; Velasco-Hernandez, G.; Barry, J.; Walsh, J. Sensor and Sensor Fusion Technology in Autonomous Vehicles: A Review. Sensors 2021, 21, 2140. [Google Scholar] [CrossRef]
Figure 1. RGB sensor attachment workflow diagram showing the complete data acquisition and processing pipeline in the CARLA simulation environment. The workflow illustrates sensor initialization, data capture, preprocessing stages, and integration with the autonomous driving agent.
Figure 2. DVS sensor attachment workflow diagram demonstrating the processing pipeline.
Figure 3. Episode 7 trajectory showing route completion despite attacks. Note the trajectory deviations from the road during RGB and depth sensor attacks compared to normal operation.
Figure 4. Sensor outputs without defense mechanisms during attacks in Episode 7. Note the severe degradation of RGB and DVS data.
Figure 5. Episode 26 trajectory showing visible route deviation during RGB camera attacks, though the vehicle still successfully completed its route.
Figure 6. Episode 5 trajectory showing erratic movement and eventual crash during RGB camera attacks.
Figure 7. Sensor outputs for Episode 5 showing the roadside fence (visible in normal operation) that the vehicle crashed into during RGB camera attacks.
Figure 8. Control inputs and sensor data for Episode 5, showing unstable control behavior during RGB attacks, including full braking events.
Figure 9. Episode 27 trajectory showing crashes during both the initial attack phase and the attack-with-defense phase.
Figure 10. Control inputs and sensor data for Episode 27, showing the vehicle’s response during crashes in both attack phases.
Figure 11. Sensor outputs with defense mechanisms during attacks in Episode 7.
Figure 12. Comparison of RGB sensor outputs during attacks with and without defense mechanisms enabled. The figure reveals that RGB defenses provide minimal improvement against sophisticated attacks, with sensor outputs remaining significantly compromised when defenses are active.
Figure 13. Comparison of DVS outputs during attacks with and without defense mechanisms enabled. Despite visible degradation under attack conditions, the DVS maintains object detectability and structural information, demonstrating superior resilience compared to RGB cameras. This intrinsic robustness of event-based vision suggests potential advantages for autonomous vehicle security applications.
Figure 14. RGB camera anomaly scores across different operational states in Episode 1. Note the clear separation between normal operation and RGB noise attack conditions.
Figure 15. DVS camera anomaly scores across different operational states in Episode 5. Note the elevated scores during attack conditions without defenses.
Table 1. Summary of Sensor Attacks and Defense Effectiveness.

| Sensor Type | Attack Type | Results Without Defense | Results with Defense |
|---|---|---|---|
| RGB Camera | Salt-and-Pepper Noise (80%) | Severe trajectory deviations; vehicle weaving across lanes; crashes in some episodes | Limited effectiveness; 30–45% trajectory drift still present; filtered images remain compromised |
| Dynamic Vision Sensor (DVS) | Event Flooding (60% spurious events) | False motion perception; higher anomaly scores (0.76) | Limited effectiveness; anomaly detector identifies the attack phase; event filter still compromised; lower anomaly scores (0.55) |
| Depth Camera | Depth Map Tampering (random patches) | Misinterpreted obstacle distances; affected perception of road boundaries | Limited effectiveness; filtered depth maps still contain significant distortions |
| LiDAR | Phantom Object Injection (5 clusters) | False obstacle detection | Partial mitigation through point cloud filtering; density-based clustering removes some phantom objects |
| GPS | Position Manipulation | Significant deviation between reported and actual position; incorrect routing decisions | Highly effective mitigation; route deviation and velocity consistency checks prevented course corrections based on falsified data |
| Sensor Update | Denial of Service (rate limiting) | Delayed/missed sensor readings; degraded perception and decision-making | Partial mitigation through sensor update monitoring and buffering mechanisms |
| Control System | Steering Bias | Systematic errors in steering commands; trajectory deviation | Partial effectiveness; sophisticated attacks could still influence vehicle control |