Search Results (73)

Search Parameters:
Keywords = A/V classification

21 pages, 2369 KB  
Article
Enhancing Intrusion Detection in Autonomous Vehicles Using Ontology-Driven Mitigation
by Manale Boughanja, Zineb Bakraouy, Tomader Mazri and Ahmed Srhir
World Electr. Veh. J. 2025, 16(12), 642; https://doi.org/10.3390/wevj16120642 - 24 Nov 2025
Viewed by 313
Abstract
With the increasing complexity of Autonomous Vehicle networks, enhanced cyber security has become a critical challenge. Traditional security techniques often struggle to adapt dynamically to evolving threats. To overcome these limitations, this paper presents a novel domain ontology to structure knowledge concerning AV security threats, intrusion characteristics, and corresponding mitigation techniques. Unlike previous work, which mainly focused on static classifications or direct integration within Intrusion Detection Systems (IDSs), our approach has the distinctive feature of creating a formalized and coherent semantic representation. The ontology was designed using Protégé 4.3 and the Web Ontology Language (OWL), modeled from the core cyber security concepts of AVs, and it provides a more nuanced threat classification and significantly superior automated reasoning capability. An important feature of our design is that the ontology formalization was done independently of any real-time IDS integration. A proof of concept (PoC) was carried out to show that the ontology could select the most appropriate mitigation method, taking the output of a machine-learning-based IDS as input; SPARQL queries retrieve the mitigation instance, its type, and its effectiveness. This design choice enables us to concentrate strictly on validating the foundational semantic coherence and reasoning power of the knowledge structure, hence providing a robust and reliable analytical framework for further reactive and predictive security applications. The experimental evaluation confirms enhanced effectiveness in knowledge organization and reduced inconsistencies in security threat analysis. Specifically, class classification was performed in 1.049 s, while the consistency check required just 0.044 s, validating the model’s robustness in classification and concept inference. This work thus paves the way for the development of more intelligent and adaptive security frameworks.
In the future, research will be focused on the integration with real-time security monitoring and IDS frameworks and on the study of optimization techniques, such as genetic algorithms, to improve the real-time selection of the countermeasures. Full article
(This article belongs to the Section Automated and Connected Vehicles)
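The PoC pipeline described above (IDS label in, best mitigation out) can be sketched in a few lines. The SPARQL query template and the class/property names (`ex:mitigatedBy`, `ex:hasType`, `ex:hasEffectiveness`) are illustrative stand-ins for the authors' OWL schema, and the toy dictionary stands in for the ontology's asserted individuals:

```python
# Illustrative SPARQL of the kind the PoC issues; the prefix and
# property names are assumptions, not the authors' actual schema.
SPARQL_TEMPLATE = """
PREFIX ex: <http://example.org/av-security#>
SELECT ?mitigation ?type ?effectiveness WHERE {
  ex:%s ex:mitigatedBy ?mitigation .
  ?mitigation ex:hasType ?type ;
              ex:hasEffectiveness ?effectiveness .
}
ORDER BY DESC(?effectiveness)
LIMIT 1
"""

# Toy knowledge base: threat -> [(mitigation, type, effectiveness), ...]
KB = {
    "DoSAttack": [("RateLimiting", "Network", 0.85), ("Blackholing", "Network", 0.60)],
    "SpoofingAttack": [("MessageAuthentication", "Crypto", 0.90)],
}

def select_mitigation(ids_label):
    """Return the highest-effectiveness mitigation for an IDS-detected threat."""
    candidates = KB.get(ids_label, [])
    return max(candidates, key=lambda m: m[2]) if candidates else None
```

In the actual system the lookup would run against the Protégé-built OWL file via a SPARQL engine; the dictionary merely mimics the query's "highest effectiveness first" ordering.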

26 pages, 1586 KB  
Article
Adaptive Vision–Language Transformer for Multimodal CNS Tumor Diagnosis
by Inzamam Mashood Nasir, Hend Alshaya, Sara Tehsin and Wided Bouchelligua
Biomedicines 2025, 13(12), 2864; https://doi.org/10.3390/biomedicines13122864 - 24 Nov 2025
Viewed by 355
Abstract
Objectives: Correctly identifying Central Nervous System (CNS) tumors through MRI is complicated by divergent MRI acquisition protocols, heterogeneous tumor morphology, and the difficulty of systematically combining imaging with clinical information. This study presents the Adaptive Vision–Language Transformer (AVLT), a multimodal diagnostic framework designed to integrate multi-sequence MRI with clinical descriptions while improving robustness to domain shifts and interpretability. Methods: AVLT integrates the MRI sequences (T1, T1c, T2, FLAIR) and clinical note text in a joint process, using normalized cross-attention to associate visual patch embeddings with clinical token representations. An Adaptive Normalization Module (ANM) mitigates distribution shift across datasets by adapting the statistics of domain-specific features. Auxiliary semantic and alignment losses were incorporated to enhance the stability of multimodal fusion. Results: On all datasets, AVLT provided superior classification accuracy relative to CNN-, transformer-, radiogenomic-, and multimodal fusion-based models. The AVLT model accuracy was 84.6% on BraTS (OS), 92.4% on TCGA-GBM/LGG, 89.5% on REMBRANDT, and 90.8% on GLASS. AVLT AUC values exceeded 90% in all domains. Conclusions: AVLT provides a reliable, generalizable, and clinically interpretable method for accurate diagnosis of CNS tumors. Full article
(This article belongs to the Special Issue Diagnosis, Pathogenesis and Treatment of CNS Tumors (2nd Edition))
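The normalized cross-attention step (visual patch queries attending over clinical token keys/values) follows the standard scaled dot-product form. A minimal sketch, using plain lists as embeddings and omitting the learned projection matrices and the ANM:

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def cross_attention(queries, keys, values):
    """Scaled dot-product cross-attention: each visual patch query is
    associated with clinical token keys/values; weights are normalized
    per query so each output is a convex mix of the value vectors."""
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
        w = softmax(scores)
        out.append([sum(wi * v[j] for wi, v in zip(w, values))
                    for j in range(len(values[0]))])
    return out
```

With identical keys the weights are uniform, so the output reduces to the mean of the values, which is a quick sanity check on the normalization.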

15 pages, 2252 KB  
Article
Evaluating the Effectiveness of Machine Learning for Alzheimer’s Disease Prediction Using Applied Explainability
by Chih-Hao Huang, Feras A. Batarseh and Aman Ullah
Biophysica 2025, 5(4), 54; https://doi.org/10.3390/biophysica5040054 - 12 Nov 2025
Viewed by 420
Abstract
Early and accurate diagnosis of Alzheimer’s disease (AD) is critical for patient outcomes yet presents a significant clinical challenge. This study evaluates the effectiveness of four machine learning models—Logistic Regression, Random Forest, Support Vector Machine, and a Feed-Forward Neural Network—for the five-class classification of AD stages. We systematically compare model performance under two conditions, one including cognitive assessment data and one without, to quantify the diagnostic value of these functional tests. To ensure transparency, we use SHapley Additive exPlanations (SHAP) to interpret the model predictions. Results show that the inclusion of cognitive data is paramount for accuracy. The Random Forest (RF) model performed best, achieving an accuracy of 84.4% with cognitive data included. Without this, performance for all models dropped significantly. SHAP analysis revealed that in the presence of cognitive data, models primarily rely on functional scores such as the Clinical Dementia Rating—Sum of Boxes. In their absence, models correctly identify key biological markers, including PET (positron emission tomography) imaging of amyloid burden (FBB, AV45) and hippocampal atrophy, as the next-best predictors. This work underscores the indispensable role of cognitive assessments in AD classification and demonstrates that explainable AI can validate model behavior against clinical knowledge, fostering trust in computational diagnostic tools. Full article
(This article belongs to the Special Issue Advances in Computational Biophysics)
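The SHAP attributions discussed above can be illustrated with an exact brute-force computation on a tiny model. The permutation-averaging definition below is the textbook Shapley formula, not the optimized TreeSHAP variant the RF analysis would use in practice:

```python
from itertools import permutations

def exact_shapley(model, x, baseline):
    """Exact Shapley values: average each feature's marginal contribution
    over all orderings, with unrevealed features held at baseline values."""
    n = len(x)
    phi = [0.0] * n
    perms = list(permutations(range(n)))
    for order in perms:
        z = list(baseline)
        prev = model(z)
        for i in order:
            z[i] = x[i]          # reveal feature i
            cur = model(z)
            phi[i] += cur - prev  # marginal contribution in this ordering
            prev = cur
    return [p / len(perms) for p in phi]
```

For a linear model the values recover the coefficients times the feature deltas, and they always satisfy the efficiency property: the attributions sum to `model(x) - model(baseline)`.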

15 pages, 2087 KB  
Article
XAI-Informed Comparative Safety Performance Assessment of Human-Driven Crashes and Automated Vehicle Failures
by Hyeonseo Kim, Sari Kim and Sehyun Tak
Sustainability 2025, 17(21), 9615; https://doi.org/10.3390/su17219615 - 29 Oct 2025
Cited by 1 | Viewed by 475
Abstract
Current Automated Vehicle (AV) technologies still face challenges in operating safely across diverse road environments, as existing infrastructure is not yet fully adapted to AV-specific requirements. While many previous studies have relied on simulations, real-world data is crucial for accurately assessing AV safety and understanding the impact of road characteristics. To address this gap, this study analyzes human-driven vehicle (HDV) crashes and AV failures using machine learning and explainable AI (XAI), providing insights into how road design can be improved to facilitate AV integration into existing infrastructure. Using XGBoost-based frequency modeling, the study achieved accuracy ranging from 0.6389 to 0.9770, depending on the specific model. The findings indicate that road geometry and traffic characteristics play a significant role in road safety, while the impact of road infrastructure varies across different road classifications. In particular, traffic characteristics were identified as key contributors to HDV crashes, whereas road geometry was the most critical factor in AV failures. By leveraging real-world AV failure data, this study overcomes the limitations of simulation-based research, improving the reliability of safety assessments. It provides a comprehensive evaluation of road safety across different road types and traffic flow conditions while simultaneously analyzing HDV crashes and AV failures. The findings offer critical insights into the challenges of mixed-traffic environments, where AVs and HDVs must coexist, highlighting the need for adaptive road design and infrastructure strategies to enhance safety for all road users. Full article
(This article belongs to the Special Issue Smart Infrastructure Management and Sustainable Urban Development)

26 pages, 1495 KB  
Article
FlashLightNet: An End-to-End Deep Learning Framework for Real-Time Detection and Classification of Static and Flashing Traffic Light States
by Laith Bani Khaled, Mahfuzur Rahman, Iffat Ara Ebu and John E. Ball
Sensors 2025, 25(20), 6423; https://doi.org/10.3390/s25206423 - 17 Oct 2025
Cited by 1 | Viewed by 1484
Abstract
Accurate traffic light detection and classification are fundamental for autonomous vehicle (AV) navigation and real-time traffic management in complex urban environments. Existing systems often fall short of reliably identifying and classifying traffic light states in real-time, including their flashing modes. This study introduces FlashLightNet, a novel end-to-end deep learning framework that integrates the nano variant of You Only Look Once version 10 (YOLOv10n) for traffic light detection, the 18-layer Residual Network (ResNet-18) for feature extraction, and a Long Short-Term Memory (LSTM) network for temporal state classification. The proposed framework is designed to robustly detect and classify traffic light states, including conventional signals (red, green, and yellow) and flashing signals (flash red and flash yellow), under diverse and challenging conditions such as varying lighting, occlusions, and environmental noise. The framework has been trained and evaluated on a comprehensive custom dataset of traffic light scenarios organized into temporal sequences to capture spatiotemporal dynamics. The dataset was prepared from videos of traffic lights at different intersections in Starkville, Mississippi, and at Mississippi State University, covering red, green, yellow, flash red, and flash yellow states. In addition, simulation-based video datasets with different flashing rates—2, 3, and 4 s—for traffic light states at several intersections were created using RoadRunner, further enhancing the diversity and robustness of the dataset. The YOLOv10n model achieved a mean average precision (mAP) of 99.2% in traffic light detection, while the ResNet-18 and LSTM combination classified traffic light states (red, green, yellow, flash red, and flash yellow) with an F1-score of 96%. Full article
(This article belongs to the Special Issue Deep Learning Technology and Image Sensing: 2nd Edition)
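The temporal part of the task, telling "flash red" from steady red, reduces to detecting repeated on/off toggling across frames. This heuristic sketch stands in for the ResNet-18 + LSTM stage and assumes per-frame state labels from the detector are already available:

```python
def classify_light(frames, min_toggles=2):
    """Classify a per-frame state sequence as a steady or flashing light.
    A light that toggles between lit and 'off' at least min_toggles times
    is flagged as flashing; otherwise the majority lit color is returned."""
    toggles = sum(1 for a, b in zip(frames, frames[1:])
                  if (a == "off") != (b == "off"))
    lit = [f for f in frames if f != "off"]
    if not lit:
        return "off"
    color = max(set(lit), key=lit.count)
    return f"flash {color}" if toggles >= min_toggles else color
```

The learned LSTM generalizes this idea to noisy detections and to distinguishing the 2, 3, and 4 s flashing rates; the threshold rule above only captures the core temporal cue.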

25 pages, 4937 KB  
Article
Machine Learning-Driven XR Interface Using ERP Decoding
by Abdul Rehman, Mira Lee, Yeni Kim, Min Seong Chae and Sungchul Mun
Electronics 2025, 14(19), 3773; https://doi.org/10.3390/electronics14193773 - 24 Sep 2025
Viewed by 643
Abstract
This study introduces a machine learning–driven extended reality (XR) interaction framework that leverages electroencephalography (EEG) for decoding consumer intentions in immersive decision-making tasks, demonstrated through functional food purchasing within a simulated autonomous vehicle setting. Recognizing inherent limitations in traditional “Preference vs. Non-Preference” EEG paradigms for immersive product evaluation, we propose a novel and robust “Rest vs. Intention” classification approach that significantly enhances cognitive signal contrast and improves interpretability. Eight healthy adults participated in immersive XR product evaluations within a simulated autonomous driving environment using the Microsoft HoloLens 2 headset (Microsoft Corp., Redmond, WA, USA). Participants assessed 3D-rendered multivitamin supplements systematically varied in intrinsic (ingredient, origin) and extrinsic (color, formulation) attributes. Event-related potentials (ERPs) were extracted from 64-channel EEG recordings, specifically targeting five neurocognitive components: N1 (perceptual attention), P2 (stimulus salience), N2 (conflict monitoring), P3 (decision evaluation), and LPP (motivational relevance). Four ensemble classifiers (Extra Trees, LightGBM, Random Forest, XGBoost) were trained to discriminate cognitive states under both paradigms. The “Rest vs. Intention” approach achieved high cross-validated classification accuracy (up to 97.3% in this sample) and area under the curve (AUC > 0.97). SHAP-based interpretability identified dominant contributions from the N1, P2, and N2 components, aligning with neurophysiological processes of attentional allocation and cognitive control. These findings provide preliminary evidence of the viability of ERP-based intention decoding within a simulated autonomous-vehicle setting.
Our framework serves as an exploratory proof-of-concept foundation for future development of real-time, BCI-enabled in-transit commerce systems, while underscoring the need for larger-scale validation in authentic AV environments and raising important considerations for ethics and privacy in neuromarketing applications. Full article
(This article belongs to the Special Issue Connected and Autonomous Vehicles in Mixed Traffic Systems)
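ERP components such as N1 or P3 are conventionally scored as the mean amplitude within a fixed post-stimulus window before being fed to classifiers. A minimal sketch; the 1 kHz sampling rate and window bounds used in the example are illustrative, not the paper's exact parameters:

```python
def mean_amplitude(epoch, srate, t0, window):
    """Mean amplitude of an EEG epoch in a time window (seconds).

    epoch  : list of samples, first sample at time t0 (t0 < 0 for baseline)
    srate  : sampling rate in Hz
    window : (start, end) in seconds relative to stimulus onset
    """
    lo = int((window[0] - t0) * srate)
    hi = int((window[1] - t0) * srate)
    seg = epoch[lo:hi]
    return sum(seg) / len(seg)
```

In practice one such scalar is computed per component window and channel (e.g. roughly 80–150 ms for N1), yielding the feature vector the ensemble classifiers consume.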

26 pages, 608 KB  
Article
The Influence of Digital Capabilities on Elderly Pedestrians’ Road-Sharing Acceptance with Autonomous Vehicles: A Case Study of Wuhan, China
by Zhiwei Liu, Wenli Ouyang and Jie Wu
Appl. Sci. 2025, 15(18), 10097; https://doi.org/10.3390/app151810097 - 16 Sep 2025
Viewed by 873
Abstract
While autonomous vehicles (AVs) are increasingly integrated into urban mobility, little is known about how digital capability shapes elderly pedestrians’ willingness to share roads with these technologies. This is especially true in the absence of explicit vehicle–pedestrian communication mechanisms. To address this gap, we combine the Theory of Planned Behavior (TPB) with the Pedestrian Behavior Questionnaire (PBQ) and segment elderly pedestrians using Latent Class Analysis (LCA). A sample of 750 older adults in Wuhan, China, was divided into two latent groups: digitally disengaged (70.8%) and digitally engaged (29.2%). Classification was based on four indicators: smart device usage, online social interaction, online entertainment, and online economic behavior. We then applied ordered logit models to estimate group-specific determinants of AV road-sharing acceptance. Results reveal clear heterogeneity across digital capability levels. For digitally disengaged seniors, positive pedestrian behaviors significantly increased willingness (β = 0.316, p = 0.001). Prior accident experience reduced willingness (0 accident: β = 0.435, p = 0.021; 1–2 accidents: β = −0.518, p = 0.012). For digitally engaged seniors, perceived behavioral control showed a marginally positive effect (β = 0.353, p = 0.066). Errors had a significant positive effect (β = 0.540, p = 0.009). Positive behaviors had a significant negative effect (β = −0.414, p = 0.007). These patterns indicate that digital capability not only modulates the strength of TPB pathways but also reshapes behavior–intention linkages captured by PBQ dimensions. Methodologically, the study contributes an integrated TPB–PBQ–LCA–OLM framework. This framework identifies digital capability as a critical moderator of AV acceptance among elderly pedestrians. Practically, the findings suggest differentiated strategies. For digitally disengaged users, interventions should build digital literacy and reinforce safe walking norms. 
For digitally engaged users, strategies should prioritize transparent AV intent signaling and features that enhance perceived control. Full article
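The ordered logit models used for the group-specific estimates map a linear index and ascending cutpoints to category probabilities. A minimal sketch of that mapping; the example index and cutpoints below are arbitrary, not the fitted Wuhan values:

```python
import math

def ordered_logit_probs(xb, cutpoints):
    """P(y = k) for an ordered logit: xb is the linear index (beta'x),
    cutpoints are ascending thresholds; returns len(cutpoints)+1 probs."""
    cdf = [1.0 / (1.0 + math.exp(-(c - xb))) for c in cutpoints]
    probs = [cdf[0]]
    probs += [cdf[k] - cdf[k - 1] for k in range(1, len(cdf))]
    probs.append(1.0 - cdf[-1])
    return probs
```

A positive coefficient such as the reported β = 0.353 raises xb and therefore shifts probability mass toward the higher acceptance categories, which is how the sign of each estimate should be read.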

15 pages, 3711 KB  
Article
Improved Shell Color Index for Chicken Eggs with Blue-green Shells Based on Machine Learning Analysis
by Huanhuan Wang, Yinghui Wei, Lei Zhang, Ying Ge, Hang Liu and Xuedong Zhang
Foods 2025, 14(17), 3027; https://doi.org/10.3390/foods14173027 - 29 Aug 2025
Viewed by 1006
Abstract
Shell color is a commercially valuable trait in eggs, and blue-green eggshells typically exhibit multiple color subtypes. To explore the relationship between the CIELab system and visual color classification and develop simplified discrimination indices, 2274 blue-green eggs across seven batches were selected. The L*, a*, and b* values of each egg were measured, and average visual classification (AveObs) was calculated from four numeric categories (Light = 1, Blue = 2, Green = 3, Olive = 4) separately assigned by four observers. After batch correction using ComBat, four algorithms—linear discriminant analysis (LDA), random forest (RF), support vector machine (SVM), and neural network (NNET)—were compared. Correction substantially reduced the coefficients of variation of the L*, a*, and b* values. Correlations emerged: L* and b* (−0.722), a* and b* (0.451), and L* and a* (−0.088), while correlations of the L*, a*, and b* values with AveObs were −0.713, 0.218, and 0.771, respectively. The LDA model achieved superior comprehensive performance across all data scenarios, with the highest accuracy and efficiency as compared to the SVM, NNET, and RF models. Among the LDA functions, LD1 explained 78.53% of the variance, with L*, a*, and b* coefficients of −0.134, 0.063, and 0.349, respectively (ratio ≈ 1:0.47:2.60). Simplified formulas based on the L*, a*, and b* values were constructed and compared to the existing indices C* (= √(a*² + b*²)) and SCI (= L* − a* − b*). The correlation between L* − 2b* and AveObs was −0.803, similar to those for C* (0.797) and SCI (−0.782), while the correlation between L* − 4C* and AveObs was −0.810, significantly higher than that for SCI (p < 0.05). In conclusion, the LDA model demonstrated optimal performance in predicting color classification, and L* − 4C* is an ideal index for grading of blue-green eggs. Full article
(This article belongs to the Section Food Analytical Methods)
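The indices compared in the abstract are one-liners. A sketch using the standard CIELab chroma definition (the √(a*² + b*²) form is the conventional C*ab):

```python
import math

def chroma(a, b):
    """CIELab chroma: C* = sqrt(a*^2 + b*^2)."""
    return math.hypot(a, b)

def sci(L, a, b):
    """Existing shell color index: SCI = L* - a* - b*."""
    return L - a - b

def index_l_minus_4c(L, a, b):
    """The paper's proposed grading index for blue-green eggs: L* - 4C*."""
    return L - 4 * chroma(a, b)
```

Lower L* − 4C* values correspond to darker, more saturated shells, consistent with the reported negative correlation (−0.810) with the AveObs visual score.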

18 pages, 3850 KB  
Article
Operational Evaluation of Mixed Flow on Highways Considering Trucks and Autonomous Vehicles Based on an Improved Car-Following Decision Framework
by Nan Kang, Chun Qian, Yiyan Zhou and Wenting Luo
Sustainability 2025, 17(14), 6450; https://doi.org/10.3390/su17146450 - 15 Jul 2025
Cited by 2 | Viewed by 836
Abstract
This study proposes a new method to improve the accuracy of car-following models in predicting the mobility of mixed traffic flow involving trucks and automated vehicles (AVs). A classification is developed to categorize car-following behaviors into eight distinct modes based on vehicle type (passenger car/truck) and autonomy level (human-driven vehicle [HDV]/AV) for parameter calibration and simulation. The car-following model parameters are calibrated based on the HighD dataset, and the models are selected through minimizing statistical error. A cellular-automaton-based simulation platform is implemented in MATLAB (R2023b), and a decision framework is developed for the simulation. Key findings demonstrate that mode-specific parameter calibration improves model accuracy, achieving an average error reduction of 80% compared to empirical methods. The simulation results reveal a positive correlation between the AV penetration rate and traffic flow stability, which consequently enhances capacity. Specifically, a full transition from 0% to 100% AV penetration increases traffic capacity by 50%. Conversely, elevated truck penetration rates degrade traffic flow stability, reducing the average speed by 75.37% under full truck penetration scenarios. Additionally, higher AV penetration helps stabilize traffic flow, leading to reduced speed fluctuations and lower emissions, while higher truck proportions contribute to higher emissions due to increased traffic instability. Full article
(This article belongs to the Section Sustainable Transportation)
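A cellular-automaton update of the kind the simulation platform builds on can be sketched with the classic Nagel–Schreckenberg rules (accelerate, brake to the gap, random slowdown, move). This is a generic single-mode stand-in, not the authors' calibrated eight-mode MATLAB framework:

```python
import random

def ns_step(positions, speeds, vmax, road_len, p_slow, rng):
    """One Nagel-Schreckenberg update on a ring road of road_len cells.
    positions: distinct integer cell indices; speeds: cells per step."""
    n = len(positions)
    order = sorted(range(n), key=lambda i: positions[i])
    new_speeds = speeds[:]
    for idx, i in enumerate(order):
        ahead = order[(idx + 1) % n]
        gap = (positions[ahead] - positions[i] - 1) % road_len
        v = min(speeds[i] + 1, vmax, gap)      # accelerate, then brake to gap
        if v > 0 and rng.random() < p_slow:    # random slowdown
            v -= 1
        new_speeds[i] = v
    new_positions = [(positions[i] + new_speeds[i]) % road_len for i in range(n)]
    return new_positions, new_speeds
```

In the paper's framework the acceleration limit, gap acceptance, and slowdown behavior would differ per car-following mode (passenger car vs. truck, HDV vs. AV), which is what the HighD-calibrated parameters encode.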

17 pages, 2783 KB  
Article
Performance Evaluation of Four Deep Learning-Based CAD Systems and Manual Reading for Pulmonary Nodules Detection, Volume Measurement, and Lung-RADS Classification Under Varying Radiation Doses and Reconstruction Methods
by Sifan Chen, Lingqi Gao, Maolu Tan, Ke Zhang and Fajin Lv
Diagnostics 2025, 15(13), 1623; https://doi.org/10.3390/diagnostics15131623 - 26 Jun 2025
Viewed by 1271
Abstract
Background: Optimization of pulmonary nodule detection across varied imaging protocols remains challenging. We evaluated four DL-CAD systems and manual reading with volume rendering (VR) for performance under varying radiation doses and reconstruction methods. VR refers to a post-processing technique that generates 3D images by assigning opacity and color to CT voxels based on Hounsfield units. Methods: An anthropomorphic phantom with 169 artificial nodules was scanned at three dose levels using two kernels and three reconstruction algorithms (1080 image sets). Performance metrics included sensitivity, specificity, volume error (AVE), and Lung-RADS classification accuracy. Results: DL-CAD systems demonstrated high sensitivity across dose levels and reconstruction settings, with three fully automatic DL-CAD systems (0.92–0.95) outperforming manual CT readings (0.72), particularly for sub-centimeter nodules. However, DL-CAD systems exhibited limitations in volume measurement and Lung-RADS classification accuracy, especially for part-solid nodules. VR-enhanced manual reading outperformed original CT interpretation in nodule detection, particularly benefiting less-experienced radiologists under suboptimal imaging conditions. Conclusions: These findings underscore the potential of DL-CAD for lung cancer screening and the clinical value of VR in low-dose settings, but they highlight the need for improved classification algorithms. Full article
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)

27 pages, 1973 KB  
Article
The Impact of Travel Behavior Factors on the Acceptance of Carsharing and Autonomous Vehicles: A Machine Learning Analysis
by Jamil Hamadneh and Noura Hamdan
World Electr. Veh. J. 2025, 16(7), 352; https://doi.org/10.3390/wevj16070352 - 25 Jun 2025
Viewed by 946
Abstract
The rapid evolution of the transport industry requires a deep understanding of user preferences for emerging mobility solutions, particularly carsharing (CS) and autonomous vehicles (AVs). This study employs machine learning techniques to model transport mode choice, with a focus on people’s traffic safety perceptions towards CS and privately shared autonomous vehicles (PSAVs). A stated preference (SP) survey is conducted to collect data on travel behavior, incorporating key attributes such as trip time, trip cost, waiting and walking time, privacy, cybersecurity, and surveillance concerns. Sociodemographic factors, such as income, gender, education, employment status, and trip purpose, are also examined. Three gradient boosting models—CatBoost, XGBoost, and LightGBM—are applied to classify user choices. The performance of the models is evaluated using accuracy, precision, and F1-score. XGBoost demonstrates the highest accuracy (77.174%) and effectively captures the complexity of mode choice behavior. The results indicate that CS users are easily classified, while PSAV users present greater classification challenges due to variations in safety perceptions and technological acceptance. From a traffic safety perspective, the results emphasize that companionship, comfort, privacy, cybersecurity, safety in using CS and PSAVs, and surveillance significantly influence CS and PSAV acceptance, underscoring the importance of trust in adopting AVs. The findings suggest that public trust is ensured through robust safety regulations and transparent data security policies. Furthermore, the envisaged benefits of shared autonomous mobility are alleviating congestion and promoting sustainability. Full article

44 pages, 5969 KB  
Article
iRisk: Towards Responsible AI-Powered Automated Driving by Assessing Crash Risk and Prevention
by Naomi Y. Mbelekani and Klaus Bengler
Electronics 2025, 14(12), 2433; https://doi.org/10.3390/electronics14122433 - 14 Jun 2025
Cited by 2 | Viewed by 1462
Abstract
Advanced technology systems and neuroelectronics for crash risk assessment and anticipation may be a promising field for advancing responsible automated driving on urban roads. In principle, there are prospects of an artificially intelligent (AI)-powered automated vehicle (AV) system that tracks the degree of perceived crash risk (as either low, mid, or high) and perceived safety. As a result, communicating (verbally or nonverbally) this information to the user based on human factor aspects should be reflected. As humans and vehicle automation systems are prone to error, we need to design advanced information and communication technologies that monitor risks and act as a mediator when necessary. One possible approach is towards designing a crash risk classification and management system. This would be through responsible AI that monitors the user’s mental states associated with risk-taking behaviour and communicates this information to the user, in conjunction with the driving environment and AV states. This concept is based on a literature review and industry experts’ perspectives on designing advanced technology systems that support users in preventing crash risk encounters due to long-term effects. Equally, learning strategies for responsible automated driving on urban roads were designed. In a sense, this paper offers the reader a meticulous discussion on conceptualising a safety-inspired ‘ergonomically responsible AI’ concept in the form of an intelligent risk assessment system (iRisk) and an AI-powered Risk information Human–Machine Interface (AI rHMI) as a useful concept for responsible automated driving and safe human–automation interaction. Full article

16 pages, 2108 KB  
Article
One Possible Path Towards a More Robust Task of Traffic Sign Classification in Autonomous Vehicles Using Autoencoders
by Ivan Martinović, Tomás de Jesús Mateo Sanguino, Jovana Jovanović, Mihailo Jovanović and Milena Djukanović
Electronics 2025, 14(12), 2382; https://doi.org/10.3390/electronics14122382 - 11 Jun 2025
Cited by 3 | Viewed by 1244
Abstract
The increasing deployment of autonomous vehicles (AVs) has exposed critical vulnerabilities in traffic sign classification systems, particularly against adversarial attacks that can compromise safety. This study proposes a dual-purpose defense framework based on convolutional autoencoders to enhance robustness against two prominent white-box attacks: Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD). Experiments on the German Traffic Sign Recognition Benchmark (GTSRB) dataset show that, although these attacks can significantly degrade system performance, the proposed models are capable of partially recovering lost accuracy. Notably, the defense demonstrates strong capabilities in both detecting and reconstructing manipulated traffic signs, even under low-perturbation scenarios. Additionally, a feature-based autoencoder is introduced, which—despite a high false positive rate—achieves perfect detection in critical conditions, a tradeoff considered acceptable in safety-critical contexts. These results highlight the potential of autoencoder-based architectures as a foundation for resilient AV perception while underscoring the need for hybrid models integrating visual-language frameworks for real-time, fail-safe operation. Full article
(This article belongs to the Special Issue Autonomous and Connected Vehicles)
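The FGSM attack and reconstruction-error detection described in this abstract can be illustrated with a toy sketch. This is not the paper's model: the convolutional autoencoder is stood in for by a rank-1 PCA reconstruction, the traffic-sign classifier by a linear logistic model, and the data, dimensions, and epsilon are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for "clean traffic signs": points near a 1-D subspace of R^8.
basis = rng.normal(size=(8, 1))
clean = rng.normal(size=(200, 1)) @ basis.T + 0.05 * rng.normal(size=(200, 8))

# Stand-in for the autoencoder: rank-1 PCA reconstruction fitted on clean data.
mean = clean.mean(axis=0)
_, _, vt = np.linalg.svd(clean - mean, full_matrices=False)
proj = vt[:1].T @ vt[:1]                      # projector onto the top component

def reconstruct(x):
    return (x - mean) @ proj + mean

def recon_error(x):
    """Detection score: inputs far from the learned manifold score high."""
    return np.linalg.norm(x - reconstruct(x), axis=-1)

# FGSM on a toy linear-logistic classifier: x_adv = x + eps * sign(dL/dx).
w = rng.normal(size=8)

def fgsm(x, y, eps):
    margin = y * (x @ w)
    grad = -y * w / (1.0 + np.exp(margin))    # gradient of log(1 + exp(-y w.x))
    return x + eps * np.sign(grad)

x = clean[0]
x_adv = fgsm(x, y=1.0, eps=0.5)
print(recon_error(x), recon_error(x_adv))
```

Because the FGSM perturbation is not constrained to the data manifold, the attacked sample's reconstruction error is much larger than the clean sample's, which is the detection signal the paper's autoencoder exploits at scale.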
24 pages, 4094 KB  
Article
How Much Longer Can We Tolerate Further Loss of Farmland Without Proper Planning? The Agrivoltaic Case in the Apulia Region (Italy)
by Anna Rita Bernadette Cammerino, Michela Ingaramo, Lorenzo Piacquadio and Massimo Monteleone
Agronomy 2025, 15(5), 1177; https://doi.org/10.3390/agronomy15051177 - 13 May 2025
Cited by 3 | Viewed by 1875
Abstract
The energy transition from fossil fuels to renewable sources is a key goal for the European Union, among others. Despite significant progress, Italy lags far behind the EU’s target of generating 55% of its electricity from renewables by 2030. The Apulia region in Italy needs to achieve an additional 7.4 GW of installed renewable energy capacity compared to 2021. Renewable energy installations, particularly photovoltaic systems, require land that may compete with other uses such as agriculture, which can lead to land-use changes that disrupt agricultural activities. Agrivoltaics (AV) offer a possible solution by allowing energy and food production on the same land, which can help alleviate conflicts between energy and food needs, although concerns about landscape impact remain. This study emphasizes the need for effective spatial planning to manage these land-use risks and to quantify potential agricultural land occupation. A GIS-based analysis was conducted in Apulia using a three-step approach to assess land use and potential AV opportunities: (a) the land protection system identified by the Apulian Landscape Plan was used to obtain a Constraint Map; (b) the agricultural land use and capability classification, together with land slope and exposure, was used to obtain the AV Availability Map; and (c) agricultural land conversion scenarios were developed to quantify the potential capacity of future AV installations. The results showed that a 0.25% occupation of utilized agricultural land would allow a regional installed AV capacity of 1.3 GW, while doubling this percentage would double the installed capacity to 2.6 GW. The areas potentially occupied by AV installations would be 3.25 and 6.50 thousand hectares, reaching 17.5% and 35.0% of the 2030 total renewable energy target. These figures should be considered a reasonable range for AV development in the region, which can contribute both to the energy transition and to the support of the agricultural sector, especially in marginal areas. Full article
(This article belongs to the Section Farming Sustainability)
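The capacity figures quoted in this abstract can be cross-checked with a short back-of-the-envelope calculation. The implied power density (about 0.4 MW/ha) and the implied utilized agricultural land (1.3 million ha) are derived here from the abstract's own numbers rather than stated in it, and small deviations from the quoted 17.5% and 35.0% shares are rounding.

```python
# Cross-check of the AV capacity figures quoted in the abstract.
area_025_ha = 3_250      # hectares occupied at 0.25% of utilized agricultural land
capacity_025_gw = 1.3    # corresponding installed AV capacity
target_gw = 7.4          # additional renewable capacity Apulia needs vs. 2021

uaa_ha = area_025_ha / 0.0025                              # implied utilized agricultural land
density_mw_per_ha = capacity_025_gw * 1_000 / area_025_ha  # implied AV power density

for share in (0.0025, 0.005):
    area_ha = share * uaa_ha
    capacity_gw = area_ha * density_mw_per_ha / 1_000
    print(f"{share:.2%} of UAA -> {area_ha:,.0f} ha, "
          f"{capacity_gw:.2f} GW ({capacity_gw / target_gw:.1%} of the 7.4 GW target)")
```

The numbers are internally consistent: doubling the occupied share doubles both the area and the installed capacity, as the abstract states.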
17 pages, 1557 KB  
Article
MultiDistiller: Efficient Multimodal 3D Detection via Knowledge Distillation for Drones and Autonomous Vehicles
by Binghui Yang, Tao Tao, Wenfei Wu, Yongjun Zhang, Xiuyuan Meng and Jianfeng Yang
Drones 2025, 9(5), 322; https://doi.org/10.3390/drones9050322 - 22 Apr 2025
Cited by 1 | Viewed by 2156
Abstract
Real-time 3D object detection is a cornerstone of safe operation for drones and autonomous vehicles (AVs): drones must avoid millimeter-scale power lines in cluttered airspace, while AVs require instantaneous recognition of pedestrians and vehicles in dynamic urban environments. Although significant progress has been made in detection methods based on point clouds, cameras, and multimodal fusion, the computational complexity of existing high-precision models makes it difficult to meet the real-time requirements of vehicular edge devices. Moreover, model lightweighting often introduces problems such as the failure of multimodal feature coupling and an imbalance between classification and localization performance. To address these challenges, this paper proposes a knowledge distillation framework for multimodal 3D object detection that incorporates attention guidance, rank-aware learning, and interactive feature supervision to achieve efficient model compression and performance optimization. Specifically: to enhance the student model’s focus on key channel and spatial features, we introduce attention-guided feature distillation, leveraging a bird’s-eye-view foreground mask and a dual-attention mechanism; to mitigate the degradation of classification performance when moving from two-stage to single-stage detectors, we propose ranking-aware category distillation that models anchor-level distributions; and to address insufficient cross-modal feature extraction, we enhance the student network’s image features with the teacher network’s point-cloud spatial priors, constructing a LiDAR-image cross-modal feature alignment mechanism. Experimental results demonstrate the effectiveness of the proposed approach in multimodal 3D object detection. On the KITTI dataset, our method improves network performance by 4.89% even after halving the number of channels. Full article
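The paper's distillation losses (attention-guided, ranking-aware, cross-modal) are specific to its architecture. As a generic reference point, the classic soft-target distillation objective that such frameworks extend can be sketched as follows; the temperature, weighting, and toy logits are illustrative assumptions, not values from the paper.

```python
import numpy as np

def softmax(z, temp=1.0):
    z = z / temp
    z = z - z.max(axis=-1, keepdims=True)   # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(student_logits, teacher_logits, labels, temp=2.0, alpha=0.5):
    """Hinton-style distillation: hard-label cross-entropy plus a
    temperature-softened KL term pulling the student toward the teacher."""
    p_student = softmax(student_logits)
    hard = -np.log(p_student[np.arange(len(labels)), labels]).mean()
    p_t = softmax(teacher_logits, temp)
    p_s = softmax(student_logits, temp)
    soft = (p_t * (np.log(p_t) - np.log(p_s))).sum(axis=-1).mean()
    return alpha * hard + (1 - alpha) * temp**2 * soft  # T^2 rescales soft gradients

# Toy 3-class logits for two "detections".
teacher = np.array([[2.0, 0.5, -1.0], [0.1, 0.3, 2.2]])
student = np.array([[1.0, 0.8, -0.2], [0.0, 0.1, 1.5]])
labels = np.array([0, 2])
print(kd_loss(student, teacher, labels))
```

A student that matches the teacher's logits drives the KL term to zero; the paper's ranking-aware and cross-modal terms add further structure on top of this basic teacher-student objective.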