Search Results (3,212)

Search Parameters:
Keywords = autonomous driving

34 pages, 5777 KiB  
Article
ACNet: An Attention–Convolution Collaborative Semantic Segmentation Network on Sensor-Derived Datasets for Autonomous Driving
by Qiliang Zhang, Kaiwen Hua, Zi Zhang, Yiwei Zhao and Pengpeng Chen
Sensors 2025, 25(15), 4776; https://doi.org/10.3390/s25154776 (registering DOI) - 3 Aug 2025
Abstract
In intelligent vehicular networks, accurate semantic segmentation of road scenes is crucial for vehicle-mounted artificial intelligence to support environmental perception, decision-making, and safety control. Although deep learning methods have made significant progress, two main challenges remain: first, the difficulty of balancing global and local features leads to blurred object boundaries and misclassification; second, conventional convolutions have limited ability to perceive irregular objects, causing information loss that degrades segmentation accuracy. To address these issues, this paper proposes a global–local collaborative attention module and a spider web convolution module. The former enhances feature representation through bidirectional feature interaction and dynamic weight allocation, reducing false positives and missed detections; the latter introduces an asymmetric sampling topology and six-directional receptive-field paths to improve the recognition of irregular objects. Experiments on the Cityscapes, CamVid, and BDD100K datasets, all collected with vehicle-mounted cameras, show that the proposed method performs well across multiple evaluation metrics, including mIoU, mRecall, mPrecision, and mAccuracy, and comparative experiments against classical segmentation networks, attention mechanisms, and convolution modules confirm its effectiveness. The method is well suited to sensor-based semantic segmentation in environmental perception systems for autonomous driving.
(This article belongs to the Special Issue AI-Driving for Autonomous Vehicles)
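The abstract describes bidirectional global–local feature interaction with dynamic weight allocation. The PyTorch sketch below illustrates that general idea under stated assumptions: the module name, layer sizes, and the learnable two-way gate are placeholders, not the authors' ACNet implementation.

```python
import torch
import torch.nn as nn

class GlobalLocalAttention(nn.Module):
    """Illustrative global-local attention: a channel-wise global branch and a
    spatial local branch, fused with learnable dynamic weights."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.global_branch = nn.Sequential(      # squeeze -> excite over channels
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid(),
        )
        self.local_branch = nn.Sequential(       # per-pixel spatial attention map
            nn.Conv2d(channels, 1, kernel_size=3, padding=1), nn.Sigmoid(),
        )
        self.gate = nn.Parameter(torch.zeros(2)) # dynamic branch weights

    def forward(self, x):
        g = x * self.global_branch(x)            # globally re-weighted features
        l = x * self.local_branch(x)             # locally re-weighted features
        w = torch.softmax(self.gate, dim=0)
        return x + w[0] * g + w[1] * l           # residual collaborative fusion

feats = torch.randn(2, 64, 128, 256)             # e.g. a road-scene feature map
print(GlobalLocalAttention(64)(feats).shape)     # torch.Size([2, 64, 128, 256])
```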
16 pages, 4587 KiB  
Article
FAMNet: A Lightweight Stereo Matching Network for Real-Time Depth Estimation in Autonomous Driving
by Jingyuan Zhang, Qiang Tong, Na Yan and Xiulei Liu
Symmetry 2025, 17(8), 1214; https://doi.org/10.3390/sym17081214 - 1 Aug 2025
Viewed by 143
Abstract
Accurate and efficient stereo matching is fundamental to real-time depth estimation from symmetric stereo cameras in autonomous driving systems. However, existing high-accuracy stereo matching networks typically rely on computationally expensive 3D convolutions, which limit their practicality in real-world environments, while real-time methods often sacrifice accuracy or generalization. To address these challenges, we propose FAMNet (Fusion Attention Multi-Scale Network), a lightweight and generalizable stereo matching framework tailored for real-time depth estimation in autonomous driving. FAMNet comprises two novel modules: a Fusion Attention-based Cost Volume (FACV) and Multi-scale Attention Aggregation (MAA). FACV constructs a compact yet expressive cost volume by integrating multi-scale correlation, attention-guided feature fusion, and channel reweighting, reducing reliance on heavy 3D convolutions. MAA further enhances disparity estimation by fusing multi-scale contextual cues through pyramid-based aggregation and dual-path attention. Extensive experiments on the KITTI 2012 and KITTI 2015 benchmarks show that FAMNet achieves a favorable trade-off between accuracy, efficiency, and generalization: on KITTI 2015, FACV and MAA improve the prediction accuracy of the baseline model by 37% and 38%, respectively, and the final model achieves a total improvement of 42%. These results highlight FAMNet's potential for deployment in resource-constrained autonomous driving systems that require real-time, reliable depth perception.
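FACV is described as a compact, correlation-based alternative to concatenation cost volumes that need 3D convolutions. The sketch below shows only the underlying multi-disparity correlation volume such designs start from; the function name and shapes are assumptions, and FACV's attention-guided fusion and channel reweighting are omitted.

```python
import torch

def correlation_cost_volume(left, right, max_disp: int):
    """For each candidate disparity d, correlate left features with the right
    features shifted by d pixels. Output is (B, max_disp, H, W), so cheap 2D
    convolutions can aggregate it instead of 3D ones."""
    b, c, h, w = left.shape
    volume = left.new_zeros(b, max_disp, h, w)
    for d in range(max_disp):
        if d == 0:
            volume[:, d] = (left * right).mean(dim=1)
        else:
            volume[:, d, :, d:] = (left[:, :, :, d:] * right[:, :, :, :-d]).mean(dim=1)
    return volume

left = torch.randn(1, 32, 64, 128)    # left/right feature maps at reduced resolution
right = torch.randn(1, 32, 64, 128)
print(correlation_cost_volume(left, right, 48).shape)  # torch.Size([1, 48, 64, 128])
```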
24 pages, 650 KiB  
Article
Investigating Users’ Acceptance of Autonomous Buses by Examining Their Willingness to Use and Willingness to Pay: The Case of the City of Trikala, Greece
by Spyros Niavis, Nikolaos Gavanas, Konstantina Anastasiadou and Paschalis Arvanitidis
Urban Sci. 2025, 9(8), 298; https://doi.org/10.3390/urbansci9080298 (registering DOI) - 1 Aug 2025
Viewed by 160
Abstract
Autonomous vehicles (AVs) have emerged as a promising sustainable urban mobility solution, expected to bring enhanced road safety, smoother traffic flows, less congestion, improved accessibility, better energy and environmental performance, and more time- and cost-efficient passenger and freight transportation through better fleet management and platooning. Challenges also arise, however, mostly related to data privacy, security and cyber-security, high acquisition and infrastructure costs, accident liability, and even possible increases in congestion and air pollution due to induced travel demand. This paper presents the results of a survey of 654 residents who experienced an autonomous bus (AB) service in the city of Trikala, Greece, assessing their willingness to use (WTU) and willingness to pay (WTP) for ABs by testing a range of factors drawn from the literature. Several findings are useful to policy-makers: the intention to use ABs was shaped mostly by psychological factors (e.g., users' perceptions of usefulness and safety, and trust in the service provider), and WTU was positively affected by previous experience with ABs. Sociodemographic factors, in contrast, had very little effect on the intention to use ABs, while, beyond personal utility, users' perceptions of how autonomous driving would improve overall living standards in the study area also mattered.
30 pages, 1038 KiB  
Article
Permissibility, Moral Emotions, and Perceived Moral Agency in Autonomous Driving Dilemmas: An Investigation of Pedestrian-Sacrifice and Driver-Sacrifice Scenarios in the Third-Person Perspective
by Chaowu Dong, Xuqun You and Ying Li
Behav. Sci. 2025, 15(8), 1038; https://doi.org/10.3390/bs15081038 - 30 Jul 2025
Viewed by 156
Abstract
Automated vehicles controlled by artificial intelligence are becoming capable of making moral decisions independently. This study investigates how participants' judgments of the moral decision-maker's permissibility differ between first viewing a scenario (pre-test) and witnessing the outcome of the decision (post-test). It also examines how permissibility, ten typical moral emotions, and perceived moral agency fluctuate when AI or a human driver makes deontological or utilitarian decisions in a pedestrian-sacrifice dilemma (Experiment 1, N = 254) and a driver-sacrifice dilemma (Experiment 2, N = 269), viewed from a third-person perspective. Binary logistic regression was used to examine whether these factors predict a non-decrease in permissibility ratings. In both experiments, participants preferred to delegate decisions to human drivers rather than to AI, and they generally preferred utilitarianism over deontology; the ratings of moral emotions and perceived moral agency corroborate these preferences. Experiment 2 elicited greater variation in permissibility, moral emotions, and perceived moral agency than Experiment 1. In Experiment 1, deontological decisions and gratitude positively predicted a non-decrease in permissibility ratings, while contempt had a negative influence; in Experiment 2, the human driver and disgust were significant negative predictors, while perceived moral agency had a positive influence. These findings deepen our understanding of the dynamic processes of moral decision-making in autonomous driving and of people's attitudes toward moral machines and their underlying reasons, providing a reference for developing more sophisticated moral machines.
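The study's predictor analysis is a binary logistic regression on whether permissibility ratings did not decrease. A toy sketch of that analysis pattern follows, on synthetic stand-in data; the predictors, coding, and effect sizes here are illustrative only, not the study's data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 254                               # Experiment 1 sample size
deontology = rng.integers(0, 2, n)    # 1 = deontological decision
gratitude = rng.normal(3.5, 1.0, n)   # hypothetical emotion ratings (1-5 scale)
contempt = rng.normal(2.0, 1.0, n)

X = np.column_stack([deontology, gratitude, contempt])
# Synthetic outcome loosely mimicking the reported direction of effects:
y = (deontology + 0.5 * gratitude - 0.6 * contempt
     + rng.normal(0, 1, n) > 2.0).astype(int)   # 1 = permissibility did not drop

model = LogisticRegression().fit(X, y)
print("odds ratios (deontology, gratitude, contempt):", np.exp(model.coef_).round(2))
```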
19 pages, 4196 KiB  
Article
Corridors of Suitable Distribution of Betula platyphylla Sukaczev Forest in China Under Climate Warming
by Bingying Xie, Huayong Zhang, Xiande Ji, Bingjian Zhao, Yanan Wei, Yijie Peng and Zhao Liu
Sustainability 2025, 17(15), 6937; https://doi.org/10.3390/su17156937 (registering DOI) - 30 Jul 2025
Viewed by 143
Abstract
Betula platyphylla Sukaczev (B. platyphylla) forest is an important montane forest type whose distribution has been affected by global warming; however, how warming shifts its suitable distribution across ecoregions, and what biodiversity protection measures should follow, remain unclear. This study used the Maxent model to analyze the suitable distribution and driving variables of B. platyphylla forest in China and its four ecoregions, and applied the minimum cumulative resistance (MCR) model to construct corridors nationwide. Results show that B. platyphylla forest in China is currently distributed mainly across the four ecoregions: Gansu and Shaanxi Provinces in Northwest China, Heilongjiang Province in Northeast China, Sichuan Province in Southwest China, and Hebei Province and the Inner Mongolia Autonomous Region in North China. Precipitation and temperature are the main factors affecting suitable distribution. Under global warming, the suitable areas in the North China and Northwest China ecoregions are projected to expand, while those in the Northeast China and Southwest China ecoregions are projected to contract. Based on the suitable areas, we constructed 45 corridors in China spanning the four ecoregions. Our results clarify the dynamic changes in the distribution of B. platyphylla forest in China under global warming and provide scientific guidance for the sustainable development of montane forests.
(This article belongs to the Section Sustainable Forestry)
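The MCR corridor step reduces to least-cost routing over a resistance surface derived from habitat suitability. A small sketch with a hypothetical resistance raster, using scikit-image's route_through_array as the cost-path solver; the actual resistance weighting and patch selection are not given in the abstract.

```python
import numpy as np
from skimage.graph import route_through_array

rng = np.random.default_rng(1)
# Hypothetical resistance surface: low = suitable B. platyphylla habitat.
resistance = rng.uniform(1.0, 5.0, size=(100, 100))
resistance[40:60, 30:70] = 50.0          # an unsuitable strip acting as a barrier

source, target = (10, 10), (90, 90)      # centroids of two suitable patches
path, cost = route_through_array(resistance, source, target,
                                 fully_connected=True, geometric=True)
print(f"corridor: {len(path)} cells, cumulative resistance {cost:.1f}")
```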
31 pages, 11269 KiB  
Review
Advancements in Semantic Segmentation of 3D Point Clouds for Scene Understanding Using Deep Learning
by Hafsa Benallal, Nadine Abdallah Saab, Hamid Tairi, Ayman Alfalou and Jamal Riffi
Technologies 2025, 13(8), 322; https://doi.org/10.3390/technologies13080322 - 30 Jul 2025
Viewed by 385
Abstract
Three-dimensional semantic segmentation is a fundamental problem in computer vision, with wide-ranging applications in autonomous driving, robotics, and urban scene understanding. The task involves assigning a semantic label to each point in a 3D point cloud, a data representation that is inherently unstructured, irregular, and spatially sparse. In recent years, deep learning has become the dominant framework for this task, producing a broad variety of models and techniques designed to tackle the unique challenges posed by 3D data. This survey presents a comprehensive overview of deep learning methods for 3D semantic segmentation. We organize the literature into a taxonomy that distinguishes between supervised and unsupervised approaches: supervised methods are further classified into point-based, projection-based, voxel-based, and hybrid architectures, while unsupervised methods include self-supervised learning strategies, generative models, and implicit representation techniques. In addition to presenting and categorizing these approaches, we provide a comparative analysis of their performance on widely used benchmark datasets, discuss key challenges such as generalization, model transferability, and computational efficiency, and examine the limitations of current datasets. The survey concludes by identifying promising directions for future research in this rapidly evolving field.
(This article belongs to the Section Information and Communication Technologies)
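As a concrete anchor for the point-based branch of the taxonomy, here is a minimal PointNet-style segmentation head: a shared per-point MLP, a max-pooled global context vector, and a per-point classifier. Names and layer sizes are illustrative, not any surveyed model.

```python
import torch
import torch.nn as nn

class TinyPointSeg(nn.Module):
    def __init__(self, num_classes: int = 13):
        super().__init__()
        self.point_mlp = nn.Sequential(          # shared MLP over each point
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU(),
        )
        self.head = nn.Sequential(               # classify each point with context
            nn.Conv1d(128 + 128, 128, 1), nn.ReLU(),
            nn.Conv1d(128, num_classes, 1),
        )

    def forward(self, xyz):                      # xyz: (B, 3, N) raw coordinates
        f = self.point_mlp(xyz)                  # per-point features (B, 128, N)
        g = f.max(dim=2, keepdim=True).values    # order-invariant global context
        g = g.expand(-1, -1, xyz.shape[2])       # broadcast back to every point
        return self.head(torch.cat([f, g], 1))   # per-point logits (B, C, N)

pts = torch.randn(4, 3, 4096)
print(TinyPointSeg()(pts).shape)                 # torch.Size([4, 13, 4096])
```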
17 pages, 2007 KiB  
Article
Optimizing Pretrained Autonomous Driving Models Using Deep Reinforcement Learning
by Vasileios Kochliaridis and Ioannis Vlahavas
Appl. Sci. 2025, 15(15), 8411; https://doi.org/10.3390/app15158411 - 29 Jul 2025
Viewed by 113
Abstract
Vision-based end-to-end navigation systems have shown impressive capabilities, especially when combined with Imitation Learning (IL) and advanced Deep Learning architectures such as Transformers. One example is CIL++, a Transformer-based architecture that learns to map navigation states to vehicle controls from expert demonstrations alone. Reliance on expert datasets, however, limits generalization and can lead to failures in unfamiliar situations. Deep Reinforcement Learning (DRL) can address this by fine-tuning the pretrained policy with a reward function that targets its weaknesses through interaction with the environment, but such fine-tuning risks Catastrophic Forgetting (CF), where the policy loses the expert behaviors learned from the demonstrations as it optimizes the new reward. In this paper, we present CILRLv3, a DRL-based training method that is immune to CF, enabling pretrained navigation agents to improve their driving skills in new scenarios.
(This article belongs to the Section Computing and Artificial Intelligence)
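The abstract does not spell out how CILRLv3 achieves immunity to CF, so the sketch below shows one common recipe for the problem it names: a policy-gradient objective anchored by a behavior-cloning penalty toward the frozen pretrained IL policy. Treat this as a labeled assumption, not the paper's method.

```python
import torch

def anchored_policy_loss(logp, advantages, actions, pretrained_actions, beta=0.5):
    # Standard policy-gradient improvement term from environment interaction.
    rl_loss = -(logp * advantages.detach()).mean()
    # Behavior-cloning anchor: stay close to the frozen expert-trained policy,
    # which counteracts forgetting of demonstrated behaviors.
    anchor = ((actions - pretrained_actions.detach()) ** 2).mean()
    return rl_loss + beta * anchor

# Dummy batch: log-probs of sampled controls, advantages, and the control
# outputs (e.g. steer/throttle) of the fine-tuned vs. pretrained networks.
logp = torch.randn(32, requires_grad=True)
adv = torch.randn(32)
a_new = torch.randn(32, 2, requires_grad=True)
a_old = torch.randn(32, 2)
anchored_policy_loss(logp, adv, a_new, a_old).backward()
```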
4 pages, 976 KiB  
Proceeding Paper
Developing a Risk Recognition System Based on a Large Language Model for Autonomous Driving
by Donggyu Min and Dong-Kyu Kim
Eng. Proc. 2025, 102(1), 7; https://doi.org/10.3390/engproc2025102007 - 29 Jul 2025
Viewed by 114
Abstract
Autonomous driving systems have the potential to reduce traffic accidents dramatically, yet conventional perception modules often struggle to detect risks accurately in complex environments. This study presents a risk recognition system that integrates the reasoning capabilities of a large language model (LLM), specifically GPT-4, with traffic engineering domain knowledge. By incorporating surrogate safety measures such as time-to-collision (TTC) alongside traditional sensor and image data, the approach enhances the vehicle's ability to interpret and react to potentially dangerous situations. Using the realistic 3D simulation environment of CARLA, the proposed framework extracts comprehensive data, including object identity, distance, TTC, and vehicle dynamics, reformulates this information into natural-language inputs for GPT-4, and lets the LLM return risk assessments with detailed justifications that guide the autonomous vehicle's control commands. Experimental results show that the LLM-based module outperforms conventional systems by maintaining safer distances, achieving more stable TTC values, and delivering smoother acceleration control in dangerous scenarios. This fusion of LLM reasoning with traffic engineering principles improves the reliability of risk recognition and lays a foundation for future real-time applications and dataset development in autonomous driving safety.
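Two ingredients the abstract combines are easy to make concrete: the TTC surrogate safety measure and the reformulation of structured perception output into a natural-language prompt. The object schema and prompt wording below are hypothetical, and the actual GPT-4 call is omitted.

```python
def time_to_collision(gap_m: float, closing_speed_mps: float) -> float:
    """TTC = gap / closing speed; infinite when the gap is not closing."""
    return gap_m / closing_speed_mps if closing_speed_mps > 1e-6 else float("inf")

def build_risk_prompt(objects) -> str:
    """Turn detections into natural-language input for the LLM risk assessor."""
    lines = ["You assess driving risk. Rate each object low/medium/high "
             "and justify briefly:"]
    for o in objects:
        ttc = time_to_collision(o["distance_m"], o["closing_speed_mps"])
        lines.append(f"- {o['type']}: {o['distance_m']:.0f} m ahead, "
                     f"closing at {o['closing_speed_mps']:.1f} m/s, TTC {ttc:.1f} s")
    return "\n".join(lines)

print(build_risk_prompt([
    {"type": "pedestrian", "distance_m": 18, "closing_speed_mps": 4.2},
    {"type": "vehicle", "distance_m": 45, "closing_speed_mps": 0.0},
]))
```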
32 pages, 6323 KiB  
Article
Design, Implementation and Evaluation of an Immersive Teleoperation Interface for Human-Centered Autonomous Driving
by Irene Bouzón, Jimena Pascual, Cayetana Costales, Aser Crespo, Covadonga Cima and David Melendi
Sensors 2025, 25(15), 4679; https://doi.org/10.3390/s25154679 - 29 Jul 2025
Viewed by 298
Abstract
As autonomous driving technologies advance, human-in-the-loop systems become increasingly critical for ensuring safety, adaptability, and public confidence. This paper presents the design and evaluation of a context-aware immersive teleoperation interface that integrates real-time simulation, virtual reality, and multimodal feedback to support remote interventions in emergency scenarios. Built on a modular ROS2 architecture, the system allows seamless transitions between simulated and physical platforms, enabling safe and reproducible testing. Experimental results show a high task success rate and strong user satisfaction, highlighting the importance of intuitive controls, gesture recognition accuracy, and low-latency feedback. Our findings contribute to the understanding of human-robot interaction (HRI) in immersive teleoperation and offer insights into how multisensory feedback and control modalities build trust and situational awareness for remote operators. Ultimately, this approach is intended to support the broader acceptance of autonomous driving technologies by enhancing human supervision, control, and confidence.
(This article belongs to the Special Issue Human-Centred Smart Manufacturing - Industry 5.0)
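A minimal rclpy sketch of one node such a modular ROS2 stack might contain: it forwards operator input as geometry_msgs/Twist velocity commands. The topic name, rate, and placeholder input reader are assumptions; the paper's actual node graph is considerably richer.

```python
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import Twist

class TeleopBridge(Node):
    def __init__(self):
        super().__init__('teleop_bridge')
        self.pub = self.create_publisher(Twist, '/vehicle/cmd_vel', 10)
        self.create_timer(0.05, self.tick)      # 20 Hz keeps control latency low

    def tick(self):
        msg = Twist()
        msg.linear.x, msg.angular.z = self.read_operator_input()
        self.pub.publish(msg)

    def read_operator_input(self):
        # Placeholder for the VR-controller / gesture-recognition pipeline.
        return 1.0, 0.0

def main():
    rclpy.init()
    rclpy.spin(TeleopBridge())

if __name__ == '__main__':
    main()
```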
25 pages, 2518 KiB  
Article
An Efficient Semantic Segmentation Framework with Attention-Driven Context Enhancement and Dynamic Fusion for Autonomous Driving
by Jia Tian, Peizeng Xin, Xinlu Bai, Zhiguo Xiao and Nianfeng Li
Appl. Sci. 2025, 15(15), 8373; https://doi.org/10.3390/app15158373 - 28 Jul 2025
Viewed by 301
Abstract
In recent years, a growing number of real-time semantic segmentation networks have pursued higher segmentation accuracy, but these gains often come at the cost of increased computational complexity, limiting inference efficiency in scenarios such as autonomous driving where strict real-time performance is essential. Balancing speed and accuracy has thus become a central challenge in the field. To address it, we present a lightweight semantic segmentation model tailored to the perception requirements of autonomous vehicles. The architecture follows an encoder–decoder paradigm that preserves deep feature extraction while facilitating multi-scale information integration: the encoder uses a high-efficiency backbone, and the decoder introduces a dynamic fusion mechanism that enhances information exchange between feature branches. To offset the limitations of convolutional networks in modeling long-range dependencies and global semantic context, the model incorporates an attention-based feature extraction component, augmented by positional encoding for better awareness of spatial structure and local detail. The dynamic fusion mechanism applies an adaptive weighting strategy that adjusts the contribution of each feature channel, reducing redundancy and improving representation quality. Evaluated on a single RTX 3090 GPU, the Dynamic Real-time Integrated Vision Encoder–Segmenter Network (DriveSegNet) achieved a mean Intersection over Union (mIoU) of 76.9% at 70.5 FPS on the Cityscapes test set, 74.6% mIoU at 139.8 FPS on the CamVid test set, and 35.8% mIoU at 108.4 FPS on ADE20K, demonstrating an excellent balance between inference speed, segmentation accuracy, and model size.
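The dynamic fusion mechanism is described as adaptively weighting feature channels from different branches. A minimal PyTorch sketch of input-conditioned weighted fusion follows; the module name and layer choices are assumptions rather than DriveSegNet's design.

```python
import torch
import torch.nn as nn

class DynamicFusion(nn.Module):
    """Fuse two decoder branches with per-channel weights predicted from the
    inputs themselves, letting the network damp redundant channels per image."""
    def __init__(self, channels: int):
        super().__init__()
        self.weigher = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                 # global channel statistics
            nn.Conv2d(2 * channels, 2 * channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, a, b):
        w = self.weigher(torch.cat([a, b], dim=1))
        wa, wb = w.chunk(2, dim=1)                   # weights for each branch
        return wa * a + wb * b

a = torch.randn(1, 128, 64, 128)                     # e.g. detail branch
b = torch.randn(1, 128, 64, 128)                     # e.g. context branch
print(DynamicFusion(128)(a, b).shape)                # torch.Size([1, 128, 64, 128])
```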
1 page, 127 KiB  
Correction
Correction: Feng et al. Enhancing Autonomous Driving Perception: A Practical Approach to Event-Based Object Detection in CARLA and ROS. Vehicles 2025, 7, 53
by Jingxiang Feng, Peiran Zhao, Haoran Zheng, Jessada Konpang, Adisorn Sirikham and Phuri Kalnaowakul
Vehicles 2025, 7(3), 78; https://doi.org/10.3390/vehicles7030078 - 28 Jul 2025
Viewed by 70
Abstract
In the published publication [...]
46 pages, 125285 KiB  
Article
ROS-Based Autonomous Driving System with Enhanced Path Planning Node Validated in Chicane Scenarios
by Mohamed Reda, Ahmed Onsy, Amira Y. Haikal and Ali Ghanbari
Actuators 2025, 14(8), 375; https://doi.org/10.3390/act14080375 - 27 Jul 2025
Viewed by 169
Abstract
In modern vehicles, Autonomous Driving Systems (ADSs) are designed to operate partially or fully without human intervention. The ADS pipeline comprises multiple layers, including sensors, perception, localization, mapping, path planning, and control, and the Robot Operating System (ROS) is a widely adopted framework for developing and integrating these layers modularly. The path-planning and control layers remain particularly challenging: classical planners often struggle with non-smooth trajectories and high computational demands; meta-heuristic optimization algorithms show strong theoretical potential for path planning but are rarely implemented in real-time ROS-based systems due to integration challenges; and traditional PID controllers require manual tuning and cannot adapt to disturbances. This paper proposes a ROS-based ADS architecture composed of eight integrated nodes that addresses these limitations. The path-planning node uses a meta-heuristic optimization framework with a cost function that evaluates path feasibility from Hector SLAM occupancy grids and obstacle clusters detected with the DBSCAN algorithm, and a dynamic goal-allocation strategy based on LiDAR range and spatial boundaries adds planning flexibility. In the control layer, a modified Pure Pursuit algorithm translates target positions into velocity commands based on the drift angle, and an adaptive PID controller tuned in real time by the Differential Evolution (DE) algorithm ensures robust speed regulation under external disturbances. The system is validated on a four-wheel differential-drive robot across six scenarios. Experimental results show that the proposed planner significantly outperforms state-of-the-art methods, ranking first in the Friedman test at the 0.05 significance level and confirming the effectiveness of the architecture.
(This article belongs to the Section Control Systems)
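The obstacle-clustering step pairs an occupancy grid with DBSCAN over LiDAR returns. A small sketch with synthetic 2D scan points and illustrative eps/min_samples values, using scikit-learn's DBSCAN:

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(2)
scan = np.vstack([                            # hypothetical 2D LiDAR returns (m)
    rng.normal([2.0, 1.0], 0.05, (40, 2)),    # dense returns from obstacle 1
    rng.normal([4.5, -0.5], 0.05, (40, 2)),   # dense returns from obstacle 2
    rng.uniform(-6, 6, (20, 2)),              # sparse clutter -> labeled noise
])

labels = DBSCAN(eps=0.3, min_samples=5).fit_predict(scan)
for k in sorted(set(labels) - {-1}):
    centroid = scan[labels == k].mean(axis=0)
    print(f"obstacle cluster {k}: centroid {centroid.round(2)}")
```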
19 pages, 7674 KiB  
Article
Development of Low-Cost Single-Chip Automotive 4D Millimeter-Wave Radar
by Yongjun Cai, Jie Bai, Hui-Liang Shen, Libo Huang, Bing Rao and Haiyang Wang
Sensors 2025, 25(15), 4640; https://doi.org/10.3390/s25154640 - 26 Jul 2025
Viewed by 404
Abstract
Traditional 3D millimeter-wave radars lack target height information, leading to identification failures in complex scenarios. Upgrading to 4D millimeter-wave radar enables four-dimensional perception, enhancing obstacle detection and improving the safety of autonomous driving. Given the high cost-sensitivity of in-vehicle radar systems, single-chip 4D millimeter-wave radar solutions, despite their technical challenges in imaging, are of great research value. This study develops a low-cost single-chip 4D automotive millimeter-wave radar, covering system architecture design, antenna optimization, signal processing algorithm development, and performance validation. The maximum measurement error is approximately ±0.2° for azimuth angles within ±30° and around ±0.4° for elevation angles within ±13°. Extensive road testing demonstrates that the radar reliably measures dynamic targets such as vehicles, pedestrians, and bicycles, while also accurately detecting static infrastructure such as overpasses and traffic signs.
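The core of range and velocity estimation in FMCW radars of this kind is a 2D FFT over fast time (range) and slow time (Doppler). A self-contained numeric sketch with illustrative chirp parameters, not the paper's chip configuration:

```python
import numpy as np

n_samples, n_chirps = 256, 64           # fast time per chirp x slow time
fs, slope = 10e6, 30e12                 # ADC rate (Hz), chirp slope (Hz/s)
fc, tc, c = 77e9, 60e-6, 3e8            # carrier, chirp interval, light speed

t = np.arange(n_samples) / fs
target_range, target_vel = 20.0, 5.0    # ground truth: 20 m, 5 m/s
f_beat = 2 * slope * target_range / c   # beat frequency encodes range
f_dopp = 2 * target_vel * fc / c        # chirp-to-chirp phase encodes velocity
cube = np.array([np.exp(2j * np.pi * (f_beat * t + f_dopp * m * tc))
                 for m in range(n_chirps)])

rd_map = np.fft.fftshift(np.fft.fft2(cube), axes=0)   # range-Doppler map
m, n = np.unravel_index(np.abs(rd_map).argmax(), rd_map.shape)
rng_est = n * (fs / n_samples) * c / (2 * slope)
vel_est = (m - n_chirps // 2) / (n_chirps * tc) * c / (2 * fc)
print(f"estimated range {rng_est:.1f} m, velocity {vel_est:.2f} m/s")
```

Elevation in a 4D radar comes from repeating the same angle estimation along a second (vertical) antenna axis, which is the hard part under a single-chip antenna budget.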
26 pages, 12786 KiB  
Article
EMB System Design and Clamping Force Tracking Control Research
by Junyi Zou, Haojun Yan, Yunbing Yan and Xianping Huang
Modelling 2025, 6(3), 72; https://doi.org/10.3390/modelling6030072 - 25 Jul 2025
Viewed by 317
Abstract
The electromechanical braking (EMB) system is an important component of intelligent vehicles and the core actuator for longitudinal dynamic control in autonomous driving. We propose a new mechanical layout for the EMB together with a feedforward second-order linear active disturbance rejection controller (LADRC) based on the brake-caliper clamping force. The layout resolves the excessive axial length of traditional EMB designs, reducing axial length by 30%, while consolidating the per-wheel PCB control board on the EMB housing; this allows the ABS and ESP functions to be integrated into the EMB system, further enhancing the integration of brake-by-wire and active safety functions. Compared with traditional clamping-force control methods, namely three-loop PID and adaptive fuzzy PID, the proposed controller improves response speed, steady-state error, and disturbance rejection, and is easier to tune. Simulation results show that the response is 130 ms faster, overshoot is reduced by 9.85%, and disturbance rejection improves by 41.2%. Finally, the feasibility of the control algorithm was verified on an EMB hardware-in-the-loop test bench.
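A second-order LADRC has a standard shape: a three-state extended state observer (ESO) whose third state estimates the lumped "total disturbance", plus a PD law that cancels it. The discrete-time toy loop below shows that structure on a stand-in plant; gains, bandwidths, and the plant are illustrative, and the paper's feedforward term and EMB dynamics are not reproduced.

```python
dt, wo, wc, b0 = 1e-3, 100.0, 40.0, 50.0
beta1, beta2, beta3 = 3 * wo, 3 * wo**2, wo**3   # ESO gains from observer bandwidth
kp, kd = wc**2, 2 * wc                           # PD gains from controller bandwidth

z1 = z2 = z3 = 0.0        # ESO estimates: output, derivative, total disturbance
y = ydot = 0.0            # toy second-order clamping-force plant state
r = 1.0                   # normalized clamping-force setpoint

for k in range(3000):                             # 3 s of simulation
    u = (kp * (r - z1) - kd * z2 - z3) / b0       # cancel estimated disturbance
    e = y - z1                                    # observer correction term
    z1 += dt * (z2 + beta1 * e)
    z2 += dt * (z3 + beta2 * e + b0 * u)
    z3 += dt * (beta3 * e)
    load = 20.0 if k * dt > 1.5 else 0.0          # step load disturbance at 1.5 s
    yddot = b0 * u - 5.0 * ydot + load            # stand-in plant dynamics
    ydot += dt * yddot
    y += dt * ydot

print(f"clamping force after 3 s: {y:.3f} (setpoint {r})")  # recovers ~1.000
```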
21 pages, 9651 KiB  
Article
Self-Supervised Visual Tracking via Image Synthesis and Domain Adversarial Learning
by Gu Geng, Sida Zhou, Jianing Tang, Xinming Zhang, Qiao Liu and Di Yuan
Sensors 2025, 25(15), 4621; https://doi.org/10.3390/s25154621 - 25 Jul 2025
Viewed by 189
Abstract
With the widespread use of sensors in applications such as autonomous driving and intelligent security, stable and efficient target tracking from diverse sensor data has become increasingly important. Self-supervised visual tracking has attracted growing attention for its potential to eliminate reliance on costly manual annotations; however, existing methods often train on incomplete object representations, resulting in inaccurate localization at inference, and typically struggle when applied to deep networks. To address these limitations, we propose a self-supervised tracking framework based on image synthesis and domain adversarial learning. We first construct a large-scale database of real-world target objects, then synthesize training video pairs by randomly inserting these targets into background frames while applying geometric and appearance transformations to simulate realistic variations. To reduce the domain shift introduced by synthetic content, we add a domain classification branch after feature extraction and use domain adversarial training to align features between the real and synthetic domains. Experiments on five standard tracking benchmarks show that our method significantly improves tracking accuracy over existing self-supervised approaches without any additional labeling cost. The framework ensures complete target coverage during training, scales to deeper network architectures, and offers a practical solution for real-world tracking applications.
(This article belongs to the Special Issue AI-Based Computer Vision Sensors & Systems)
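Domain adversarial training of this kind is typically implemented with a gradient reversal layer in front of the domain classifier: the classifier learns to separate real from synthetic features, while the reversed gradient pushes the feature extractor to make them indistinguishable. A minimal PyTorch sketch; the classifier and loss weighting are assumptions.

```python
import torch
import torch.nn.functional as F
from torch.autograd import Function

class GradReverse(Function):
    """Identity in the forward pass; gradient scaled by -lambda in the backward
    pass, so features are optimized to fool the domain classifier."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

features = torch.randn(8, 256, requires_grad=True)   # tracker backbone features
domain = torch.randint(0, 2, (8,))                   # 0 = real, 1 = synthetic
clf = torch.nn.Linear(256, 2)                        # simple domain classifier
logits = clf(GradReverse.apply(features, 1.0))
F.cross_entropy(logits, domain).backward()           # feature grads now point away
                                                     # from domain separability
```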