Search Results (99)

Search Parameters:
Keywords = radar–camera fusion

19 pages, 31351 KiB  
Article
Adaptive Fusion of LiDAR Features for 3D Object Detection in Autonomous Driving
by Mingrui Wang, Dongjie Li, Josep R. Casas and Javier Ruiz-Hidalgo
Sensors 2025, 25(13), 3865; https://doi.org/10.3390/s25133865 - 21 Jun 2025
Viewed by 1067
Abstract
In the field of autonomous driving, cooperative perception through vehicle-to-vehicle communication significantly enhances environmental understanding by leveraging multi-sensor data, including LiDAR, cameras, and radar. However, traditional early and late fusion methods demand high bandwidth and computational resources, making it difficult to balance data transmission efficiency with the accuracy of perception of the surrounding environment, especially for the detection of smaller objects such as pedestrians. To address these challenges, this paper proposes a novel cooperative perception framework based on two-stage intermediate-level sensor feature fusion, designed specifically for complex traffic scenarios in which pedestrians and vehicles coexist. In such scenarios, the model detects small objects such as pedestrians more accurately than mainstream perception methods while also improving cooperative perception accuracy for medium and large objects such as vehicles. Furthermore, to thoroughly validate the reliability of the proposed model, we conducted both qualitative and quantitative experiments on mainstream simulated and real-world datasets. The experimental results demonstrate that our approach outperforms state-of-the-art perception models in terms of mAP, achieving up to a 4.1% improvement in vehicle detection accuracy and a remarkable 29.2% improvement in pedestrian detection accuracy. Full article
(This article belongs to the Special Issue Sensor Fusion in Positioning and Navigation)
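
The intermediate-level fusion idea in this abstract can be illustrated with a minimal sketch: each connected vehicle shares a compressed bird's-eye-view (BEV) feature map instead of raw point clouds or final boxes, and the ego vehicle fuses the received maps before its detection head. The tensor shapes, the 1x1 compression layers, and the element-wise max fusion below are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class IntermediateFusion(nn.Module):
    """Illustrative two-step fusion of per-agent BEV feature maps.

    Step 1: compress each agent's features (reduces V2V bandwidth).
    Step 2: fuse the decompressed maps on the ego vehicle before detection.
    Shapes and layer choices are assumptions for this sketch only.
    """

    def __init__(self, channels: int = 256, compressed: int = 32):
        super().__init__()
        self.compress = nn.Conv2d(channels, compressed, kernel_size=1)    # sent over the V2V link
        self.decompress = nn.Conv2d(compressed, channels, kernel_size=1)  # run on the ego side
        self.refine = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, ego_feat: torch.Tensor, coop_feats: list) -> torch.Tensor:
        # Each cooperating agent transmits a compressed map; the ego decompresses it.
        received = [self.decompress(self.compress(f)) for f in coop_feats]
        # Element-wise max keeps the strongest response per BEV cell across agents.
        fused = ego_feat
        for r in received:
            fused = torch.maximum(fused, r)
        return self.refine(fused)

# Toy usage: one ego map and two cooperating vehicles, 256-channel 128x128 BEV grids.
ego = torch.randn(1, 256, 128, 128)
coop = [torch.randn(1, 256, 128, 128) for _ in range(2)]
print(IntermediateFusion()(ego, coop).shape)  # torch.Size([1, 256, 128, 128])
```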

26 pages, 24577 KiB  
Article
Infra-3DRC-FusionNet: Deep Fusion of Roadside Mounted RGB Mono Camera and Three-Dimensional Automotive Radar for Traffic User Detection
by Shiva Agrawal, Savankumar Bhanderi and Gordon Elger
Sensors 2025, 25(11), 3422; https://doi.org/10.3390/s25113422 - 29 May 2025
Cited by 1 | Viewed by 665
Abstract
Mono RGB cameras and automotive radar sensors provide complementary information, which makes them excellent candidates for sensor data fusion to obtain robust traffic user detection. Such fusion has been widely used in the vehicle domain and has recently been introduced in roadside-mounted smart-infrastructure-based road user detection. However, the performance of the most commonly used late fusion methods often degrades when the camera fails to detect road users in adverse environmental conditions. One solution is to fuse the data with deep neural networks at an early stage of the fusion pipeline so that the complete data provided by both sensors are exploited. Research has been carried out in this area but is limited to vehicle-based sensor setups. Hence, this work proposes a novel deep neural network that jointly fuses RGB mono-camera images and 3D automotive radar point cloud data to obtain enhanced traffic user detection for roadside-mounted smart infrastructure setups. Projected radar points are first used to generate anchors in image regions with a high likelihood of road users, including areas not visible to the camera. These anchors guide the prediction of 2D bounding boxes, object categories, and confidence scores. Valid detections are then used to segment radar points by instance, and the results are post-processed to produce final road user detections in the ground plane. The trained model is evaluated under different light and weather conditions using ground truth data from a lidar sensor. It achieves a precision of 92%, a recall of 78%, and an F1-score of 85%, corresponding to absolute improvements of 33%, 6%, and 21%, respectively, over object-level spatial fusion. Full article
(This article belongs to the Special Issue Multi-sensor Integration for Navigation and Environmental Sensing)
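
The anchor-generation step described in the abstract hinges on projecting 3D radar points into the camera image. A minimal sketch of that projection is shown below; the intrinsic/extrinsic matrices, the point layout, and the fixed anchor size are placeholder assumptions, not values from the paper.

```python
import numpy as np

def project_radar_to_image(points_xyz: np.ndarray, K: np.ndarray, T_cam_radar: np.ndarray) -> np.ndarray:
    """Project Nx3 radar points (radar frame) into pixel coordinates.

    K           : 3x3 camera intrinsic matrix (assumed known from calibration).
    T_cam_radar : 4x4 homogeneous transform from radar frame to camera frame.
    Returns an Mx2 array of (u, v) pixels for points in front of the camera.
    """
    homog = np.hstack([points_xyz, np.ones((points_xyz.shape[0], 1))])   # N x 4
    cam = (T_cam_radar @ homog.T).T[:, :3]                               # N x 3 in camera frame
    cam = cam[cam[:, 2] > 0.5]                                           # keep points in front of the camera
    uv = (K @ cam.T).T
    return uv[:, :2] / uv[:, 2:3]                                        # perspective division

def anchors_from_projections(uv: np.ndarray, box_wh=(64, 128)) -> list:
    """Place one fixed-size anchor box around each projected radar point (illustrative only)."""
    w, h = box_wh
    return [(u - w / 2, v - h / 2, u + w / 2, v + h / 2) for u, v in uv]

# Toy example with an identity extrinsic and a generic intrinsic matrix.
K = np.array([[800.0, 0.0, 640.0], [0.0, 800.0, 360.0], [0.0, 0.0, 1.0]])
radar_points = np.array([[2.0, 0.5, 15.0], [-1.0, 0.3, 30.0]])  # assumed x, y, z in metres
print(anchors_from_projections(project_radar_to_image(radar_points, K, np.eye(4))))
```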

30 pages, 20203 KiB  
Article
Multi-Feature Fusion Method Based on Adaptive Dilation Convolution for Small-Object Detection
by Lin Cao, Jin Wu, Zongmin Zhao, Chong Fu and Dongfeng Wang
Sensors 2025, 25(10), 3182; https://doi.org/10.3390/s25103182 - 18 May 2025
Cited by 1 | Viewed by 560
Abstract
This paper addresses the challenge of small-object detection in traffic surveillance by proposing a hybrid network architecture that combines attention mechanisms with convolutional layers. The network introduces an innovative attention mechanism into the YOLOv8 backbone, which effectively enhances the detection accuracy and robustness of small objects through fine-grained and coarse-grained attention routing on feature maps. During the feature fusion stage, we employ adaptive dilated convolution, which dynamically adjusts the dilation rate spatially based on frequency components. This adaptive convolution kernel helps preserve the details of small objects while strengthening their feature representation. It also expands the receptive field, which is beneficial for capturing contextual information and the overall features of small objects. Our method improves Average Precision (AP) by 1% on the UA-DETRAC test dataset and by 3% on the VisDrone test dataset compared to state-of-the-art methods. The experiments indicate that the new architecture achieves significant performance improvements across various evaluation metrics. To fully leverage the potential of our approach, we conducted extended research on radar–camera systems. Full article
(This article belongs to the Section Sensing and Imaging)
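
The frequency-adaptive dilation described above can be approximated in a simplified way: run one shared kernel at two dilation rates and blend the outputs per pixel with a high-frequency score. This is only a sketch of the idea under those assumptions, not the paper's module; the Laplacian-based gate and the dilation rates 1 and 3 are illustrative choices.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftAdaptiveDilation(nn.Module):
    """Simplified stand-in for frequency-adaptive dilated convolution.

    Two branches share one 3x3 kernel but use different dilation rates; a
    per-pixel high-frequency score (Laplacian magnitude) blends them, so
    detail-rich regions lean on the small receptive field and smooth regions
    on the large one.
    """

    def __init__(self, channels: int):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(channels, channels, 3, 3) * 0.02)
        lap = torch.tensor([[0., 1., 0.], [1., -4., 1.], [0., 1., 0.]])
        self.register_buffer("laplacian", lap.view(1, 1, 3, 3))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        small = F.conv2d(x, self.weight, padding=1, dilation=1)   # fine-detail branch
        large = F.conv2d(x, self.weight, padding=3, dilation=3)   # wide-context branch
        # Per-pixel frequency score from the channel-mean Laplacian response.
        freq = F.conv2d(x.mean(dim=1, keepdim=True), self.laplacian, padding=1).abs()
        gate = torch.sigmoid(freq - freq.mean())                  # ~1 where detail is high
        return gate * small + (1.0 - gate) * large

feat = torch.randn(1, 64, 80, 80)
print(SoftAdaptiveDilation(64)(feat).shape)  # torch.Size([1, 64, 80, 80])
```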

12 pages, 1987 KiB  
Communication
Clutter Mitigation in Indoor Radar Sensors Using Sensor Fusion Technology
by Srishti Singh, Ha-Neul Lee, Yuna Park, Sungho Kim, Si-Hyun Park and Jong-Ryul Yang
Sensors 2025, 25(10), 3113; https://doi.org/10.3390/s25103113 - 14 May 2025
Viewed by 694
Abstract
A methodology utilizing low-resolution camera data is proposed to mitigate clutter effects on radar sensors in smart indoor environments. The proposed technique suppresses clutter in distance–velocity (range–Doppler) images obtained from millimeter-wave radar by estimating clutter locations using approximate spatial information derived from low-resolution camera images. Notably, the inherent blur present in low-resolution images closely corresponds to the distortion patterns induced by clutter in radar signals, making such data particularly suitable for effective sensor fusion. Experimental validation was conducted in indoor path-tracking scenarios involving a moving subject within a 10 m range. Performance was quantitatively evaluated against baseline range–Doppler maps obtained using radar data alone, without clutter mitigation. The results show that our approach improves the signal-to-noise ratio by 2 dB and increases the target detection rate by 8.6% within the critical 4–6 m range, with additional gains observed under constrained velocity conditions. Full article
(This article belongs to the Special Issue Waveform for Joint Radar and Communications)
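
The core operation described above, suppressing range–Doppler cells at clutter locations estimated from the low-resolution camera, can be sketched as follows. The FFT layout, the Gaussian soft notch, and the way clutter range bins are passed in are assumptions for illustration only, not the paper's processing chain.

```python
import numpy as np

def range_doppler_map(iq: np.ndarray) -> np.ndarray:
    """FMCW range-Doppler map (magnitude in dB) from an (n_chirps, n_samples) IQ matrix."""
    rng = np.fft.fft(iq, axis=1)                             # range FFT along fast time
    rd = np.fft.fftshift(np.fft.fft(rng, axis=0), axes=0)    # Doppler FFT along slow time
    return 20.0 * np.log10(np.abs(rd) + 1e-9)

def suppress_clutter(rd_db: np.ndarray, clutter_bins, width: float = 2.0,
                     attenuation_db: float = 20.0) -> np.ndarray:
    """Attenuate range bins flagged as clutter by the camera-based estimator.

    clutter_bins : range-bin indices where static clutter is expected (assumed to come
                   from mapping camera-detected objects to range).
    A smooth Gaussian notch avoids hard edges in the map.
    """
    n_range = rd_db.shape[1]
    bins = np.arange(n_range, dtype=float)
    mask_db = np.zeros(n_range)
    for c in clutter_bins:
        mask_db = np.maximum(mask_db, attenuation_db * np.exp(-0.5 * ((bins - c) / width) ** 2))
    return rd_db - mask_db[None, :]   # subtract the attenuation from every Doppler row

# Toy data: 64 chirps x 256 samples of noise, with clutter expected near range bins 40 and 120.
rd = range_doppler_map(np.random.randn(64, 256) + 1j * np.random.randn(64, 256))
cleaned = suppress_clutter(rd, clutter_bins=[40, 120])
print(rd.shape, cleaned.shape)
```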

23 pages, 6679 KiB  
Article
Fusion Ranging Method of Monocular Camera and Millimeter-Wave Radar Based on Improved Extended Kalman Filtering
by Ye Chen, Qirui Cui and Shungeng Wang
Sensors 2025, 25(10), 3045; https://doi.org/10.3390/s25103045 - 12 May 2025
Viewed by 675
Abstract
To address the limitations of single-sensor systems in environmental perception, such as the difficulty in comprehensively capturing complex environmental information and insufficient detection accuracy and robustness in dynamic environments, this study proposes a distance measurement method based on the fusion of millimeter-wave (MMW) radar and a monocular camera. Initially, a monocular ranging model was constructed based on object detection algorithms. Subsequently, a pixel-distance joint dual-constraint matching algorithm was employed to accomplish cross-modal matching between the MMW radar and the monocular camera. Furthermore, an adaptive fuzzy extended Kalman filter (AFEKF) algorithm was established to fuse the ranging data acquired from the monocular camera and the MMW radar. Experimental results demonstrate that the AFEKF algorithm achieved an average root mean square error (RMSE) of 0.2131 m across 15 test datasets. Compared to the raw MMW radar data, inverse variance weighting (IVW) filtering, and the traditional extended Kalman filter (EKF), the AFEKF algorithm improved the average RMSE by 10.54%, 11.10%, and 22.57%, respectively. The AFEKF algorithm improves the extended Kalman filter by integrating an adaptive fuzzy mechanism, providing a reliable and effective solution for enhancing localization accuracy and system stability. Full article
(This article belongs to the Section Radar Sensors)
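
The filtering step in this abstract fuses two noisy distance streams. A plain linear Kalman filter over a constant-velocity range state is sketched below to show the structure; the adaptive fuzzy weighting that distinguishes the AFEKF from a standard EKF is not reproduced, and all noise values are illustrative assumptions.

```python
import numpy as np

class RangeFusionKF:
    """Constant-velocity Kalman filter fusing camera and radar range measurements.

    State x = [distance, radial_velocity]. Both sensors directly observe distance,
    so the measurement model is linear here; noise levels are illustrative only.
    """

    def __init__(self, dt: float, r_camera: float = 0.6, r_radar: float = 0.15):
        self.F = np.array([[1.0, dt], [0.0, 1.0]])       # constant-velocity motion model
        self.Q = np.diag([0.05, 0.1])                    # process noise (assumed)
        self.H = np.array([[1.0, 0.0]])                  # both sensors measure distance
        self.R = {"camera": np.array([[r_camera ** 2]]),
                  "radar": np.array([[r_radar ** 2]])}
        self.x = np.array([[10.0], [0.0]])               # initial guess: 10 m, stationary
        self.P = np.eye(2) * 5.0

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q

    def update(self, z: float, sensor: str):
        y = np.array([[z]]) - self.H @ self.x                     # innovation
        S = self.H @ self.P @ self.H.T + self.R[sensor]
        K = self.P @ self.H.T @ np.linalg.inv(S)                  # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(2) - K @ self.H) @ self.P

kf = RangeFusionKF(dt=0.05)
for cam_d, radar_d in [(9.8, 10.1), (9.6, 9.9), (9.5, 9.7)]:
    kf.predict()
    kf.update(radar_d, "radar")   # radar range: low noise
    kf.update(cam_d, "camera")    # monocular range: higher noise
print(float(kf.x[0]))             # fused distance estimate in metres
```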

28 pages, 3675 KiB  
Review
Advancements in Millimeter-Wave Radar Technologies for Automotive Systems: A Signal Processing Perspective
by Boxun Yan and Ian P. Roberts
Electronics 2025, 14(7), 1436; https://doi.org/10.3390/electronics14071436 - 2 Apr 2025
Cited by 1 | Viewed by 3032
Abstract
This review paper provides a comprehensive examination of millimeter-wave radar technologies in automotive systems, reviewing their advancements through signal processing innovations. The evolution of radar systems, from conventional platforms to mmWave technologies, has significantly enhanced capabilities such as high-resolution imaging, real-time tracking, and multi-object detection. Signal processing advancements, including constant false alarm rate detection, multiple-input–multiple-output systems, and machine learning-based techniques, are explored for their roles in improving radar performance under dynamic and challenging environments. The integration of mmWave radar with complementary sensing technologies such as LiDAR and cameras facilitates robust environmental perception essential for advanced driver-assistance systems and autonomous vehicles. This review also calls attention to key challenges, including environmental interference, material penetration, and sensor fusion, while addressing innovative solutions such as adaptive signal processing and sensor integration. Emerging applications of joint communication–radar systems further demonstrate the potential of mmWave radar in autonomous driving and vehicle-to-everything communications. By synthesizing recent developments and identifying future directions, this review stresses the critical role of mmWave radar in advancing vehicular safety, efficiency, and autonomy. Full article
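
Among the signal-processing blocks named in this review, constant false alarm rate (CFAR) detection is the most self-contained to sketch. Below is a basic 1D cell-averaging CFAR on a range profile; the guard/training window sizes and the scaling factor are illustrative values, not ones taken from the paper.

```python
import numpy as np

def ca_cfar(power: np.ndarray, guard: int = 2, train: int = 8, scale: float = 4.0) -> np.ndarray:
    """1D cell-averaging CFAR: flag cells whose power exceeds scale x local noise estimate.

    guard : cells on each side of the cell under test excluded from the noise estimate.
    train : training cells on each side used to estimate the local noise floor.
    scale : threshold factor controlling the false alarm rate (illustrative value).
    """
    n = power.size
    detections = np.zeros(n, dtype=bool)
    for i in range(train + guard, n - train - guard):
        lead = power[i - guard - train : i - guard]
        lag = power[i + guard + 1 : i + guard + train + 1]
        noise = np.mean(np.concatenate([lead, lag]))
        detections[i] = power[i] > scale * noise
    return detections

# Toy range profile: exponential noise floor with two strong targets at bins 60 and 150.
rng = np.random.default_rng(0)
profile = rng.exponential(1.0, 256)
profile[60] += 25.0
profile[150] += 15.0
print(np.flatnonzero(ca_cfar(profile)))   # expected to include bins 60 and 150
```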

22 pages, 5414 KiB  
Article
ARC-LIGHT: Algorithm for Robust Characterization of Lunar Surface Imaging for Ground Hazards and Trajectory
by Alexander Cushen, Ariana Bueno, Samuel Carrico, Corrydon Wettstein, Jaykumar Ishvarbhai Adalja, Mengxiang Shi, Naila Garcia, Yuliana Garcia, Mirko Gamba and Christopher Ruf
Aerospace 2025, 12(3), 177; https://doi.org/10.3390/aerospace12030177 - 24 Feb 2025
Cited by 1 | Viewed by 1287
Abstract
Safe and reliable lunar landings are crucial for future exploration of the Moon. The regolith ejected by a lander's rocket exhaust plume represents a significant obstacle to achieving this goal: it prevents spacecraft from reliably using their navigation sensors to monitor their trajectory and spot emerging surface hazards as they near the surface. As part of NASA's 2024 Human Lander Challenge (HuLC), the team at the University of Michigan developed an innovative concept to help mitigate this issue. We developed and implemented a machine learning (ML)-based sensor fusion system, ARC-LIGHT, that integrates sensor data from the cameras, lidars, or radars that landers already carry but disable during the final landing phase. Using these data streams, ARC-LIGHT removes erroneous signals and recovers a useful detection of surface features, which the spacecraft can then use to correct its descent profile. It also offers a layer of redundancy for other key sensors, like inertial measurement units. The feasibility of this technology was validated through development of a prototype algorithm, which was trained on data from a purpose-built testbed that simulates imaging through a dusty environment. Based on these findings, a development timeline, risk analysis, and budget for deploying ARC-LIGHT on a lunar landing were created. Full article
(This article belongs to the Special Issue Lunar, Planetary, and Small-Body Exploration)

52 pages, 4917 KiB  
Review
Exploring the Unseen: A Survey of Multi-Sensor Fusion and the Role of Explainable AI (XAI) in Autonomous Vehicles
by De Jong Yeong, Krishna Panduru and Joseph Walsh
Sensors 2025, 25(3), 856; https://doi.org/10.3390/s25030856 - 31 Jan 2025
Cited by 7 | Viewed by 8527
Abstract
Autonomous vehicles (AVs) rely heavily on multi-sensor fusion to perceive their environment and make critical, real-time decisions by integrating data from various sensors such as radar, cameras, Lidar, and GPS. However, the complexity of these systems often leads to a lack of transparency, posing challenges in terms of safety, accountability, and public trust. This review investigates the intersection of multi-sensor fusion and explainable artificial intelligence (XAI), aiming to address the challenges of implementing accurate and interpretable AV systems. We systematically review cutting-edge multi-sensor fusion techniques, along with various explainability approaches, in the context of AV systems. While multi-sensor fusion technologies have achieved significant advances in improving AV perception, the lack of transparency and explainability in autonomous decision-making remains a primary challenge. Our findings underscore the necessity of a balanced approach to integrating XAI and multi-sensor fusion in autonomous driving applications, acknowledging the trade-offs between real-time performance and explainability. The key challenges identified span a range of technical, social, ethical, and regulatory aspects. We conclude by underscoring the importance of developing techniques that provide real-time explainability to stakeholders, particularly in high-stakes applications, without compromising safety and accuracy, and by outlining future research directions aimed at bridging the gap between high-performance multi-sensor fusion and trustworthy explainability in autonomous driving systems. Full article
(This article belongs to the Special Issue Advances in Physical, Chemical, and Biosensors)

17 pages, 22331 KiB  
Article
Depth Estimation Based on MMwave Radar and Camera Fusion with Attention Mechanisms and Multi-Scale Features for Autonomous Driving Vehicles
by Zhaohuan Zhu, Feng Wu, Wenqing Sun, Quanying Wu, Feng Liang and Wuhan Zhang
Electronics 2025, 14(2), 300; https://doi.org/10.3390/electronics14020300 - 13 Jan 2025
Cited by 1 | Viewed by 1883
Abstract
Autonomous driving vehicles have strong path planning and obstacle avoidance capabilities, which provide great support to avoid traffic accidents. Autonomous driving has become a research hotspot worldwide. Depth estimation is a key technology in autonomous driving as it provides an important basis for accurately detecting traffic objects and avoiding collisions in advance. However, the current difficulties in depth estimation include insufficient estimation accuracy, difficulty in acquiring depth information using monocular vision, and an important challenge of fusing multiple sensors for depth estimation. To enhance depth estimation performance in complex traffic environments, this study proposes a depth estimation method in which point clouds and images obtained from MMwave radar and cameras are fused. Firstly, a residual network is established to extract the multi-scale features of the MMwave radar point clouds and the corresponding image obtained simultaneously from the same location. Correlations between the radar points and the image are established by fusing the extracted multi-scale features. A semi-dense depth estimation is achieved by assigning the depth value of the radar point to the most relevant image region. Secondly, a bidirectional feature fusion structure with additional fusion branches is designed to enhance the richness of the feature information. The information loss during the feature fusion process is reduced, and the robustness of the model is enhanced. Finally, parallel channel and position attention mechanisms are used to enhance the feature representation of the key areas in the fused feature map, the interference of irrelevant areas is suppressed, and the depth estimation accuracy is enhanced. The experimental results on the public dataset nuScenes show that, compared with the baseline model, the proposed method reduces the mean absolute error (MAE) by 4.7–6.3% and the root mean square error (RMSE) by 4.2–5.2%. Full article

21 pages, 20775 KiB  
Article
Sensor Fusion Method for Object Detection and Distance Estimation in Assisted Driving Applications
by Stefano Favelli, Meng Xie and Andrea Tonoli
Sensors 2024, 24(24), 7895; https://doi.org/10.3390/s24247895 - 10 Dec 2024
Cited by 5 | Viewed by 3040
Abstract
The fusion of multiple sensors' data in real time is a crucial process for autonomous and assisted driving, where high-level controllers need classification of objects in the surroundings and estimation of relative positions. This paper presents an open-source framework to estimate the distance between a vehicle equipped with sensors and different road objects on its path using the fusion of data from cameras, radars, and LiDARs. The target application is an Advanced Driving Assistance System (ADAS) that benefits from the integration of the sensors' attributes to plan the vehicle's speed according to real-time road occupation and distance from obstacles. Based on geometrical projection, a low-level sensor fusion approach is proposed to map 3D point clouds into 2D camera images. The fused information is used to estimate the distance of objects detected and labeled by a Yolov7 detector. The open-source pipeline implemented in ROS consists of a sensor calibration method, a Yolov7 detector, 3D point cloud downsampling and clustering, and finally a 3D-to-2D transformation between the reference frames. The goal of the pipeline is to perform data association and estimate the distance of the identified road objects. The accuracy and performance are evaluated in real-world urban scenarios with commercial hardware. The pipeline, running on an embedded Nvidia Jetson AGX, achieves good accuracy in object identification and distance estimation while running at 5 Hz. The proposed framework introduces a flexible and resource-efficient method for data association from common automotive sensors and proves to be a promising solution for enabling effective environment perception for assisted driving. Full article
(This article belongs to the Special Issue Sensors and Sensor Fusion Technology in Autonomous Vehicles)
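
The data-association step of this pipeline, matching projected 3D points to 2D detector boxes and deriving one distance per box, can be sketched as follows; the box format, the already-projected inputs, and the median statistic are assumptions for illustration, not the paper's exact choices.

```python
import numpy as np

def associate_points_to_boxes(uv: np.ndarray, depths: np.ndarray, boxes: list) -> list:
    """Estimate one distance per detected object from projected point-cloud points.

    uv     : Nx2 pixel coordinates of LiDAR/radar points already projected onto the image.
    depths : N camera-frame depths (metres) of the same points.
    boxes  : list of (x1, y1, x2, y2, label) 2D detections, e.g. from a YOLO-style model.
    Returns (label, distance) pairs using the median depth of points inside each box.
    """
    results = []
    for x1, y1, x2, y2, label in boxes:
        inside = (uv[:, 0] >= x1) & (uv[:, 0] <= x2) & (uv[:, 1] >= y1) & (uv[:, 1] <= y2)
        if np.any(inside):
            # Median is robust to stray background points leaking into the box.
            results.append((label, float(np.median(depths[inside]))))
        else:
            results.append((label, None))   # no supporting points; distance unknown
    return results

# Toy example: three projected points and one detection box covering two of them.
uv = np.array([[320.0, 240.0], [330.0, 250.0], [900.0, 100.0]])
depths = np.array([12.4, 12.9, 40.0])
boxes = [(300, 220, 360, 280, "car")]
print(associate_points_to_boxes(uv, depths, boxes))   # [('car', 12.65)]
```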

17 pages, 5445 KiB  
Article
CaLiJD: Camera and LiDAR Joint Contender for 3D Object Detection
by Jiahang Lyu, Yongze Qi, Suilian You, Jin Meng, Xin Meng, Sarath Kodagoda and Shifeng Wang
Remote Sens. 2024, 16(23), 4593; https://doi.org/10.3390/rs16234593 - 6 Dec 2024
Cited by 1 | Viewed by 1349
Abstract
Three-dimensional object detection has been a key area of research in recent years because of its rich spatial information and superior performance in addressing occlusion issues. However, the performance of 3D object detection still lags significantly behind that of 2D object detection, owing to challenges such as difficulties in feature extraction and a lack of texture information. To address this issue, this study proposes a 3D object detection network, CaLiJD (Camera and Lidar Joint Contender for 3D object Detection), guided by two-dimensional detection results. CaLiJD creatively integrates advanced channel attention mechanisms with a novel bounding-box filtering method to improve detection accuracy, especially for small and occluded objects. Bounding boxes detected by the 2D and 3D networks for the same object in the same scene are treated as an associated pair. The detection results that satisfy the criteria are then fed into the fusion layer for training. In this study, a novel fusion network is proposed. It consists of numerous convolutions arranged in both sequential and parallel forms and includes a Grouped Channel Attention Module for extracting interactions among multi-channel information. Moreover, a novel bounding-box filtering mechanism was introduced, incorporating the normalized distance from the object to the radar as a filtering criterion within the process. Experiments were conducted using the KITTI 3D object detection benchmark. The results showed that CaLiJD achieved a substantial improvement in mean Average Precision (mAP) over the baseline single-modal 3D detection model, with an enhancement of 7.54%, and surpassed other classical fusion networks by an additional 0.82%. In particular, CaLiJD achieved mAP values of 73.04% and 59.86% for cyclists and pedestrians, respectively, demonstrating state-of-the-art performance on challenging small-object detection tasks. Full article
(This article belongs to the Special Issue Point Cloud Processing with Machine Learning)

26 pages, 28365 KiB  
Article
Three-Dimensional Geometric-Physical Modeling of an Environment with an In-House-Developed Multi-Sensor Robotic System
by Su Zhang, Minglang Yu, Haoyu Chen, Minchao Zhang, Kai Tan, Xufeng Chen, Haipeng Wang and Feng Xu
Remote Sens. 2024, 16(20), 3897; https://doi.org/10.3390/rs16203897 - 20 Oct 2024
Cited by 1 | Viewed by 1494
Abstract
Environment 3D modeling is critical for the development of future intelligent unmanned systems. This paper proposes a multi-sensor robotic system for environmental geometric-physical modeling and the corresponding data processing methods. The system is primarily equipped with a millimeter-wave cascaded radar and a multispectral camera to acquire the electromagnetic characteristics and material categories of the target environment and simultaneously employs light detection and ranging (LiDAR) and an optical camera to achieve a three-dimensional spatial reconstruction of the environment. Specifically, the millimeter-wave radar sensor adopts a multiple input multiple output (MIMO) array and obtains 3D synthetic aperture radar images through 1D mechanical scanning perpendicular to the array, thereby capturing the electromagnetic properties of the environment. The multispectral camera, equipped with nine channels, provides rich spectral information for material identification and clustering. Additionally, LiDAR is used to obtain a 3D point cloud, combined with the RGB images captured by the optical camera, enabling the construction of a three-dimensional geometric model. By fusing the data from four sensors, a comprehensive geometric-physical model of the environment can be constructed. Experiments conducted in indoor environments demonstrated excellent spatial-geometric-physical reconstruction results. This system can play an important role in various applications, such as environment modeling and planning. Full article

52 pages, 18006 KiB  
Review
A Survey of the Real-Time Metaverse: Challenges and Opportunities
by Mohsen Hatami, Qian Qu, Yu Chen, Hisham Kholidy, Erik Blasch and Erika Ardiles-Cruz
Future Internet 2024, 16(10), 379; https://doi.org/10.3390/fi16100379 - 18 Oct 2024
Cited by 30 | Viewed by 9408
Abstract
The metaverse concept has been evolving from static, pre-rendered virtual environments to a new frontier: the real-time metaverse. This survey paper explores the emerging field of real-time metaverse technologies, which enable the continuous integration of dynamic, real-world data into immersive virtual environments. We examine the key technologies driving this evolution, including advanced sensor systems (LiDAR, radar, cameras), artificial intelligence (AI) models for data interpretation, fast data fusion algorithms, and edge computing with 5G networks for low-latency data transmission. This paper reveals how these technologies are orchestrated to achieve near-instantaneous synchronization between physical and virtual worlds, a defining characteristic that distinguishes the real-time metaverse from its traditional counterparts. The survey provides comprehensive insight into the technical challenges and discusses solutions to realize responsive dynamic virtual environments. The potential applications and impact of real-time metaverse technologies across various fields are considered, including live entertainment, remote collaboration, dynamic simulations, and urban planning with digital twins. By synthesizing current research and identifying future directions, this survey provides a foundation for understanding and advancing the rapidly evolving landscape of real-time metaverse technologies, contributing to the growing body of knowledge on immersive digital experiences and setting the stage for further innovations in this transformative field. Full article

27 pages, 13977 KiB  
Review
Advanced Sensor Technologies in CAVs for Traditional and Smart Road Condition Monitoring: A Review
by Masoud Khanmohamadi and Marco Guerrieri
Sustainability 2024, 16(19), 8336; https://doi.org/10.3390/su16198336 - 25 Sep 2024
Cited by 9 | Viewed by 5689
Abstract
This paper explores new sensor technologies and their integration within Connected Autonomous Vehicles (CAVs) for real-time road condition monitoring. Sensors available on CAVs, such as accelerometers, gyroscopes, LiDAR, cameras, and radar, are able to detect road anomalies, including potholes, surface cracks, and roughness. This paper also describes advanced techniques for processing the data these sensors detect, including machine learning algorithms, sensor fusion, and edge computing, which enhance accuracy and reliability in road condition assessment. Together, these technologies support immediate road safety improvements and long-term maintenance cost reduction through proactive maintenance strategies. Finally, this article provides a comprehensive review of the state of the art and future directions of condition monitoring systems for traditional and smart roads. Full article
(This article belongs to the Section Sustainable Transportation)

19 pages, 31372 KiB  
Article
A Target Detection Algorithm Based on Fusing Radar with a Camera in the Presence of a Fluctuating Signal Intensity
by Yanqiu Yang, Xianpeng Wang, Xiaoqin Wu, Xiang Lan, Ting Su and Yuehao Guo
Remote Sens. 2024, 16(18), 3356; https://doi.org/10.3390/rs16183356 - 10 Sep 2024
Cited by 3 | Viewed by 1970
Abstract
Radar point clouds experience variations in density, which may cause incorrect alerts during clustering and, in turn, diminish the precision of decision-level fusion methods. To address this problem, a target detection algorithm based on fusing radar with a camera in the presence of a fluctuating signal intensity is proposed in this paper. It introduces a snow ablation optimizer (SAO) to determine the optimal parameters of density-based spatial clustering of applications with noise (DBSCAN). The enhanced DBSCAN then clusters the radar point clouds, and the valid clusters are fused with monocular camera targets. The experimental results indicate that the suggested fusion method can attain a Balance-score ranging from 0.97 to 0.99, performing outstandingly in preventing missed detections and false alarms. Additionally, the fluctuation range of the Balance-score is within 0.02, indicating that the algorithm has excellent robustness. Full article
(This article belongs to the Special Issue Technical Developments in Radar—Processing and Application)
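
The clustering stage described above depends on DBSCAN's eps and min_samples parameters, which the paper tunes with a snow ablation optimizer. The sketch below shows only the clustering step with scikit-learn, treating the two parameters as values an external optimizer would supply; the point layout and numbers are illustrative.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_radar_points(points_xy: np.ndarray, eps: float, min_samples: int) -> np.ndarray:
    """Cluster 2D radar detections; eps / min_samples would come from the SAO search."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points_xy)
    return labels   # -1 marks noise points that never reach the camera-fusion stage

# Toy scene: two dense groups of detections plus two isolated clutter points.
rng = np.random.default_rng(1)
target_a = rng.normal(loc=[10.0, 2.0], scale=0.3, size=(15, 2))
target_b = rng.normal(loc=[25.0, -4.0], scale=0.3, size=(10, 2))
clutter = np.array([[5.0, 8.0], [40.0, 0.0]])
points = np.vstack([target_a, target_b, clutter])

labels = cluster_radar_points(points, eps=1.0, min_samples=4)
for lbl in sorted(set(labels)):
    name = "noise" if lbl == -1 else f"cluster {lbl}"
    print(name, "->", int(np.sum(labels == lbl)), "points")
```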
