Search Results (61)

Search Parameters:
Keywords = ground-based mobile LiDAR

15 pages, 3879 KB  
Article
Bluetooth Low Energy-Based Docking Solution for Mobile Robots
by Kyuman Lee
Electronics 2026, 15(2), 483; https://doi.org/10.3390/electronics15020483 - 22 Jan 2026
Abstract
Existing docking methods for mobile robots rely on a LiDAR sensor or image processing using a camera. Although both demonstrate excellent performance in terms of sensing distance and spatial resolution, they are sensitive to environmental effects such as illumination and occlusion, and they are expensive. Some environments and conditions call for novel low-power, low-cost docking solutions that are less sensitive to the environment. In this study, we propose a guidance and navigation solution in which a mobile robot docks into a docking station using the angle-of-arrival (AoA) and received signal strength indicator (RSSI) values measured between the robot and the station via Bluetooth Low Energy (BLE) wireless communication. The proposed algorithm is a LiDAR- and camera-free docking solution. It was run on an actual mobile robot with BLE transceiver hardware, and the obtained results closely match the ground truth for docking.
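
The paper's guidance law is not reproduced here, but the core idea (steer toward the BLE angle-of-arrival bearing while an RSSI-derived range modulates speed) can be sketched in a few lines of Python; the path-loss constants, gains, and function names below are illustrative assumptions, not values from the paper:

import math

# Illustrative log-distance path-loss model: RSSI (dBm) -> range (m).
# TX_POWER and PATH_LOSS_EXP are assumed values, not from the paper.
TX_POWER = -59.0     # assumed RSSI at the 1 m reference distance
PATH_LOSS_EXP = 2.0  # free-space path-loss exponent

def rssi_to_range(rssi_dbm: float) -> float:
    """Estimate distance to the dock from a BLE RSSI sample."""
    return 10 ** ((TX_POWER - rssi_dbm) / (10.0 * PATH_LOSS_EXP))

def docking_command(aoa_deg: float, rssi_dbm: float,
                    k_ang: float = 1.5, v_max: float = 0.3):
    """Proportional guidance: turn toward the AoA bearing, slow down near the dock."""
    range_m = rssi_to_range(rssi_dbm)
    omega = k_ang * math.radians(aoa_deg)   # angular rate toward the dock
    v = min(v_max, 0.2 * range_m)           # creep speed as the range shrinks
    return v, omega, range_m

v, omega, r = docking_command(aoa_deg=12.0, rssi_dbm=-68.0)
print(f"v={v:.2f} m/s, omega={omega:.2f} rad/s, est. range={r:.2f} m")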

23 pages, 52765 KB  
Article
GNSS NRTK, UAS-Based SfM Photogrammetry, TLS and HMLS Data for a 3D Survey of Sand Dunes in the Area of Caleri (Po River Delta, Italy)
by Massimo Fabris and Michele Monego
Land 2026, 15(1), 95; https://doi.org/10.3390/land15010095 - 3 Jan 2026
Abstract
Coastal environments are fragile ecosystems threatened by natural and anthropogenic factors. The preservation and protection of these environments, and in particular of the sand dune systems, which contribute significantly to defending the inland from flooding, require continuous monitoring. To this end, high-resolution, high-precision multitemporal data acquired with various techniques can be used, such as the global navigation satellite system (GNSS) with the network real-time kinematic (NRTK) approach to acquire 3D points, UAS-based structure-from-motion (SfM) photogrammetry, terrestrial laser scanning (TLS), and handheld mobile laser scanning (HMLS)-based light detection and ranging (LiDAR). These techniques were used in this work for the 3D survey of a portion of vegetated sand dunes in the Caleri area (Po River Delta, northern Italy) to assess their applicability in complex environments such as coastal vegetated dune systems. Aerial- and ground-based acquisitions allowed us to produce point clouds, georeferenced using common ground control points (GCPs) measured with both the GNSS NRTK method and the total station technique. The 3D data were compared to each other to evaluate the accuracy and performance of the different techniques. The results showed good agreement between the different point clouds, with standard deviations of the differences lower than 9.3 cm. The GNSS NRTK technique, used with the kinematic approach, captured the bare-ground surface but at the cost of lower resolution. The HMLS, on the other hand, showed the poorest vegetation penetration, providing 3D points with the highest elevation values. UAS-based and TLS-based point clouds provided similar average values, with significant differences only in dense vegetation, caused by the very different acquisition platforms and points of view.
(This article belongs to the Special Issue Digital Earth and Remote Sensing for Land Management, 2nd Edition)
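
As a hedged sketch of how such cloud-to-cloud comparisons are commonly computed (not necessarily the authors' workflow), one can match each test point to its nearest reference point in XY and examine the statistics of the elevation differences; the synthetic dune surface and the 5 cm noise level below are invented for illustration:

import numpy as np
from scipy.spatial import cKDTree

def cloud_difference_stats(reference: np.ndarray, test: np.ndarray):
    """For each test point, find the nearest reference point in XY and
    report statistics of the elevation (Z) differences."""
    tree = cKDTree(reference[:, :2])   # index the reference cloud by XY
    _, idx = tree.query(test[:, :2])   # nearest reference point per test point
    dz = test[:, 2] - reference[idx, 2]
    return dz.mean(), dz.std()

# Synthetic example: a gently undulating "dune" surface sampled twice.
rng = np.random.default_rng(0)
xy = rng.uniform(0, 100, size=(5000, 2))
z = 2.0 * np.sin(xy[:, 0] / 10.0)
ref = np.column_stack([xy, z])
test = ref + rng.normal(0, 0.05, size=ref.shape)  # 5 cm noise stand-in
mean_dz, std_dz = cloud_difference_stats(ref, test)
print(f"mean dZ = {mean_dz:.3f} m, std = {std_dz:.3f} m")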

32 pages, 3663 KB  
Article
Technology Acceptance and Perceived Learning Outcomes in Construction Surveying Education: A Comparative Analysis Using UTAUT and Bloom’s Taxonomy
by Ri Na, Dyala Aljagoub, Tianjiao Zhao and Xi Lin
Educ. Sci. 2026, 16(1), 45; https://doi.org/10.3390/educsci16010045 - 30 Dec 2025
Abstract
Rapid adoption of digital surveying technologies in construction has highlighted the need for engineering education to equip students with technological competency as well as higher-order problem-solving skills. This study explores undergraduate students’ acceptance of emerging surveying technologies and their perceived learning outcomes within a constructivist framework of experiential learning. Thirty-six students in a required construction surveying class interacted with traditional and advanced technologies, including total stations, terrestrial laser scanning, drones, and mobile LiDAR, through structured, semi-structured, and unstructured lab activities. Data were gathered from two post-course surveys: a technology acceptance survey grounded in the Unified Theory of Acceptance and Use of Technology (UTAUT) and a self-perceived cognitive learning outcome survey based on Bloom’s Taxonomy. Qualitative and quantitative analyses indicated a gap between technology acceptance and perceived learning gains. The laser scanner received the highest acceptance scores, followed by the other advanced tools, while the total station, used extensively in hands-on lab activities, was perceived as most influential in enhancing learning. Lower-order skills were strengthened in structured labs, while higher-order thinking emerged more unevenly in open-ended labs. These findings underscore that the mode of student engagement with technology matters more for learning than the sophistication of the tools themselves. By embedding UTAUT and Bloom’s Taxonomy in an authentic learning environment, this study provides engineering educators with a mechanism to assess technology-enhanced learning and identifies strategies to facilitate higher-order skills aligned with industry needs.
(This article belongs to the Special Issue Technology-Enhanced Education for Engineering Students)

18 pages, 8006 KB  
Article
Optimal Low-Cost MEMS INS/GNSS Integrated Georeferencing Solution for LiDAR Mobile Mapping Applications
by Nasir Al-Shereiqi, Mohammed El-Diasty and Ghazi Al-Rawas
Sensors 2025, 25(24), 7683; https://doi.org/10.3390/s25247683 - 18 Dec 2025
Abstract
Mobile mapping systems using LiDAR technology are becoming a reliable surveying technique for generating accurate point clouds. Mobile mapping systems integrate several advanced surveying technologies. This research investigated the development of a low-cost, accurate Microelectromechanical System (MEMS)-based INS/GNSS georeferencing system for LiDAR mobile mapping applications, enabling the generation of accurate point clouds. The challenge of using a MEMS IMU is that its measurements are contaminated by high levels of noise and bias instability. To overcome this issue, new denoising and filtering methods were developed using a wavelet neural network (WNN) and an optimal maximum likelihood estimator (MLE) to achieve an accurate MEMS-based INS/GNSS integrated navigation solution for LiDAR mobile mapping applications. Moreover, the final accuracy of the MEMS-based INS/GNSS navigation solution was compared with the ASPRS standards for geospatial data production. It was found that the proposed WNN denoising method improved the MEMS-based INS/GNSS integration accuracy by approximately 11%, and that the optimal MLE method achieved approximately 12% higher accuracy than the forward-only navigation solution without GNSS outages. The proposed WNN denoising also outperforms the current state-of-the-art Long Short-Term Memory Recurrent Neural Network (LSTM-RNN) denoising model. Additionally, it was found that, depending on the sensor–object distance, the accuracy of the optimal MLE-based MEMS INS/GNSS navigation solution with WNN denoising ranged from 1 to 3 cm for ground mapping and from 1 to 9 cm for building mapping, which can fulfill the ASPRS standards of classes 1 to 3 and classes 1 to 9 for the ground and building mapping cases, respectively.
(This article belongs to the Section Industrial Sensors)
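
The authors' WNN denoiser is not public; as a rough stand-in for the denoising step, classical wavelet soft-thresholding of a MEMS gyro trace (PyWavelets, with an assumed db4 basis and universal threshold) conveys the idea:

import numpy as np
import pywt

def wavelet_denoise(signal: np.ndarray, wavelet: str = "db4", level: int = 4):
    """Soft-threshold wavelet denoising (universal threshold on detail coeffs)."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Noise sigma estimated from the finest detail band (median absolute deviation).
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2.0 * np.log(len(signal)))
    coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]

# Synthetic gyro trace: slow rotation plus MEMS-like noise.
rng = np.random.default_rng(4)
t = np.linspace(0, 10, 2048)
truth = 0.1 * np.sin(0.5 * t)
noisy = truth + rng.normal(0, 0.05, t.size)
clean = wavelet_denoise(noisy)
print(f"noise std before: {np.std(noisy - truth):.4f}, after: {np.std(clean - truth):.4f}")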

40 pages, 8121 KB  
Article
A Multi-Platform Electronic Travel Aid Integrating Proxemic Sensing for the Visually Impaired
by Nathan Naidoo and Mehrdad Ghaziasgar
Technologies 2025, 13(12), 550; https://doi.org/10.3390/technologies13120550 - 26 Nov 2025
Abstract
Visual impairment (VI) affects over two billion people globally, with prevalence increasing due to preventable conditions. To address mobility and navigation challenges, this study presents a multi-platform, multi-sensor Electronic Travel Aid (ETA) integrating ultrasonic, LiDAR, and vision-based sensing across head-, torso-, and cane-mounted nodes. Grounded in orientation and mobility (OM) principles, the system delivers context-aware haptic and auditory feedback to enhance perception and independence for users with VI. The ETA employs a hardware–software co-design approach guided by proxemic theory, comprising three autonomous components—Glasses, Belt, and Cane nodes—each optimized for a distinct spatial zone while maintaining overlap for redundancy. Embedded ESP32 microcontrollers enable low-latency sensor fusion, providing real-time multi-modal user feedback. Static and dynamic experiments using a custom-built motion rig evaluated detection accuracy and feedback latency under repeatable laboratory conditions. Results demonstrate millimetre-level accuracy and sub-30 ms proximity-to-feedback latency across all nodes. The Cane node’s dual LiDAR achieved a coefficient of variation of at most 0.04%, while the Belt and Glasses nodes maintained mean detection errors below 1%. The validated tri-modal ETA architecture establishes a scalable, resilient framework for safe, real-time navigation, advancing sensory augmentation for individuals with VI.
(This article belongs to the Section Assistive Technologies)
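
A minimal sketch of the proxemic idea, mapping a range reading from any node to a spatial zone and a feedback cue, might look as follows; the zone boundaries and cue names are illustrative assumptions, not the paper's calibrated values:

# Hypothetical proxemic-zone mapping for a multi-node ETA; the boundaries
# below are illustrative, not the paper's calibrated values.
ZONES = [
    (0.45, "intimate", "strong haptic pulse"),
    (1.20, "personal", "medium haptic pulse"),
    (3.60, "social",   "gentle haptic tick"),
    (float("inf"), "public", "no feedback"),
]

def feedback_for(distance_m: float):
    """Map a range reading (ultrasonic/LiDAR) to a proxemic zone and cue."""
    for limit, zone, cue in ZONES:
        if distance_m <= limit:
            return zone, cue

for d in (0.3, 0.9, 2.5, 8.0):
    zone, cue = feedback_for(d)
    print(f"{d:4.1f} m -> {zone:8s}: {cue}")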

21 pages, 11906 KB  
Article
Voxelized Point Cloud and Solid 3D Model Integration to Assess Visual Exposure in Yueya Lake Park, Nanjing
by Guanting Zhang, Dongxu Yang and Shi Cheng
Land 2025, 14(10), 2095; https://doi.org/10.3390/land14102095 - 21 Oct 2025
Cited by 1
Abstract
Natural elements such as vegetation, water bodies, and sky, together with artificial elements including buildings and paved surfaces, constitute the core of urban visual environments. Their perception at the pedestrian level not only influences city image but also contributes to residents’ well-being and spatial experience. This study develops a hybrid 3D visibility assessment framework that integrates a city-scale LOD1 solid model with high-resolution mobile LiDAR point clouds to quantify five visual exposure indicators. The case study area is Yueya Lake Park in Nanjing, where a voxel-based line-of-sight sampling approach simulated eye-level visibility at 1.6 m along the southern lakeside promenade. Sixteen viewpoints were selected at 50 m intervals to capture spatial variations in visual exposure. Comparative analysis between the solid model (excluding vegetation) and the hybrid model (including vegetation) revealed that vegetation significantly reshaped the pedestrian visual field by reducing the dominance of sky and buildings, enhancing near-field greenery, and reframing water views. Artificial elements such as buildings and ground showed decreased exposure in the hybrid model, reflecting vegetation’s masking effect. Computational efficiency remains a limitation of this study. Overall, the study demonstrates that integrating natural and artificial elements provides a more realistic and nuanced assessment of pedestrian visual perception, offering valuable support for sustainable landscape planning, canopy management, and the equitable design of urban public spaces.
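
A simplified version of voxel-based line-of-sight sampling can be written directly in NumPy: step rays outward from a 1.6 m eye point and record the class of the first occupied voxel (or sky if the ray leaves the scene). The toy scene, class labels, and step sizes below are assumptions for illustration:

import numpy as np

def visual_exposure(grid: np.ndarray, eye: np.ndarray, directions: np.ndarray,
                    voxel: float = 1.0, max_range: float = 100.0):
    """Step rays through a labeled voxel grid; the first non-empty voxel hit
    determines what the pedestrian sees in that direction (0 = empty)."""
    counts = {}
    n_steps = int(max_range / (voxel * 0.5))
    for d in directions:
        pos = eye.copy()
        step = d / np.linalg.norm(d) * voxel * 0.5
        for _ in range(n_steps):
            pos += step
            i, j, k = np.floor(pos / voxel).astype(int)
            if not (0 <= i < grid.shape[0] and 0 <= j < grid.shape[1]
                    and 0 <= k < grid.shape[2]):
                counts["sky"] = counts.get("sky", 0) + 1   # ray left the scene
                break
            label = grid[i, j, k]
            if label != 0:
                counts[label] = counts.get(label, 0) + 1
                break
        else:
            counts["sky"] = counts.get("sky", 0) + 1
    total = sum(counts.values())
    return {cls: n / total for cls, n in counts.items()}

# Toy scene: 1 = ground, 2 = building, 3 = vegetation.
grid = np.zeros((60, 60, 30), dtype=int)
grid[:, :, 0] = 1
grid[40:50, 20:40, :15] = 2
grid[20:25, 20:25, :6] = 3
eye = np.array([10.0, 30.0, 1.6])    # pedestrian eye level, 1.6 m
az = np.linspace(0, 2 * np.pi, 180, endpoint=False)
dirs = np.column_stack([np.cos(az), np.sin(az), np.zeros_like(az)])
print(visual_exposure(grid, eye, dirs))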

26 pages, 38057 KB  
Article
Multimodal RGB–LiDAR Fusion for Robust Drivable Area Segmentation and Mapping
by Hyunmin Kim, Minkyung Jun and Hoeryong Jung
Sensors 2025, 25(18), 5841; https://doi.org/10.3390/s25185841 - 18 Sep 2025
Abstract
Drivable area detection and segmentation are critical tasks for autonomous mobile robots in complex and dynamic environments. RGB-based methods offer rich semantic information but suffer in unstructured environments and under varying lighting, while LiDAR-based models provide precise spatial measurements but often require high-resolution sensors and are sensitive to sparsity. In addition, most fusion-based systems are constrained by fixed sensor setups and demand retraining when hardware configurations change. This paper presents a real-time, modular RGB–LiDAR fusion framework for robust drivable area recognition and mapping. Our method decouples RGB and LiDAR preprocessing to support sensor-agnostic adaptability without retraining, enabling seamless deployment across diverse platforms. By fusing RGB segmentation with LiDAR ground estimation, we generate high-confidence drivable area point clouds, which are incrementally integrated via SLAM into a global drivable area map. The proposed approach was evaluated on the KITTI dataset in terms of intersection over union (IoU), precision, and frames per second (FPS). Experimental results demonstrate that the proposed framework achieves competitive accuracy and the highest inference speed among compared methods, confirming its suitability for real-time autonomous navigation.
(This article belongs to the Section Navigation and Positioning)
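
The fusion step the abstract describes, keeping only LiDAR points whose image projection lands on pixels segmented as drivable, can be sketched as follows; the intrinsics matrix K (KITTI-like values) and the half-image mask are illustrative assumptions:

import numpy as np

def drivable_points(points_cam: np.ndarray, K: np.ndarray, mask: np.ndarray):
    """Project LiDAR points (already in the camera frame) into the image and
    keep those landing on 'drivable' pixels of the segmentation mask."""
    in_front = points_cam[:, 2] > 0.1
    pts = points_cam[in_front]
    uvw = (K @ pts.T).T                  # pinhole projection
    uv = (uvw[:, :2] / uvw[:, 2:3]).astype(int)
    h, w = mask.shape
    valid = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    pts, uv = pts[valid], uv[valid]
    drivable = mask[uv[:, 1], uv[:, 0]]  # row = v, column = u
    return pts[drivable]

# Toy example: assumed KITTI-like intrinsics, mask drivable in the lower half.
K = np.array([[721.5, 0, 609.6], [0, 721.5, 172.9], [0, 0, 1.0]])
mask = np.zeros((375, 1242), dtype=bool)
mask[200:, :] = True
pts = np.random.default_rng(1).uniform([-5, 0, 2], [5, 2, 30], size=(1000, 3))
print(f"{len(drivable_points(pts, K, mask))} of {len(pts)} points are drivable")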

22 pages, 9956 KB  
Article
Short-Range High Spectral Resolution Lidar for Aerosol Sensing Using a Compact High-Repetition-Rate Fiber Laser
by Manuela Hoyos-Restrepo, Romain Ceolato, Andrés E. Bedoya-Velásquez and Yoshitaka Jin
Remote Sens. 2025, 17(17), 3084; https://doi.org/10.3390/rs17173084 - 4 Sep 2025
Abstract
This work presents a proof of concept for a short-range high spectral resolution lidar (SR-HSRL) optimized for aerosol characterization in the first kilometer of the atmosphere. The system is based on a compact, high-repetition-rate diode-based fiber laser with a 300 MHz linewidth and 5 ns pulse duration, coupled with an iodine absorption cell. A central challenge in the instrument’s development was identifying a laser source that offered both sufficient spectral resolution for HSRL retrievals and nanosecond pulse durations for high spatiotemporal resolution, while also being compact, tunable, and cost-effective. To address this, we developed a methodology for complete spectral and temporal laser characterization. A two-day field campaign conducted in July 2024 in Tsukuba, Japan, validated the system’s performance. Despite the relatively broad laser linewidth, we successfully retrieved aerosol backscatter coefficient profiles from 50 to 1000 m, with a spatial resolution of 7.5 m and a temporal resolution of 6 s. The results demonstrate the feasibility of using SR-HSRL for detailed studies of aerosol layers, cloud interfaces, and aerosol–cloud interactions. Future developments will focus on extending the technique to ultra-short-range applications (<100 m) from ground-based and mobile platforms, to retrieve aerosol extinction coefficients and lidar ratios to improve the characterization of near-source aerosol properties and their radiative impacts.
(This article belongs to the Special Issue Lidar Monitoring of Aerosols and Clouds)
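
The core HSRL retrieval can be illustrated with a short sketch: the iodine-filtered channel isolates the molecular return, so the calibrated ratio R of the combined to the molecular signal yields the aerosol backscatter as beta_a = beta_m (R - 1). The profile below uses the paper's 7.5 m range bins but invented signal values, and it omits geometric and transmission corrections:

import numpy as np

def aerosol_backscatter(combined: np.ndarray, molecular: np.ndarray,
                        beta_mol: np.ndarray, calib: float = 1.0):
    """HSRL-style retrieval: the iodine-filtered channel isolates the molecular
    return, so the scattering ratio R gives beta_aerosol = beta_mol * (R - 1)."""
    ratio = calib * combined / molecular
    return beta_mol * (ratio - 1.0)

# Synthetic profile, 7.5 m bins from 50 m to 1000 m; the signal values
# themselves are invented for illustration.
z = np.arange(50.0, 1000.0, 7.5)
beta_mol = 1.5e-6 * np.exp(-z / 8000.0)                    # molecular, 1/(m sr)
aerosol_layer = 2.0e-6 * np.exp(-((z - 400) / 80.0) ** 2)  # layer near 400 m
molecular = beta_mol.copy()
combined = beta_mol + aerosol_layer
print(aerosol_backscatter(combined, molecular, beta_mol)[:5])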

16 pages, 11231 KB  
Article
Aerial Vehicle Detection Using Ground-Based LiDAR
by John Kirschler and Jay Wilhelm
Aerospace 2025, 12(9), 756; https://doi.org/10.3390/aerospace12090756 - 22 Aug 2025
Abstract
Ground-based LiDAR sensing offers a promising approach for delivering short-range landing feedback to aerial vehicles operating near vertiports and in GNSS-degraded environments. This work introduces a detection system capable of classifying aerial vehicles and estimating their 3D positions with sub-meter accuracy. In a simulated Gazebo environment, multiple LiDAR sensors and five vehicle classes, ranging from hobbyist drones to air taxis, were modeled to evaluate detection performance. RGB-encoded point clouds were processed using a modified YOLOv6 neural network with Slicing-Aided Hyper Inference (SAHI) to preserve high-resolution object features. Classification accuracy and position error were analyzed using mean Average Precision (mAP) and Mean Absolute Error (MAE) across varied sensor parameters, vehicle sizes, and distances. Within 40 m, the system consistently achieved over 95% classification accuracy and average position errors below 0.5 m. The results support the viability of high-density LiDAR as a complementary method for precision landing guidance in advanced air mobility applications.
(This article belongs to the Section Aeronautics)
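
Slicing-Aided Hyper Inference itself is straightforward to sketch: tile the input with overlap, run the detector per tile, and shift detections back to full-image coordinates (duplicate suppression across tiles is omitted here). The tile size, overlap, and stub detector are assumptions standing in for the paper's modified YOLOv6:

import numpy as np

def sliced_inference(image: np.ndarray, detect, tile: int = 640, overlap: float = 0.2):
    """SAHI-style slicing: run the detector on overlapping tiles so small,
    distant aircraft keep enough pixels, then map boxes back to full-image
    coordinates. 'detect' is any callable returning an (N, 5) array of
    [x1, y1, x2, y2, score] rows."""
    stride = int(tile * (1.0 - overlap))
    h, w = image.shape[:2]
    boxes = []
    for y0 in range(0, max(h - tile, 0) + 1, stride):
        for x0 in range(0, max(w - tile, 0) + 1, stride):
            dets = detect(image[y0:y0 + tile, x0:x0 + tile])
            if len(dets):
                dets = dets.copy()
                dets[:, [0, 2]] += x0   # shift x back to full-image coordinates
                dets[:, [1, 3]] += y0   # shift y back to full-image coordinates
                boxes.append(dets)
    return np.vstack(boxes) if boxes else np.empty((0, 5))

# Stub detector standing in for the modified YOLOv6 network in the paper.
def fake_detector(tile_img):
    return np.array([[10.0, 10.0, 50.0, 50.0, 0.9]])

img = np.zeros((1280, 1280, 3), dtype=np.uint8)
print(sliced_inference(img, fake_detector).shape)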

31 pages, 5985 KB  
Article
Comparing Terrestrial and Mobile Laser Scanning Approaches for Multi-Layer Fuel Load Prediction in the Western United States
by Eugênia Kelly Luciano Batista, Andrew T. Hudak, Jeff W. Atkins, Eben North Broadbent, Kody Melissa Brock, Michael J. Campbell, Nuria Sánchez-López, Monique Bohora Schlickmann, Francisco Mauro, Andres Susaeta, Eric Rowell, Caio Hamamura, Ana Paula Dalla Corte, Inga La Puma, Russell A. Parsons, Benjamin C. Bright, Jason Vogel, Inacio Thomaz Bueno, Gabriel Maximo da Silva, Carine Klauberg, Jinyi Xia, Jessie F. Eastburn, Kleydson Diego Rocha and Carlos Alberto Silva
Remote Sens. 2025, 17(16), 2757; https://doi.org/10.3390/rs17162757 - 8 Aug 2025
Cited by 1
Abstract
Effective estimation of fuel load is critical for mitigating wildfire risks. Here, we evaluate the performance of mobile laser scanning (MLS) and terrestrial laser scanning (TLS) for estimating fuel loads across multiple vegetation layers. Data were collected in two forest regions: the North Kaibab (NK) Plateau in Arizona and Monroe Mountain (MM) in Utah. We used random forest models to predict vegetation attributes, evaluating the performance of full models and transferred models using R2, RMSE, and bias. The MLS consistently outperformed the TLS system, particularly for canopy-related attributes and woody biomass components. However, the TLS system showed potential for capturing canopy structure attributes while offering advantages such as operational simplicity, low equipment demands, and ease of deployment in the field, making it a cost-effective alternative for managers without access to more complex and expensive mobile or airborne systems. Our results show that model transferability between NK and MM is highly variable depending on the fuel attribute. Attributes related to canopy biomass showed better transferability, with small losses in predictive accuracy when models were transferred between the two sites. Conversely, surface fuel attributes posed more significant challenges for model transferability, given the difficulty of laser penetration into the lower vegetation layers. In general, models trained in NK and validated in MM consistently outperformed those trained in MM and transferred to NK. This may suggest that the NK plots captured a broader range of vegetation structure and environmental conditions, from which the models learned better and were able to generalize to MM. This study highlights the potential of ground-based LiDAR technologies to provide detailed information and important insights into fire risk and forest structure.
(This article belongs to the Section Forest Remote Sensing)
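
A minimal sketch of the modeling protocol (train a random forest on one site, transfer it to the other, and score R2, RMSE, and bias) is shown below with synthetic stand-ins for the plot-level LiDAR metrics; none of the numbers correspond to the study's data:

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(42)

def make_site(n, noise):
    """Synthetic stand-in for plot-level LiDAR metrics -> fuel load (Mg/ha)."""
    X = rng.uniform(0, 1, size=(n, 4))   # e.g. height percentile, cover, density
    y = 20 * X[:, 0] + 10 * X[:, 1] ** 2 + rng.normal(0, noise, n)
    return X, y

X_nk, y_nk = make_site(200, noise=1.0)   # "NK": broader, cleaner training site
X_mm, y_mm = make_site(120, noise=2.0)   # "MM": noisier validation site

model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_nk, y_nk)
pred = model.predict(X_mm)               # transferred model: NK -> MM
rmse = mean_squared_error(y_mm, pred) ** 0.5
bias = np.mean(pred - y_mm)
print(f"R2={r2_score(y_mm, pred):.2f}, RMSE={rmse:.2f}, bias={bias:+.2f}")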

19 pages, 8766 KB  
Article
Fusion of Airborne, SLAM-Based, and iPhone LiDAR for Accurate Forest Road Mapping in Harvesting Areas
by Evangelia Siafali, Vasilis Polychronos and Petros A. Tsioras
Land 2025, 14(8), 1553; https://doi.org/10.3390/land14081553 - 28 Jul 2025
Cited by 2
Abstract
This study examined the integration of airborne Light Detection and Ranging (LiDAR), Simultaneous Localization and Mapping (SLAM)-based handheld LiDAR, and iPhone LiDAR to inspect forest road networks following forest operations. The goal is to overcome the challenges posed by dense canopy cover and to ensure accurate and efficient data collection and mapping. Airborne data were collected using the DJI Matrice 300 RTK UAV equipped with a Zenmuse L2 LiDAR sensor, which achieved a high point density of 285 points/m2 at an altitude of 80 m. Ground-level data were collected using the BLK2GO handheld laser scanner (HPLS) with SLAM methods (LiDAR SLAM, Visual SLAM, Inertial Measurement Unit) and the iPhone 13 Pro Max LiDAR. Data processing included generating DEMs, DSMs, and True Digital Orthophotos (TDOMs) via DJI Terra, LiDAR360 V8, and Cyclone REGISTER 360 PLUS, with additional processing and merging using CloudCompare V2 and ArcGIS Pro 3.4.0. The pairwise comparison between ALS data and each alternative method revealed notable differences in elevation, highlighting discrepancies between methods. ALS + iPhone showed the smallest deviation from ALS (MAE = 0.011, RMSE = 0.011, RE = 0.003%), and HPLS the largest (MAE = 0.507, RMSE = 0.542, RE = 0.123%). The findings highlight the potential of fusing point clouds from diverse platforms to enhance forest road mapping accuracy. However, the selection of technology should consider trade-offs among accuracy, cost, and operational constraints. Mobile LiDAR solutions, particularly the iPhone, offer promising low-cost alternatives for certain applications. Future research should explore real-time fusion workflows and strategies to improve the cost-effectiveness and scalability of multisensor approaches for forest road monitoring.
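
The agreement metrics reported above are simple to compute for two co-registered elevation rasters; the sketch below assumes a straightforward definition of MAE, RMSE, and relative error (the paper's exact RE definition may differ), with invented elevations:

import numpy as np

def dem_agreement(reference: np.ndarray, test: np.ndarray):
    """Cell-wise agreement between two co-registered elevation rasters,
    reported as MAE, RMSE, and relative error (per cent of mean elevation)."""
    diff = test - reference
    mae = np.mean(np.abs(diff))
    rmse = np.sqrt(np.mean(diff ** 2))
    re = 100.0 * mae / np.mean(np.abs(reference))
    return mae, rmse, re

rng = np.random.default_rng(3)
als = 400.0 + rng.normal(0, 2.0, size=(500, 500))   # ALS-derived DEM (stand-in)
hpls = als + rng.normal(0.5, 0.2, size=als.shape)   # offset handheld-scan DEM
mae, rmse, re = dem_agreement(als, hpls)
print(f"MAE={mae:.3f} m, RMSE={rmse:.3f} m, RE={re:.3f}%")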

27 pages, 6578 KB  
Article
Evaluating Neural Radiance Fields for ADA-Compliant Sidewalk Assessments: A Comparative Study with LiDAR and Manual Methods
by Hang Du, Shuaizhou Wang, Linlin Zhang, Mark Amo-Boateng and Yaw Adu-Gyamfi
Infrastructures 2025, 10(8), 191; https://doi.org/10.3390/infrastructures10080191 - 22 Jul 2025
Abstract
An accurate assessment of sidewalk conditions is critical for ensuring compliance with the Americans with Disabilities Act (ADA), particularly to safeguard mobility for wheelchair users. This paper presents a novel 3D reconstruction framework based on neural radiance fields (NeRF), which utilizes monocular video input from consumer-grade cameras to generate high-fidelity 3D models of sidewalk environments. The framework enables automatic extraction of ADA-relevant geometric features, including the running slope, the cross slope, and vertical displacements, facilitating an efficient and scalable compliance assessment process. A comparative study is conducted across three surveying methods—manual measurements, LiDAR scanning, and the proposed NeRF-based approach—evaluated on four sidewalks and one curb ramp. Each method was assessed in terms of accuracy, cost, time, level of automation, and scalability. The NeRF-based approach achieved high agreement with LiDAR-derived ground truth, delivering an F1 score of 96.52%, a precision of 96.74%, and a recall of 96.34% for ADA compliance classification. These results underscore the potential of NeRF to serve as a cost-effective, automated alternative to traditional and LiDAR-based methods, with sufficient precision for widespread deployment in municipal sidewalk audits.
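
Extracting the ADA-relevant slopes from a reconstructed sidewalk patch reduces to a plane fit once the patch is aligned with the direction of travel; the sketch below uses the commonly cited 1:20 running-slope and 1:48 cross-slope limits, which should be checked against the applicable standard, and synthetic points:

import numpy as np

# Commonly cited ADA limits (assumptions; verify against the governing standard):
MAX_RUNNING_SLOPE = 0.05   # 1:20 along the path of travel
MAX_CROSS_SLOPE = 1 / 48   # ~2.08% perpendicular to travel

def slope_check(points: np.ndarray):
    """Least-squares plane fit z = a*x + b*y + c over a sidewalk patch,
    with x aligned to the direction of travel; returns slopes and a verdict."""
    A = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
    (a, b, _), *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    ok = abs(a) <= MAX_RUNNING_SLOPE and abs(b) <= MAX_CROSS_SLOPE
    return a, b, ok

rng = np.random.default_rng(7)
xy = rng.uniform(0, 2, size=(2000, 2))
z = 0.04 * xy[:, 0] + 0.03 * xy[:, 1] + rng.normal(0, 0.002, 2000)
run, cross, ok = slope_check(np.column_stack([xy, z]))
print(f"running={run:.3f}, cross={cross:.3f}, compliant={ok}")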

18 pages, 16696 KB  
Technical Note
LIO-GC: LiDAR Inertial Odometry with Adaptive Ground Constraints
by Wenwen Tian, Juefei Wang, Puwei Yang, Wen Xiao and Sisi Zlatanova
Remote Sens. 2025, 17(14), 2376; https://doi.org/10.3390/rs17142376 - 10 Jul 2025
Cited by 1
Abstract
LiDAR-based simultaneous localization and mapping (SLAM) techniques are commonly applied to high-precision mapping and positioning for mobile platforms. However, the vertical resolution limitations of multi-beam spinning LiDAR sensors can significantly impair vertical estimation accuracy. This challenge is accentuated with low-line-count or cost-effective spinning LiDARs, where vertical features are sparse. To address this issue, we introduce LIO-GC, which effectively extracts ground features and integrates them into a factor graph to improve vertical accuracy. Unlike conventional methods that rely on geometric features for ground plane segmentation, our approach leverages a self-adaptive strategy that accounts for the uneven point cloud distribution and the inconsistency caused by ground fluctuations. By jointly optimizing laser range factors, ground feature constraints, and loop closure factors in a graph optimization framework, our method surpasses current approaches, demonstrating superior performance on open-source and newly collected datasets.
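
The essence of a ground constraint can be sketched without the paper's adaptive segmentation: fit a plane to candidate ground points in each scan and let the change in the fitted plane offset act as the residual a factor graph would penalize against vertical drift. The scan generator and sensor height below are assumptions:

import numpy as np

def fit_ground_plane(points: np.ndarray):
    """SVD plane fit through candidate ground points; returns (unit normal, d)
    for the plane n.p + d = 0, with the normal pointing up."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    n = vt[-1]
    if n[2] < 0:
        n = -n
    return n, -n @ centroid

rng = np.random.default_rng(5)

def make_scan(sensor_height, n_pts=400):
    """Toy scan of flat ground seen from a sensor mounted sensor_height above it."""
    xy = rng.uniform(-10, 10, (n_pts, 2))
    z = np.full(n_pts, -sensor_height) + rng.normal(0, 0.02, n_pts)
    return np.column_stack([xy, z])

# Successive scans should see the ground at a consistent sensor height, so the
# change in the plane offset is the residual a graph optimizer would penalize.
n1, d1 = fit_ground_plane(make_scan(1.50))
n2, d2 = fit_ground_plane(make_scan(1.50))
vertical_drift = d2 - d1
print(f"ground-constraint residual: {vertical_drift:+.4f} m")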

26 pages, 10897 KB  
Article
LiDAR-Based Road Cracking Detection: Machine Learning Comparison, Intensity Normalization, and Open-Source WebGIS for Infrastructure Maintenance
by Nicole Pascucci, Donatella Dominici and Ayman Habib
Remote Sens. 2025, 17(9), 1543; https://doi.org/10.3390/rs17091543 - 26 Apr 2025
Cited by 7
Abstract
This study introduces an innovative and scalable approach for automated road surface assessment by integrating Mobile Mapping System (MMS)-based LiDAR data analysis with an open-source WebGIS platform. In a U.S.-based case study, over 20 datasets were collected along Interstate I-65 in West Lafayette, Indiana, using the Purdue Wheel-based Mobile Mapping System—Ultra High Accuracy (PWMMS-UHA), following Indiana Department of Transportation (INDOT) guidelines. Preprocessing included noise removal, resolution reduction to 2 cm, and ground/non-ground separation using the Cloth Simulation Filter (CSF), resulting in Bare Earth (BE), Digital Terrain Model (DTM), and Above Ground (AG) point clouds. The optimized BE layer, enriched with intensity and color information, enabled crack detection through Density-Based Spatial Clustering of Applications with Noise (DBSCAN) and Random Forest (RF) classification, with and without intensity normalization. DBSCAN parameter tuning was guided by silhouette scores, while model performance was evaluated using precision, recall, F1-score, and the Jaccard Index, benchmarked against reference data. Results demonstrate that RF consistently outperformed DBSCAN, particularly under intensity normalization, achieving Jaccard Index values of 94% for longitudinal and 88% for transverse cracks. A key contribution of this work is the integration of geospatial analytics into an interactive, open-source WebGIS environment—developed using Blender, QGIS, and Lizmap—to support predictive maintenance planning. Moreover, intervention thresholds were defined based on crack surface area, aligned with the Pavement Condition Index (PCI) and FHWA standards, offering a data-driven framework for infrastructure monitoring. This study emphasizes the practical advantages of comparing clustering and machine learning techniques on 3D LiDAR point clouds, both with and without intensity normalization, and proposes a replicable, computationally efficient alternative to deep learning methods, which often require extensive training datasets and high computational resources.
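
The silhouette-guided DBSCAN tuning described above can be sketched with scikit-learn; the crack-like toy features (XY position plus a z-scored intensity channel, echoing the intensity normalization step) and the eps grid are invented for illustration:

import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.metrics import silhouette_score

def tune_dbscan(features: np.ndarray, eps_grid, min_samples: int = 10):
    """Pick DBSCAN's eps by silhouette score over the clustered (non-noise) points."""
    best = (None, -1.0, None)
    for eps in eps_grid:
        labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(features)
        mask = labels != -1
        if mask.sum() > min_samples and len(set(labels[mask])) > 1:
            score = silhouette_score(features[mask], labels[mask])
            if score > best[1]:
                best = (eps, score, labels)
    return best

# Toy crack-like data: XY position plus intensity, with intensity z-scored
# so no single feature dominates the euclidean eps radius.
rng = np.random.default_rng(9)
crack1 = np.column_stack([rng.normal(0, 0.05, 200), rng.normal(0, 0.5, 200),
                          rng.normal(20, 3, 200)])
crack2 = np.column_stack([rng.normal(3, 0.05, 200), rng.normal(0, 0.5, 200),
                          rng.normal(22, 3, 200)])
pts = np.vstack([crack1, crack2])
pts[:, 2] = (pts[:, 2] - pts[:, 2].mean()) / pts[:, 2].std()  # intensity normalization
eps, score, labels = tune_dbscan(pts, eps_grid=np.linspace(0.2, 1.0, 5))
print(f"best eps={eps}, silhouette={score:.2f}, "
      f"clusters={len(set(labels[labels != -1]))}")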

21 pages, 31401 KB  
Article
BEV-CAM3D: A Unified Bird’s-Eye View Architecture for Autonomous Driving with Monocular Cameras and 3D Point Clouds
by Daniel Ayo Oladele, Elisha Didam Markus and Adnan M. Abu-Mahfouz
AI 2025, 6(4), 82; https://doi.org/10.3390/ai6040082 - 18 Apr 2025
Cited by 1
Abstract
Three-dimensional (3D) visual perception is pivotal for understanding surrounding environments in applications such as autonomous driving and mobile robotics. While LiDAR-based models dominate due to accurate depth sensing, their cost and sparse outputs have driven interest in camera-based systems. However, challenges like cross-domain degradation and depth estimation inaccuracies persist. This paper introduces BEVCAM3D, a unified bird’s-eye view (BEV) architecture that fuses monocular cameras and LiDAR point clouds to overcome single-sensor limitations. BEVCAM3D integrates a deformable cross-modality attention module for feature alignment and a fast ground segmentation algorithm to reduce computational overhead by 40%. Evaluated on the nuScenes dataset, BEVCAM3D achieves state-of-the-art performance, with a 73.9% mAP and a 76.2% NDS, outperforming existing LiDAR-camera fusion methods like SparseFusion (72.0% mAP) and IS-Fusion (73.0% mAP). Notably, it excels in detecting pedestrians (91.0% AP) and traffic cones (89.9% AP), addressing the class imbalance in autonomous driving scenarios. The framework supports real-time inference at 11.2 FPS with an EfficientDet-B3 backbone and demonstrates robustness under low-light conditions (62.3% nighttime mAP).
(This article belongs to the Section AI in Autonomous Systems)
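
The shared BEV representation at the heart of such architectures can be illustrated with a simple max-height rasterization of LiDAR points into a ground-plane grid; the ranges, cell size, and encoding below are generic assumptions rather than BEVCAM3D's actual configuration:

import numpy as np

def points_to_bev(points: np.ndarray, x_range=(0, 50), y_range=(-25, 25),
                  cell: float = 0.25):
    """Rasterize LiDAR points into a bird's-eye-view height grid, the kind of
    shared representation that camera and LiDAR features are fused into."""
    nx = int((x_range[1] - x_range[0]) / cell)
    ny = int((y_range[1] - y_range[0]) / cell)
    bev = np.full((nx, ny), -np.inf)
    ix = ((points[:, 0] - x_range[0]) / cell).astype(int)
    iy = ((points[:, 1] - y_range[0]) / cell).astype(int)
    ok = (ix >= 0) & (ix < nx) & (iy >= 0) & (iy < ny)
    np.maximum.at(bev, (ix[ok], iy[ok]), points[ok, 2])  # max-height encoding
    return bev

pts = np.random.default_rng(2).uniform([0, -25, -2], [50, 25, 3], size=(20000, 3))
bev = points_to_bev(pts)
print(bev.shape, np.isfinite(bev).mean())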
