Search Results (290)

Search Parameters:
Keywords = mobile LiDAR points

21 pages, 15860 KB  
Article
Robot Object Detection and Tracking Based on Image–Point Cloud Instance Matching
by Hongxing Wang, Rui Zhu, Zelin Ye and Yaxin Li
Sensors 2026, 26(2), 718; https://doi.org/10.3390/s26020718 - 21 Jan 2026
Viewed by 85
Abstract
Effectively fusing the rich semantic information from camera images with the high-precision geometric measurements provided by LiDAR point clouds is a key challenge in mobile robot environmental perception. To address this problem, this paper proposes a highly extensible instance-aware fusion framework designed to achieve efficient alignment and unified modeling of heterogeneous sensory data. The proposed approach adopts a modular processing pipeline. First, semantic instance masks are extracted from RGB images using an instance segmentation network, and a projection mechanism is employed to establish spatial correspondences between image pixels and LiDAR point cloud measurements. Subsequently, three-dimensional bounding boxes are reconstructed through point cloud clustering and geometric fitting, and a reprojection-based validation mechanism is introduced to ensure consistency across modalities. Building upon this representation, the system integrates a data association module with a Kalman filter-based state estimator to form a closed-loop multi-object tracking framework. Experimental results on the KITTI dataset demonstrate that the proposed system achieves strong 2D and 3D detection performance across different difficulty levels. In multi-object tracking evaluation, the method attains a MOTA score of 47.8 and an IDF1 score of 71.93, validating the stability of the association strategy and the continuity of object trajectories in complex scenes. Furthermore, real-world experiments on a mobile computing platform show an average end-to-end latency of only 173.9 ms, while ablation studies further confirm the effectiveness of individual system components. Overall, the proposed framework exhibits strong performance in terms of geometric reconstruction accuracy and tracking robustness, and its lightweight design and low latency satisfy the stringent requirements of practical robotic deployment. Full article
(This article belongs to the Section Sensors and Robotics)
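As an illustration of the projection-based image–point-cloud association described in this abstract, the following minimal Python sketch groups LiDAR points by instance mask and bounds them with an axis-aligned 3D box. The pinhole intrinsics `K`, extrinsics `T_cam_lidar`, and mask inputs are hypothetical placeholders; this is an assumed simplification, not the authors' implementation (their method uses clustering and geometric fitting plus reprojection validation).

```python
import numpy as np

def points_in_instance(points_lidar, T_cam_lidar, K, mask):
    """Project LiDAR points into the image and keep those inside one instance mask.

    points_lidar : (N, 3) XYZ in the LiDAR frame
    T_cam_lidar  : (4, 4) extrinsic transform LiDAR -> camera
    K            : (3, 3) camera intrinsics
    mask         : (H, W) boolean instance mask from a segmentation network
    """
    pts_h = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]
    in_front = pts_cam[:, 2] > 0.1                      # keep points in front of the camera
    uvw = (K @ pts_cam[in_front].T).T
    uv = np.floor(uvw[:, :2] / uvw[:, 2:3]).astype(int)
    h, w = mask.shape
    in_img = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    hit = np.zeros(len(uv), dtype=bool)
    hit[in_img] = mask[uv[in_img, 1], uv[in_img, 0]]
    return points_lidar[in_front][hit]

def fit_aabb(points):
    """Axis-aligned 3D box (center, size) from an instance's points."""
    lo, hi = points.min(axis=0), points.max(axis=0)
    return (lo + hi) / 2.0, hi - lo
```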

23 pages, 52765 KB  
Article
GNSS NRTK, UAS-Based SfM Photogrammetry, TLS and HMLS Data for a 3D Survey of Sand Dunes in the Area of Caleri (Po River Delta, Italy)
by Massimo Fabris and Michele Monego
Land 2026, 15(1), 95; https://doi.org/10.3390/land15010095 - 3 Jan 2026
Viewed by 266
Abstract
Coastal environments are fragile ecosystems threatened by both natural and anthropogenic factors. Preserving and protecting these environments, and in particular the sand dune systems that contribute significantly to defending inland areas from flooding, requires continuous monitoring. To this end, high-resolution, high-precision multitemporal data can be acquired with various techniques, such as the global navigation satellite system (GNSS) with the network real-time kinematic (NRTK) approach for 3D point acquisition, UAS-based structure-from-motion (SfM) photogrammetry, terrestrial laser scanning (TLS), and handheld mobile laser scanning (HMLS)-based light detection and ranging (LiDAR). These techniques were used in this work for the 3D survey of a portion of vegetated sand dunes in the Caleri area (Po River Delta, northern Italy) to assess their applicability in complex environments such as coastal vegetated dune systems. Aerial and ground-based acquisitions produced point clouds georeferenced with common ground control points (GCPs), measured with both the GNSS NRTK method and a total station. The 3D data were compared against each other to evaluate the accuracy and performance of the different techniques. The point clouds showed good agreement, with standard deviations of the differences lower than 9.3 cm. The GNSS NRTK technique, used in kinematic mode, captured the bare-ground surface but at the cost of lower resolution. Conversely, the HMLS showed the poorest vegetation penetration, yielding 3D points with the highest elevation values. UAS-based and TLS-based point clouds provided similar average values, with significant differences only in dense vegetation, owing to their very different acquisition platforms and viewpoints. Full article
(This article belongs to the Special Issue Digital Earth and Remote Sensing for Land Management, 2nd Edition)
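For context on how such point clouds can be compared, the sketch below computes nearest-neighbour cloud-to-cloud distance statistics with SciPy. The authors' exact comparison workflow is not specified in the abstract, so this is only an assumed, generic approach with hypothetical input arrays.

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud_stats(reference, test):
    """Nearest-neighbour distances from each test point to the reference cloud."""
    tree = cKDTree(reference)          # reference: (N, 3), test: (M, 3)
    d, _ = tree.query(test, k=1)
    return d.mean(), d.std(), np.percentile(d, 95)

# Example (hypothetical arrays): mean_d, std_d, p95 = cloud_to_cloud_stats(tls_points, hmls_points)
```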

22 pages, 17762 KB  
Article
Highway Reconstruction Through Fine-Grained Semantic Segmentation of Mobile Laser Scanning Data
by Yuyu Chen, Zhou Yang, Huijing Zhang and Jinhu Wang
Sensors 2026, 26(1), 40; https://doi.org/10.3390/s26010040 - 20 Dec 2025
Cited by 1 | Viewed by 409
Abstract
The highway is a crucial component of modern transportation systems, and its efficient management is essential for ensuring safety and facilitating communication. The automatic understanding and reconstruction of highway environments are therefore pivotal for advanced traffic management and intelligent transportation systems. This work introduces a methodology for the fine-grained semantic segmentation and reconstruction of highway environments using dense 3D point cloud data acquired via mobile laser scanning. First, a multi-scale, object-based data augmentation and down-sampling method is introduced to address the issue of training sample imbalance. Subsequently, a deep learning approach utilizing the KPConv convolutional network is proposed to achieve fine-grained semantic segmentation. The segmentation results are then used to reconstruct a 3D model of the highway environment. The methodology is validated on a 32 km stretch of highway, achieving semantic segmentation across 27 categories of environmental features. When evaluated against a manually annotated ground truth, the results exhibit a mean Intersection over Union (mIoU) of 87.27%. These findings demonstrate that the proposed methodology is effective for fine-grained semantic segmentation and instance-level reconstruction of highways in practical scenarios. Full article
(This article belongs to the Special Issue Application of LiDAR Remote Sensing and Mapping)
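The reported mean Intersection over Union can be computed from per-point predicted and ground-truth labels with a small confusion-matrix routine. The sketch below is a generic evaluation helper (27 classes, as in the study), not code from the paper.

```python
import numpy as np

def mean_iou(pred, gt, n_classes=27):
    """mIoU over per-point predicted and ground-truth labels (1-D integer arrays)."""
    conf = np.zeros((n_classes, n_classes), dtype=np.int64)
    np.add.at(conf, (gt, pred), 1)                 # confusion matrix: rows = truth, cols = prediction
    inter = np.diag(conf).astype(float)
    union = conf.sum(0) + conf.sum(1) - inter
    valid = union > 0                              # ignore classes absent from both pred and gt
    return (inter[valid] / union[valid]).mean()
```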

18 pages, 8006 KB  
Article
Optimal Low-Cost MEMS INS/GNSS Integrated Georeferencing Solution for LiDAR Mobile Mapping Applications
by Nasir Al-Shereiqi, Mohammed El-Diasty and Ghazi Al-Rawas
Sensors 2025, 25(24), 7683; https://doi.org/10.3390/s25247683 - 18 Dec 2025
Viewed by 462
Abstract
Mobile mapping systems using LiDAR technology are becoming a reliable surveying technique for generating accurate point clouds. Mobile mapping systems integrate several advanced surveying technologies. This research investigated the development of a low-cost, accurate Microelectromechanical System (MEMS)-based INS/GNSS georeferencing system for LiDAR mobile mapping applications, enabling the generation of accurate point clouds. The challenge with MEMS IMUs is that their measurements are contaminated by high levels of noise and bias instability. To overcome this issue, new denoising and filtering methods were developed using a wavelet neural network (WNN) and an optimal maximum likelihood estimator (MLE) to achieve an accurate MEMS-based INS/GNSS integrated navigation solution for LiDAR mobile mapping applications. Moreover, the final accuracy of the MEMS-based INS/GNSS navigation solution was compared with the ASPRS standards for geospatial data production. It was found that the proposed WNN denoising method improved the MEMS-based INS/GNSS integration accuracy by approximately 11%, and that the optimal MLE method achieved approximately 12% higher accuracy than the forward-only navigation solution without GNSS outages. The proposed WNN denoising also outperforms the current state-of-the-art Long Short-Term Memory recurrent neural network (LSTM-RNN) denoising model. Additionally, it was found that, depending on the sensor–object distance, the accuracy of the optimal MLE-based MEMS INS/GNSS navigation solution with WNN denoising ranged from 1 to 3 cm for ground mapping and from 1 to 9 cm for building mapping, which can fulfill the ASPRS standards of classes 1 to 3 and classes 1 to 9 for the ground and building mapping cases, respectively. Full article
(This article belongs to the Section Industrial Sensors)
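The paper's wavelet neural network is not described in enough detail in the abstract to reproduce. As a rough illustration of wavelet-domain denoising of an IMU channel, the sketch below applies standard soft-threshold wavelet shrinkage with PyWavelets; this is an assumed stand-in, not the proposed WNN.

```python
import numpy as np
import pywt

def wavelet_denoise(signal, wavelet="db4", level=4):
    """Soft-threshold wavelet denoising of a 1-D IMU channel (e.g., one gyro axis)."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745           # noise estimate from the finest scale
    thr = sigma * np.sqrt(2.0 * np.log(len(signal)))          # universal threshold
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]
```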

30 pages, 22912 KB  
Article
HV-LIOM: Adaptive Hash-Voxel LiDAR–Inertial SLAM with Multi-Resolution Relocalization and Reinforcement Learning for Autonomous Exploration
by Shicheng Fan, Xiaopeng Chen, Weimin Zhang, Peng Xu, Zhengqing Zuo, Xinyan Tan, Xiaohai He, Chandan Sheikder, Meijun Guo and Chengxiang Li
Sensors 2025, 25(24), 7558; https://doi.org/10.3390/s25247558 - 12 Dec 2025
Viewed by 686
Abstract
This paper presents HV-LIOM (Adaptive Hash-Voxel LiDAR–Inertial Odometry and Mapping), a unified LiDAR–inertial SLAM and autonomous exploration framework for real-time 3D mapping in dynamic, GNSS-denied environments. We propose an adaptive hash-voxel mapping scheme that improves memory efficiency and real-time state estimation by subdividing voxels according to local geometric complexity and point density. To enhance robustness to poor initialization, we introduce a multi-resolution relocalization strategy that enables reliable localization against a prior map under large initial pose errors. A learning-based loop-closure module further detects revisited places and injects global constraints, while global pose-graph optimization maintains long-term map consistency. For autonomous exploration, we integrate a Soft Actor–Critic (SAC) policy that selects informative navigation targets online, improving exploration efficiency in unknown scenes. We evaluate HV-LIOM on public datasets (Hilti and NCLT) and a custom mobile robot platform. Results show that HV-LIOM improves absolute pose accuracy by up to 15.2% over FAST-LIO2 in indoor settings and by 7.6% in large-scale outdoor scenarios. The learned exploration policy achieves comparable or superior area coverage with reduced travel distance and exploration time relative to sampling-based and learning-based baselines. Full article
(This article belongs to the Section Radar Sensors)
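A hash-voxel map of the kind outlined in the abstract can be illustrated with a toy structure: a dictionary keyed by voxel indices that re-buckets dense voxels at a finer edge length. The class below is a simplified, assumed illustration (fixed two-level subdivision by point count), not HV-LIOM's adaptive scheme, which also considers local geometric complexity.

```python
import numpy as np
from collections import defaultdict

class HashVoxelMap:
    """Toy hash-voxel map: points are bucketed by a coarse voxel key, and voxels
    that become dense are re-bucketed at half the edge length."""

    def __init__(self, coarse=1.0, fine=0.5, split_at=200):
        self.coarse, self.fine, self.split_at = coarse, fine, split_at
        self.voxels = defaultdict(list)            # key: (level, ix, iy, iz)

    def _key(self, p, size, level):
        return (level, *np.floor(np.asarray(p) / size).astype(int))

    def insert(self, p):
        k = self._key(p, self.coarse, 0)
        if len(self.voxels[k]) < self.split_at:
            self.voxels[k].append(p)
        else:                                      # dense region: store at finer resolution
            self.voxels[self._key(p, self.fine, 1)].append(p)
```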

33 pages, 5657 KB  
Article
LiDAR-Based Urban Traffic Flow and Safety Assessment Using AI-Driven Surrogate Indicators
by Dohun Kim, Hongjin Kim and Wonjong Kim
Remote Sens. 2025, 17(24), 3989; https://doi.org/10.3390/rs17243989 - 10 Dec 2025
Viewed by 676
Abstract
Urban mobility systems increasingly depend on remote sensing and artificial intelligence to enhance traffic monitoring and safety management. This study presents a LiDAR-based framework for urban road condition analysis and risk evaluation using vehicle-mounted sensors as dynamic remote sensing platforms. The framework integrates deep learning based object detection with mathematically defined surrogate safety indicators to quantify collision risk and evaluate evasive maneuverability in real traffic environments. Two indicators, Hazardous Modified Time to Collision (HMTTC) and Searching for Safety Space (SSS), are introduced to assess lane-level safety and spatial availability of avoidance zones. LiDAR point cloud data are processed using a Voxel RCNN architecture and converted into parameters such as density, speed, and spacing. Field experiments conducted on highways and urban corridors in South Korea reveal strong correlations between HMTTC occurrences, congestion, and geometric road features. The results demonstrate that AI-driven analysis of LiDAR data enables continuous, infrastructure-independent urban traffic safety monitoring, thereby supporting data-driven, resilient transportation systems. Full article
(This article belongs to the Special Issue Applications of AI and Remote Sensing in Urban Systems II)
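The exact definition of the Hazardous Modified Time to Collision (HMTTC) is not given in the abstract. For orientation, the classic time-to-collision surrogate it modifies is the gap divided by the closing speed, as in the hypothetical helper below; the 1.5 s threshold is an illustrative value, not the study's.

```python
def time_to_collision(gap_m, v_follower_ms, v_leader_ms):
    """Classic car-following TTC: gap divided by closing speed (inf if not closing)."""
    closing = v_follower_ms - v_leader_ms
    return gap_m / closing if closing > 1e-6 else float("inf")

# Flag an interaction as potentially hazardous below a chosen threshold, e.g. 1.5 s:
# hazardous = time_to_collision(gap, v_f, v_l) < 1.5
```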

25 pages, 6241 KB  
Article
Evaluation of Hybrid Data Collection for Traffic Accident Site Documentation
by Zdeněk Svatý, Pavel Vrtal, Tomáš Kohout, Luboš Nouzovský and Karel Kocián
Geomatics 2025, 5(4), 77; https://doi.org/10.3390/geomatics5040077 - 10 Dec 2025
Viewed by 303
Abstract
This study examines the possibilities of using hybrid data collection methods based on photogrammetric and LiDAR imaging for documenting traffic accident sites. The evaluation was performed with an iPhone 15 Pro and a viDoc GNSS receiver, and comparative measurements were made against higher-accuracy surveying instruments. The test scenarios included measuring errors along a 25 m line and scanning a larger traffic area. Measurements were conducted under limiting conditions on a homogeneous surface without terrain irregularities or objects. The results show that although hybrid scanning cannot fully replace traditional surveying instruments, it provides accurate results for documenting traffic accident sites. The analysis additionally revealed an almost linear spread of errors on homogeneous asphalt surfaces. Moreover, it was confirmed that the use of a GNSS receiver and control points has a significant impact on data quality. Such a comprehensive assessment of surface homogeneity had not been tested before. To achieve this accuracy, a scanning mode with at least 90% image overlap combined with RTK GNSS is recommended. The relative error on a linear section ranged from 0.5 to 1.0%, which corresponds to an error of up to 5 cm over a 5 m section. When evaluating a larger area using hybrid data collection, 93.38% of the points had an error below 10 cm, with a mean deviation of 6.2 cm. These findings expand current knowledge and define practical device settings and operational limits for the use of hybrid mobile scanning. Full article
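The quoted relative error translates into an absolute error as a simple proportion; the snippet below just reproduces that arithmetic for the 5 m section mentioned above.

```python
def projected_error(relative_error_pct, section_m):
    """Absolute error implied by a relative error over a section of given length."""
    return relative_error_pct / 100.0 * section_m

print(projected_error(1.0, 5.0))   # 1.0 % over a 5 m section -> 0.05 m, i.e. the 5 cm upper bound
```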

31 pages, 11875 KB  
Article
A Comparative Analysis of Low-Cost Devices for High-Precision Diameter at Breast Height Estimation
by Jozef Výbošťok, Juliána Chudá, Daniel Tomčík, Julián Tomaštík, Roman Kadlečík and Martin Mokroš
Remote Sens. 2025, 17(23), 3888; https://doi.org/10.3390/rs17233888 - 29 Nov 2025
Viewed by 559
Abstract
Forestry is essential for environmental sustainability, biodiversity conservation, carbon sequestration, and renewable resource management. Traditional methods for forest inventory, particularly the manual measurement of diameter at breast height (DBH), are labor-intensive and prone to error. Recent advancements in proximal sensing, including lidar and photogrammetry, have paved the way for more efficient approaches, yet high costs remain a barrier to widespread adoption. This study investigates the potential of close-range photogrammetry (CRP) using low-cost devices, such as smartphones, cameras, and specialized handheld laser scanners (Stonex and a LIVOX prototype), to generate 3D point clouds for accurate DBH estimation. We compared these devices by assessing their agreement with conventional methods and their efficiency in diverse forest conditions across multiple tree species. Additionally, we analyze the factors influencing measurement errors and propose a comprehensive decision-making framework to guide technology selection in forest inventory. Photogrammetry, run on the lowest-cost devices, achieved the highest agreement with the conventional (caliper-based) measurements and provided the most accurate DBH estimates (error ≈ 0.7 cm) but required the highest effort; handheld laser scanners achieved an average accuracy of about 1.5 cm at substantially higher cost; and mobile applications were the fastest and least expensive but also the least accurate (3–3.5 cm error). The outcomes of this research aim to facilitate more accessible, reliable, and sustainable forest management practices. Full article
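DBH estimation from a point cloud is commonly done by fitting a circle to a thin slice of stem points around breast height. The sketch below uses a Kasa least-squares circle fit as an assumed, generic approach; the devices and applications compared in the study may use different algorithms.

```python
import numpy as np

def dbh_from_slice(points, slice_center=1.3, slice_half_width=0.05):
    """Estimate DBH by fitting a circle (Kasa least squares) to the stem points in a
    thin horizontal slice around breast height (z in metres above ground)."""
    z = points[:, 2]
    sl = points[np.abs(z - slice_center) < slice_half_width][:, :2]
    x, y = sl[:, 0], sl[:, 1]
    # Circle model: x^2 + y^2 = 2*cx*x + 2*cy*y + c, with c = r^2 - cx^2 - cy^2
    A = np.column_stack([2 * x, 2 * y, np.ones(len(sl))])
    b = x**2 + y**2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    radius = np.sqrt(c + cx**2 + cy**2)
    return 2.0 * radius                      # diameter in the same units as the input
```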

13 pages, 1720 KB  
Article
Segment-Based SLAM Registration Optimization Algorithm Combining NDT and PL-ICP
by Yi Zhang, Xiao Wang, Xiuqin Lyu, Liang Zhang, Weiwei Song and Rui Zhang
Sensors 2025, 25(23), 7175; https://doi.org/10.3390/s25237175 - 24 Nov 2025
Viewed by 566
Abstract
With the continuous advancement of LiDAR technology, solid-state LiDAR, with its low cost and unique scanning mode, shows great potential in measurement applications. However, in large-scale environments, the SLAM algorithm LOAM-Livox for solid-state LiDAR often accumulates registration errors, limiting its applicability. To address this, we propose a segment-based SLAM registration optimization algorithm that combines Normal Distributions Transform (NDT) and Point-to-Line Iterative Closest Point (PL-ICP). This algorithm divides the entire data processing into segments, performs SLAM independently on each segment, and registers overlapping areas between adjacent segments to minimize error accumulation. Experiments on both public and self-collected datasets demonstrate that the proposed NDT + PL-ICP optimization algorithm significantly improves the accuracy of mobile mapping with solid-state LiDAR. This approach effectively resolves the error accumulation issue in SLAM, confirming its effectiveness and practicality in real-world applications. Full article
(This article belongs to the Special Issue Recent Advances in LiDAR Sensor)
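The segment-wise registration idea can be illustrated with off-the-shelf tooling: the sketch below aligns the overlapping areas of two consecutive segment maps with Open3D's point-to-point ICP. Open3D does not ship NDT or PL-ICP, so this is a stand-in for the paper's NDT + PL-ICP pipeline rather than a reproduction of it, and the voxel size and distance threshold are illustrative values.

```python
import numpy as np
import open3d as o3d

def register_segments(seg_a, seg_b, voxel=0.2, max_dist=1.0):
    """Align the overlapping region of two per-segment maps (o3d.geometry.PointCloud)."""
    a = seg_a.voxel_down_sample(voxel)
    b = seg_b.voxel_down_sample(voxel)
    result = o3d.pipelines.registration.registration_icp(
        b, a, max_dist, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation             # 4x4 transform mapping segment B onto segment A

# Chained over consecutive segments, each new segment map is registered to the previous one,
# limiting the accumulation of registration error across the whole trajectory.
```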

27 pages, 33395 KB  
Article
Deep Line-Segment Detection-Driven Building Footprints Extraction from Backpack LiDAR Point Clouds for Urban Scene Reconstruction
by Jia Li, Rushi Lv, Qiuping Lan, Xinyi Shou, Hengyu Ruan, Jianjun Cao and Zikuan Li
Remote Sens. 2025, 17(22), 3730; https://doi.org/10.3390/rs17223730 - 17 Nov 2025
Viewed by 974
Abstract
Accurate and reliable extraction of building footprints from LiDAR point clouds is a fundamental task in remote sensing and urban scene reconstruction. Building footprints serve as essential geospatial products that support GIS database updating, land-use monitoring, disaster management, and digital twin development. Traditional image-based methods enable large-scale mapping but suffer from 2D perspective limitations and radiometric distortions, while airborne or vehicle-borne LiDAR systems often face single-viewpoint constraints that lead to incomplete or fragmented footprints. Recently, backpack mobile laser scanning (MLS) has emerged as a flexible platform for capturing dense urban geometry at the pedestrian level. However, the high noise, point sparsity, and structural complexity of MLS data make reliable footprint delineation particularly challenging. To address these issues, this study proposes a Deep Line-Segment Detection-Driven Building Footprints Extraction Framework that integrates multi-layer accumulated occupancy mapping, deep geometric feature learning, and structure-aware regularization. The accumulated occupancy maps aggregate stable wall features from multiple height slices to enhance contour continuity and suppress random noise. A deep line-segment detector is then employed to extract robust geometric cues from noisy projections, achieving accurate edge localization and reduced false responses. Finally, a structural chain-based completion and redundancy filtering strategy repairs fragmented contours and removes spurious lines, ensuring coherent and topologically consistent footprint reconstruction. Extensive experiments conducted on two campus scenes containing 102 buildings demonstrate that the proposed method achieves superior performance with an average Precision of 95.7%, Recall of 92.2%, F1-score of 93.9%, and IoU of 88.6%, outperforming existing baseline approaches by 4.5–7.8% in F1-score. These results highlight the strong potential of backpack LiDAR point clouds, when combined with deep line-segment detection and structural reasoning, to complement traditional remote sensing imagery and provide a reliable pathway for large-scale urban scene reconstruction and geospatial interpretation. Full article
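The multi-layer accumulated occupancy mapping step can be sketched with NumPy: each height slice votes into a shared 2-D grid, so persistent wall returns accumulate while transient clutter stays weak. The slice heights and cell size below are illustrative assumptions, not the study's parameters.

```python
import numpy as np

def accumulated_occupancy(points, z_slices=((0.5, 1.5), (1.5, 2.5), (2.5, 3.5)), cell=0.1):
    """Accumulate 2-D occupancy over several height slices of a point cloud (N, 3)."""
    xy = points[:, :2]
    x_edges = np.arange(xy[:, 0].min(), xy[:, 0].max() + cell, cell)
    y_edges = np.arange(xy[:, 1].min(), xy[:, 1].max() + cell, cell)
    acc = np.zeros((len(x_edges) - 1, len(y_edges) - 1))
    for lo, hi in z_slices:
        sel = (points[:, 2] >= lo) & (points[:, 2] < hi)
        h, _, _ = np.histogram2d(xy[sel, 0], xy[sel, 1], bins=[x_edges, y_edges])
        acc += (h > 0).astype(float)         # each slice votes once per occupied cell
    return acc                               # cells with high counts indicate stable walls
```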

20 pages, 8109 KB  
Article
Development of an Orchard Inspection Robot: A ROS-Based LiDAR-SLAM System with Hybrid A*-DWA Navigation
by Jiwei Qu, Yanqiu Gu, Zhinuo Qiu, Kangquan Guo and Qingzhen Zhu
Sensors 2025, 25(21), 6662; https://doi.org/10.3390/s25216662 - 1 Nov 2025
Viewed by 1312
Abstract
The application of orchard inspection robots has become increasingly widespread. However, achieving autonomous navigation in unstructured environments continues to present significant challenges. This study investigates the Simultaneous Localization and Mapping (SLAM) navigation system of an orchard inspection robot and evaluates its performance using Light Detection and Ranging (LiDAR) technology. A mobile robot that integrates tightly coupled multi-sensors is developed and implemented. The integration of LiDAR and Inertial Measurement Units (IMUs) enables the perception of environmental information. Moreover, the robot’s kinematic model is established, and coordinate transformations are performed based on the Unified Robotics Description Format (URDF). The URDF facilitates the visualization of robot features within the Robot Operating System (ROS). ROS navigation nodes are configured for path planning, where an improved A* algorithm, combined with the Dynamic Window Approach (DWA), is introduced to achieve efficient global and local path planning. Comparison of the simulation results with classical algorithms demonstrates that the implemented algorithm exhibits superior search efficiency and smoothness. The robot’s navigation performance is rigorously tested, focusing on navigation accuracy and obstacle avoidance capability. Results show that, during temporary stops at waypoints, the robot exhibits an average lateral deviation of 0.163 m and a longitudinal deviation of 0.282 m from the target point. The average braking time and startup time of the robot at the four waypoints are 0.46 s and 0.64 s, respectively. In obstacle avoidance tests, optimal performance is observed with an expansion radius of 0.4 m across various obstacle sizes. The proposed combined method achieves efficient and stable global and local path planning, serving as a reference for future applications of mobile inspection robots in autonomous navigation. Full article
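The abstract does not detail the improvements made to A*. For orientation, the plain 4-connected grid A* that such methods build on looks like the sketch below, with a Manhattan heuristic; this is a generic baseline, not the paper's improved algorithm or its DWA coupling.

```python
import heapq, itertools

def astar(grid, start, goal):
    """Plain 4-connected A* on an occupancy grid (0 = free, 1 = blocked); returns a
    list of (row, col) cells from start to goal, or None if no path exists."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])     # Manhattan heuristic
    tie = itertools.count()                                     # tie-breaker for the heap
    open_set = [(h(start), next(tie), start)]
    g_best, parent = {start: 0}, {start: None}
    while open_set:
        _, _, cur = heapq.heappop(open_set)
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = parent[cur]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                ng = g_best[cur] + 1
                if ng < g_best.get(nxt, float("inf")):
                    g_best[nxt] = ng
                    parent[nxt] = cur
                    heapq.heappush(open_set, (ng + h(nxt), next(tie), nxt))
    return None
```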

21 pages, 11906 KB  
Article
Voxelized Point Cloud and Solid 3D Model Integration to Assess Visual Exposure in Yueya Lake Park, Nanjing
by Guanting Zhang, Dongxu Yang and Shi Cheng
Land 2025, 14(10), 2095; https://doi.org/10.3390/land14102095 - 21 Oct 2025
Cited by 1 | Viewed by 865
Abstract
Natural elements such as vegetation, water bodies, and sky, together with artificial elements including buildings and paved surfaces, constitute the core of urban visual environments. Their perception at the pedestrian level not only influences city image but also contributes to residents’ well-being and spatial experience. This study develops a hybrid 3D visibility assessment framework that integrates a city-scale LOD1 solid model with high-resolution mobile LiDAR point clouds to quantify five visual exposure indicators. The case study area is Yueya Lake Park in Nanjing, where a voxel-based line-of-sight sampling approach simulated eye-level visibility at 1.6 m along the southern lakeside promenade. Sixteen viewpoints were selected at 50 m intervals to capture spatial variations in visual exposure. Comparative analysis between the solid model (excluding vegetation) and the hybrid model (including vegetation) revealed that vegetation significantly reshaped the pedestrian visual field by reducing the dominance of sky and buildings, enhancing near-field greenery, and reframing water views. Artificial elements such as buildings and ground showed decreased exposure in the hybrid model, reflecting vegetation’s masking effect. The calculation efficiency remains a limitation in this study. Overall, the study demonstrates that integrating natural and artificial elements provides a more realistic and nuanced assessment of pedestrian visual perception, offering valuable support for sustainable landscape planning, canopy management, and the equitable design of urban public spaces. Full article
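Voxel-based line-of-sight sampling at a 1.6 m eye height can be approximated by marching rays through a labelled voxel set and recording the first class hit per ray. The sketch below is a coarse, assumed illustration (fixed-step marching, 1 m voxel keys, made-up class names), not the study's implementation.

```python
import numpy as np

def visual_exposure(eye, directions, voxel_lookup, step=0.5, max_range=300.0):
    """March rays from an eye point through a labelled voxel set and count the first
    class hit per ray ('building', 'vegetation', 'water', 'ground', else 'sky').

    eye          : (3,) viewpoint, e.g. ground position + 1.6 m
    directions   : iterable of unit direction vectors (e.g. a horizontal fan of rays)
    voxel_lookup : dict mapping integer voxel keys (ix, iy, iz) to a class label
    """
    counts = {}
    for d in directions:
        label = "sky"
        for t in np.arange(step, max_range, step):
            key = tuple(np.floor(eye + t * d).astype(int))    # 1 m voxels, illustrative
            if key in voxel_lookup:
                label = voxel_lookup[key]
                break
        counts[label] = counts.get(label, 0) + 1
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}          # exposure fraction per class
```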

15 pages, 516 KB  
Perspective
Advances in High-Resolution Spatiotemporal Monitoring Techniques for Indoor PM2.5 Distribution
by Qingyang Liu
Atmosphere 2025, 16(10), 1196; https://doi.org/10.3390/atmos16101196 - 17 Oct 2025
Viewed by 879
Abstract
Indoor air pollution, including fine particulate matter (PM2.5), poses a severe threat to human health. Due to the diverse sources of indoor PM2.5 and its high spatial heterogeneity in distribution, traditional single-point fixed monitoring fails to accurately reflect the actual human exposure level. In recent years, the development of high spatiotemporal resolution monitoring technologies has provided a new perspective for revealing the dynamic distribution patterns of indoor PM2.5. This study discusses two cutting-edge monitoring strategies: (1) mobile monitoring technology based on Indoor Positioning Systems (IPS) and portable sensors, which maps 2D exposure trajectories and concentration fields by having personnel carry sensors while moving; and (2) 3D dynamic monitoring technology based on in situ Lateral Scattering LiDAR (I-LiDAR), which non-intrusively reconstructs the 3D dynamic distribution of PM2.5 concentrations using laser arrays. This study elaborates on the principles, calibration methods, application cases, advantages, and disadvantages of the two technologies, compares their applicable scenarios, and outlines future research directions in multi-technology integration, intelligent calibration, and public health applications. It aims to provide a theoretical basis and technical reference for the accurate assessment of indoor air quality and the prevention and control of health risks. Full article
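The IPS-plus-portable-sensor strategy essentially grids concentration readings by logged position to build a 2D concentration field. A minimal, assumed sketch (hypothetical arrays, arbitrary cell size):

```python
import numpy as np

def concentration_field(xy, pm25, cell=0.5):
    """Average PM2.5 readings logged along an indoor-positioning trajectory into a
    mean-concentration field (one value per occupied grid cell)."""
    ij = np.floor(np.asarray(xy) / cell).astype(int)
    field = {}
    for key, c in zip(map(tuple, ij), pm25):
        s, n = field.get(key, (0.0, 0))
        field[key] = (s + c, n + 1)
    return {k: s / n for k, (s, n) in field.items()}
```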

23 pages, 4831 KB  
Article
Accuracy Assessment of iPhone LiDAR for Mapping Streambeds and Small Water Structures in Forested Terrain
by Dominika Krausková, Tomáš Mikita, Petr Hrůza and Barbora Kudrnová
Sensors 2025, 25(19), 6141; https://doi.org/10.3390/s25196141 - 4 Oct 2025
Cited by 1 | Viewed by 5746
Abstract
Accurate mapping of small water structures and streambeds is essential for hydrological modeling, erosion control, and landscape management. While traditional geodetic methods such as GNSS and total stations provide high precision, they are time-consuming and require specialized equipment. Recent advances in mobile technology, particularly smartphones equipped with LiDAR sensors, offer a potential alternative for rapid and cost-effective field data collection. This study assesses the accuracy of the iPhone 14 Pro’s built-in LiDAR sensor for mapping streambeds and retention structures in challenging terrain. The test site was the Dílský stream in the Oslavany cadastral area, characterized by steep slopes, rocky surfaces, and dense vegetation. The stream channel and water structures were first surveyed using GNSS and a total station and subsequently re-measured with the iPhone. Several scanning workflows were tested to evaluate field applicability. Results show that the iPhone LiDAR sensor can capture landscape features with useful accuracy when supported by reference points spaced every 20 m, achieving a vertical RMSE of 0.16 m. Retention structures were mapped with an average positional error of 7%, with deviations of up to 0.20 m in complex or vegetated areas. The findings highlight the potential of smartphone LiDAR for rapid, small-scale mapping, while acknowledging its limitations in rugged environments. Full article
(This article belongs to the Section Environmental Sensing)
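A vertical RMSE of the kind reported (0.16 m) is computed from elevation differences at matched check points, as in the small generic helper below; the matching of scanned points to GNSS/total-station references is assumed to have been done beforehand.

```python
import numpy as np

def vertical_rmse(z_scan, z_reference):
    """Vertical RMSE between scanned and reference elevations at matched points."""
    dz = np.asarray(z_scan) - np.asarray(z_reference)
    return float(np.sqrt(np.mean(dz ** 2)))
```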

26 pages, 38057 KB  
Article
Multimodal RGB–LiDAR Fusion for Robust Drivable Area Segmentation and Mapping
by Hyunmin Kim, Minkyung Jun and Hoeryong Jung
Sensors 2025, 25(18), 5841; https://doi.org/10.3390/s25185841 - 18 Sep 2025
Viewed by 2324
Abstract
Drivable area detection and segmentation are critical tasks for autonomous mobile robots in complex and dynamic environments. RGB-based methods offer rich semantic information but suffer in unstructured environments and under varying lighting, while LiDAR-based models provide precise spatial measurements but often require high-resolution sensors and are sensitive to sparsity. In addition, most fusion-based systems are constrained by fixed sensor setups and demand retraining when hardware configurations change. This paper presents a real-time, modular RGB–LiDAR fusion framework for robust drivable area recognition and mapping. Our method decouples RGB and LiDAR preprocessing to support sensor-agnostic adaptability without retraining, enabling seamless deployment across diverse platforms. By fusing RGB segmentation with LiDAR ground estimation, we generate high-confidence drivable area point clouds, which are incrementally integrated via SLAM into a global drivable area map. The proposed approach was evaluated on the KITTI dataset in terms of intersection over union (IoU), precision, and frames per second (FPS). Experimental results demonstrate that the proposed framework achieves competitive accuracy and the highest inference speed among compared methods, confirming its suitability for real-time autonomous navigation. Full article
(This article belongs to the Section Navigation and Positioning)
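The fusion step of keeping LiDAR points that both lie near the estimated ground plane and project into the RGB drivable-area mask can be sketched with Open3D's RANSAC plane segmentation plus a pinhole projection. The camera parameters, mask, and thresholds below are hypothetical inputs, and the sketch is an assumed simplification of the paper's pipeline.

```python
import numpy as np
import open3d as o3d

def drivable_points(cloud, T_cam_lidar, K, drivable_mask, dist_thresh=0.1):
    """Keep LiDAR points that are on the RANSAC ground plane and also project into the
    RGB 'drivable' segmentation mask (boolean array of shape (H, W))."""
    _, ground_idx = cloud.segment_plane(distance_threshold=dist_thresh,
                                        ransac_n=3, num_iterations=500)
    pts = np.asarray(cloud.points)[ground_idx]
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    cam = (T_cam_lidar @ pts_h.T).T[:, :3]
    front = cam[:, 2] > 0.1                              # points in front of the camera
    uv = (K @ cam[front].T).T
    uv = np.floor(uv[:, :2] / uv[:, 2:3]).astype(int)
    h, w = drivable_mask.shape
    ok = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    keep = np.zeros(len(uv), dtype=bool)
    keep[ok] = drivable_mask[uv[ok, 1], uv[ok, 0]]
    return pts[front][keep]                              # high-confidence drivable-area points
```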
