Search Results (32)

Search Parameters:
Keywords = LiDAR intensity completion

24 pages, 32355 KiB  
Article
Evaluating UAV LiDAR and Field Spectroscopy for Estimating Residual Dry Matter Across Conservation Grazing Lands
by Bruce Markman, H. Scott Butterfield, Janet Franklin, Lloyd Coulter, Moses Katkowski and Daniel Sousa
Remote Sens. 2025, 17(14), 2352; https://doi.org/10.3390/rs17142352 - 9 Jul 2025
Viewed by 524
Abstract
Residual dry matter (RDM) is a term used in rangeland management to describe the non-photosynthetic plant material left on the soil surface at the end of the growing season. RDM measurements are used by agencies and conservation entities for managing grazing and fire fuels. Measuring the RDM using traditional methods is labor-intensive, costly, and subjective, making consistent sampling challenging. Previous studies have assessed the use of multispectral remote sensing to estimate the RDM, but with limited success across space and time. The existing approaches may be improved through the use of spectroscopic (hyperspectral) sensors, capable of capturing the cellulose and lignin present in dry grass, as well as Unmanned Aerial Vehicle (UAV)-mounted Light Detection and Ranging (LiDAR) sensors, capable of capturing centimeter-scale 3D vegetation structures. Here, we evaluate the relationships between the RDM and spectral and LiDAR data across the Jack and Laura Dangermond Preserve (Santa Barbara County, CA, USA), which uses grazing and prescribed fire for rangeland management. The spectral indices did not correlate with the RDM (R2 < 0.1), likely due to complete areal coverage with dense grass. The LiDAR canopy height models performed better for all the samples (R2 = 0.37), with much stronger performance (R2 = 0.81) when using a stratified model to predict the RDM in plots with predominantly standing (as opposed to lying) vegetation. This study demonstrates the potential of UAV LiDAR for direct RDM quantification where vegetation is standing upright, which could help improve RDM mapping and management for rangelands in California and beyond. Full article
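The stratified canopy-height model described above amounts to a plot-level regression of RDM on a LiDAR height metric. A minimal sketch of that idea in NumPy, using invented numbers and an assumed linear form (not the authors' model or data):

```python
import numpy as np

def fit_rdm_model(height_cm, rdm):
    """Least-squares fit RDM ~ canopy height; returns (slope, intercept, R^2)."""
    slope, intercept = np.polyfit(height_cm, rdm, 1)
    pred = slope * height_cm + intercept
    ss_res = np.sum((rdm - pred) ** 2)
    ss_tot = np.sum((rdm - rdm.mean()) ** 2)
    return slope, intercept, 1.0 - ss_res / ss_tot

# Synthetic "standing vegetation" plots: RDM made exactly linear in height
# so the toy fit is deterministic.
h = np.array([5.0, 10.0, 15.0, 20.0, 25.0, 30.0])   # canopy height metric
rdm = 120.0 * h + 300.0                              # placeholder RDM values
slope, intercept, r2 = fit_rdm_model(h, rdm)
```

With real plots, `h` would be a per-plot height summary from the UAV LiDAR canopy height model and `rdm` the field-measured residual dry matter.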

25 pages, 6553 KiB  
Article
Tree Species Classification Based on Point Cloud Completion
by Haoran Liu, Hao Zhong, Guangqiang Xie and Ping Zhang
Forests 2025, 16(2), 280; https://doi.org/10.3390/f16020280 - 6 Feb 2025
Cited by 1 | Viewed by 857
Abstract
LiDAR is an active remote sensing technology widely used in forestry applications, such as forest resource surveys, tree information collection, and ecosystem monitoring. However, due to the resolution limitations of 3D laser scanners and the canopy occlusion in forest environments, the tree point clouds obtained often have missing data. This can reduce the accuracy of individual tree segmentation, which subsequently affects tree species classification. To address this issue, this study used point cloud data with RGB information collected by a UAV platform to improve tree species classification by completing the missing point clouds. Furthermore, the study also explored the effects of point cloud completion, feature selection, and classification methods on the results. Specifically, both a traditional geometric method and a deep learning-based method were used for point cloud completion, and their performance was compared. For the classification of tree species, five machine learning algorithms—Random Forest (RF), Support Vector Machine (SVM), Back Propagation Neural Network (BPNN), Quadratic Discriminant Analysis (QDA), and K-Nearest Neighbors (KNN)—were utilized. This study also ranked the importance of features to assess the impact of different algorithms and features on classification accuracy. The results showed that the deep learning-based completion method provided the best performance (avgCD = 6.14; avgF1 = 0.85), generating more complete point clouds than the traditional method. On the other hand, compared with SVM and BPNN, RF showed better performance in dealing with multi-classification tasks with limited training samples (OA = 87.41%, Kappa = 0.85). Among the six dominant tree species, Pinus koraiensis had the highest classification accuracy (93.75%), while that of Juglans mandshurica was the lowest (82.05%). In addition, the vegetation index and the tree structure parameter accounted for 50% and 30%, respectively, of the top 10 features in terms of feature importance. The point cloud intensity also contributed strongly to the classification results, indicating that LiDAR point cloud intensity can serve as an important basis for tree species classification. Full article
(This article belongs to the Section Forest Inventory, Modeling and Remote Sensing)
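Of the five classifiers compared in this abstract, k-nearest neighbors is compact enough to sketch directly in NumPy. The feature names, values, and species labels below are hypothetical placeholders, not the paper's data:

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=1):
    """Classify each test sample by majority vote among its k nearest
    training samples (Euclidean distance in feature space)."""
    preds = []
    for x in X_test:
        d = np.linalg.norm(X_train - x, axis=1)
        nearest = y_train[np.argsort(d)[:k]]
        vals, counts = np.unique(nearest, return_counts=True)
        preds.append(vals[np.argmax(counts)])
    return np.array(preds)

# Toy per-tree features: [vegetation index, crown height (m), mean intensity]
X_train = np.array([[0.80, 22.0, 0.35],   # species A
                    [0.78, 21.0, 0.33],   # species A
                    [0.45, 12.0, 0.60],   # species B
                    [0.48, 13.0, 0.62]])  # species B
y_train = np.array([0, 0, 1, 1])
X_test = np.array([[0.79, 21.5, 0.34], [0.46, 12.5, 0.61]])
pred = knn_predict(X_train, y_train, X_test, k=1)
```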

21 pages, 10278 KiB  
Article
Three-Dimensional Reconstruction of Zebra Crossings in Vehicle-Mounted LiDAR Point Clouds
by Zhenfeng Zhao, Shu Gan, Bo Xiao, Xinpeng Wang and Chong Liu
Remote Sens. 2024, 16(19), 3722; https://doi.org/10.3390/rs16193722 - 7 Oct 2024
Cited by 4 | Viewed by 2134
Abstract
In the production of high-definition maps, it is necessary to achieve the three-dimensional instantiation of road furniture that is difficult to depict on traditional maps. The development of mobile laser measurement technology provides a new means for acquiring road furniture data. To address the issue of traffic marking extraction accuracy in practical production, which is affected by degradation, occlusion, and non-standard variations, this paper proposes a 3D reconstruction method based on energy functions and template matching, using zebra crossings in vehicle-mounted LiDAR point clouds as an example. First, regions of interest (RoIs) containing zebra crossings are obtained through manual selection. Candidate point sets are then obtained at fixed distances, and their neighborhood intensity features are calculated to determine the number of zebra stripes using non-maximum suppression. Next, the slice intensity feature of each zebra stripe is calculated, followed by outlier filtering to determine the optimized length. Finally, a matching template is selected, and an energy function composed of the average intensity of the point cloud within the template, the intensity information entropy, and the intensity gradient at the template boundary is constructed. The 3D reconstruction result is obtained by solving the energy function, performing mode statistics, and normalization. This method enables the complete 3D reconstruction of zebra stripes within the RoI, maintaining an average planar corner accuracy within 0.05 m and an elevation accuracy within 0.02 m. The matching and reconstruction time does not exceed 1 s, and it has been applied in practical production. Full article
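The stripe-detection step pairs neighborhood intensity features with non-maximum suppression. A toy 1D version of that idea, with assumed window and threshold values (the actual method works on candidate point sets inside a manually selected RoI):

```python
import numpy as np

def count_stripes(intensity, window=5, thresh=1.0):
    """Non-maximum suppression on a 1D intensity profile: a sample is kept
    if it exceeds the threshold and is the maximum within +/- window."""
    peaks = []
    for i in range(len(intensity)):
        lo, hi = max(0, i - window), min(len(intensity), i + window + 1)
        if intensity[i] >= thresh and intensity[i] == intensity[lo:hi].max():
            peaks.append(i)
    return peaks

# Toy profile: four bright zebra stripes over a dark road surface.
profile = np.full(80, 0.1)
for pos, amp in [(10, 5.0), (30, 4.5), (50, 6.0), (70, 5.2)]:
    profile[pos] = amp
stripes = count_stripes(profile)
```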

20 pages, 6719 KiB  
Article
Tracking Method of GM-APD LiDAR Based on Adaptive Fusion of Intensity Image and Point Cloud
by Bo Xiao, Yuchao Wang, Tingsheng Huang, Xuelian Liu, Da Xie, Xulang Zhou, Zhanwen Liu and Chunyang Wang
Appl. Sci. 2024, 14(17), 7884; https://doi.org/10.3390/app14177884 - 5 Sep 2024
Cited by 1 | Viewed by 1511
Abstract
In dynamic tracking scenes, the target is often obstructed by obstacles, leading to a loss of target information and a decrease in tracking accuracy or even complete failure. To address these challenges, we leverage the capabilities of Geiger-mode Avalanche Photodiode (GM-APD) LiDAR to acquire both intensity images and point cloud data for researching a target tracking method based on the fusion of intensity images and point cloud data. Building upon kernelized correlation filtering (KCF), we introduce Fourier descriptors based on intensity images to enhance the representational capacity of target features, thereby achieving precise target tracking using intensity images. Additionally, an adaptive factor is designed based on the peak sidelobe ratio and intrinsic shape signature to accurately detect occlusions. Finally, by fusing the tracking results from the Kalman filter and KCF with adaptive factors following occlusion detection, we obtain location information for the central point of the target. The proposed method is validated through simulations using the KITTI tracking dataset, yielding an average position error of 0.1182 m for the central point of the target. Moreover, our approach achieves an average tracking accuracy that is 21.67% higher than that of the Kalman filtering algorithm and 7.94% higher than that of the extended Kalman filtering algorithm. Full article
(This article belongs to the Special Issue Optical Sensors: Applications, Performance and Challenges)
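The occlusion-aware fusion can be sketched as: compute a peak sidelobe ratio (PSR) from the KCF response map, then blend the KCF and Kalman position estimates with a PSR-driven weight. Everything below (the sigmoid weighting, the thresholds, the toy response map) is an illustrative assumption, not the paper's exact formulation:

```python
import numpy as np

def peak_sidelobe_ratio(response, exclude=2):
    """PSR of a correlation response map: (peak - sidelobe mean) / sidelobe std.
    A low PSR suggests the target is occluded."""
    r0, c0 = np.unravel_index(np.argmax(response), response.shape)
    peak = response[r0, c0]
    mask = np.ones_like(response, dtype=bool)
    mask[max(0, r0 - exclude):r0 + exclude + 1,
         max(0, c0 - exclude):c0 + exclude + 1] = False
    side = response[mask]
    return (peak - side.mean()) / side.std()

def fuse(kcf_pos, kalman_pos, psr, psr_thresh=8.0):
    """Trust the KCF estimate when PSR is high, fall back toward the Kalman
    prediction as PSR drops (sigmoid weighting, an assumed form)."""
    w = 1.0 / (1.0 + np.exp(-(psr - psr_thresh)))   # weight on KCF result
    return w * np.asarray(kcf_pos) + (1.0 - w) * np.asarray(kalman_pos)

# Toy response map with one strong, unambiguous peak (no occlusion).
rng = np.random.default_rng(0)
response = rng.normal(0.0, 0.01, size=(21, 21))
response[10, 10] = 1.0
psr = peak_sidelobe_ratio(response)
fused = fuse(kcf_pos=(5.0, 5.0), kalman_pos=(9.0, 9.0), psr=psr)
```

With a clear peak the PSR is high, so the fused position stays essentially at the KCF estimate; under occlusion the weight would shift toward the Kalman prediction.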

15 pages, 1225 KiB  
Article
A Self-Supervised Few-Shot Semantic Segmentation Method Based on Multi-Task Learning and Dense Attention Computation
by Kai Yi, Weihang Wang and Yi Zhang
Sensors 2024, 24(15), 4975; https://doi.org/10.3390/s24154975 - 31 Jul 2024
Viewed by 1742
Abstract
Nowadays, autonomous driving technology has become widely prevalent. Intelligent vehicles are equipped with various sensors (e.g., vision sensors, LiDAR, depth cameras, etc.). Among them, vision systems with tailored semantic segmentation and perception algorithms play critical roles in scene understanding. However, traditional supervised semantic segmentation needs a large number of pixel-level manual annotations to complete model training. Although few-shot methods reduce the annotation work to some extent, they are still labor-intensive. In this paper, a self-supervised few-shot semantic segmentation method based on Multi-task Learning and Dense Attention Computation (dubbed MLDAC) is proposed. The salient part of an image is split into two parts; one of them serves as the support mask for few-shot segmentation, while cross-entropy losses are calculated between the other part and the entire region with the predicted results separately, as multi-task learning, so as to improve the model’s generalization ability. Swin Transformer is used as our backbone to extract feature maps at different scales. These feature maps are then input to multiple levels of dense attention computation blocks to enhance pixel-level correspondence. The final prediction results are obtained through inter-scale mixing and feature skip connections. The experimental results indicate that MLDAC obtains one-shot mIoU scores of 55.1% and 26.8% for self-supervised few-shot segmentation on the PASCAL-5i and COCO-20i datasets, respectively. In addition, it achieves 78.1% on the FSS-1000 few-shot dataset, proving its efficacy. Full article
(This article belongs to the Section Sensing and Imaging)
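The self-supervised setup splits the salient region of an image into two parts, one acting as the support mask. A toy sketch of that split on a binary mask (the centroid-column rule is an assumption; the abstract does not specify the split criterion):

```python
import numpy as np

def split_salient_mask(mask):
    """Split a binary salient-object mask at the column of its centroid:
    one half can serve as the few-shot 'support' mask, the other as the
    pseudo 'query' target."""
    ys, xs = np.nonzero(mask)
    cx = int(xs.mean())
    left, right = np.zeros_like(mask), np.zeros_like(mask)
    left[:, :cx] = mask[:, :cx]
    right[:, cx:] = mask[:, cx:]
    return left, right

mask = np.zeros((8, 8), dtype=int)
mask[2:6, 2:6] = 1                      # toy salient region
support, query = split_salient_mask(mask)
```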

24 pages, 30702 KiB  
Article
Towards Urban Digital Twins: A Workflow for Procedural Visualization Using Geospatial Data
by Sanjay Somanath, Vasilis Naserentin, Orfeas Eleftheriou, Daniel Sjölie, Beata Stahre Wästberg and Anders Logg
Remote Sens. 2024, 16(11), 1939; https://doi.org/10.3390/rs16111939 - 28 May 2024
Cited by 10 | Viewed by 4219
Abstract
A key feature for urban digital twins (DTs) is an automatically generated detailed 3D representation of the built and unbuilt environment from aerial imagery, footprints, LiDAR, or a fusion of these. Such 3D models have applications in architecture, civil engineering, urban planning, construction, real estate, Geographical Information Systems (GIS), and many other areas. While the visualization of large-scale data in conjunction with the generated 3D models is often a recurring and resource-intensive task, an automated workflow is complex, requiring many steps to achieve a high-quality visualization. Building reconstruction methods have come a long way, from previously manual approaches to semi-automatic or automatic ones. This paper aims to complement existing methods of 3D building generation. First, we present a literature review covering different options for procedural context generation and visualization methods, focusing on workflows and data pipelines. Next, we present a semi-automated workflow that extends the building reconstruction pipeline to include procedural context generation using Python and Unreal Engine. Finally, we propose a workflow for integrating various types of large-scale urban analysis data for visualization. We conclude with a series of challenges faced in achieving such pipelines and the limitations of the current approach. The steps toward a complete, end-to-end solution involve further developing robust systems for building detection, rooftop recognition, and geometry generation, as well as importing and visualizing data in the same 3D environment, highlighting a need for further research and development in this field. Full article

16 pages, 11184 KiB  
Article
Progressing towards Estimates of Local Emissions from Trees in Cities: A Transdisciplinary Framework Integrating Available Municipal Data, AI, and Citizen Science
by Julia Mayer, Martin Memmel, Johannes Ruf, Dhruv Patel, Lena Hoff and Sascha Henninger
Appl. Sci. 2024, 14(1), 396; https://doi.org/10.3390/app14010396 - 31 Dec 2023
Cited by 1 | Viewed by 1949
Abstract
Urban tree cadastres, crucial for climate adaptation and urban planning, face challenges in maintaining accuracy and completeness. A transdisciplinary approach in Kaiserslautern, Germany, complements existing incomplete tree data with additional precise GPS locations of urban trees. Deep learning models using aerial imagery identify trees, while other applications employ street view imagery and LIDAR data to collect additional attributes, such as height and crown width. A web application encourages citizen participation in adding features like species and improving datasets for further model training. The initiative aims to minimize resource-intensive maintenance conducted by local administrations, integrate additional features, and improve data quality. Its primary goal is to create transferable AI models utilizing aerial imagery and LIDAR data that can be applied in regions with similar tree populations. The approach includes tree clusters and private trees, which are essential for assessing allergy and ozone potential but are usually not recorded in municipal tree cadastres. The paper highlights the potential of improving tree cadastres for effective urban planning in a transdisciplinary approach, taking into account climate change, health, and public engagement. Full article
(This article belongs to the Special Issue Smart City and Informatization)

20 pages, 9021 KiB  
Article
Measuring the Canopy Architecture of Young Vegetation Using the Fastrak Polhemus 3D Digitizer
by Kristýna Šleglová, Jakub Brichta, Lukáš Bílek and Peter Surový
Sensors 2024, 24(1), 109; https://doi.org/10.3390/s24010109 - 25 Dec 2023
Cited by 3 | Viewed by 1438
Abstract
In the context of climate change, addressing the shifting composition of forest stands and changes in traditional forest management practices is necessary. For this purpose, understanding the biomass allocation directly influenced by crown architecture is crucial. In this paper, we demonstrate the possibility of 3D mensuration of canopy architecture with the Fastrak Polhemus digitizer sensor and its capability for assessing important structural information for forestry purposes. Scots pine trees were chosen for this purpose, as it is the most widespread tree species in Europe, which, paradoxically, is very negatively affected by climate change. In our study, we examined young trees, since the architecture of young trees influences their growth potential. In order to obtain the most accurate measurement of tree architecture, we evaluated the use of the Fastrak Polhemus magnetic digitizer to create a 3D model of individual trees and performed a subsequent statistical analysis of the data obtained. It was found that the stand density affects the number of branches in different orders and the heights of the trees in the process of natural regeneration. Regarding the branches, in our case, the highest number of branch orders was found in the clear-cut areas (density = 0.0), whereas the lowest branching was on site with mature stands (density = 0.8). The results showed that the intensity of branching (assessed as the number of third-order branches) depends on the total number of branches of different orders on the tree but also on the density of the stand where the tree is growing. An important finding in this study was the negative correlation between tree branching and tree height: growth in height is lower when branching expansion is higher. Similar data could be obtained with LiDAR sensors; however, occlusion caused by the complexity of the tree crown would keep such data from being complete, unlike measurements taken with the magnetic digitizer. These results provide vital information for the creation of structural-functional models, which can be used to predict and estimate future tree growth and carbon fixation. Full article
(This article belongs to the Special Issue Novel Magnetic Sensors and Applications)

33 pages, 19921 KiB  
Article
Combined Characterization of Airborne Saharan Dust above Sofia, Bulgaria, during Blocking-Pattern Conditioned Dust Episode in February 2021
by Zahari Peshev, Anatoli Chaikovsky, Tsvetina Evgenieva, Vladislav Pescherenkov, Liliya Vulkova, Atanaska Deleva and Tanja Dreischuh
Remote Sens. 2023, 15(15), 3833; https://doi.org/10.3390/rs15153833 - 1 Aug 2023
Cited by 6 | Viewed by 2191
Abstract
The wintertime outbreaks of Saharan dust, increasing in intensity and frequency over the last decade, have become an important component of the global dust cycle and a challenging issue in elucidating its feedback to the ongoing climate change. For their adequate monitoring and characterization, systematic multi-instrument observations and multi-aspect analyses of the distribution and properties of desert aerosols are required, covering the full duration of dust events. In this paper, we present observations of Saharan dust in the atmosphere above Sofia, Bulgaria, during a strong dust episode over the whole of Europe in February 2021, conditioned by a persistent blocking weather pattern over the Mediterranean basin, providing clear skies and constant measurement conditions. This study was accomplished using different remote sensing (lidar, satellite, and radiometric), in situ (particle analyzing), and modeling/forecasting methods and resources, using real measurements and data (re)analysis. A wide range of columnar and range/time-resolved optical, microphysical, physical, topological, and dynamical characteristics of the detected aerosols dominated by desert dust are obtained and profiled with increased accuracy and reliability by combining the applied approaches and instruments in terms of complementarity, calibration, and normalization. Vertical profiles of the aerosol/dust total and mode volume concentrations are presented and analyzed using the LIRIC-2 inversion code joining lidar and sun-photometer data. The results show that interactive combining and use of various relevant approaches, instruments, and data have a significant synergistic effect and potential for verifying and improving theoretical models aimed at complete aerosol/dust characterization. Full article

23 pages, 4132 KiB  
Article
Added Value of Aerosol Observations of a Future AOS High Spectral Resolution Lidar with Respect to Classic Backscatter Spaceborne Lidar Measurements
by Flavien Cornut, Laaziz El Amraoui, Juan Cuesta and Jérôme Blanc
Remote Sens. 2023, 15(2), 506; https://doi.org/10.3390/rs15020506 - 14 Jan 2023
Cited by 3 | Viewed by 2322
Abstract
In the context of the Atmosphere Observing System (AOS) international program, a new-generation spaceborne lidar is expected to be in polar orbit for deriving new observations of aerosols and clouds. In this work, we analyze the added value of these new observations for characterizing aerosol vertical distribution. For this, synthetic observations are simulated using the BLISS lidar simulator in terms of the backscatter coefficient at 532 nm. We consider two types of lidar instruments: an elastic backscatter lidar and a high spectral resolution lidar (HSRL). These simulations are performed with atmospheric profiles from a nature run (NR) modeled by the MOCAGE chemical transport model. In three case studies involving large events of different aerosol species, the added value of the HSRL channel (for measuring aerosol backscatter profiles with respect to simple backscatter measurements) is shown. Observations independent of an a priori lidar ratio assumption, as is typically required for simple backscattering instruments, allow probing the vertical structures of aerosol layers without divergence, even in cases of intense episodes. A 5-day study in the case of desert dust completes the assessment of the added value of the HSRL channel, with a relative mean bias from the NR of the order of 1.5%. For low abundances, relative errors in the backscatter coefficient profiles may lie between +40% and −40%, with mean biases between +5% and −5%. Full article
(This article belongs to the Special Issue Lidar for Advanced Classification and Retrieval of Aerosols)

19 pages, 13904 KiB  
Article
Monitoring Mining Surface Subsidence with Multi-Temporal Three-Dimensional Unmanned Aerial Vehicle Point Cloud
by Xiaoyu Liu, Wu Zhu, Xugang Lian and Xuanyu Xu
Remote Sens. 2023, 15(2), 374; https://doi.org/10.3390/rs15020374 - 7 Jan 2023
Cited by 28 | Viewed by 3848
Abstract
Long-term and high-intensity coal mining has led to increasingly serious surface subsidence and environmental problems. Surface subsidence monitoring plays an important role in protecting the ecological environment of the mining area and the sustainable development of modern coal mines. The development of surveying technology has promoted the acquisition of high-resolution terrain data. The combination of an unmanned aerial vehicle (UAV) point cloud and the structure from motion (SfM) method has shown the potential of collecting multi-temporal high-resolution terrain data in complex or inaccessible environments. The DEM of difference (DoD) is the main method used to obtain the surface subsidence in mining areas. However, the digital elevation model (DEM) requires interpolating the point cloud into a grid, and this process may introduce errors in complex natural topographic environments. Therefore, a complete three-dimensional change analysis is required to quantify surface change in complex natural terrain. In this study, we propose a quantitative analysis method for ground subsidence based on three-dimensional point clouds. Firstly, Monte Carlo simulation statistical analysis was adopted to indirectly evaluate the performance of direct georeferencing photogrammetric products. After that, co-registration was carried out to register the multi-temporal UAV dense matching point clouds. Finally, the model-to-model cloud comparison (M3C2) algorithm was used to quantify the surface change and reveal the spatio-temporal characteristics of surface subsidence. In order to evaluate the proposed method, four periods of multi-temporal UAV photogrammetric data and one period of airborne LiDAR point cloud data were collected in the Yangquan mining area, China, from 2020 to 2022. The 3D precision map of a sparse point cloud generated by Monte Carlo simulation shows that the average precision in the X, Y and Z directions is 44.80 mm, 45.22 mm and 63.60 mm, respectively. The standard deviation of the M3C2 distance calculated from multi-temporal data in the stable area ranges from 0.13 to 0.19, indicating the consistency of the multi-temporal UAV photogrammetric data. Compared with DoD, the dynamic moving basin obtained by the M3C2 algorithm based on the 3D point cloud captured a more realistic surface deformation distribution. This method has high potential for monitoring terrain change in remote areas and can provide a reference for monitoring similar phenomena such as landslides. Full article
(This article belongs to the Special Issue Application of UAVs in Geo-Engineering for Hazard Observation)
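The M3C2 comparison used above projects the displacement between local neighborhoods of two epochs onto a surface normal. A heavily simplified single-core-point sketch (real M3C2 estimates normals per core point and uses a projection cylinder, which this toy replaces with a sphere):

```python
import numpy as np

def m3c2_distance(core, cloud1, cloud2, normal, radius=1.0):
    """Very simplified M3C2 sketch: average the neighbours of a core point
    in each epoch (within a spherical radius, for brevity) and project the
    mean displacement onto the surface normal."""
    n = np.asarray(normal, dtype=float)
    n /= np.linalg.norm(n)
    def local_mean(cloud):
        d = np.linalg.norm(cloud - core, axis=1)
        return cloud[d < radius].mean(axis=0)
    return float(np.dot(local_mean(cloud2) - local_mean(cloud1), n))

# Two flat toy "epochs": the second is subsided by 0.5 m in z.
xy = np.array([[x, y] for x in range(5) for y in range(5)], dtype=float)
epoch1 = np.c_[xy, np.zeros(len(xy))]
epoch2 = np.c_[xy, np.full(len(xy), -0.5)]
dist = m3c2_distance(np.array([2.0, 2.0, 0.0]), epoch1, epoch2, [0, 0, 1])
```

The signed result (here negative along the upward normal) is what lets M3C2 distinguish subsidence from uplift without gridding the clouds into DEMs.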

19 pages, 7361 KiB  
Article
Multispectral LiDAR Point Cloud Segmentation for Land Cover Leveraging Semantic Fusion in Deep Learning Network
by Kai Xiao, Jia Qian, Teng Li and Yuanxi Peng
Remote Sens. 2023, 15(1), 243; https://doi.org/10.3390/rs15010243 - 31 Dec 2022
Cited by 9 | Viewed by 3592
Abstract
Multispectral LiDAR technology can simultaneously acquire spatial geometric data and multispectral wavelength intensity information, which can provide richer attribute features for semantic segmentation of point cloud scenes. However, due to the disordered distribution and huge number of point clouds, fine-grained semantic segmentation of point clouds from large-scale multispectral LiDAR data remains a challenging task. To deal with this situation, we propose a deep learning network that can leverage contextual semantic information to complete the semantic segmentation of large-scale point clouds. In our network, we fuse local geometry and feature content based on 3D spatial geometric associativity and embed this into a backbone network. In addition, to cope with the redundant point cloud feature distribution found in the experiments, we designed a data preprocessing step based on principal component extraction to improve the processing capability of the proposed network on the applied multispectral LiDAR data. Finally, we conduct a series of comparative experiments using multispectral LiDAR point clouds of real land cover in order to objectively evaluate the performance of the proposed method against other advanced methods. With the obtained results, we confirm that the proposed method achieves satisfactory results in real point cloud semantic segmentation. Moreover, the quantitative evaluation metrics show that it reaches the state of the art. Full article
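The principal-component preprocessing step can be sketched with a plain eigendecomposition of the feature covariance. The three-band toy features below are invented; the paper's actual channel count and preprocessing details may differ:

```python
import numpy as np

def pca_reduce(features, k):
    """Principal-component extraction: decorrelate per-point multispectral
    features and keep the top-k components by variance."""
    X = features - features.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    eigval, eigvec = np.linalg.eigh(cov)       # ascending eigenvalues
    order = np.argsort(eigval)[::-1]           # sort to descending variance
    components = eigvec[:, order[:k]]
    return X @ components, eigval[order]

# Toy per-point features: three wavelengths' intensities, two of them
# perfectly redundant, so one component carries nearly all the variance.
rng = np.random.default_rng(0)
base = rng.normal(size=(100, 1))
feats = np.hstack([base, 2.0 * base, rng.normal(scale=0.1, size=(100, 1))])
reduced, variances = pca_reduce(feats, k=2)
```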

19 pages, 21659 KiB  
Article
Hybrid Methods’ Integration for Remote Sensing Monitoring and Process Analysis of Dust Storm Based on Multi-Source Data
by Yanjiao Wang, Jiakui Tang, Zili Zhang, Wuhua Wang, Jiru Wang and Zhao Wang
Atmosphere 2023, 14(1), 3; https://doi.org/10.3390/atmos14010003 - 20 Dec 2022
Cited by 6 | Viewed by 2804
Abstract
Dust storms are of great importance to climate change, air quality, and human health. In this study, a complete application framework integrating hybrid methods based on multi-source data is proposed for remote sensing monitoring and process analysis of dust storms. In this framework, the horizontal spatial distribution of dust intensity can be mapped by optical remote sensing products such as aerosol optical depth (AOD) from MODIS; the vertical spatial distribution of dust intensity by LIDAR satellite remote sensing products such as the AOD profile from CALIPSO; and geostationary satellite remote sensing products such as Chinese Fengyun or Japanese Himawari can provide high-frequency temporal distribution information of dust storms. More detailed process analysis of dust storms includes air quality analysis supported by particulate matter (PM) data from ground stations and the dust emission trace and transport pathways from HYSPLIT back trajectories driven by meteorological data from the Global Data Assimilation System (GDAS). The dust storm outbreak conditions at the source location can be confirmed by precipitation data from the WMO and soil moisture data from remote sensing products, which can be used to verify the deduced emission trace from HYSPLIT. The proposed application framework was applied to monitor and analyze a very heavy dust storm that occurred in northern China from 14–18 March 2021, one of the most severe dust storms in recent decades. Results showed that the dust storm event could be well monitored and analyzed dynamically. It was found that the dust originated in western Mongolia and northwestern China and was then transported along the northwest–southeast direction, consequently affecting the air quality of most cities in northern China. The results are consistent with prior research and show the excellent potential of integrating the hybrid methods for monitoring dust storms. Full article
(This article belongs to the Section Atmospheric Techniques, Instruments, and Modeling)
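The abstract above combines column-loading measurements (AOD) with surface PM observations to characterize a dust event. As a minimal illustrative sketch of that kind of multi-source check, the snippet below flags a "dust-affected" station when both the AOD and the coarse-to-fine particle ratio are elevated; the PM10/PM2.5 ratio test is a common heuristic for distinguishing coarse mineral dust from fine anthropogenic aerosol, and the specific threshold values here are assumptions for illustration, not taken from the paper.

```python
# Illustrative sketch (not the authors' method): flag dust-affected
# ground stations by combining a MODIS-style AOD value with station
# PM data. Thresholds are assumed for demonstration only.

def dust_flag(aod, pm10, pm25, aod_thresh=0.6, ratio_thresh=3.0):
    """Return True when both column loading (AOD) and surface
    coarse-particle dominance (PM10/PM2.5) indicate dust."""
    if pm25 <= 0:
        return False  # guard against division by zero / bad data
    coarse_ratio = pm10 / pm25
    return aod >= aod_thresh and coarse_ratio >= ratio_thresh

# A clean-air station vs. a station inside a dust plume:
print(dust_flag(aod=0.2, pm10=40, pm25=25))    # False
print(dust_flag(aod=1.5, pm10=900, pm25=120))  # True
```

In a real workflow the thresholds would be calibrated against labeled events, and the flag would be evaluated per station and per time step to trace the plume's passage.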
17 pages, 26668 KiB  
Article
Automatic Filtering and Classification of Low-Density Airborne Laser Scanner Clouds in Shrubland Environments
by Tiziana Simoniello, Rosa Coluzzi, Annibale Guariglia, Vito Imbrenda, Maria Lanfredi and Caterina Samela
Remote Sens. 2022, 14(20), 5127; https://doi.org/10.3390/rs14205127 - 13 Oct 2022
Cited by 18 | Viewed by 3073
Abstract
The monitoring of shrublands plays a fundamental role, from an ecological and climatic point of view, in biodiversity conservation, carbon stock estimates, and climate-change impact assessments. Laser scanning systems have proven highly capable of mapping non-herbaceous vegetation by classifying high-density point clouds. By contrast, the classification of low-density airborne laser scanner (ALS) clouds is strongly affected by confusion with rock spikes and boulders of similar height and shape. To identify rocks and improve the accuracy of the vegetation classes, we implemented an effective and time-saving procedure based on the integration of geometric features with laser intensity segmented by K-means clustering (GIK procedure). The classification accuracy was evaluated, taking into account the class imbalance (the small rock class vs. the vegetation and terrain classes), by estimating the Balanced Accuracy (BA range 89.15–90.37); a comparison with a standard geometry-based procedure showed an accuracy increase of about 27%. The classical overall accuracy is generally very high for all the classifications: on average, 92.7 for the geometry-based procedure and 94.9 for GIK. At the class level, the precision (user's accuracy) of the vegetation classes is very high (on average, 92.6% for shrubs and 99% for bushes), with a relative increase for shrubs of up to 20% (>10% when rocks occupy more than 8% of the scene). Less pronounced differences were found for bushes (maximum 4.13%). The precision of the rock class is quite acceptable (about 64%), compared with the complete failure of the geometry-based procedure to detect rocks. We also evaluated how point cloud density affects the proposed procedure and found that the gain in shrub precision is preserved even for ALS clouds with very low point density (<1.5 pts/m2). The simplicity of the approach makes it usable in an operational context by non-experts in LiDAR data classification, and it is suitable for the great wealth of large-scale acquisitions carried out in the past with monowavelength NIR laser scanners in a small-footprint configuration. Full article
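The abstract evaluates classification quality with Balanced Accuracy precisely because the rock class is tiny compared with the vegetation and terrain classes. A minimal sketch of the metric: BA is the mean of per-class recall, so a small class weighs as much as a large one. The labels and counts below are invented for illustration only.

```python
# Balanced Accuracy (BA) = mean of per-class recall. Under class
# imbalance it penalizes missing a rare class, which plain overall
# accuracy largely ignores.
from collections import defaultdict

def balanced_accuracy(y_true, y_pred):
    correct = defaultdict(int)  # per-class correct predictions
    total = defaultdict(int)    # per-class ground-truth counts
    for t, p in zip(y_true, y_pred):
        total[t] += 1
        if t == p:
            correct[t] += 1
    recalls = [correct[c] / total[c] for c in total]
    return sum(recalls) / len(recalls)

# Imbalanced toy scene: 8 terrain, 8 shrub, only 2 rock points.
y_true = ["terrain"] * 8 + ["shrub"] * 8 + ["rock"] * 2
y_pred = ["terrain"] * 8 + ["shrub"] * 7 + ["rock"] + ["rock", "shrub"]

print(round(balanced_accuracy(y_true, y_pred), 3))  # 0.792
```

Overall accuracy on the same toy labels is 16/18 ≈ 0.889, noticeably higher than the BA of ≈ 0.792 because the missed rock point barely dents the overall count.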
23 pages, 5847 KiB  
Article
Development of High-Fidelity Automotive LiDAR Sensor Model with Standardized Interfaces
by Arsalan Haider, Marcell Pigniczki, Michael H. Köhler, Maximilian Fink, Michael Schardt, Yannik Cichy, Thomas Zeh, Lukas Haas, Tim Poguntke, Martin Jakobi and Alexander W. Koch
Sensors 2022, 22(19), 7556; https://doi.org/10.3390/s22197556 - 5 Oct 2022
Cited by 21 | Viewed by 5662
Abstract
This work introduces a process to develop a tool-independent, high-fidelity, ray-tracing-based light detection and ranging (LiDAR) model. The virtual LiDAR sensor includes accurate modeling of the scan pattern and the complete signal processing toolchain of a LiDAR sensor. It is developed as a functional mock-up unit (FMU) using the standardized Open Simulation Interface (OSI) 3.0.2 and the Functional Mock-up Interface (FMI) 2.0, and was subsequently integrated into two commercial virtual environment frameworks to demonstrate its exchangeability. Furthermore, the accuracy of the LiDAR sensor model is validated by comparing simulation and real measurement data at the time-domain and point cloud levels. The validation results show that the mean absolute percentage error (MAPE) between the simulated and measured time-domain signal amplitudes is 1.7%. In addition, the MAPEs of the number of points N_points and the mean intensity I_mean received from the virtual and real targets are 8.5% and 9.3%, respectively. To the authors' knowledge, these are the smallest errors reported to date for the number of received points and the mean intensity. Moreover, the distance error d_error is below the range accuracy of the actual LiDAR sensor, which is 2 cm for this use case. In addition, proving ground measurement results are compared with a state-of-the-art LiDAR model provided by commercial software and with the proposed LiDAR model to assess the presented model's fidelity. The results show that the complete signal processing chain and the imperfections of real LiDAR sensors must be considered in the virtual LiDAR to obtain simulation results close to the actual sensor. Such imperfections include optical losses, inherent detector effects, effects generated by the electrical amplification, and noise produced by sunlight. Full article
(This article belongs to the Section Optical Sensors)
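The validation metric in the abstract above, mean absolute percentage error, is straightforward to state explicitly: average the absolute simulated-vs-measured deviation relative to the measured value, expressed in percent. The sketch below shows the standard formula; the sample point counts are invented for illustration and are not the paper's measurement data.

```python
# MAPE between measured and simulated LiDAR quantities (e.g. number of
# returned points N_points at several ranges). Standard definition:
# 100/n * sum(|measured_i - simulated_i| / |measured_i|).

def mape(measured, simulated):
    """MAPE in percent; measured values must be nonzero."""
    errors = [abs(m - s) / abs(m) for m, s in zip(measured, simulated)]
    return 100.0 * sum(errors) / len(errors)

# Hypothetical point counts from a target at four ranges:
measured  = [1200, 950, 640, 410]
simulated = [1150, 980, 600, 430]
print(round(mape(measured, simulated), 2))  # 4.61
```

Because each term is normalized by the measured value, MAPE lets the paper compare errors across quantities with very different scales (signal amplitude, point counts, intensity) on a single percentage footing.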