Search Results (150)

Search Parameters:
Keywords = three-dimensional imaging lidar

50 pages, 28354 KiB  
Article
Mobile Mapping Approach to Apply Innovative Approaches for Real Estate Asset Management: A Case Study
by Giorgio P. M. Vassena
Appl. Sci. 2025, 15(14), 7638; https://doi.org/10.3390/app15147638 - 8 Jul 2025
Viewed by 569
Abstract
Technological development has strongly impacted all processes related to the design, construction, and management of real estate assets. In particular, the introduction of the BIM approach has required the application of three-dimensional survey technologies, especially LiDAR instruments, in both their static (TLS—terrestrial laser scanner) and dynamic (iMMS—indoor mobile mapping system) implementations. For the implementation of scan-to-BIM procedures, operators and developers of LiDAR technologies initially focused on the 3D surveying accuracy obtainable from such tools. The incorporation of RGB sensors into these instruments has progressively expanded LiDAR-based applications from purely topographic surveying to geospatial applications, where the emphasis is no longer on the accurate three-dimensional reconstruction of buildings but on the capability to create three-dimensional image-based visualizations, such as virtual tours, which allow the recognition of assets located in every area of a building. Although much has been written about obtaining the best possible accuracy for extensive asset surveys of large building complexes using iMMS systems, it is now essential to develop and define suitable procedures for controlling this kind of surveying, targeted at specific geospatial applications. We especially address the design, field acquisition, quality control, and mass data management techniques that can be applied in such complex environments. This work contributes by defining the technical specifications for the implementation of geospatial mapping of vast asset survey activities involving significant building sites using iMMS instrumentation. Three-dimensional models can also facilitate virtual tours, enable local measurements inside rooms, and, in particular, support the subsequent integration of self-locating image-based technologies that can efficiently perform field updates of surveyed databases.
(This article belongs to the Section Civil Engineering)

18 pages, 4774 KiB  
Article
InfraredStereo3D: Breaking Night Vision Limits with Perspective Projection Positional Encoding and Groundbreaking Infrared Dataset
by Yuandong Niu, Limin Liu, Fuyu Huang, Juntao Ma, Chaowen Zheng, Yunfeng Jiang, Ting An, Zhongchen Zhao and Shuangyou Chen
Remote Sens. 2025, 17(12), 2035; https://doi.org/10.3390/rs17122035 - 13 Jun 2025
Viewed by 448
Abstract
In fields such as military reconnaissance, forest fire prevention, and autonomous driving at night, there is an urgent need for high-precision three-dimensional reconstruction in low-light or night environments. The acquisition of remote sensing data by RGB cameras relies on external light, resulting in a significant decline in image quality and making it difficult to meet task requirements. LiDAR-based methods perform poorly in rainy and foggy weather, in close-range scenes, and in scenarios requiring thermal imaging data. In contrast, infrared cameras can effectively overcome these challenges because their imaging mechanisms differ from those of RGB cameras and LiDAR. However, research on three-dimensional scene reconstruction from infrared images is relatively immature, especially in the field of infrared binocular stereo matching. This situation presents two main challenges: first, there is no dataset specifically for infrared binocular stereo matching; second, the lack of texture information in infrared images limits the extension of RGB-based methods to infrared reconstruction. To solve these problems, this study begins with the construction of an infrared binocular stereo matching dataset and then proposes an innovative transformer method based on perspective projection positional encoding to complete the infrared binocular stereo matching task. In this paper, a stereo matching network combining a transformer and a cost volume is constructed. Existing work on transformer positional encoding usually uses a parallel projection model to simplify the calculation. Our method is based on the actual perspective projection model, so that each pixel is associated with a different projection ray. It effectively solves the problem of feature extraction and matching caused by insufficient texture information in infrared images and significantly improves matching accuracy. We conducted experiments on the infrared binocular stereo matching dataset proposed in this paper, which demonstrated the effectiveness of the proposed method.
(This article belongs to the Collection Visible Infrared Imaging Radiometers and Applications)
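As an illustration of the perspective-projection positional encoding idea described in this abstract, the following minimal Python sketch (NumPy only) back-projects each pixel through a pinhole model and encodes its viewing ray sinusoidally. The intrinsic parameters (fx, fy, cx, cy), image size, and frequency count are illustrative assumptions, not values from the paper, and the function names are hypothetical.

import numpy as np

def ray_directions(h, w, fx, fy, cx, cy):
    # Back-project every pixel through a pinhole (perspective) model,
    # so each pixel gets its own normalized viewing ray.
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    rays = np.stack([(u - cx) / fx, (v - cy) / fy, np.ones_like(u, dtype=float)], axis=-1)
    return rays / np.linalg.norm(rays, axis=-1, keepdims=True)

def perspective_positional_encoding(rays, num_freqs=8):
    # Sinusoidal encoding of the ray direction instead of the (u, v) grid index.
    freqs = 2.0 ** np.arange(num_freqs) * np.pi
    angles = rays[..., None] * freqs          # (H, W, 3, num_freqs)
    enc = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
    return enc.reshape(*rays.shape[:2], -1)   # (H, W, 3 * 2 * num_freqs)

pe = perspective_positional_encoding(ray_directions(480, 640, fx=500.0, fy=500.0, cx=320.0, cy=240.0))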

14 pages, 1649 KiB  
Article
Evaluation of Smartphones Equipped with Light Detection and Ranging Technology for Circumferential and Volumetric Measurements in Lower Extremity Lymphedema
by Masato Tsuchiya, Kanako Abe, Satoshi Kubo and Ryuichi Azuma
Biosensors 2025, 15(6), 381; https://doi.org/10.3390/bios15060381 - 12 Jun 2025
Viewed by 475
Abstract
Lower extremity lymphedema (LEL) requires precise limb measurements for treatment evaluation and compression garment design. Tape measurement (TM) is the standard method but is time-consuming. Smartphones with light detection and ranging (LiDAR) technology may offer a fast and efficient alternative for three-dimensional imaging and measurement. This study evaluated the accuracy, reliability, and time efficiency of LiDAR measurements compared with those of TM in patients with LEL. A healthy volunteer and 55 patients were included. Circumferences of the foot, ankle, calf, knee, and thigh, as well as limb volume, were measured using TM and smartphones with LiDAR. The water displacement method was used to validate volume measurements. The measurement time, reliability, correlation, agreement, and systematic differences between the methods were assessed. LiDAR showed excellent reliability in the healthy volunteer (inter-rater intraclass correlation coefficients: 0.960–0.988) and significantly reduced the measurement time compared with TM (64.0 ± 15.1 vs. 115.3 ± 30.6 s). In patients with LEL, strong correlations and agreement were observed for ankle, calf, and knee measurements. However, foot and thigh measurements showed lower correlations and larger discrepancies. LiDAR offers excellent accuracy and reliability for measuring the circumference and volume of the lower leg and has the potential to reduce data acquisition time. Limitations include lower accuracy for foot and thigh measurements and the current workflow complexity, which requires multiple software tools.

18 pages, 9119 KiB  
Article
Monitoring and Analysis of Slope Geological Hazards Based on UAV Images
by Nan Li, Huanxiang Qiu, Hu Zhai, Yuhui Chen and Jipeng Wang
Appl. Sci. 2025, 15(10), 5482; https://doi.org/10.3390/app15105482 - 14 May 2025
Viewed by 637
Abstract
Slope-related geological disasters occur frequently in many countries, posing significant threats to surrounding infrastructure, ecosystems, and human lives and property. Traditional manual monitoring methods for slope hazards are inefficient and have limited coverage. To enhance the monitoring and analysis of geological hazards, a study was conducted on the legacy slopes of an abandoned quarry in Jinan, Shandong Province, China. High-resolution images of the slopes were captured using unmanned aerial vehicle (UAV) tilt photogrammetry, and three-dimensional models were subsequently constructed. Software tools, including LiDAR360 5.2 and ArcMap 10.8, were used to extract slope geological information, identify disaster-prone areas, and conduct stability analyses. The Analytic Hierarchy Process (AHP) was then employed to further evaluate the stability of hazardous slopes. The results reveal two geohazard-prone areas in the study area. Geological analysis shows that both areas exhibit instability, with a high susceptibility to small-scale rockfalls and landslides. The integration of UAV remote sensing technology with AHP represents a novel approach, and the combination of multiple analytical methods enhances the accuracy of slope stability assessments.
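The AHP step mentioned in this abstract reduces expert pairwise judgments to numeric factor weights via the principal eigenvector of a comparison matrix, with a consistency check. A minimal Python sketch under assumed inputs follows; the three stability factors and the comparison matrix are made-up examples, not data from the study.

import numpy as np

# Hypothetical pairwise comparison of three stability factors
# (slope angle, rock integrity, vegetation cover) on Saaty's 1-9 scale.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                 # normalized factor weights

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)     # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]      # random index (Saaty)
print("weights:", weights, "CR:", ci / ri)  # CR < 0.1 indicates acceptable consistency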

22 pages, 30414 KiB  
Article
Metric Scaling and Extrinsic Calibration of Monocular Neural Network-Derived 3D Point Clouds in Railway Applications
by Daniel Thomanek and Clemens Gühmann
Appl. Sci. 2025, 15(10), 5361; https://doi.org/10.3390/app15105361 - 11 May 2025
Viewed by 539
Abstract
Three-dimensional reconstruction from monocular camera images is a well-established research topic. While multi-image approaches like Structure from Motion produce sparse point clouds, single-image depth estimation via machine learning promises denser results. However, many models estimate relative depth, and even those providing metric depth often struggle with unseen data due to unfamiliar camera parameters or domain-specific challenges. Accurate metric 3D reconstruction is critical for railway applications, such as ensuring structural gauge clearance from vegetation to meet legal requirements. We propose a novel method to scale 3D point clouds using the track gauge, which takes only a few standard values across large regions and countries worldwide (e.g., 1.435 m in Europe). Our approach leverages state-of-the-art image segmentation to detect rails and measure the track gauge from a train driver’s perspective. Additionally, we extend our method to estimate a reasonable railway-specific extrinsic camera calibration. Evaluations show that our method reduces the average Chamfer distance to LiDAR point clouds from 1.94 m (benchmark UniDepth) to 0.41 m for image-wise calibration and 0.71 m for average calibration.
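The scaling idea is that the ratio of the known physical track gauge to the gauge measured in the unscaled reconstruction yields a single global scale factor, and the Chamfer distance to a LiDAR reference quantifies the remaining error. A minimal Python sketch of both steps follows (SciPy KD-tree); the measured-gauge value in the usage comment is hypothetical, and the rail segmentation that produces it is outside this sketch.

import numpy as np
from scipy.spatial import cKDTree

KNOWN_GAUGE_M = 1.435  # standard track gauge in Europe

def scale_point_cloud(points, measured_gauge):
    # One global scale factor recovers metric units for the whole cloud.
    return points * (KNOWN_GAUGE_M / measured_gauge)

def chamfer_distance(a, b):
    # Symmetric mean nearest-neighbour distance between two point clouds.
    d_ab = cKDTree(b).query(a)[0].mean()
    d_ba = cKDTree(a).query(b)[0].mean()
    return 0.5 * (d_ab + d_ba)

# usage (hypothetical values): scaled = scale_point_cloud(pred_cloud, measured_gauge=0.92)
#                              err = chamfer_distance(scaled, lidar_cloud)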

20 pages, 9870 KiB  
Article
Analysis, Simulation, and Scanning Geometry Calibration of Palmer Scanning Units for Airborne Hyperspectral Light Detection and Ranging
by Shuo Shi, Qian Xu, Chengyu Gong, Wei Gong, Xingtao Tang and Bowei Zhou
Remote Sens. 2025, 17(8), 1450; https://doi.org/10.3390/rs17081450 - 18 Apr 2025
Viewed by 427
Abstract
Airborne hyperspectral LiDAR (AHSL) is a technology that integrates the spectral content collected by hyperspectral imaging with the precise 3D descriptions of observed objects obtained by LiDAR (light detection and ranging). AHSL captures the spectral and three-dimensional (3D) information of an object using laser measurements alone. Nevertheless, this spectral richness also introduces a new issue for the scan unit: a mechanical–optical trade-off. Specifically, the abundant spectral information requires a larger optical aperture, which limits the mechanical load the scan unit can accept at demanding rotation speeds and flight heights. Simulation and analysis of scan models show that the Palmer scan best accommodates the large optical aperture required by AHSL. Furthermore, based on simulation of the Palmer scan model, a ratio of overlap (ROP) of 45.23% is identified as optimal for minimizing point-density variation, reducing the coefficient of variation (CV) from 0.47 to 0.19. The other issue is that the scanning geometry is difficult to calibrate with external devices because of the complex optical path. A self-calibration strategy is proposed to tackle this problem, integrating indoor laser vector retrieval and airborne orientation correction. The strategy comprises the following three improvements: (1) A self-determined laser vector retrieval strategy that utilizes the self-ranging capability of AHSL is proposed for retrieving the initial scanning laser vectors with a precision of 0.874 mrad. (2) A linear residual estimated interpolation method (LREI) is proposed to enhance interpolation precision, reducing the RMSE from 1.517 mrad to 0.977 mrad; compared with linear interpolation, LREI maintains the geometric features of Palmer scanning traces. (3) A least-deviated flatness restricted optimization (LDFO) algorithm is used to calibrate the angle offset in aerial scanning point cloud data, which reduces the standard deviation of the flatness of the scanning plane from 1.389 m to 0.241 m and reduces the distortion of the scanning strip. This study provides a practical scanning method and a corresponding calibration strategy for AHSL.

35 pages, 30272 KiB  
Article
Machine-Learning-Based Integrated Mining Big Data and Multi-Dimensional Ore-Forming Prediction: A Case Study of Yanshan Iron Mine, Hebei, China
by Yuhao Chen, Gongwen Wang, Nini Mou, Leilei Huang, Rong Mei and Mingyuan Zhang
Appl. Sci. 2025, 15(8), 4082; https://doi.org/10.3390/app15084082 - 8 Apr 2025
Cited by 1 | Viewed by 1048
Abstract
With the rapid development of big data and artificial intelligence technologies, Industry 4.0 has driven large open-pit mines towards digital and intelligent transformation. This is particularly true in mature mining areas such as the Yanshan Iron Mine, where the depletion of shallow proven reserves and the increasing problem of shallow ore bodies mixed with surrounding rock make it ever more important to build intelligent mines and implement green and sustainable development strategies. However, previous mineralization predictions for the Yanshan Iron Mine largely relied on traditional geological data (such as blasting rock powder and borehole profiles), exploration reports, or explicit three-dimensional ore body models, which lacked precision and were insufficient for intelligent mine construction. Therefore, this study applies artificial intelligence technology to geoscience big data mining and quantitative prediction, with the goal of achieving multi-scale, multi-dimensional, and multi-modal precise positioning of the Yanshan Iron Mine and establishing its intelligent mine technology system. The specific research contents and results are as follows: (1) Multi-source geoscience data for the Yanshan Iron Mine were collected and organized, including geological, geophysical, and remote sensing data such as mine drilling data, centimeter-level drone imagery, and hyperspectral data of rocks and minerals, establishing a rich mine big data set. (2) SOM clustering analysis was performed on the elemental data of rock and mineral samples, identifying Mg, Al, Si, S, K, Ca, and Mn as key elements positively correlated with iron. TSG was used to interpret shortwave and thermal infrared hyperspectral data of the samples, identifying the main alteration mineral types in the mining area. Combined spectral and elemental analysis further verified the prevalence of alteration features such as chloritization and carbonation, which are closely related to the mineralization process. (3) Based on the spectral and elemental grade data of rock and mineral samples, an ore grade–spectrum correlation model was trained using Random Forests, Support Vector Machines, and other algorithms, with the SMOTE algorithm applied to balance positive and negative samples. This model was then applied to centimeter-level drone images, achieving high-precision intelligent identification of magnetite in the mining area; combined with LiDAR-derived elevation data, a real-time three-dimensional surface mineral monitoring model for the mining area was built. (4) The Bagged Positive Label Unlabeled Learning (BPUL) method was adopted to integrate five evidence maps—carbonate alteration, chloritization, migmatization, fault zones, and magnetic anomalies—for three-dimensional mineralization prediction in the mining area, and key target areas were delineated. The SHAP index and explicit three-dimensional geological models were used to analyze in depth the contributions of different feature variables to the mineralization process of the Yanshan Iron Mine. In conclusion, this study constructed the technical framework for intelligent mine construction at the Yanshan Iron Mine, providing important theoretical and practical support for mineralization prediction and intelligent exploration in the mining area.
(This article belongs to the Special Issue Green Mining: Theory, Methods, Computation and Application)
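Step (3) of the abstract pairs spectral features with grade labels and balances the classes before training. A minimal Python sketch of such a pipeline with scikit-learn and imbalanced-learn follows; the feature matrix and labels are synthetic stand-ins, and nothing here reproduces the study's data or exact model settings.

import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# X: per-sample spectra (n_samples, n_bands); y: 1 = ore-grade, 0 = waste (synthetic stand-in)
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 50))
y = (rng.random(300) > 0.85).astype(int)   # deliberately imbalanced labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)  # balance positive/negative samples

clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_bal, y_bal)
print(classification_report(y_te, clf.predict(X_te)))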

25 pages, 14926 KiB  
Article
Plant Height Estimation in Corn Fields Based on Column Space Segmentation Algorithm
by Huazhe Zhang, Nian Liu, Juan Xia, Lejun Chen and Shengde Chen
Agriculture 2025, 15(3), 236; https://doi.org/10.3390/agriculture15030236 - 22 Jan 2025
Cited by 1 | Viewed by 1343
Abstract
Plant genomics has progressed significantly due to advances in information technology, but phenotypic measurement technology has not kept pace, hindering plant breeding. As maize is one of China’s three main grain crops, accurately measuring plant height is crucial for assessing crop growth and productivity. This study addresses the challenges of plant segmentation and inaccurate plant height extraction in maize populations under field conditions. A three-dimensional dense point cloud was reconstructed using the structure from motion–multi-view stereo (SFM-MVS) method, based on multi-view image sequences captured by an unmanned aerial vehicle (UAV). To improve plant segmentation, we propose a column space approximate segmentation algorithm, which combines the column space method with the enclosing box technique. The proposed method achieved a segmentation accuracy exceeding 90% in dense canopy conditions, significantly outperforming traditional algorithms such as region growing (80%) and Euclidean clustering (75%). Furthermore, the extracted plant heights demonstrated a high correlation with manual measurements, with R2 values ranging from 0.8884 to 0.9989 and RMSE values as low as 0.0148 m. However, the scalability of the method for larger agricultural operations may be challenged by the computational demands of processing large-scale datasets and by potential performance variability under different environmental conditions. Addressing these issues through algorithm optimization, parallel processing, and the integration of additional data sources such as multispectral or LiDAR data could enhance its scalability and robustness. The results demonstrate that the method can accurately reflect the heights of maize plants, providing a reliable solution for large-scale, field-based maize phenotyping. The method has potential applications in high-throughput monitoring of crop phenotypes and precision agriculture.

43 pages, 19436 KiB  
Article
Quantification of Forest Regeneration on Forest Inventory Sample Plots Using Point Clouds from Personal Laser Scanning
by Sarah Witzmann, Christoph Gollob, Ralf Kraßnitzer, Tim Ritter, Andreas Tockner, Lukas Moik, Valentin Sarkleti, Tobias Ofner-Graff, Helmut Schume and Arne Nothdurft
Remote Sens. 2025, 17(2), 269; https://doi.org/10.3390/rs17020269 - 14 Jan 2025
Viewed by 1251
Abstract
The presence of sufficient natural regeneration in mature forests is regarded as a pivotal criterion for their future stability, ensuring seamless reforestation following final harvesting operations or forest calamities. Consequently, forest regeneration is typically quantified as part of forest inventories to monitor its occurrence and development over time. Light detection and ranging (LiDAR) technology, particularly ground-based LiDAR, has emerged as a powerful tool for assessing typical forest inventory parameters, providing high-resolution, three-dimensional data on forest structure. It is therefore logical to attempt a LiDAR-based quantification of forest regeneration, which could greatly enhance area-wide monitoring and further support sustainable forest management through data-driven decision making. However, examples in the literature are relatively sparse, with most relevant studies focusing on indirect quantification of understory density from airborne laser scanning (ALS) data. The objective of this study is to develop an accurate and reliable method for estimating regeneration coverage from data obtained through personal laser scanning (PLS). To this end, 19 forest inventory plots were scanned with both a personal laser scanner and, for reference purposes, a high-resolution terrestrial laser scanner (TLS). The voxelated point clouds obtained from the personal laser scanner were converted into raster images, providing either the canopy height, the total number of filled voxels (containing at least one LiDAR point), or the ratio of filled voxels to the total number of voxels. Local maxima in these raster images, assumed likely to contain tree saplings, were then used as seed points for a raster-based tree segmentation, which was employed to derive the final regeneration coverage estimate. The results showed that the estimates deviated from the reference by approximately −10 to +10 percentage points, with an average deviation of around 0 percentage points. In contrast, visually estimated regeneration coverages on the same forest plots deviated from the reference by between −20 and +30 percentage points, approximately −2 percentage points on average. These findings highlight the potential of PLS data for automated forest regeneration quantification, which could be further expanded to include a broader range of data collected during LiDAR-based forest inventory campaigns.
(This article belongs to the Section Forest Remote Sensing)
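The core of the PLS workflow described above is rasterizing a voxelated point cloud and seeding a segmentation at local maxima. A minimal Python sketch (NumPy/SciPy) of that rasterization and seed detection follows; the 10 cm cell size, 2 m regeneration height cap, and 0.2 m minimum sapling height are illustrative assumptions, not the study's parameters.

import numpy as np
from scipy.ndimage import maximum_filter

def canopy_height_raster(points, cell=0.1, max_height=2.0):
    # points: (N, 3) array of x, y, z already normalized to height above ground.
    pts = points[points[:, 2] <= max_height]             # keep the regeneration layer only
    ij = np.floor((pts[:, :2] - pts[:, :2].min(axis=0)) / cell).astype(int)
    chm = np.zeros(ij.max(axis=0) + 1)
    np.maximum.at(chm, (ij[:, 0], ij[:, 1]), pts[:, 2])   # per-cell maximum height
    return chm

def seed_points(chm, window=5, min_height=0.2):
    # Local maxima above a minimum sapling height act as segmentation seeds.
    peaks = (chm == maximum_filter(chm, size=window)) & (chm > min_height)
    return np.argwhere(peaks)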

18 pages, 2394 KiB  
Article
Unsupervised Anomaly Detection for Improving Adversarial Robustness of 3D Object Detection Models
by Mumuxin Cai, Xupeng Wang, Ferdous Sohel and Hang Lei
Electronics 2025, 14(2), 236; https://doi.org/10.3390/electronics14020236 - 8 Jan 2025
Cited by 4 | Viewed by 1390
Abstract
Three-dimensional object detection based on deep neural networks (DNNs) is widely used in safety-critical applications such as autonomous driving. However, existing research has shown that 3D object detection models are vulnerable to adversarial attacks. Hence, this work investigates improving the robustness of deep 3D detection models under adversarial attacks. A deep autoencoder-based anomaly detection method is proposed, which has a strong ability to detect elaborate adversarial samples in an unsupervised way. The proposed method operates on a given Light Detection and Ranging (LiDAR) scene in its Bird’s Eye View (BEV) image and reconstructs the scene through an autoencoder. To improve the performance of the autoencoder, an augmented memory module that records typical normal patterns is introduced. It is designed to help the model amplify the reconstruction errors of malicious samples while leaving normal samples largely unaffected. Experiments on several public datasets show that the proposed anomaly detection method achieves an AUC of 0.8 under adversarial attacks and improves the robustness of 3D object detection.
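The detection signal in the method above is the reconstruction error of a BEV image passed through an autoencoder. A minimal PyTorch sketch of that scoring step follows; the tiny network, the input size, and the omission of the augmented memory module are simplifications, so this is not the paper's architecture.

import torch
import torch.nn as nn

class BEVAutoencoder(nn.Module):
    # Tiny convolutional autoencoder over single-channel BEV occupancy images.
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.dec = nn.Sequential(nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
                                 nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid())

    def forward(self, x):
        return self.dec(self.enc(x))

def anomaly_score(model, bev):
    # High reconstruction error suggests the scene was perturbed (possible adversarial sample).
    with torch.no_grad():
        return torch.mean((model(bev) - bev) ** 2).item()

model = BEVAutoencoder()
score = anomaly_score(model, torch.rand(1, 1, 256, 256))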

25 pages, 13628 KiB  
Article
Gradient Enhancement Techniques and Motion Consistency Constraints for Moving Object Segmentation in 3D LiDAR Point Clouds
by Fangzhou Tang, Bocheng Zhu and Junren Sun
Remote Sens. 2025, 17(2), 195; https://doi.org/10.3390/rs17020195 - 8 Jan 2025
Cited by 1 | Viewed by 1159
Abstract
The ability to segment moving objects from three-dimensional (3D) LiDAR scans is critical to advancing autonomous driving technology, facilitating core tasks like localization, collision avoidance, and path planning. In this paper, we introduce a novel deep neural network designed to enhance the performance of 3D LiDAR point cloud moving object segmentation (MOS) through the integration of image gradient information and the principle of motion consistency. Our method processes sequential range images, employing depth pixel difference convolution (DPDC) to improve the efficacy of dilated convolutions, thus boosting spatial information extraction from range images. Additionally, we incorporate Bayesian filtering to impose posterior constraints on predictions, enhancing the accuracy of motion segmentation. To handle the issue of uneven object scales in range images, we develop a novel edge-aware loss function and use a progressive training strategy to further boost performance. Our method is validated on the SemanticKITTI-based LiDAR MOS benchmark, where it significantly outperforms current state-of-the-art (SOTA) methods, all while working directly on two-dimensional (2D) range images without requiring mapping.
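One common formulation of pixel-difference convolution applies the kernel to neighbour-minus-centre differences, which reduces to an ordinary convolution minus a 1x1 convolution with the summed kernel weights. The Python/PyTorch sketch below implements that central variant on a LiDAR range image; it is an assumed, simplified stand-in for the paper's DPDC operator, and the layer sizes are arbitrary.

import torch
import torch.nn as nn
import torch.nn.functional as F

class CentralPixelDifferenceConv(nn.Module):
    # Convolution over (neighbour - centre) depth differences:
    # sum_i w_i * (x_i - x_c) = conv(x, W) - x_c * sum(W)
    def __init__(self, in_ch, out_ch, kernel_size=3, dilation=1):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size,
                              padding=dilation * (kernel_size // 2),
                              dilation=dilation, bias=False)

    def forward(self, x):
        vanilla = self.conv(x)
        kernel_sum = self.conv.weight.sum(dim=(2, 3))          # (out_ch, in_ch)
        centre = F.conv2d(x, kernel_sum[:, :, None, None])     # 1x1 conv with summed weights
        return vanilla - centre

layer = CentralPixelDifferenceConv(1, 16, dilation=2)
out = layer(torch.rand(1, 1, 64, 2048))   # e.g. a 64-beam LiDAR range image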

18 pages, 12334 KiB  
Article
Canopy Height Integration for Precise Forest Aboveground Biomass Estimation in Natural Secondary Forests of Northeast China Using Gaofen-7 Stereo Satellite Data
by Caixia Liu, Huabing Huang, Zhiyu Zhang, Wenyi Fan and Di Wu
Remote Sens. 2025, 17(1), 47; https://doi.org/10.3390/rs17010047 - 27 Dec 2024
Cited by 1 | Viewed by 1117
Abstract
Accurate estimates of forest aboveground biomass (AGB) are necessary for the accurate tracking of forest carbon stock. Gaofen-7 (GF-7) is the first civilian sub-meter three-dimensional (3D) mapping satellite from China. It is equipped with a laser altimeter system and a dual-line array stereoscopic mapping camera, which enables it to synchronously generate full-waveform LiDAR data and stereoscopic images. Most existing research has examined the accuracy of GF-7 for topographic measurements of bare land or for canopy height; the measurement of forest aboveground biomass has not received as much attention as it deserves. This study aimed to assess the capability of GF-7 stereo imaging, expressed as topographic features, for forest aboveground biomass estimation. The aboveground biomass model was constructed using the random forest machine learning technique, combining in situ field measurements, GF-7 stereo image pairs, and the canopy height model (CHM) generated from them. The biomass estimation model achieved an accuracy of R2 = 0.76 and RMSE = 7.94 t/ha, better than the model without forest canopy height (R2 = 0.30, RMSE = 21.02 t/ha). These results show that GF-7 has considerable application potential for large-scale, high-precision estimation of forest aboveground biomass using a limited amount of field data.
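The biomass model described above is a random-forest regression from CHM-derived predictors to field-measured AGB, evaluated with R2 and RMSE. A minimal scikit-learn sketch follows; the predictors and plot values are synthetic stand-ins, not the study's GF-7 data.

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import r2_score, mean_squared_error

# X: per-plot predictors such as mean/percentile canopy height and texture metrics;
# y: field-measured AGB in t/ha (synthetic stand-in below).
rng = np.random.default_rng(1)
X = rng.normal(size=(120, 6))
y = 50 + 10 * X[:, 0] + rng.normal(scale=5, size=120)

model = RandomForestRegressor(n_estimators=500, random_state=0)
y_hat = cross_val_predict(model, X, y, cv=5)
print("R2 =", round(r2_score(y, y_hat), 2),
      "RMSE =", round(mean_squared_error(y, y_hat) ** 0.5, 2), "t/ha")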

12 pages, 1210 KiB  
Article
VirtualFilter: A High-Performance Multimodal 3D Object Detection Method with Semantic Filtering
by Mingcheng Qu and Ganlin Deng
Appl. Sci. 2024, 14(24), 11555; https://doi.org/10.3390/app142411555 - 11 Dec 2024
Viewed by 1078
Abstract
Three-dimensional object detection is a key task in the field of autonomous driving, aimed at identifying the position and category of objects in the scene. Due to the 3D nature of LiDAR data, most models use it as the input for detection. However, the low scanning resolution of LiDAR for distant objects imposes inherent limitations on such methods, so multimodal fusion 3D object detection methods, which mostly use both LiDAR and camera data as inputs, have attracted widespread attention. Multimodal methods, however, introduce problems of their own, the two main ones being incomplete utilization of camera features and coarse fusion methods. In this study, we propose a novel multimodal 3D object detection method named VirtualFilter, which uses 3D point clouds and 2D images as inputs. To better utilize camera features, VirtualFilter applies an image semantic segmentation model to generate semantic features and uses this semantic information to filter the virtual point cloud data during virtual point cloud generation, enhancing the accuracy of the virtual point cloud. In addition, VirtualFilter adopts an improved RoI feature fusion strategy named 3D-DGAF (3D Distance-based Grid Attentional Fusion), which employs an attention mechanism based on distance gridding to better fuse the RoI features of the original and virtual point clouds. Experimental results on the authoritative autonomous driving dataset KITTI show that this multimodal 3D object detection method outperforms the baseline method on several evaluation metrics.

21 pages, 1344 KiB  
Review
Tackling Heterogeneous Light Detection and Ranging-Camera Alignment Challenges in Dynamic Environments: A Review for Object Detection
by Yujing Wang, Abdul Hadi Abd Rahman, Fadilla ’Atyka Nor Rashid and Mohamad Khairulamirin Md Razali
Sensors 2024, 24(23), 7855; https://doi.org/10.3390/s24237855 - 9 Dec 2024
Cited by 1 | Viewed by 1325
Abstract
Object detection is an essential computer vision task that identifies and locates objects within images or videos and is crucial for applications such as autonomous driving, robotics, and augmented reality. Light Detection and Ranging (LiDAR) and camera sensors are widely used for reliable object detection. These sensors produce heterogeneous data due to differences in data format, spatial resolution, and environmental responsiveness. Existing review articles on object detection predominantly focus on the statistical analysis of fusion algorithms, often overlooking the complexities of aligning data from these distinct modalities, especially data alignment in dynamic environments. This paper addresses the challenges of heterogeneous LiDAR-camera alignment in dynamic environments by surveying over 20 alignment methods for three-dimensional (3D) object detection, focusing on research published between 2019 and 2024. The study introduces the core concepts of multimodal 3D object detection, emphasizing the importance of integrating data from different sensor modalities for accurate object recognition in dynamic environments. The survey then provides a detailed comparison of recent heterogeneous alignment methods, analyzing critical approaches in the literature and identifying their strengths and limitations. A classification of methods for aligning heterogeneous data in 3D object detection is presented. The paper also highlights the critical challenges in aligning multimodal data, including dynamic environments, sensor fusion, scalability, and real-time processing. These limitations are thoroughly discussed, and potential future research directions are proposed to address current gaps and advance the state of the art. By summarizing the latest advancements and highlighting open challenges, this survey aims to stimulate further research and innovation in heterogeneous alignment methods for multimodal 3D object detection, thereby pushing the boundaries of what is currently achievable in this rapidly evolving domain.
(This article belongs to the Section Optical Sensors)

19 pages, 3085 KiB  
Review
Research Progress of Spectral Imaging Techniques in Plant Phenotype Studies
by Qian Zhang, Rupeng Luan, Ming Wang, Jinmeng Zhang, Feng Yu, Yang Ping and Lin Qiu
Plants 2024, 13(21), 3088; https://doi.org/10.3390/plants13213088 - 2 Nov 2024
Cited by 7 | Viewed by 2866
Abstract
Spectral imaging techniques have been widely applied in plant phenotype analysis to improve plant trait selection and genetic advantages. The latest developments and applications of various optical imaging techniques in plant phenotyping are reviewed, and their advantages and applicability are compared. X-ray computed tomography (X-ray CT) and light detection and ranging (LiDAR) are more suitable for the three-dimensional reconstruction of plant surfaces, tissues, and organs. Chlorophyll fluorescence imaging (ChlF) and thermal imaging (TI) can be used to measure the physiological phenotype characteristics of plants. Specific symptoms caused by nutrient deficiency can be detected by hyperspectral and multispectral imaging, LiDAR, and ChlF. Future plant phenotype research based on spectral imaging can be more closely integrated with plant physiological processes. It can more effectively support research in related disciplines, such as metabolomics and genomics, and focus on micro-scale activities, such as oxygen transport and intercellular chlorophyll transmission.
(This article belongs to the Section Plant Modeling)
