Search Results (18)

Search Parameters:
Keywords = LiDAR super-resolution

27 pages, 11908 KB  
Article
Super-Resolving Digital Terrain Models Using a Modified RCAN
by Mohamed Helmy, Emanuele Mandanici, Luca Vittuari and Gabriele Bitelli
Remote Sens. 2026, 18(1), 20; https://doi.org/10.3390/rs18010020 - 21 Dec 2025
Viewed by 386
Abstract
High-resolution Digital Terrain Models (DTMs) are essential for precise terrain analysis, yet their production remains constrained by the high cost and limited coverage of LiDAR surveys. This study introduces a deep learning framework based on a modified Residual Channel Attention Network (RCAN) to super-resolve 10 m DTMs to 1 m resolution. The model was trained and validated on a 568 km2 LiDAR-derived dataset using custom elevation-aware loss functions that integrate elevation accuracy (L1), slope gradients, and multi-scale structural components to preserve terrain realism and vertical precision. Performance was evaluated across 257 independent test tiles representing flat, hilly, and mountainous terrains. A balanced loss configuration (α = 0.5, γ = 0.5) achieved the best results, yielding Mean Absolute Error (MAE) as low as 0.83 m and Root Mean Square Error (RMSE) of 1.14–1.15 m, with near-zero bias (−0.04 m). Errors increased moderately in mountainous areas (MAE = 1.29–1.41 m, RMSE = 1.84 m), confirming the greater difficulty of rugged terrain. Overall, the approach demonstrates strong potential for operational applications in geomorphology, hydrology, and landscape monitoring, offering an effective solution for high-resolution DTM generation where LiDAR data are unavailable. Full article
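The elevation-aware loss described in this abstract can be illustrated with a small sketch. This is an assumed formulation, not the authors' code: `l1_elevation`, `slope_l1`, and `elevation_aware_loss` are hypothetical names, the multi-scale structural term is omitted, and α and γ simply weight the elevation and slope terms as in the balanced configuration (α = 0.5, γ = 0.5) reported above.

```python
def l1_elevation(pred, target):
    """Mean absolute elevation error over a 2D grid (list of rows)."""
    n = len(pred) * len(pred[0])
    return sum(abs(p - t) for pr, tr in zip(pred, target)
               for p, t in zip(pr, tr)) / n

def slope_l1(pred, target):
    """Mean absolute difference of forward-difference slopes along x and y."""
    err, n = 0.0, 0
    rows, cols = len(pred), len(pred[0])
    for i in range(rows):
        for j in range(cols):
            if j + 1 < cols:  # slope along x
                err += abs((pred[i][j + 1] - pred[i][j]) - (target[i][j + 1] - target[i][j]))
                n += 1
            if i + 1 < rows:  # slope along y
                err += abs((pred[i + 1][j] - pred[i][j]) - (target[i + 1][j] - target[i][j]))
                n += 1
    return err / n

def elevation_aware_loss(pred, target, alpha=0.5, gamma=0.5):
    """Hypothetical balanced loss: alpha weights elevation accuracy,
    gamma weights terrain-slope fidelity."""
    return alpha * l1_elevation(pred, target) + gamma * slope_l1(pred, target)

# Tiny 2x2 DTM tiles: a perfect prediction scores zero.
dtm = [[1.0, 2.0], [3.0, 4.0]]
perfect = elevation_aware_loss(dtm, dtm)
```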

23 pages, 4237 KB  
Article
Debris-Flow Erosion Volume Estimation Using a Single High-Resolution Optical Satellite Image
by Peng Zhang, Shang Wang, Guangyao Zhou, Yueze Zheng, Kexin Li and Luyan Ji
Remote Sens. 2025, 17(14), 2413; https://doi.org/10.3390/rs17142413 - 12 Jul 2025
Cited by 1 | Viewed by 1114
Abstract
Debris flows pose significant risks to mountainous regions, and quick, accurate volume estimation is crucial for hazard assessment and post-disaster response. Traditional volume estimation methods, such as ground surveys and aerial photogrammetry, are often limited by cost, accessibility, and timeliness. While remote sensing offers wide coverage, existing optical and Synthetic Aperture Radar (SAR)-based techniques face challenges in direct volume estimation due to resolution constraints and rapid terrain changes. This study proposes a Super-Resolution Shape from Shading (SRSFS) approach enhanced by a Non-local Piecewise-smooth albedo Constraint (NPC), hereafter referred to as NPC SRSFS, to estimate debris-flow erosion volume using single high-resolution optical satellite imagery. By integrating publicly available global Digital Elevation Model (DEM) data as prior terrain reference, the method enables accurate post-disaster topography reconstruction from a single optical image, thereby reducing reliance on stereo imagery. The NPC constraint improves the robustness of albedo estimation under heterogeneous surface conditions, enhancing depth recovery accuracy. The methodology is evaluated using Gaofen-6 satellite imagery, with quantitative comparisons to aerial Light Detection and Ranging (LiDAR) data. Results show that the proposed method achieves reliable terrain reconstruction and erosion volume estimates, with accuracy comparable to airborne LiDAR. This study demonstrates the potential of NPC SRSFS as a rapid, cost-effective alternative for post-disaster debris-flow assessment. Full article
(This article belongs to the Section Remote Sensing in Geology, Geomorphology and Hydrology)

11 pages, 2032 KB  
Communication
Super-Resolution Reconstruction of LiDAR Images Based on an Adaptive Contour Closure Algorithm over 10 km
by Liang Shi, Xinyuan Zhang, Fei Han, Yicheng Wang, Shilong Xu, Xing Yang and Yihua Hu
Photonics 2025, 12(6), 569; https://doi.org/10.3390/photonics12060569 - 5 Jun 2025
Viewed by 1002
Abstract
Reflective Tomography LiDAR (RTL) imaging, an innovative LiDAR technology, offers the significant advantage of an imaging resolution independent of detection distance and receiving optical aperture, evolving from Computed Tomography (CT) principles. However, distinct from transmissive imaging, RTL requires precise alignment of multi-angle echo data around the target’s rotation center before image reconstruction. This paper presents an adaptive contour closure algorithm for automated multi-angle echo data registration in RTL. A 10.38 km remote RTL imaging experiment validates the algorithm’s efficacy, showing that it improves the quality factor of reconstructed images by over 23% and effectively suppresses interference from target/detector jitter, laser pulse transmission/reception fluctuations, and atmospheric turbulence. These results support the development of advanced space target perception capabilities and drive the transition of space-based LiDAR from “point” measurements to “volumetric” perception, marking a crucial advancement in space exploration and surveillance. Full article
(This article belongs to the Special Issue Technologies and Applications of Optical Imaging)

18 pages, 15380 KB  
Article
A High-Precision Method for Warehouse Material Level Monitoring Using Millimeter-Wave Radar and 3D Surface Reconstruction
by Wenxin Zhang and Yi Gu
Sensors 2025, 25(9), 2716; https://doi.org/10.3390/s25092716 - 25 Apr 2025
Viewed by 977
Abstract
This study presents a high-precision warehouse material level monitoring method that integrates millimeter-wave radar with 3D surface reconstruction to address the limitations of LiDAR, which is highly susceptible to dust and haze interference in complex storage environments. The proposed method employs Chirp-Z Transform (CZT) super-resolution processing to enhance spectral resolution and measurement accuracy. To improve grain surface identification, an anomalous signal correction method based on angle–range feature fusion is introduced, mitigating errors caused by weak reflections and multipath effects. The point cloud data acquired by the radar undergo denoising, smoothing, and enhancement using statistical filtering, Moving Least Squares (MLS) smoothing, and bicubic spline interpolation to ensure data continuity and accuracy. A Poisson Surface Reconstruction algorithm is then applied to generate a continuous 3D model of the grain heap. The vector triple product method is used to estimate grain volume. Experimental results show a reconstruction volume error within 3%, demonstrating the method’s accuracy, robustness, and adaptability. The reconstructed surface accurately represents grain heap geometry, making this approach well suited for real-time warehouse monitoring and providing reliable support for material balance and intelligent storage management. Full article
(This article belongs to the Section Industrial Sensors)
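The CZT step in the abstract above boosts spectral resolution over a narrow frequency band of interest. A minimal stand-in, assuming a brute-force zoomed DTFT in place of the actual Chirp-Z Transform (the CZT computes the same band evaluation efficiently); `zoom_spectrum_peak` and the tone parameters are illustrative:

```python
import cmath
import math

def zoom_spectrum_peak(x, f_lo, f_hi, step):
    """Evaluate the DTFT of x on a fine frequency grid (in FFT-bin units)
    and return the grid frequency with maximum magnitude. This brute-force
    zoom is what the Chirp-Z Transform computes efficiently."""
    N = len(x)
    best_f, best_mag = f_lo, -1.0
    f = f_lo
    while f <= f_hi:
        X = sum(x[n] * cmath.exp(-2j * math.pi * f * n / N) for n in range(N))
        if abs(X) > best_mag:
            best_f, best_mag = f, abs(X)
        f += step
    return best_f

# A beat tone at 10.3 bins: a plain N-point FFT only resolves integer bins,
# while the zoomed evaluation recovers the fractional frequency.
N = 64
x = [math.cos(2 * math.pi * 10.3 * n / N) for n in range(N)]
est = zoom_spectrum_peak(x, 9.0, 12.0, 0.01)
```

In practice the zoomed band would be placed around the expected radar beat frequency; `scipy.signal.czt` provides an efficient implementation.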

21 pages, 6473 KB  
Article
Reconstruction for Scanning LiDAR with Array GM-APD on Mobile Platform
by Di Liu, Jianfeng Sun, Wei Lu, Sining Li and Xin Zhou
Remote Sens. 2025, 17(4), 622; https://doi.org/10.3390/rs17040622 - 11 Feb 2025
Cited by 1 | Viewed by 1543
Abstract
Array Geiger-mode avalanche photodiode (GM-APD) Light Detection and Ranging (LiDAR) has the advantages of high sensitivity and long imaging range. However, due to its operating principle, GM-APD LiDAR requires processing based on multiple-laser-pulse data to complete the target reconstruction. Therefore, the influence of the device’s movement or scanning motion during GM-APD LiDAR imaging cannot be ignored. To solve this problem, we designed a reconstruction method based on coordinate system transformation and the Position and Orientation System (POS). The position, attitude, and scanning angles provided by POS and angular encoders are used to reduce or eliminate the dynamic effects in multiple-laser-pulse detection. Then, an optimization equation is constructed based on the negative-binomial distribution detection model of GM-APD. The spatial distribution of photons in the scene is ultimately computed. This method avoids the need for field-of-view registration, improves data utilization, and reduces the complexity of the algorithm while eliminating the effect of LiDAR motion. Moreover, with sufficient data acquisition, this method can achieve super-resolution reconstruction. Finally, numerical simulations and imaging experiments verify the effectiveness of the proposed method. For a 1.95 km building scene with SBR ~0.137, the 2 × 2-fold super-resolution reconstruction results obtained by this method reduce the distance error by an order of magnitude compared to traditional methods. Full article

19 pages, 3375 KB  
Article
Enhancing Cross-Modal Camera Image and LiDAR Data Registration Using Feature-Based Matching
by Jennifer Leahy, Shabnam Jabari, Derek Lichti and Abbas Salehitangrizi
Remote Sens. 2025, 17(3), 357; https://doi.org/10.3390/rs17030357 - 22 Jan 2025
Cited by 3 | Viewed by 3851
Abstract
Registering light detection and ranging (LiDAR) data with optical camera images enhances spatial awareness in autonomous driving, robotics, and geographic information systems. The current challenges in this field involve aligning 2D-3D data acquired from sources with distinct coordinate systems, orientations, and resolutions. This paper introduces a new pipeline for camera–LiDAR post-registration to produce colorized point clouds. Utilizing deep learning-based matching between 2D spherical projection LiDAR feature layers and camera images, we can map 3D LiDAR coordinates to image grey values. Various LiDAR feature layers, including intensity, bearing angle, depth, and different weighted combinations, are used to find correspondence with camera images utilizing state-of-the-art deep learning matching algorithms, i.e., SuperGlue and LoFTR. Registration is achieved using collinearity equations and RANSAC to remove false matches. The pipeline’s accuracy is tested using survey-grade terrestrial datasets from the TX5 scanner, as well as datasets from a custom-made, low-cost mobile mapping system (MMS) named Simultaneous Localization And Mapping Multi-sensor roBOT (SLAMM-BOT) across diverse scenes, in which both outperformed their baseline solutions. SuperGlue performed best in high-feature scenes, whereas LoFTR performed best in low-feature or sparse data scenes. The LiDAR intensity layer had the strongest matches, but combining feature layers improved matching and reduced errors. Full article
(This article belongs to the Special Issue Remote Sensing Satellites Calibration and Validation)
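RANSAC's role above, discarding false matches before registration, can be sketched with a toy model. This assumes a simple 2D translation in place of the collinearity equations (so a 1-point minimal sample suffices); the function name and synthetic correspondences are hypothetical:

```python
import random

def ransac_translation(matches, threshold=1.0, iters=200, seed=0):
    """Toy RANSAC fitting a 2D translation between matched point pairs and
    rejecting outliers -- a simplified stand-in for the collinearity-equation
    model. matches: list of ((x, y), (u, v)) correspondences."""
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(iters):
        (x, y), (u, v) = rng.choice(matches)   # 1-point minimal sample
        dx, dy = u - x, v - y
        inliers = [m for m in matches
                   if abs(m[1][0] - m[0][0] - dx) < threshold
                   and abs(m[1][1] - m[0][1] - dy) < threshold]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (dx, dy), inliers
    return best_model, best_inliers

# Synthetic correspondences: true shift (5, -3) plus two gross mismatches,
# standing in for false SuperGlue/LoFTR matches.
good = [((i, 2 * i), (i + 5, 2 * i - 3)) for i in range(10)]
bad = [((0, 0), (40, 40)), ((1, 1), (-30, 7))]
model, inliers = ransac_translation(good + bad)
```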

15 pages, 3509 KB  
Article
Dense Feature Pyramid Deep Completion Network
by Xiaoping Yang, Ping Ni, Zhenhua Li and Guanghui Liu
Electronics 2024, 13(17), 3490; https://doi.org/10.3390/electronics13173490 - 2 Sep 2024
Viewed by 1575
Abstract
Most current point cloud super-resolution reconstruction methods require heavy computation and achieve low accuracy on large outdoor scenes. A Dense Feature Pyramid Network (DenseFPNet) is proposed for the feature-level fusion of images with low-resolution point clouds to generate higher-resolution point clouds. It recasts the super-resolution reconstruction of 3D point clouds as a 2D depth map completion problem, reducing the time and complexity of obtaining high-resolution point clouds from LiDAR alone. The network first uses an image-guided feature extraction network based on RGBD-DenseNet as an encoder to extract multi-scale features, followed by an upsampling block as a decoder that gradually recovers the size and details of the feature map; the corresponding encoder and decoder layers are linked through pyramid connections. In experiments on the KITTI depth completion dataset, the network performs well across metrics, improving RMSE by 17.71%, 16.60%, 7.11%, and 4.68% over CSPD, Spade-RGBsD, Sparse-to-Dense, and GAENET, respectively. Full article
(This article belongs to the Special Issue Digital Signal and Image Processing for Multimedia Technology)

15 pages, 4809 KB  
Article
LiDAR Point Cloud Super-Resolution Reconstruction Based on Point Cloud Weighted Fusion Algorithm of Improved RANSAC and Reciprocal Distance
by Xiaoping Yang, Ping Ni, Zhenhua Li and Guanghui Liu
Electronics 2024, 13(13), 2521; https://doi.org/10.3390/electronics13132521 - 27 Jun 2024
Cited by 4 | Viewed by 3028
Abstract
This paper proposes a point-by-point weighted fusion algorithm based on an improved random sample consensus (RANSAC) and inverse distance weighting to address the issue of low-resolution point cloud data obtained from light detection and ranging (LiDAR) sensors and single technologies. By fusing low-resolution point clouds with higher-resolution point clouds at the data level, the algorithm generates high-resolution point clouds, achieving the super-resolution reconstruction of lidar point clouds. This method effectively reduces noise in the higher-resolution point clouds while preserving the structure of the low-resolution point clouds, ensuring that the semantic information of the generated high-resolution point clouds remains consistent with that of the low-resolution point clouds. Specifically, the algorithm constructs a K-d tree using the low-resolution point cloud to perform a nearest neighbor search, establishing the correspondence between the low-resolution and higher-resolution point clouds. Next, the improved RANSAC algorithm is employed for point cloud alignment, and inverse distance weighting is used for point-by-point weighted fusion, ultimately yielding the high-resolution point cloud. The experimental results demonstrate that the proposed point cloud super-resolution reconstruction method outperforms other methods across various metrics. Notably, it reduces the Chamfer Distance (CD) metric by 0.49 and 0.29 and improves the Precision metric by 7.75% and 4.47%, respectively, compared to two other methods. Full article
(This article belongs to the Special Issue Digital Security and Privacy Protection: Trends and Applications)
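The nearest-neighbor correspondence and inverse-distance weighted fusion steps described above can be sketched as follows. A brute-force search stands in for the K-d tree, the 50/50 blend of high- and low-resolution values is an assumption, and `idw_fuse` is a hypothetical name:

```python
import math

def idw_fuse(low_res, high_res, k=3, power=2.0, eps=1e-9):
    """For each high-resolution point, find its k nearest low-resolution
    points (brute force here; a K-d tree such as scipy.spatial.cKDTree
    would accelerate this search) and blend z values with inverse-distance
    weights. Points are (x, y, z) tuples; returns fused (x, y, z) points."""
    fused = []
    for hx, hy, hz in high_res:
        neighbors = sorted(low_res,
                           key=lambda p: (p[0] - hx) ** 2 + (p[1] - hy) ** 2)[:k]
        weights = [1.0 / (math.hypot(p[0] - hx, p[1] - hy) ** power + eps)
                   for p in neighbors]
        wsum = sum(weights)
        z = sum(w * p[2] for w, p in zip(weights, neighbors)) / wsum
        # Assumed equal-weight blend of the high-res measurement and the
        # low-res consensus, preserving the low-res structure.
        fused.append((hx, hy, 0.5 * (hz + z)))
    return fused

# Three agreeing low-res points near the query dominate; the far outlier
# at (5, 5) is excluded by the k-nearest search.
low = [(0, 0, 1.0), (1, 0, 1.0), (0, 1, 1.0), (5, 5, 9.0)]
fused = idw_fuse(low, [(0.1, 0.1, 1.2)])
```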

17 pages, 4413 KB  
Article
Super-Resolution Reconstruction of an Array Lidar Range Profile
by Xuelian Liu, Xulang Zhou, Guan Xi, Rui Zhuang, Chunhao Shi and Chunyang Wang
Appl. Sci. 2024, 14(12), 5335; https://doi.org/10.3390/app14125335 - 20 Jun 2024
Cited by 1 | Viewed by 1709
Abstract
Aiming at the problem that the range profile of the current array lidar has a low resolution and contains few target details and little edge information, a super-resolution reconstruction method based on projection onto convex sets (POCS) combining the Lucas–Kanade (LK) optical flow method with a Gaussian pyramid was proposed. Firstly, the reference high-resolution range profile was obtained by the nearest neighbor interpolation of the single low-resolution range profile. Secondly, the LK optical flow method was introduced to achieve the motion estimation of low-resolution image sequences, and the Gaussian pyramid was used to perform multi-scale correction on the estimated vector, effectively improving the accuracy of motion estimation. On the basis of data consistency constraints, gradient constraints were introduced based on the distance value difference between the target edge and the background to enhance the reconstruction ability of the target edge. Finally, the residual between the estimated distance and the actual distance was calculated, and the high-resolution reference range profile was iteratively corrected by using the point spread function according to the residual. Bilinear interpolation, bicubic interpolation, POCS, POCS with adaptive correction threshold, and the proposed method were used to reconstruct the range profile of the datasets and the real datasets. The effectiveness of the proposed method was verified by the range profile reconstruction effect and objective evaluation index. The experimental results show that the index of the proposed method is improved compared to the interpolation method and the POCS method. In the redwood-3dscan dataset experiments, compared to the traditional POCS, the average gradient (AG) of the proposed method is increased by at least 8.04%, and the edge strength (ES) is increased by at least 4.84%. In the real data experiments, compared to the traditional POCS, the AG of the proposed method is increased by at least 5.85%, and the ES is increased by at least 7.01%, which proves that the proposed method can effectively improve the resolution of the reconstructed range map and the quality of the detail edges. Full article

15 pages, 3303 KB  
Article
TSE-UNet: Temporal and Spatial Feature-Enhanced Point Cloud Super-Resolution Model for Mechanical LiDAR
by Lu Ren, Deyi Li, Zhenchao Ouyang and Zhibin Zhang
Appl. Sci. 2024, 14(4), 1510; https://doi.org/10.3390/app14041510 - 13 Feb 2024
Viewed by 2402
Abstract
The mechanical LiDAR sensor is crucial in autonomous vehicles. After projecting a 3D point cloud onto a 2D plane and employing a deep learning model for computation, accurate environmental perception information can be supplied to autonomous vehicles. Nevertheless, the vertical angular resolution of inexpensive multi-beam LiDAR is limited, constraining the perceptual and mobility range of mobile entities. To address this problem, we propose a point cloud super-resolution model in this paper. This model enhances the density of sparse point clouds acquired by LiDAR, consequently offering more precise environmental information for autonomous vehicles. Firstly, we collect two datasets for point cloud super-resolution, encompassing CARLA32-128 in simulated environments and Ruby32-128 in real-world scenarios. Secondly, we propose a novel temporal and spatial feature-enhanced point cloud super-resolution model. This model leverages temporal feature attention aggregation modules and spatial feature enhancement modules to fully exploit point cloud features from adjacent timestamps, enhancing super-resolution accuracy. Ultimately, we validate the effectiveness of the proposed method through comparison experiments, ablation studies, and qualitative visualization experiments conducted on the CARLA32-128 and Ruby32-128 datasets. Notably, our method achieves a PSNR of 27.52 on CARLA32-128 and a PSNR of 24.82 on Ruby32-128, both of which are better than previous methods. Full article
(This article belongs to the Collection Space Applications)

8 pages, 2512 KB  
Communication
Base Study of Bridge Inspection by Modeling Touch Information Using Light Detection and Ranging
by Tomotaka Fukuoka, Takahiro Minami and Makoto Fujiu
Appl. Sci. 2024, 14(4), 1449; https://doi.org/10.3390/app14041449 - 9 Feb 2024
Viewed by 2420
Abstract
In Japan, bridges are inspected via close visual examinations every five years. However, these inspections are labor intensive, and a shortage of engineers and budget constraints will restrict such inspections in the future. In recent years, efforts have been made to reduce the labor required for inspections by automating various aspects of the inspection process. In this study, we proposed and evaluated a method of applying super-resolution technology to obtain precise point cloud information to create distance information images to enable the use of tactile information (e.g., human touch) on the surface to be inspected. We measured the distance to the specimen using LiDAR, generated distance information images, performed super-resolution on the pseudo-created low-resolution images, and evaluated them in comparison with the existing magnification method. The evaluation results suggest that the adaptation of the super-resolution technique is effective in increasing the resolution of the boundary of the distance change. Full article
(This article belongs to the Special Issue Advances in Civil Infrastructures Engineering)

10 pages, 10341 KB  
Communication
Multi-Scale Histogram-Based Probabilistic Deep Neural Network for Super-Resolution 3D LiDAR Imaging
by Miao Sun, Shenglong Zhuo and Patrick Yin Chiang
Sensors 2023, 23(1), 420; https://doi.org/10.3390/s23010420 - 30 Dec 2022
Cited by 2 | Viewed by 3949
Abstract
LiDAR (Light Detection and Ranging) imaging based on SPAD (Single-Photon Avalanche Diode) technology suffers from severe area penalty for large on-chip histogram peak detection circuits required by the high precision of measured depth values. In this work, a probabilistic estimation-based super-resolution neural network for SPAD imaging that firstly uses temporal multi-scale histograms as inputs is proposed. To reduce the area and cost of on-chip histogram computation, only part of the histogram hardware for calculating the reflected photons is implemented on a chip. On account of the distribution rule of returned photons, a probabilistic encoder as a part of the network is first proposed to solve the depth estimation problem of SPADs. By jointly using this neural network with a super-resolution network, 16× up-sampling depth estimation is realized using 32 × 32 multi-scale histogram outputs. Finally, the effectiveness of this neural network was verified in the laboratory with a 32 × 32 SPAD sensor system. Full article
(This article belongs to the Collection 3D Imaging and Sensing System)
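The temporal multi-scale histogram input described above can be sketched as coarse-to-fine binning of photon arrival times. The bin counts and the `multiscale_histograms` helper are illustrative assumptions, not the paper's on-chip configuration:

```python
def multiscale_histograms(timestamps, t_max, scales=(8, 32, 128)):
    """Bin photon arrival times into histograms at several temporal
    resolutions. Coarse scales are cheap to build on-chip; fine scales
    refine the depth estimate. (Scale sizes here are illustrative.)"""
    hists = []
    for bins in scales:
        h = [0] * bins
        for t in timestamps:
            # Clamp to the last bin so t == t_max stays in range.
            idx = min(int(t / t_max * bins), bins - 1)
            h[idx] += 1
        hists.append(h)
    return hists

# Photons clustered near 0.6 * t_max (the target return) plus one stray
# background count near the start of the gate.
ts = [0.59, 0.60, 0.61, 0.62, 0.10]
hists = multiscale_histograms(ts, 1.0)
```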

15 pages, 4573 KB  
Article
Up-Sampling Method for Low-Resolution LiDAR Point Cloud to Enhance 3D Object Detection in an Autonomous Driving Environment
by Jihwan You and Young-Keun Kim
Sensors 2023, 23(1), 322; https://doi.org/10.3390/s23010322 - 28 Dec 2022
Cited by 28 | Viewed by 6778
Abstract
Automobile datasets for 3D object detection are typically obtained using expensive high-resolution rotating LiDAR with 64 or more channels (Chs). However, the research budget may be limited such that only a low-resolution LiDAR of 32-Ch or lower can be used. The lower the resolution of the point cloud, the lower the detection accuracy. This study proposes a simple and effective method to up-sample low-resolution point cloud input that enhances the 3D object detection output by reconstructing objects in the sparse point cloud data to produce more dense data. First, the 3D point cloud dataset is converted into a 2D range image with four channels: x, y, z, and intensity. The interpolation on the empty space is calculated based on both the pixel distance and range values of six neighbor points to conserve the shapes of the original object during the reconstruction process. This method solves the over-smoothing problem faced by the conventional interpolation methods, and improves the operational speed and object detection performance when compared to the recent deep-learning-based super-resolution methods. Furthermore, the effectiveness of the up-sampling method on the 3D detection was validated by applying it to baseline 32-Ch point cloud data, which were then selected as the input to a point-pillar detection model. The 3D object detection result on the KITTI dataset demonstrates that the proposed method could increase the mAP (mean average precision) of pedestrians, cyclists, and cars by 9.2%p, 6.3%p, and 5.9%p, respectively, when compared to the baseline of the low-resolution 32-Ch LiDAR input. In future works, various dataset environments apart from autonomous driving will be analyzed. Full article
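The interpolation idea above, weighting by pixel distance while refusing to blend across large range jumps, can be sketched in 1D. The six-neighbor 2D scheme is reduced to two horizontal neighbors here, and `max_gap`/`range_tol` are assumed thresholds:

```python
def fill_range_row(row, max_gap=3, range_tol=2.0):
    """Fill empty cells (None) in one row of a range image from the nearest
    valid neighbors on each side, weighted by pixel distance -- but only
    blending neighbors whose ranges agree within range_tol, so object edges
    are not smeared (the over-smoothing problem of plain interpolation).
    A simplified 1D version of the 6-neighbor scheme; thresholds are
    illustrative."""
    out = list(row)
    for i, v in enumerate(row):
        if v is not None:
            continue
        left = next(((i - j, row[i - j]) for j in range(1, max_gap + 1)
                     if i - j >= 0 and row[i - j] is not None), None)
        right = next(((i + j, row[i + j]) for j in range(1, max_gap + 1)
                      if i + j < len(row) and row[i + j] is not None), None)
        if left and right and abs(left[1] - right[1]) <= range_tol:
            dl, dr = i - left[0], right[0] - i
            out[i] = (left[1] * dr + right[1] * dl) / (dl + dr)  # distance-weighted blend
        elif left and right:
            # Across a depth discontinuity: copy the nearer side, don't blend.
            out[i] = left[1] if (i - left[0]) <= (right[0] - i) else right[1]
        elif left or right:
            out[i] = (left or right)[1]
    return out

# Smooth surface on the left, depth jump to 30 m on the right: the first
# gap is blended, the second copies the nearer side instead.
row = [10.0, None, 10.2, None, 30.0]
filled = fill_range_row(row)
```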

21 pages, 24326 KB  
Article
Elevation Extraction from Spaceborne SAR Tomography Using Multi-Baseline COSMO-SkyMed SAR Data
by Lang Feng, Jan-Peter Muller, Chaoqun Yu, Chao Deng and Jingfa Zhang
Remote Sens. 2022, 14(16), 4093; https://doi.org/10.3390/rs14164093 - 21 Aug 2022
Cited by 8 | Viewed by 3320
Abstract
SAR tomography (TomoSAR) extends SAR interferometry (InSAR) to image a complex 3D scene with multiple scatterers within the same SAR cell. The phase calibration method and the super-resolution reconstruction method play a crucial role in 3D TomoSAR imaging from multi-baseline SAR stacks, and they both influence the accuracy of the 3D SAR tomographic imaging results. This paper presents a systematic processing method for 3D SAR tomography imaging. Moreover, with the newly released TanDEM-X 12 m DEM, this study proposes a new phase calibration method based on SAR InSAR and DEM error estimation with the super-resolution reconstruction compressive sensing (CS) method for 3D TomoSAR imaging using COSMO-SkyMed spaceborne SAR data. The test, fieldwork, and results validation were executed at Zipingpu Dam, Dujiangyan, Sichuan, China. After processing, the 1 m resolution TomoSAR elevation extraction results were obtained. Against the terrestrial LiDAR ‘truth’ data, the elevation results were shown to have an accuracy of 0.25 ± 1.04 m and an RMSE of 1.07 m in the dam area. The results and their subsequent validation demonstrate that the X band data using the CS method are not suitable for forest structure reconstruction, but are fit for purpose for the elevation extraction of manufactured facilities including buildings in the urban area. Full article
(This article belongs to the Special Issue Recent Progress and Applications on Multi-Dimensional SAR)

13 pages, 6898 KB  
Article
Three-Dimensional Laser Imaging with a Variable Scanning Spot and Scanning Trajectory
by Ao Yang, Jie Cao, Yang Cheng, Chuanxun Chen and Qun Hao
Photonics 2021, 8(6), 173; https://doi.org/10.3390/photonics8060173 - 21 May 2021
Cited by 7 | Viewed by 2728
Abstract
Traditional lidar scans the target with a fixed-size scanning spot and scanning trajectory. Therefore, it can only obtain the depth image with the same pixels as the number of scanning points. In order to obtain a high-resolution depth image with a few scanning points, we propose a scanning and depth image reconstruction method with a variable scanning spot and scanning trajectory. Based on the range information and the proportion of the area of each target (PAET) contained in the multi echoes, the region with multi echoes (RME) is selected and a new scanning trajectory and smaller scanning spot are used to obtain a finer depth image. According to the range and PAET obtained by scanning, the RME is segmented and filled to realize the super-resolution reconstruction of the depth image. By using this method, the experiments of two overlapped plates in space are carried out. By scanning the target with only forty-three points, the super-resolution depth image of the target with 160 × 160 pixels is obtained. Compared with the real depth image of the target, the accuracy of area representation (AOAR) and structural similarity (SSIM) of the reconstructed depth image is 99.89% and 98.94%, respectively. The method proposed in this paper can effectively reduce the number of scanning points and improve the scanning efficiency of the three-dimensional laser imaging system. Full article
(This article belongs to the Special Issue Smart Pixels and Imaging)
