Search Results (8)

Search Parameters:
Authors = Xiuxiao Yuan

21 pages, 11683 KiB  
Article
Voronoi Centerline-Based Seamline Network Generation Method
by Xiuxiao Yuan, Yang Cai and Wei Yuan
Remote Sens. 2023, 15(4), 917; https://doi.org/10.3390/rs15040917 - 7 Feb 2023
Cited by 5 | Viewed by 2771
Abstract
Seamline network generation is a crucial step in mosaicking multiple orthoimages. It determines the topological relationships among the orthoimages and the mosaic contribution area of each one. Previous methods, such as the Voronoi-based and AVOD (area Voronoi)-based approaches, may generate mosaic holes when the orthoimages have low overlap or irregular shapes. This paper proposes a Voronoi centerline-based seamline network generation method to address this problem. The first step is to detect the edge vector of the valid orthoimage region; the second step is to construct a Voronoi triangle network from the edge vector points and extract the centerline of the network; the third step is to segment each orthoimage by the generated centerlines to construct the image effective mosaic polygon (EMP). The final segmented EMP is the mosaic contribution region, and all EMPs are interconnected to form a seamline network. The main contribution of the proposed method is that it eliminates the mosaic holes produced by the Voronoi-based method under low overlap and removes the polygon-shape restriction of the AVOD-based method, so a complete mosaic can be generated for orthoimages of any overlap and any shape. Five sets of experiments were conducted, and the results show that the proposed method surpasses a well-known state-of-the-art method and commercial software in terms of adaptability and effectiveness.
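The core geometric step, generating an equidistant centerline between overlapping image footprints from a Voronoi diagram of their boundary points, can be illustrated with a short sketch. This is not the authors' implementation; the footprint polygons, sampling step, and use of SciPy/Shapely are illustrative assumptions.

```python
# Sample the boundaries of two overlapping orthoimage footprints, build a
# Voronoi diagram of the samples, and keep the Voronoi edges whose two
# generating points come from different footprints: those edges trace the
# equidistant "centerline" used as a seamline candidate.
import numpy as np
from scipy.spatial import Voronoi
from shapely.geometry import Polygon, LineString

def boundary_samples(poly: Polygon, step: float) -> np.ndarray:
    """Sample points along a polygon boundary at a fixed spacing."""
    ring = poly.exterior
    return np.array([ring.interpolate(t).coords[0]
                     for t in np.arange(0.0, ring.length, step)])

def centerline_between(poly_a: Polygon, poly_b: Polygon, step: float = 5.0):
    pts_a = boundary_samples(poly_a, step)
    pts_b = boundary_samples(poly_b, step)
    pts = np.vstack([pts_a, pts_b])
    label = np.array([0] * len(pts_a) + [1] * len(pts_b))  # which footprint
    vor = Voronoi(pts)
    overlap = poly_a.intersection(poly_b)
    segments = []
    for (p, q), (v0, v1) in zip(vor.ridge_points, vor.ridge_vertices):
        if v0 == -1 or v1 == -1:          # skip unbounded ridges
            continue
        if label[p] == label[q]:          # keep only A-vs-B ridges
            continue
        seg = LineString([vor.vertices[v0], vor.vertices[v1]])
        if seg.intersects(overlap):       # keep the part inside the overlap
            segments.append(seg.intersection(overlap))
    return segments

if __name__ == "__main__":
    a = Polygon([(0, 0), (100, 0), (100, 60), (0, 60)])
    b = Polygon([(40, 0), (140, 0), (140, 60), (40, 60)])
    seam = centerline_between(a, b)
    print(f"{len(seam)} centerline segments in the overlap")
```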

18 pages, 33075 KiB  
Article
Large Aerial Image Tie Point Matching in Real and Difficult Survey Areas via Deep Learning Method
by Xiuliu Yuan, Xiuxiao Yuan, Jun Chen and Xunping Wang
Remote Sens. 2022, 14(16), 3907; https://doi.org/10.3390/rs14163907 - 12 Aug 2022
Cited by 8 | Viewed by 3605
Abstract
Image tie point matching is an essential task in real aerial photogrammetry, especially for model tie points. In current photogrammetric production, SIFT is still the main matching algorithm because of its high robustness for most aerial image tie point matching. However, when a survey area contains a certain number of weakly textured images (mountain, grassland, woodland, etc.), the corresponding models often lack tie points, resulting in failure to build the airline (flight strip) network. Some studies have shown that deep learning-based image matching methods are better than SIFT and other traditional methods to some extent (even for weakly textured images). Unfortunately, these methods are usually applied only to small images and cannot be directly used for tie point matching of large images in real photogrammetry. Considering actual photogrammetric needs and motivated by Block-SIFT and SuperGlue, this paper proposes LR-Superglue, a SuperGlue-based matching method for large aerial image tie points, which makes learned image matching feasible in photogrammetric applications and moves photogrammetry towards artificial intelligence. Experiments on real and difficult aerial survey areas show that LR-Superglue obtains more model tie points in the forward direction (on average, 60 more model points per model) and more image tie points between flight strips (on average, 36 more points per pair of adjacent images). Most importantly, the LR-Superglue method ensures a certain number of tie points between each pair of adjacent models, whereas the Block-SIFT method left a few models with no tie points. At the same time, the relative orientation accuracy of the image tie points matched by the proposed method is significantly better than that of Block-SIFT, with the average residual per model reduced from 3.64 μm to 2.85 μm (the camera pixel size is 4.6 μm).
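The block strategy the abstract refers to can be sketched briefly. This is a hedged outline, not the LR-Superglue code: `match_fn` is a placeholder for any learned pair matcher (e.g., a SuperGlue wrapper), the tile size and overlap are illustrative, and matching identical tile windows assumes the images are already coarsely aligned.

```python
# Split large aerial images into overlapping tiles, match each tile pair with
# a learned matcher, and shift tile-local keypoints back to full-image
# coordinates before merging the correspondences.
import numpy as np

def tiles(shape, tile=1024, overlap=128):
    """Yield (y0, x0, y1, x1) windows covering an image with overlap."""
    h, w = shape
    step = tile - overlap
    for y0 in range(0, h, step):
        for x0 in range(0, w, step):
            yield y0, x0, min(y0 + tile, h), min(x0 + tile, w)

def match_large_pair(img_left, img_right, match_fn, tile=1024, overlap=128):
    """Match two large images tile by tile and merge the correspondences."""
    pts_left, pts_right = [], []
    for y0, x0, y1, x1 in tiles(img_left.shape[:2], tile, overlap):
        crop_l = img_left[y0:y1, x0:x1]
        crop_r = img_right[y0:y1, x0:x1]      # assumes coarse pre-alignment
        kpts_l, kpts_r = match_fn(crop_l, crop_r)   # each an (N, 2) array
        if len(kpts_l) == 0:
            continue
        pts_left.append(kpts_l + [x0, y0])          # back to image coords
        pts_right.append(kpts_r + [x0, y0])
    if not pts_left:
        return np.empty((0, 2)), np.empty((0, 2))
    return np.vstack(pts_left), np.vstack(pts_right)
```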

18 pages, 73552 KiB  
Letter
Image Stitching Based on Nonrigid Warping for Urban Scene
by Lixia Deng, Xiuxiao Yuan, Cailong Deng, Jun Chen and Yang Cai
Sensors 2020, 20(24), 7050; https://doi.org/10.3390/s20247050 - 9 Dec 2020
Cited by 6 | Viewed by 3197
Abstract
Image stitching based on a global alignment model is widely used in computer vision. However, the resulting stitched image may look blurry or ghosted due to parallax. To solve this problem, we propose a parallax-tolerant image stitching method based on nonrigid warping in this paper. Given a group of putative feature correspondences between overlapping images, we first use semiparametric function fitting, which introduces a motion coherence constraint, to remove outliers. Then, the input images are warped according to a nonrigid warp model based on Gaussian radial basis functions. The nonrigid warping is a kind of elastic deformation that is flexible and smooth enough to eliminate moderate parallax errors, which leads to high-precision alignment in the overlapped region. For the nonoverlapping region, we use a rigid similarity model to reduce distortion. Through an effective transition, the nonrigid warping of the overlapped region and the rigid warping of the nonoverlapping region are used jointly. Our method can obtain more accurate local alignment while maintaining the overall shape of the image. Experimental results on several challenging data sets of urban scenes show that the proposed approach is better than state-of-the-art approaches in both qualitative and quantitative indicators.
(This article belongs to the Special Issue Remote Sensing Big Data for Improving the Urban Environment)
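As a rough illustration of the warping model described above, the following sketch fits a Gaussian radial basis function warp (with an affine part) to matched control points and maps arbitrary pixel coordinates. It is not the paper's code; the bandwidth `sigma` and regularization `lam` are illustrative choices.

```python
# Fit a Gaussian RBF warp from matched control points: solve a small linear
# system for RBF weights plus an affine term, then map arbitrary coordinates.
import numpy as np

def fit_rbf_warp(src, dst, sigma=50.0, lam=1e-3):
    """src, dst: (N, 2) matched points. Returns a function mapping (M, 2) pts."""
    n = len(src)
    d2 = np.sum((src[:, None, :] - src[None, :, :]) ** 2, axis=-1)
    K = np.exp(-d2 / (2.0 * sigma ** 2)) + lam * np.eye(n)   # kernel matrix
    A = np.hstack([src, np.ones((n, 1))])                    # affine part
    # Block system: [K A; A^T 0] [w; a] = [dst; 0]
    top = np.hstack([K, A])
    bot = np.hstack([A.T, np.zeros((3, 3))])
    sol = np.linalg.solve(np.vstack([top, bot]),
                          np.vstack([dst, np.zeros((3, 2))]))
    w, a = sol[:n], sol[n:]

    def warp(pts):
        d2p = np.sum((pts[:, None, :] - src[None, :, :]) ** 2, axis=-1)
        Kp = np.exp(-d2p / (2.0 * sigma ** 2))
        return Kp @ w + np.hstack([pts, np.ones((len(pts), 1))]) @ a
    return warp

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    src = rng.uniform(0, 500, size=(30, 2))
    dst = src + rng.normal(0, 2, size=(30, 2)) + [5, -3]   # small parallax
    warp = fit_rbf_warp(src, dst)
    print(np.abs(warp(src) - dst).mean())   # small residual on control points
```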

17 pages, 4290 KiB  
Article
Detecting Matching Blunders of Multi-Source Remote Sensing Images via Graph Theory
by Cailong Deng, Xiuxiao Yuan, Lixia Deng and Jun Chen
Sensors 2020, 20(13), 3712; https://doi.org/10.3390/s20133712 - 2 Jul 2020
Cited by 2 | Viewed by 2687
Abstract
Large radiometric and geometric distortion in multi-source images leads to fewer matching points with high matching blunder ratios, and the global geometric relationship models between multi-sensor images are often inexplicit. Thus, traditional matching blunder detection methods cannot work effectively. To address this problem, we propose two matching blunder detection methods based on graph theory. The proposed methods can build statistically significant clusters even with few matching points and high matching blunder ratios, and they use local geometric similarity constraints to detect matching blunders when the global geometric relationship is not explicit. The first method (named the complete graph-based method) uses clusters constructed from matched triangles in complete graphs to encode the local geometric similarity of images, and it can detect matching blunders effectively without considering the global geometric relationship. The second method uses a triangulated irregular network (TIN) graph to approximate the complete graph and reduce the computational complexity of the first method; we name this the TIN graph-based method. Experiments show that the two graph-based methods outperform the classical random sample consensus (RANSAC)-based method in recognition rate, false rate, number of remaining matching point pairs, dispersion, and positional accuracy on simulated and real data (image pairs from Gaofen-1, Gaofen-1 near-infrared, Gaofen-2, panchromatic Landsat, Ziyuan-3, Jilin-1, and unmanned aerial vehicle images). Notably, in most cases, the mean false rates of RANSAC, the complete graph-based method, and the TIN graph-based method in the simulated data experiments are 0.50, 0.26 and 0.14, respectively. In addition, the mean positional accuracies (RMSE in pixels) of the three methods are 2.6, 1.4 and 1.5 in the real data experiments, respectively. Furthermore, when the matching blunder ratio is no higher than 50%, the computation time of the TIN graph-based method is nearly equal to that of the RANSAC-based method and roughly 2 to 40 times less than that of the complete graph-based method.
(This article belongs to the Special Issue Remote Sensor Based Geoscience Applications)
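A compact sketch of TIN-based local geometric similarity follows. It is not the authors' exact criterion: it builds a Delaunay TIN on the left-image points, compares the interior angles of each triangle with those of the corresponding right-image triangle, and flags points that mostly appear in dissimilar triangles; the angle threshold is illustrative.

```python
# TIN-based blunder flagging by local triangle-shape agreement.
import numpy as np
from scipy.spatial import Delaunay

def triangle_angles(p):
    """Interior angles (radians) of a triangle given as a (3, 2) array."""
    a, b, c = p
    v = [b - a, c - b, a - c]
    ang = []
    for i in range(3):
        u, w = -v[i - 1], v[i]
        cosang = np.dot(u, w) / (np.linalg.norm(u) * np.linalg.norm(w) + 1e-12)
        ang.append(np.arccos(np.clip(cosang, -1.0, 1.0)))
    return np.array(ang)

def flag_blunders(left, right, max_angle_diff=np.deg2rad(15)):
    """left, right: (N, 2) matched points. Returns a boolean blunder mask."""
    tri = Delaunay(left)
    good_votes = np.zeros(len(left))
    votes = np.zeros(len(left))
    for simplex in tri.simplices:               # triangle vertex indices
        diff = np.abs(np.sort(triangle_angles(left[simplex]))
                      - np.sort(triangle_angles(right[simplex])))
        ok = np.all(diff < max_angle_diff)      # locally similar shape?
        votes[simplex] += 1
        good_votes[simplex] += ok
    return good_votes < 0.5 * votes             # mostly in bad triangles
```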

20 pages, 3440 KiB  
Article
Dense Image-Matching via Optical Flow Field Estimation and Fast-Guided Filter Refinement
by Wei Yuan, Xiuxiao Yuan, Shu Xu, Jianya Gong and Ryosuke Shibasaki
Remote Sens. 2019, 11(20), 2410; https://doi.org/10.3390/rs11202410 - 17 Oct 2019
Cited by 15 | Viewed by 4496
Abstract
The development of an efficient and robust method for dense image-matching has been a technical challenge due to high variations in illumination and ground features of aerial images of large areas. In this paper, we propose a method for the dense matching of aerial images using an optical flow field and a fast guided filter. The proposed method uses a coarse-to-fine matching strategy for a pixel-wise correspondence search across stereo image pairs. The pyramid Lucas–Kanade (L–K) method is first used to generate a sparse optical flow field within the stereo image pairs, and an adjusted control lattice is then used to derive the multi-level B-spline interpolating function for estimating the dense optical flow field. The dense correspondence is subsequently refined through a combination of a novel cross-region-based voting process and fast guided filtering. The performance of the proposed method was evaluated on three criteria, namely, matching accuracy, matching success rate, and matching efficiency. The experiments were performed on sets of unmanned aerial vehicle (UAV) images and aerial digital mapping camera (DMC) images. The results showed that the proposed method achieved a root mean square error (RMSE) of the reprojection errors better than ±0.5 pixels in image space and a height accuracy within ±2.5 GSD (ground sampling distance) on the ground. The method was further compared with the state-of-the-art commercial software SURE and confirmed to deliver more complete matches for images with poorly textured areas: the matching success rate of the proposed method is higher than 97% versus 96% for SURE, and the matching efficiency is 47% higher. This demonstrates the superior applicability of the proposed method to dense matching of aerial images with poorly textured regions.
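The coarse-to-fine pipeline described above can be sketched with off-the-shelf tools. This is only an approximation under stated assumptions: OpenCV with the opencv-contrib ximgproc module provides the guided filter, and SciPy's griddata stands in for the paper's multi-level B-spline interpolation and cross-region voting.

```python
# Sparse pyramid Lucas-Kanade flow -> interpolated dense flow field ->
# edge-preserving refinement with a guided filter.
import cv2
import numpy as np
from scipy.interpolate import griddata

def dense_flow_sketch(img1, img2):
    """img1, img2: uint8 grayscale image pair of equal size."""
    p1 = cv2.goodFeaturesToTrack(img1, maxCorners=2000,
                                 qualityLevel=0.01, minDistance=7)
    p2, status, _ = cv2.calcOpticalFlowPyrLK(img1, img2, p1, None)
    ok = status.ravel() == 1
    p1, p2 = p1.reshape(-1, 2)[ok], p2.reshape(-1, 2)[ok]

    h, w = img1.shape
    gy, gx = np.mgrid[0:h, 0:w]
    flow = np.zeros((h, w, 2), np.float32)
    for c in range(2):                      # interpolate dx and dy separately
        flow[..., c] = griddata(p1, (p2 - p1)[:, c], (gx, gy),
                                method="linear",
                                fill_value=0).astype(np.float32)

    # Edge-preserving refinement guided by the image (opencv-contrib).
    for c in range(2):
        flow[..., c] = cv2.ximgproc.guidedFilter(img1, flow[..., c], 8, 1e-2)
    return flow
```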

19 pages, 7789 KiB  
Article
Matching Multi-Sensor Remote Sensing Images via an Affinity Tensor
by Shiyu Chen, Xiuxiao Yuan, Wei Yuan, Jiqiang Niu, Feng Xu and Yong Zhang
Remote Sens. 2018, 10(7), 1104; https://doi.org/10.3390/rs10071104 - 11 Jul 2018
Cited by 10 | Viewed by 6007
Abstract
Matching multi-sensor remote sensing images is still a challenging task due to textural changes and non-linear intensity differences. In this paper, a novel matching method is proposed for multi-sensor remote sensing images. To establish feature correspondences, an affinity tensor is used to integrate geometric and radiometric information. The matching process consists of three steps. First, features from accelerated segment test (FAST) are extracted from both the source and target images, and two complete graphs are constructed with their nodes representing these features. Then, the geometric and radiometric similarities of the feature points are represented by a third-order affinity tensor, and the initial feature correspondences are established by tensor power iteration. Finally, a tensor-based mismatch detection process is conducted to purify the initial matched points. The robustness and capability of the proposed method are tested on a variety of remote sensing images such as Ziyuan-3 backward, Ziyuan-3 nadir, Gaofen-1, Gaofen-2, unmanned aerial vehicle platform, and Jilin-1 images. The experiments show that the average matching recall is greater than 0.5, which outperforms state-of-the-art multi-sensor image-matching algorithms such as SIFT, SURF, NG-SIFT, OR-SIFT and GOM-SIFT.
(This article belongs to the Special Issue Multisensor Data Fusion in Remote Sensing)
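A heavily simplified sketch of the tensor power iteration step is given below. It is not the paper's formulation: triples of candidate correspondences are sampled at random, each triple is scored by triangle-angle agreement as a stand-in for the combined geometric and radiometric affinity, and the candidate score vector is refined by power iteration over this sparse third-order tensor.

```python
# Score candidate correspondences by power iteration on a sampled
# third-order affinity tensor.
import numpy as np

def tensor_power_iteration(src_pts, dst_pts, candidates,
                           n_triples=20000, iters=30, seed=0):
    """candidates: list of (i, j) index pairs, src_pts[i] <-> dst_pts[j]."""
    rng = np.random.default_rng(seed)
    m = len(candidates)
    triples = rng.integers(0, m, size=(n_triples, 3))
    triples = triples[(triples[:, 0] != triples[:, 1])
                      & (triples[:, 1] != triples[:, 2])
                      & (triples[:, 0] != triples[:, 2])]

    def angles(p):                                         # p: (T, 3, 2)
        e = p - np.roll(p, 1, axis=1)                      # triangle edges
        n = e / (np.linalg.norm(e, axis=2, keepdims=True) + 1e-12)
        return np.arccos(np.clip(
            -np.sum(n * np.roll(n, -1, axis=1), axis=2), -1.0, 1.0))

    cand = np.asarray(candidates)
    a_src = angles(src_pts[cand[triples, 0]])              # (T, 3) angles
    a_dst = angles(dst_pts[cand[triples, 1]])
    affinity = np.exp(-np.abs(a_src - a_dst).sum(axis=1))  # triple scores

    v = np.ones(m) / np.sqrt(m)
    for _ in range(iters):
        new_v = np.zeros(m)
        for a, b, c in ((0, 1, 2), (1, 2, 0), (2, 0, 1)):  # symmetrize
            np.add.at(new_v, triples[:, a],
                      affinity * v[triples[:, b]] * v[triples[:, c]])
        v = new_v / (np.linalg.norm(new_v) + 1e-12)
    return v     # higher score = more consistent candidate correspondence
```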

19 pages, 5624 KiB  
Article
Automatic Power Line Inspection Using UAV Images
by Yong Zhang, Xiuxiao Yuan, Wenzhuo Li and Shiyu Chen
Remote Sens. 2017, 9(8), 824; https://doi.org/10.3390/rs9080824 - 10 Aug 2017
Cited by 165 | Viewed by 14029
Abstract
Power line inspection ensures the safe operation of a power transmission grid. Using unmanned aerial vehicle (UAV) images of power line corridors is an effective way to carry out these vital inspections. In this paper, we propose an automatic inspection method for power lines using UAV images. This method, known as the power line automatic measurement method based on epipolar constraints (PLAMEC), acquires the spatial positions of the power lines. Then, the semi-patch matching based on epipolar constraints (SPMEC) dense matching method is applied to automatically extract dense point clouds within the power line corridor. Obstacles can then be detected automatically by calculating the spatial distance between a power line and the point cloud representing the ground. Experimental results show that PLAMEC measures power lines automatically and effectively, with a measurement accuracy consistent with that of manual stereo measurements. The height root mean square (RMS) error of the point cloud was 0.233 m, and the RMS error of the power line was 0.205 m. In addition, we verified the detected obstacles in the field and measured the distance between the canopy and the power line using a laser range finder. The results show that the difference between these two distances was within ±0.5 m.
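The final obstacle check, measuring the clearance between the reconstructed power line and the corridor point cloud, reduces to point-to-segment distances. The sketch below is illustrative only; the clearance threshold and the synthetic data are assumptions, not values from the paper.

```python
# Flag point-cloud points that violate a safety clearance to the power line.
import numpy as np

def point_to_segment_distance(pts, a, b):
    """Distance from points (N, 3) to the segment a-b (each shape (3,))."""
    ab = b - a
    t = np.clip((pts - a) @ ab / (ab @ ab + 1e-12), 0.0, 1.0)
    closest = a + t[:, None] * ab
    return np.linalg.norm(pts - closest, axis=1)

def find_obstacles(cloud, powerline_vertices, clearance=5.0):
    """cloud: (N, 3) points; powerline_vertices: (M, 3) polyline; meters."""
    dmin = np.full(len(cloud), np.inf)
    for a, b in zip(powerline_vertices[:-1], powerline_vertices[1:]):
        dmin = np.minimum(dmin, point_to_segment_distance(cloud, a, b))
    return np.flatnonzero(dmin < clearance), dmin

if __name__ == "__main__":
    line = np.array([[0, 0, 20.0], [50, 0, 17.0], [100, 0, 20.0]])  # sagging
    cloud = np.column_stack([np.random.uniform(0, 100, 1000),
                             np.random.uniform(-10, 10, 1000),
                             np.random.uniform(0, 18, 1000)])       # canopy
    idx, dmin = find_obstacles(cloud, line, clearance=5.0)
    print(f"{len(idx)} points violate the 5 m clearance")
```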

16 pages, 7226 KiB  
Article
UAV Low Altitude Photogrammetry for Power Line Inspection
by Yong Zhang, Xiuxiao Yuan, Yi Fang and Shiyu Chen
ISPRS Int. J. Geo-Inf. 2017, 6(1), 14; https://doi.org/10.3390/ijgi6010014 - 12 Jan 2017
Cited by 79 | Viewed by 9721
Abstract
When the distance between an obstacle and a power line is less than the discharge distance, a discharge arc can be generated, resulting in the interruption of power supplies. Therefore, regular safety inspections are necessary to ensure the safe operation of power grids. Tall vegetation and buildings are the key factors threatening the safe operation of extra-high-voltage transmission lines within a power line corridor. Manual or light detection and ranging (LiDAR)-based inspections are time consuming and expensive. To make safety inspections more efficient and flexible, a low-altitude unmanned aerial vehicle (UAV) remote-sensing platform, equipped with an optical digital camera, was used to inspect power line corridors. We propose a semi-patch matching algorithm based on epipolar constraints, using both the correlation coefficient (CC) and the shape of its curve, to extract three-dimensional (3D) point clouds of the power line corridor. We use inter-strip stereo image pairs to improve power line measurement accuracy by making the power line direction approximately perpendicular to the epipolar line. The distance between the power lines and the 3D point cloud is taken as the criterion for automatically locating obstacles within the power line corridor. Experimental results show that the proposed method is a reliable, cost-effective, and practical way to carry out power line inspection and can locate obstacles within the power line corridor with an accuracy better than ±0.5 m.
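The matching criterion mentioned above, the correlation coefficient and the shape of its curve along the epipolar line, can be illustrated for the rectified case, where the epipolar line is an image row. The window size and acceptance thresholds below are illustrative assumptions, not the paper's parameters.

```python
# Correlate a left-image patch along the corresponding right-image scanline,
# then accept the match only if the CC peak is high and unambiguous.
import numpy as np

def ncc_along_scanline(left, right, x, y, half=7):
    """Return the CC curve for the left patch at (x, y) along right row y."""
    patch = left[y - half:y + half + 1, x - half:x + half + 1].astype(float)
    patch = (patch - patch.mean()) / (patch.std() + 1e-12)
    curve = []
    for xc in range(half, right.shape[1] - half):
        cand = right[y - half:y + half + 1,
                     xc - half:xc + half + 1].astype(float)
        cand = (cand - cand.mean()) / (cand.std() + 1e-12)
        curve.append(float((patch * cand).mean()))
    return np.array(curve), half   # curve[i] is the CC at column i + half

def accept_match(curve, offset, min_cc=0.8, min_peak_ratio=1.2, exclude=3):
    """Accept only a high, unambiguous peak (simple curve-shape check)."""
    best = int(np.argmax(curve))
    masked = curve.copy()
    masked[max(0, best - exclude):best + exclude + 1] = -np.inf
    second = masked.max()
    if curve[best] < min_cc or curve[best] < min_peak_ratio * max(second, 1e-6):
        return None                # low or ambiguous peak: reject
    return best + offset           # matched column in the right image
```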
