Article

UAV Image Stitching Based on Optimal Seam and Half-Projective Warp

1 School of Automation, China University of Geosciences, Wuhan 430074, China
2 Hubei Key Laboratory of Advanced Control and Intelligent Automation for Complex Systems, Wuhan 430074, China
3 Engineering Research Center of Intelligent Technology for Geo-Exploration, Ministry of Education, Wuhan 430074, China
4 Electronic Information School, Wuhan University, Wuhan 430072, China
5 School of Mechanical Engineering and Electronic Information, China University of Geosciences, Wuhan 430074, China
6 Faculty of Engineering, China University of Geosciences, Wuhan 430074, China
* Author to whom correspondence should be addressed.
Remote Sens. 2022, 14(5), 1068; https://doi.org/10.3390/rs14051068
Submission received: 14 January 2022 / Revised: 18 February 2022 / Accepted: 19 February 2022 / Published: 22 February 2022
(This article belongs to the Special Issue Advances in Hyperspectral Remote Sensing: Methods and Applications)

Abstract

This paper introduces an Unmanned Aerial Vehicle (UAV) image stitching method, based on the optimal seam algorithm and half-projective warp, that effectively retains the original information of the images and achieves an ideal stitching effect. Existing seam stitching algorithms can eliminate ghosting and blurring in the stitched images, but the deformation and angle distortion caused by image registration remain in the stitching results. To overcome this limitation, we propose a stitching strategy based on the optimal seam and half-projective warp. Firstly, we define a new difference matrix over the overlapping region of the aligned images, combining color, structure and line difference information. Then, we constrain the search range of the seam by the minimum energy and propose a seam search algorithm based on the global minimum energy to obtain the seam. Finally, combining the seam position with the half-projective warp, the shape of the stitched image is rectified so that more regions keep their original shape. Experimental results on several groups of UAV images show that our method achieves a superior stitching effect.

1. Introduction

With the development of UAV remote sensing technology, it has been extensively used in urban building planning [1], resource and environment monitoring [2,3] and other fields. UAV remote sensing features high image resolution, low cost and strong flexibility, and is suitable for collecting low-altitude, high-resolution remote sensing images [4]. In addition to common RGB images, UAV remote sensing can also acquire hyperspectral images, which provide more spectral information than RGB images [5]. However, due to the limitation of flight altitude, it is difficult for UAV remote sensing to obtain large-area observation images [6]. Therefore, it is necessary to stitch the acquired remote sensing images to improve the information acquisition ability of remote sensing [7,8].
Image stitching can usually be classified into two main categories. One is the alignment method, which achieves the goal of stitching by accurately aligning the images. The other is seam cutting, which cuts the image by finding a seam with the smallest difference. Advanced image stitching technology can solve the stitching task in most scenes.
The main task of the alignment method is to establish an accurate alignment model [9,10]. Global homography is a common alignment model, and its representative algorithm is Autostitch [11]. This alignment model establishes the corresponding relationship between images through scale-invariant feature transform (SIFT) [12] feature points to realize image alignment. In order to obtain more accurate alignment ability, some studies use multiple homography [13,14]. As-projective-as-possible warp (APAP) [15] proposes an alignment model based on mesh deformation, which greatly enhances the alignment ability. The alignment ability can also be improved by removing incorrect feature matching [16,17,18,19,20,21]. In addition, combining line features and point features also has an effect on improving the alignment ability [22,23]. Insufficient alignment ability often leads to ghosting and blurring on the stitched image.
The optimal seam algorithm is an effective solution to eliminate ghosting. The optimal seam refers to finding a seam with the smallest difference in the overlapping area of two images, which should also meet human visual perception and avoid passing through structural objects as much as possible. The whole algorithm is divided into two steps: defining the cost of differences and searching the seams. Defining the precise difference cost can restrict the search range of seams and indirectly improve the effectiveness of seams [24,25,26]. Most algorithms use the combination of color difference and gradient difference [27].
The stitched image may suffer from lighting inconsistency and other issues. Appropriate fusion technology can reduce exposure differences [28]. A variety of image fusion methods have been used for image stitching, such as mobile phone panorama [29] and UAV image stitching [30]. In addition, many studies focus on obtaining stitched images that are more in line with human visual perception. Some researchers improve the visual effect by reducing the angle distortion [31] and adjusting the rotation angle of the image [32]. In addition, distortion caused by image alignment can also be reduced by adding similarity constraints [33].
In this paper, we propose an image stitching strategy based on the optimal seam algorithm and half-projective warp to solve the image stitching task of UAV with parallax. Firstly, we use global homography to obtain the aligned image. Then, a new difference matrix is defined according to the overlapping part of the aligned image, and a seam search algorithm based on global energy minimization is designed. Finally, according to the position of the seam, we divide the overlapping area of the aligned image, and combine the half-projective warp to obtain a more aesthetic stitching effect.
The contributions of this work involve the following three aspects:
  • We propose a new optimal seam algorithm and define a new difference matrix that combines color, structure and line differences, which better reflects the degree of difference between overlapping regions.
  • We use the minimum energy to constrain the difference matrix to further limit the search range of the seam, and design a seam search algorithm based on the minimum global energy, which can improve the probability of the seam avoiding structural objects.
  • According to the position of the seam, we use half-projective warp to correct the image shape, so that more areas maintain the original shape and the stitching effect is improved.

2. Related Works

An overview of image stitching and prior work can be found in [34]. Autostitch [11] is a classical algorithm using the global homography alignment model. It describes the correspondence between two images by detecting feature points and computing a homography. Autostitch requires the input images to be parallax-free; otherwise, ghosting and blurring occur due to insufficient alignment ability.
A lot of work has been done to obtain more accurate alignment methods. Dual-homography warping (DHW) [13] divides the scene into two planes and aligns them with two homography matrices. Lin et al. [14] used a smoothly varying affine transformation to improve the alignment ability, which can overcome some slight ghosting problems. APAP [15] divides the image into dense grids, with each grid corresponding to a homography matrix, which greatly enhances the image alignment ability. Robust elastic warping (REW) [17] removes mismatched points with a feature refinement model based on Bayesian theory and designs a robust deformation function to increase the alignment ability. Yuan et al. [22] gave a set of line segment feature detection and matching methods, combined with point features, to align the images, and achieved good results. However, due to ground undulation and camera movement, UAV images have large parallax: the viewing angle or distance of the same object differs between adjacent UAV images, so alignment methods cannot effectively solve the parallax stitching problem of UAV images.
Seam-Driven [27] finds the best seam from limited alignment hypotheses according to a predefined seam quality measure. Liao et al. [35] proposed a new iterative seam estimation method to improve the visual quality of the seam. Fast and robust seam estimation (FARSE) [25] searches for seams by defining the gray-weighted distance and the differential gradient domain as the difference cost. Li et al. [36] designed a two-image stitching method based on foreground segmentation. Eden et al. [37] proposed a two-step optimal seam algorithm that can stitch images smoothly even under scene motion and alignment error. In most existing seam algorithms, the seam search is realized by combining various optimization algorithms [38,39,40] and is rarely designed according to the defined difference cost.
Shape-preserving half-projective warps (SPHP) [31] improve the appearance of non-overlapping regions by transitioning from a projective transformation to a similarity transformation. However, SPHP cannot solve the parallax problem in overlapping regions. Adaptive as-natural-as-possible (AANAP) [32] warping reduces perspective distortion in non-overlapping regions by linearizing the homography and slowly changing it to a global similarity, which improves the natural appearance of the stitching results. Chen et al. [33] used line alignment constraints to determine the angle of the transformation matrix, and local and global similarity constraints to preserve the original shape of the image. However, these algorithms for improving visual quality are usually built on alignment methods and are not combined with optimal seam algorithms.

3. Materials and Methods

In this section, we detail our proposed algorithm, including the optimal seam algorithm and the half-projective warp. Figure 1 shows the overall flow of the algorithm. Firstly, we use global homography to register image 1 and image 2. Then, we find an optimal seam and cut the two registered images accordingly. Next, we register the original image 1 and the cut image 2 using the half-projective warp. Finally, we stitch the two images obtained by the half-projective warp along the optimal seam.

3.1. Optimal Seam Algorithm

In the registration phase, we first use global homography to construct a warp from the reference image to the target image. Given the input images $I$ and $I'$ along with the corresponding speeded-up robust features (SURF) [41] feature matching points $\mathbf{x} = [x, y]^T$ and $\mathbf{x}' = [x', y']^T$, the linear transformation between the homogeneous coordinates of the two images can be represented as
$\tilde{\mathbf{x}}' = H\tilde{\mathbf{x}}$, (1)
where $\tilde{\mathbf{x}}$ is $\mathbf{x}$ in homogeneous coordinates and $H \in \mathbb{R}^{3 \times 3}$ defines the homography. The rows of $H$ are given by $\mathbf{h}_1 = (h_1, h_2, h_3)$, $\mathbf{h}_2 = (h_4, h_5, h_6)$, $\mathbf{h}_3 = (h_7, h_8, 1)$. The mapping between the two images can be written as
$x' = \dfrac{h_1 x + h_2 y + h_3}{h_7 x + h_8 y + 1}$, (2)
$y' = \dfrac{h_4 x + h_5 y + h_6}{h_7 x + h_8 y + 1}$. (3)
Then, we obtain the resultant images by warping the input images with H and place them on the same reference plane.
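To make this step concrete, the following Python sketch outlines the registration with OpenCV. It is a minimal sketch, not the authors' implementation: the SURF threshold, the 0.7 ratio test, the RANSAC tolerance and the output canvas size are our own assumptions, and SURF itself requires the opencv-contrib build.

```python
import cv2
import numpy as np

def register_pair(img_ref, img_tgt):
    """Estimate a global homography from SURF matches and warp the target
    image onto the reference plane (illustrative sketch)."""
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)  # needs opencv-contrib
    k1, d1 = surf.detectAndCompute(cv2.cvtColor(img_ref, cv2.COLOR_BGR2GRAY), None)
    k2, d2 = surf.detectAndCompute(cv2.cvtColor(img_tgt, cv2.COLOR_BGR2GRAY), None)

    # Lowe's ratio test on 2-nearest-neighbor matches, then robust fitting.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(d1, d2, k=2)
            if m.distance < 0.7 * n.distance]
    src = np.float32([k2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)  # target -> reference

    # Place both images on the same reference plane; the canvas size here
    # is an arbitrary choice for illustration.
    h, w = img_ref.shape[:2]
    warped_tgt = cv2.warpPerspective(img_tgt, H, (2 * w, 2 * h))
    return H, warped_tgt
```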
The basis of the optimal seam algorithm is to estimate a seam with the smallest difference from the overlapping regions of two registered images. Then, the two registered images are segmented and reorganized according to the seam. Our proposed method is divided into two steps:
(1)
Construct the difference matrix of the overlapping regions of two registered images;
(2)
Search for the optimal seam on the difference matrix.
Firstly, we extract the overlapping regions of the two registered images, denoted as $\Omega$ and $\Omega'$. Then, we define a difference matrix reflecting the similarity between the overlapping regions $\Omega$ and $\Omega'$. Previous algorithms usually use the color difference and structure difference as the criteria for judging similarity, and experiments show that this combination of differences is effective. In order to better cater to the human eye's perception of color, we compute the color difference in the LAB color space as follows.
$\bar{r} = \dfrac{\Omega_R + \Omega'_R}{2}$, (4)
$\Delta L = \left(2 + \dfrac{\bar{r}}{256}\right) \times (\Omega_R - \Omega'_R)^2$, (5)
$\Delta A = 4 \times (\Omega_G - \Omega'_G)^2$, (6)
$\Delta B = \left(2 + \dfrac{255 - \bar{r}}{256}\right) \times (\Omega_B - \Omega'_B)^2$, (7)
$E_{color} = \Delta L + \Delta A + \Delta B$, (8)
where $E_{color}$ is the color difference, $\Omega_R$, $\Omega_G$, $\Omega_B$ are the RGB (red, green, blue) channel values of $\Omega$, and $\Omega'_R$, $\Omega'_G$, $\Omega'_B$ are the RGB channel values of $\Omega'$.
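A minimal NumPy sketch of this color cost, assuming the two overlaps are float arrays in RGB channel order, could read:

```python
import numpy as np

def color_difference(ov_ref, ov_tgt):
    """Per-pixel color cost of Equations (4)-(8); inputs are (H, W, 3)
    float RGB arrays of the two overlapping regions."""
    r_mean = (ov_ref[..., 0] + ov_tgt[..., 0]) / 2.0   # mean red channel
    d = (ov_ref - ov_tgt) ** 2
    dL = (2.0 + r_mean / 256.0) * d[..., 0]
    dA = 4.0 * d[..., 1]
    dB = (2.0 + (255.0 - r_mean) / 256.0) * d[..., 2]
    return dL + dA + dB                                 # E_color, Equation (8)
```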
However, in our experiments on UAV images, the errors in the results of the seam algorithm often concentrate on structural objects. Due to the large parallax of UAV images, it is difficult to register structural objects accurately; if the seam passes through these structural objects, visual deviations are likely to appear on them. We observe that ideal seam paths often follow roads or grasslands, which generally belong to the low-frequency part of the image. Therefore, we use the high-frequency parts of the overlapping regions to construct the structural difference, so as to reduce the structural difference in the low-frequency part. We apply Gaussian filtering with parameter $\sigma_1$ to $\Omega$ and $\Omega'$ to obtain $\Omega_1$ and $\Omega_2$. Then, we use difference-of-Gaussian edge detection to calculate the structural difference between $\Omega_1$ and $\Omega_2$:
$E_{\Omega_i} = \dfrac{1}{2\pi}\left(\dfrac{1}{\sigma_2^2} e^{-(x_i^2 + y_i^2)/2\sigma_2^2} - \dfrac{1}{\sigma_3^2} e^{-(x_i^2 + y_i^2)/2\sigma_3^2}\right), \quad i = 1, 2$, (9)
$E_{structure} = E_{\Omega_1} - E_{\Omega_2}$, (10)
where $E_{structure}$ is the structural difference, and $\sigma_2$ and $\sigma_3$ are the difference parameters, whose specific values are given in the experimental section.
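The structural term can be sketched with SciPy as below. Approximating Equation (9) by differencing two Gaussian-filtered copies of the pre-smoothed overlap, and taking the absolute value so the cost is non-negative, are our assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def structure_difference(ov_ref_gray, ov_tgt_gray, s1=0.4, s2=0.6, s3=0.8):
    """Difference-of-Gaussians edge response on the pre-smoothed grayscale
    overlaps, following Equations (9)-(10); sigma values from Section 4."""
    def dog(img):
        smoothed = gaussian_filter(img, s1)             # Omega_1 / Omega_2
        return gaussian_filter(smoothed, s2) - gaussian_filter(smoothed, s3)
    return np.abs(dog(ov_ref_gray) - dog(ov_tgt_gray))  # E_structure (abs is our choice)
```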
In addition, in order to make the difference on structural objects more obvious, we also introduce a line difference. By detecting the line segment information of objects, the seam can avoid passing through objects with straight-line edges. Specifically, we use the line segment detector (LSD) [42] to obtain the line information of the overlapping regions $\Omega$ and $\Omega'$ and subtract the two line maps to obtain the line difference, denoted as $E_{line}$.
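One plausible way to obtain $E_{line}$, sketched below, is to rasterize the LSD detections of each overlap into a binary line map and take the absolute difference of the two maps; the rasterization step is our own choice, and cv2.createLineSegmentDetector is missing from some OpenCV builds.

```python
import cv2
import numpy as np

def line_difference(ov_ref_gray, ov_tgt_gray):
    """Binary line maps from LSD detections, differenced to form E_line
    (a sketch; availability of LSD varies across OpenCV versions)."""
    lsd = cv2.createLineSegmentDetector()

    def line_map(gray):
        canvas = np.zeros(gray.shape, np.uint8)
        lines = lsd.detect(gray)[0]                     # (N, 1, 4) endpoints or None
        if lines is not None:
            for x1, y1, x2, y2 in lines.reshape(-1, 4):
                cv2.line(canvas, (int(x1), int(y1)), (int(x2), int(y2)), 255, 1)
        return canvas.astype(np.float32) / 255.0

    return np.abs(line_map(ov_ref_gray) - line_map(ov_tgt_gray))
```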
We add up the above three differences:
$E = E_{color} + E_{structure} + E_{line}$, (11)
where E is the difference matrix, which is a two-dimensional numerical matrix, as shown in Figure 2. The value of E represents the difference. The start point and end point of the seam are usually at the junction of the two registered images. If two pixels can be connected into an uninterrupted line on the two-dimensional matrix, they must be in the same eight-connected region.
We set a threshold e and limit the difference value of pixels on the seam to be less than e. Under the condition that the start point and the end point are in the same eight-connected region, e should be minimized as much as possible. We can sort all the difference values on E and quickly calculate the minimum threshold e by using the binary search algorithm. Under the condition of minimum threshold e, the eight-connected region where the start point and the end point are located is denoted as R, and the search region of the seam is limited to R, as shown in Figure 2. The calculation flow of R is shown in Algorithm 1.
Algorithm 1: Calculation of search region
   Input: difference matrix $E$, start point $p_1(x_1, y_1)$, end point $p_2(x_2, y_2)$
   Output: search region $R$
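One possible realization of Algorithm 1 is sketched below: a binary search over the sorted difference values of $E$, with SciPy's eight-connected component labeling deciding whether the start and end points are connected. The function signature and the use of scipy.ndimage.label are our assumptions.

```python
import numpy as np
from scipy.ndimage import label

def search_region(E, p1, p2):
    """Find the smallest threshold e such that p1 and p2 (row, col) lie in the
    same eight-connected region of {E <= e}; return that region as a mask."""
    eight = np.ones((3, 3), dtype=int)          # 8-connectivity structure
    values = np.unique(E)                       # sorted candidate thresholds
    lo, hi = 0, len(values) - 1
    best = None
    while lo <= hi:
        mid = (lo + hi) // 2
        labels, _ = label(E <= values[mid], structure=eight)
        if labels[p1] != 0 and labels[p1] == labels[p2]:
            best = labels == labels[p1]         # connected: try a smaller e
            hi = mid - 1
        else:
            lo = mid + 1                        # disconnected: raise e
    return best                                 # search region R
```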
We obtain the search region R through these numerical constraints, and each pixel in R carries a specific difference value. Next, we propose a seam search algorithm based on the minimum global difference. Beginning from the start point, we expand in the eight adjacent directions, update each pixel's value in R to the minimum cumulative difference from the start point, and take the updated pixels as new expansion points until the end point is reached. We then trace back from the end point to the start point along the pixels with the smallest cumulative difference to obtain our optimal seam. The simplified process is shown in Figure 3, and the position of the optimal seam in the difference matrix is shown in Figure 2, represented by a red line.
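This expansion is essentially a Dijkstra-style shortest-path search over the eight-connected pixels of $R$, accumulating the values of $E$; a compact sketch with our own function names follows.

```python
import heapq
import numpy as np

def optimal_seam(E, R, p1, p2):
    """Minimum-cumulative-difference path from p1 to p2 inside the boolean
    search region R, expanded in eight directions (Dijkstra-style sketch)."""
    dist = np.full(E.shape, np.inf)
    prev = {}
    dist[p1] = E[p1]
    heap = [(E[p1], p1)]
    moves = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
             (0, 1), (1, -1), (1, 0), (1, 1)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == p2:
            break
        if d > dist[r, c]:
            continue                            # stale heap entry
        for dr, dc in moves:
            nr, nc = r + dr, c + dc
            if 0 <= nr < E.shape[0] and 0 <= nc < E.shape[1] and R[nr, nc]:
                nd = d + E[nr, nc]
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(heap, (nd, (nr, nc)))
    # Backtrack from the end point along minimum-cost predecessors.
    path, node = [p2], p2
    while node != p1:
        node = prev[node]
        path.append(node)
    return path[::-1]                           # seam from p1 to p2
```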

3.2. Half-Projective Warps

The components of the stitched image obtained by the seam algorithm come from the registered images. However, when registering UAV images with large parallax, the non-overlapping region deforms and objects suffer unnatural shape distortion. These problems remain in the results obtained by the seam algorithm.
In order to solve this problem, we introduce half-projective warps. Homography matrix H usually corresponds to the projection transformation, resulting in tensile deformation of the object. Meanwhile, similarity transformation only changes the size and direction of the object, and maintains the original shape. Half-projective warp performs projection transformation in overlapping regions and performs similarity transformation in non-overlapping regions, and there is a smooth transition region between the two transformations.
First, we rotate the coordinate system $(x, y)$ to $(u, v)$ by the angle $\theta$:
$\theta = \operatorname{atan2}(h_8, h_7)$, (12)
$x = u\cos\theta - v\sin\theta$, (13)
$y = u\sin\theta + v\cos\theta$. (14)
Substituting Equations (13) and (14) into Equations (2) and (3), the new mapping can be written as
$H(u, v) = \begin{bmatrix} x' \\ y' \end{bmatrix} = \begin{bmatrix} \dfrac{\hat{h}_2}{1 - cu}v + \hat{h}_1 u + \dfrac{\hat{h}_3}{1 - cu} \\ \dfrac{\hat{h}_5}{1 - cu}v + \hat{h}_4 u + \dfrac{\hat{h}_6}{1 - cu} \end{bmatrix}$, (15)
where $c = \sqrt{h_7^2 + h_8^2}$ and $\hat{h}_1, \ldots, \hat{h}_6$ are the new constant coefficients. Then, we divide $\mathbb{R}^2$ by the lines $u = u_1$ and $u = u_2$ into three spaces, where each space corresponds to a warp. For the whole space, the warping function is defined as
$w(u, v) = \begin{cases} H(u, v), & u \le u_1 \\ T(u, v), & u_1 < u < u_2 \\ S(u, v), & u_2 \le u \end{cases}$ (16)
$T(u, v) = \begin{bmatrix} f_1(u)v + f_2(u) \\ f_3(u)v + f_4(u) \end{bmatrix}$, (17)
$S(u, v) = \begin{bmatrix} -s_2 v + s_1 u + s_3 \\ s_1 v + s_2 u + s_4 \end{bmatrix}$, (18)
where $T(u, v)$ is a transition transformation whose $f_1, f_2, f_3, f_4$ are quadratic functions of $u$, and $S(u, v)$ is a similarity transformation with constant parameters $s_1, s_2, s_3, s_4$. When $u$ is constant, $H(u, v)$, $T(u, v)$ and $S(u, v)$ are linear functions of $v$, and $T(u, v)$ can be made to join $H(u, v)$ continuously at $u = u_1$ and $S(u, v)$ continuously at $u = u_2$. Therefore, we have sufficient linear constraints to find $f_1, f_2, f_3, f_4, s_1, s_2, s_3, s_4$.
Next, we give the specific calculation process. Requiring $w(u, v)$ and its first derivative to be continuous at $u = u_1$ and $u = u_2$, we obtain the following equations:
$f_1(u_1) = \dfrac{\hat{h}_2}{1 - cu_1}$, (19)
$f_1'(u_1) = \dfrac{c\hat{h}_2}{(1 - cu_1)^2}$, (20)
$f_1(u_2) = -s_2$, (21)
$f_1'(u_2) = 0$. (22)
These four linear constraints determine four parameters: $s_2$ and the three coefficients of the quadratic function $f_1$. Solving the above linear equations yields $f_1$ and $s_2$. Similarly, we can solve $f_3$ and $s_1$. Following the same strategy, we obtain the following equations:
$f_2(u_1) = \hat{h}_1 u_1 + \dfrac{\hat{h}_3}{1 - cu_1}$, (23)
$f_2'(u_1) = \hat{h}_1 + \dfrac{c\hat{h}_3}{(1 - cu_1)^2}$, (24)
$f_2(u_2) = s_1 u_2 + s_3$, (25)
$f_2'(u_2) = s_1$. (26)
Since we have solved s 1 and s 2 , we still have enough linear constraints to solve f 2 and s 3 . Similarly, we can solve f 4 and s 4 .
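For illustration, the linear system of Equations (19)-(22) can be assembled and solved numerically as below; the coefficient ordering, the function name and the sign convention of Equation (21) follow our reconstruction above.

```python
import numpy as np

def solve_f1_s2(h2_hat, c, u1, u2):
    """Solve Equations (19)-(22) for f1(u) = q2*u**2 + q1*u + q0 and the
    similarity parameter s2. Unknown vector: [q2, q1, q0, s2]."""
    A = np.array([
        [u1**2, u1, 1.0, 0.0],   # f1(u1)  = h2_hat / (1 - c*u1)
        [2*u1, 1.0, 0.0, 0.0],   # f1'(u1) = c*h2_hat / (1 - c*u1)**2
        [u2**2, u2, 1.0, 1.0],   # f1(u2)  = -s2  ->  f1(u2) + s2 = 0
        [2*u2, 1.0, 0.0, 0.0],   # f1'(u2) = 0
    ])
    b = np.array([
        h2_hat / (1 - c * u1),
        c * h2_hat / (1 - c * u1) ** 2,
        0.0,
        0.0,
    ])
    q2, q1, q0, s2 = np.linalg.solve(A, b)
    return (q2, q1, q0), s2
```

The remaining pairs ($f_3$, $s_1$), ($f_2$, $s_3$) and ($f_4$, $s_4$) follow from analogous 4 × 4 systems.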
Next, we describe how to determine the values of $u_1$ and $u_2$ from the position of the seam. We denote the regions retained after cutting as $R_1$ and $R_2$, and the removed regions as $R_1'$ and $R_2'$. We use $R_1$ and the original target image for the half-projective warp, and the overlapping region of the two images lies in $R_2'$. Since the position of the seam also changes under our warp, $R_2'$ will eventually be removed. We therefore only require that $R_1$ and $R_2$ undergo a transformation as close to a similarity as possible, which serves as the constraint for choosing $u_1$ and $u_2$. We use the deviation of the warping function $w(u, v)$ from the nearest similarity transformation in the Frobenius norm as the cost:
$C = \sum_{i=1}^{2} \min_{\alpha_i, \beta_i} \iint_{R_i} \left\| \begin{bmatrix} x_u - \alpha_i & x_v + \beta_i \\ y_u - \beta_i & y_v - \alpha_i \end{bmatrix} \right\|_F^2 \, dx \, dy$, (27)
where $C$ is a nonlinear function of $u_1$ and $u_2$, and the positions of $u_1$ and $u_2$ are determined by regularly sampling the parameter space $(u_1, u_2)$.
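A simple sketch of this sampling step is given below; the grid resolution and the assumed cost_fn callback, understood as a numerical evaluation of Equation (27), are our own choices.

```python
import itertools
import numpy as np

def pick_u1_u2(cost_fn, u_lo, u_hi, steps=20):
    """Regularly sample the (u1, u2) parameter space and keep the pair with
    the lowest cost C; cost_fn(u1, u2) evaluates Equation (27) numerically."""
    grid = np.linspace(u_lo, u_hi, steps)
    best, best_cost = None, np.inf
    for u1, u2 in itertools.product(grid, grid):
        if u1 >= u2:                 # the transition band requires u1 < u2
            continue
        cost = cost_fn(u1, u2)
        if cost < best_cost:
            best, best_cost = (u1, u2), cost
    return best
```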

4. Results

In this section, we introduce the experiments of our method on UAV images and compare it with existing alignment algorithms and optimal seam algorithms. The test images were taken outdoors by a Feima D200 UAV equipped with a SONY ILCE-600 camera and include villages, villas and construction sites, which basically reflects the characteristics of UAV images. To verify the effectiveness of our method, the experiments were set up in the following aspects: (1) comparison of our seam algorithm with state-of-the-art alignment algorithms; (2) comparison of our seam algorithm with other seam algorithms; (3) analysis of the image correction effect of the half-projective warp combined with the seam.
In particular, in the filtering steps used to define the structural difference, we set the Gaussian parameters $\sigma_1$, $\sigma_2$ and $\sigma_3$ to 0.4, 0.6 and 0.8, respectively. All experiments were implemented on a computer with a 2.90 GHz Intel Core i5-10400F CPU and 16 GB of RAM.

4.1. Visual Comparison

In this section, we compare several popular alignment-based stitching algorithms to verify the applicability of our algorithm to UAV image stitching. The comparison methods are Autostitch [11], APAP [15], AANAP [32] and REW [17], with code provided by their authors. These algorithms focus on stitching images through accurate alignment. We applied them to UAV images and compared them with our method. We tested three stitching cases in the villa area. There is a large translational movement between the test images in Figure 4, resulting in a large parallax. Autostitch, APAP, AANAP and REW all present ghosting and blurring, including on houses and cars. Some representative areas are indicated with red boxes. REW and APAP suffer the most serious artifacts; Autostitch and AANAP are also blurred to varying degrees in complex house structures. In contrast, our method finds a seam in the overlapping area of the images that avoids houses and cars and follows the flat road. The image is then segmented and recomposed along the seam, so each object comes from only one image, avoiding ghosting and blurring. Figure 5 shows a case of test images with rotation and translation. It can be seen from the areas indicated by the red boxes that Autostitch, APAP, AANAP and REW all have serious artifacts on the indicated houses. Because the car appears from different perspectives in the two input images, it also exhibits ghosting to varying degrees. In Figure 6, Autostitch and APAP still show some ghosting on the houses. AANAP mitigates ghosting on some houses, but reduces the alignment accuracy on some cars. Although REW eliminates ghosting to a certain extent, one house is misplaced. Large parallax prevents these algorithms from accurately aligning objects, producing varying degrees of blur and ghosting. The experiments show that our method solves these problems and is more suitable for UAV image stitching.

4.2. Seam Comparison

In this part, we show the results of different seam algorithms to prove the effectiveness of our method. Specifically, we compare fast and robust seam estimation (FARSE) [25], perceptual-based seam cutting (PSC) [43] and quality evaluation-based iterative seam estimation (QEISE) [35]. Figure 7 shows the results on four different test images. We indicate the detected seams in red and mark some houses and buildings with blue boxes. These experiments show that the seam algorithms can eliminate ghosting and blurring. The approximate path of the seam obtained by FARSE always passes through some structural objects. The seams obtained by PSC tend to appear at the boundary of the overlapping area, resulting in large misaligned regions. QEISE has similar problems, and its seam paths are too tortuous. In contrast, the seams detected by our algorithm are shorter, smoother and pass through flat areas. They effectively avoid unnecessary structural objects and better match human visual perception. The experiments prove that our seam algorithm outperforms the others and achieves a better stitching effect.
In addition, we quantitatively evaluate the quality of the seams. We use two different objective evaluation criteria to measure the quality of the seams. Firstly, we utilize the peak signal to noise ratio (PSNR) score to measure the similarity degree, and we judge the similarity degree of the seam on the reference image and the target image according to the score. The PSNR score is directly proportional to the similarity degree.
$Q_{PSNR} = 10 \times \log_{10}\dfrac{(2^n - 1)^2}{MSE}$, (28)
where $Q_{PSNR}$ is the PSNR score of the seam, $MSE$ is the mean square error between the corresponding seam pixels of the reference image and the target image, and $n$ is the number of bits per sample, taken as 8.
Then, we calculate the structural similarity (SSIM) score of each pixel of the seam on the reference image and the target image, and then define the quality of the seam as:
$Q_i = \dfrac{1 - \mathrm{SSIM}(p_i)}{2}$, (29)
$Q_{SSIM} = \dfrac{1}{N}\sum_{i=1}^{N} Q_i$, (30)
where Q S S I M is the quality score of the seam. p i represents the ith pixel on the seam, and Q i represents the quality score of the ith pixel. N is the sum of pixels of the seam. The SSIM score ranges from −1 to 1. The final quality score is inversely proportional to the seam quality.
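Both scores can be computed along a detected seam as sketched below, using scikit-image's SSIM with a full per-pixel map; the window settings and the function interface are our assumptions rather than details from the paper.

```python
import numpy as np
from skimage.metrics import structural_similarity

def seam_quality(img_ref, img_tgt, seam_pixels, n_bits=8):
    """Q_PSNR (Equation (28)) and Q_SSIM (Equations (29)-(30)) along a seam.
    seam_pixels is a list of (row, col) positions on two aligned grayscale
    images covering the same canvas."""
    rows, cols = zip(*seam_pixels)
    diff = img_ref[rows, cols].astype(np.float64) - img_tgt[rows, cols].astype(np.float64)
    mse = np.mean(diff ** 2)
    q_psnr = 10.0 * np.log10((2 ** n_bits - 1) ** 2 / mse)

    # Per-pixel SSIM map over the aligned pair, sampled along the seam.
    _, ssim_map = structural_similarity(img_ref, img_tgt, full=True,
                                        data_range=2 ** n_bits - 1)
    q_ssim = np.mean((1.0 - ssim_map[rows, cols]) / 2.0)
    return q_psnr, q_ssim
```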
We use the four stitching cases corresponding to Figure 7 and evaluate the seams obtained by the four methods according to the above measurement criteria. $Q_{PSNR}$ is shown in Table 1 and $Q_{SSIM}$ in Table 2. The seams obtained by PSC often appear at the boundary of the image, where the corresponding pixels of the reference image and the target image differ considerably; therefore, the $Q_{PSNR}$ of PSC is lower and its $Q_{SSIM}$ is higher. Visual differences often appear on structural objects. Compared with the other methods, our seams better avoid passing through structural objects, so we achieve a higher $Q_{PSNR}$ and a lower $Q_{SSIM}$.
In particular, we extract 500 equally spaced pixels from the seams of the four seam algorithms on the four stitching cases and calculate their quality curves separately for comparison. A large peak in the quality curve indicates that the seam passes through an area of large difference. As shown in Figure 8, since the seams of PSC often appear at the boundary with a large difference, their quality curves show continuous peaks, especially in Case 2 and Case 3. Our seam quality curves have fewer and lower peaks than the others, indicating that our seams follow paths with smaller differences.
In addition, we compare the time consumed by the stitching algorithms, measured as the time required to find the seam from the registered images. As shown in Table 3, FARSE is designed for rapid seam search and thus takes less time than the other methods. Apart from FARSE, our method consumes less time than PSC and QEISE while ensuring seam quality.
In general, the seams found by our seam algorithm are usually on flat areas with small differences. However, not all UAV test images have obvious flat areas. Figure 9 shows two special stitching cases. The houses on the two stitching cases are relatively dense, and there is no obvious flat area to connect the start point and end point of the seam. In addition, one end of the seam is on a house so that the seam has to pass through the house. As shown in Figure 9, different seam algorithms cannot avoid passing through the houses indicated with red boxes. However, in the case of dense houses, our seam algorithm can minimize passing through avoidable houses as much as possible compared with FARSE, as shown by the green box.

4.3. Shape Correction

In this paper, the half-projective warp combined with the seam algorithm is proposed to solve the problem of shape distortion retained in stitched images. The half-projective warp proposed by SPHP uses a projective transformation in the overlapping region and a similarity transformation in the non-overlapping region. After using the seam algorithm to obtain the cut images, we fix the cut reference image and apply the half-projective warp to the original target image. This is equivalent to enlarging the non-overlapping region and shrinking the overlapping region. The parameters $u_1$ and $u_2$ are then determined according to the position of the seam to construct the warp. The main distorted regions are concentrated in the overlapping region of the target image, but we cut and remove that region according to the position of the seam. As a result, a larger portion of the final panorama comes from the similarity-transformed non-overlapping region.
As mentioned above, UAV images with large parallax cannot be registered accurately, and objects in the overlapping area suffer shape distortion after transformation. Figure 10 shows several stitching cases. On the UAV images stitched according to the seam alone, some houses show inclined, stretched deformation (indicated by the blue boxes), which does not match human visual perception. In our method, the houses originally in the overlapping area are reassigned to the non-overlapping area and keep their original angle and direction after the similarity transformation. Our stitching results therefore retain the shape of the original image as much as possible and achieve better visual effects.
In particular, we compare our results with those obtained by combining SPHP with our seam algorithm. We use SPHP instead of global homography for the preliminary registration, and then apply our seam algorithm to obtain the stitched image. As shown in Figure 11, since some houses lie in the overlapping area from the start, SPHP cannot remove their shape distortion. As indicated by the blue box, the house still tilts and may deform further. Our method instead enlarges the non-overlapping area according to the position of the seam, so that more regions undergo the similarity transformation and objects maintain their original shape.

4.4. Extended Experiment

We apply the proposed method to hyperspectral images and compare it with automatic stitching for hyperspectral images using robust feature matching and elastic warp (AHREW) [44]. As shown in Figure 12, neither AHREW nor our method has ghosting and blurring. The difference is that AHREW relies on accurate alignment, so it produces angle distortion in non-overlapping areas, as shown by the red box. The seam searched by our method can still avoid structural objects, and can reduce the angle distortion of non-overlapping areas.

5. Discussion

Due to the limited display range of images taken by a single UAV lens, a method is required in the field of remote sensing to stitch adjacent images. UAV images have the characteristics of large parallax, and it is difficult to achieve accurate registration, which leads to ghosting and blurring of the images. Previous studies have demonstrated that the use of seam stitching algorithms can reliably eliminate ghosting and blurring. Although these studies have revealed some important findings, there are also shortcomings. The seam algorithm usually searches for seams at the cost of the difference reflecting the similarity between the overlapping regions of the two images. Color differences and structural differences are commonly used to describe the cost of differences. Few studies have considered how to use the different characteristics of the rich information contained in UAV images to constrain the path of the seams. In addition, another problem with the seam algorithm is that the distortion or viewing angle distortion caused by the image registration will be preserved.
In order to solve the above shortcomings, we first define a new difference cost that can better constrain the seam search. We use the high-frequency part of the image to construct structural differences to increase the probability that the seam path is in the flat area of the image. The structure object has a large amount of line information, and the line difference is added to the difference cost to reduce the possibility of the seam passing through the structure object. Under the condition that the seam can be searched, we further restrict the search range of the seam. Then, according to the difference cost that we defined, a seam algorithm that can find the smallest cost difference is proposed. Different experimental results show that our method can solve the problem of ghosting and blurring in stitched images (Figure 4, Figure 5 and Figure 6). Compared with other advanced seam algorithms, our seams can better avoid structural objects (Figure 7). This leads to a better evaluation than other seam algorithms on the defined metrics (Table 1, Table 2 and Table 3, Figure 8). The advantage of the seam algorithm is that it can avoid ghosting and blurring of stitched images, but it also has some defects. When one end of the seam appears on an object, the seam will inevitably pass through the object, which can easily cause visual error. When structural objects in UAV images are very dense, it will increase the difficulty of searching for suitable seams. Compared with other algorithms, our proposed seam algorithm can better avoid passing through structural objects and obtain better seams when processing the image stitching of dense structural objects (Figure 9).
In the previously mentioned SPHP method, applying similarity transformations in non-overlapping areas can remove perspective distortion and retain more source image information, but it cannot solve the distortion in the overlapping areas. We found that, after cutting the image according to the seam and using the cut part as the new overlapping area, applying the SPHP method to register the image successfully solves the distortion problem of the overlapping area. The non-overlapping area is still retained through the similarity transformation, while the overlapping area is removed according to the seam. This is equivalent to retaining more non-overlapping area information and removing the distorted overlapping area. The experimental results (Figure 10 and Figure 11) show that our method retains more original image information and improves the perception of stitched images. In addition, our proposed method can be used to stitch hyperspectral remote sensing images (Figure 12), and has good application prospects.

6. Conclusions

In this article, we propose a UAV image stitching method based on the optimal seam algorithm and half-projective warp. The main purpose of this method is to obtain a natural panoramic image with a good visual effect and no ghosting or blurring. In the seam algorithm, we propose a new definition of the difference matrix and restrict the seam search region. Then, we propose a seam search algorithm based on global energy minimization, which causes the seam to avoid structural objects and follow flat areas. Finally, according to the position of the seam and combined with the half-projective warp, more areas retain their original shape, improving the visual quality of the stitched image. Experiments show that our method outperforms popular methods and is also suitable for hyperspectral remote sensing image stitching. Our future research will continue to focus on more efficient seam search algorithms.

Author Contributions

Conceptualization, J.C. and Z.L.; methodology, J.C. and Z.L.; software, Z.L.; validation, Z.L.; formal analysis, J.C., Z.L., C.P., Y.W. and W.G.; resources, J.C. and C.P.; data curation, J.C.; writing—original draft preparation, Z.L.; writing—review and editing, J.C., C.P., Y.W. and W.G.; visualization, Z.L.; supervision, J.C. and C.P.; project administration, J.C.; funding acquisition, J.C. and C.P. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Natural Science Foundation of China, nos. 62073304, 41977242 and 61973283, and in part by the China Postdoctoral Science Foundation, no. 2021M702533.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Bang, S.; Kim, H.; Kim, H. UAV-based automatic generation of high-resolution panorama at a construction site with a focus on preprocessing for image stitching. Autom. Constr. 2017, 84, 70–80.
  2. Turner, D.; Lucieer, A.; Watson, C. An automated technique for generating georectified mosaics from ultra-high resolution unmanned aerial vehicle (UAV) imagery, based on structure from motion (SfM) point clouds. Remote Sens. 2012, 4, 1392–1410.
  3. De Grandi, G.; Malingreau, J.P.; Leysen, M. The ERS-1 Central Africa mosaic: A new perspective in radar remote sensing for the global monitoring of vegetation. IEEE Trans. Geosci. Remote Sens. 1999, 37, 1730–1746.
  4. Liu, C.; Zhang, S.; Akbar, A. Ground feature oriented path planning for unmanned aerial vehicle mapping. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2019, 12, 1175–1187.
  5. Jiang, J.; Ma, J.; Chen, C.; Wang, Z.; Cai, Z.; Wang, L. SuperPCA: A superpixelwise PCA approach for unsupervised feature extraction of hyperspectral imagery. IEEE Trans. Geosci. Remote Sens. 2018, 56, 4581–4593.
  6. Li, X.; Feng, R.; Guan, X.; Shen, H.; Zhang, L. Remote sensing image mosaicking: Achievements and challenges. IEEE Geosci. Remote Sens. Mag. 2019, 7, 8–22.
  7. Ma, J.; Qiu, W.; Zhao, J.; Ma, Y.; Yuille, A.L.; Tu, Z. Robust L2E estimation of transformation for non-rigid registration. IEEE Trans. Signal Process. 2015, 63, 1115–1129.
  8. Jiang, X.; Ma, J.; Jiang, J.; Guo, X. Robust feature matching using spatial clustering with heavy outliers. IEEE Trans. Image Process. 2020, 29, 736–746.
  9. Ma, J.; Jiang, X.; Fan, A.; Jiang, J.; Yan, J. Image matching from handcrafted to deep features: A survey. Int. J. Comput. Vis. 2021, 129, 23–79.
  10. Jiang, X.; Ma, J.; Xiao, G.; Shao, Z.; Guo, X. A review of multimodal image matching: Methods and applications. Inf. Fusion 2021, 73, 22–71.
  11. Brown, M.; Lowe, D.G. Automatic panoramic image stitching using invariant features. Int. J. Comput. Vis. 2007, 74, 59–73.
  12. Lowe, D.G. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110.
  13. Gao, J.; Kim, S.J.; Brown, M.S. Constructing image panoramas using dual-homography warping. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Colorado Springs, CO, USA, 20–25 June 2011; pp. 49–56.
  14. Lin, W.Y.; Liu, S.; Matsushita, Y.; Ng, T.T.; Cheong, L.F. Smoothly varying affine stitching. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Colorado Springs, CO, USA, 20–25 June 2011; pp. 345–352.
  15. Zaragoza, J.; Chin, T.J.; Brown, M.S.; Suter, D. As-projective-as-possible image stitching with moving DLT. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Portland, OR, USA, 23–28 June 2013; pp. 2339–2346.
  16. Ma, J.; Zhao, J.; Tian, J.; Yuille, A.L.; Tu, Z. Robust point matching via vector field consensus. IEEE Trans. Image Process. 2014, 23, 1706–1721.
  17. Li, J.; Wang, Z.; Lai, S.; Zhai, Y.; Zhang, M. Parallax-tolerant image stitching based on robust elastic warping. IEEE Trans. Multimed. 2017, 20, 1672–1687.
  18. Ma, J.; Zhao, J.; Jiang, J.; Zhou, H.; Guo, X. Locality preserving matching. Int. J. Comput. Vis. 2019, 127, 512–531.
  19. Chen, J.; Wan, Q.; Luo, L.; Wang, Y.; Luo, D. Drone image stitching based on compactly supported radial basis function. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2019, 12, 4634–4643.
  20. Ma, J.; Jiang, X.; Jiang, J.; Zhao, J.; Guo, X. LMR: Learning a two-class classifier for mismatch removal. IEEE Trans. Image Process. 2019, 28, 4045–4059.
  21. Fan, A.; Ma, J.; Jiang, X.; Ling, H. Efficient deterministic search with robust loss functions for geometric model fitting. IEEE Trans. Pattern Anal. Mach. Intell. 2021.
  22. Li, S.; Yuan, L.; Sun, J.; Quan, L. Dual-feature warping-based motion model estimation. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 4283–4291.
  23. Joo, K.; Kim, N.; Oh, T.H.; Kweon, I.S. Line meets as-projective-as-possible image stitching with moving DLT. In Proceedings of the IEEE International Conference on Image Processing, Quebec City, QC, Canada, 27–30 September 2015; pp. 1175–1179.
  24. Ren, X.; Malik, J. Learning a classification model for segmentation. In Proceedings of the IEEE International Conference on Computer Vision, Nice, France, 13–16 October 2003; pp. 10–17.
  25. Hejazifar, H.; Khotanlou, H. Fast and robust seam estimation to seamless image stitching. Signal Image Video Process. 2018, 12, 885–893.
  26. Yuan, Y.; Fang, F.; Zhang, G. Superpixel-based seamless image stitching for UAV images. IEEE Trans. Geosci. Remote Sens. 2020, 59, 1565–1576.
  27. Gao, J.; Li, Y.; Chin, T.J.; Brown, M.S. Seam-driven image stitching. In Proceedings of the Eurographics, Girona, Spain, 6–10 May 2013; pp. 45–48.
  28. Burt, P.J.; Adelson, E.H. A multiresolution spline with application to image mosaics. ACM Trans. Graph. 1983, 2, 217–236.
  29. Xiong, Y.; Pulli, K. Fast and high-quality image blending on mobile phones. In Proceedings of the IEEE Consumer Communications and Networking Conference, Las Vegas, NV, USA, 9–12 January 2010; pp. 1–5.
  30. Fang, F.; Wang, T.; Fang, Y.; Zhang, G. Fast color blending for seamless image stitching. IEEE Geosci. Remote Sens. Lett. 2019, 16, 1115–1119.
  31. Chang, C.H.; Sato, Y.; Chuang, Y.Y. Shape-preserving half-projective warps for image stitching. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 3254–3261.
  32. Lin, C.C.; Pankanti, S.U.; Natesan Ramamurthy, K.; Aravkin, A.Y. Adaptive as-natural-as-possible image stitching. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 1155–1163.
  33. Chen, Y.S.; Chuang, Y.Y. Natural image stitching with the global similarity prior. In Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 8–16 October 2016; pp. 186–201.
  34. Pandey, A.; Pati, U.C. Image mosaicing: A deeper insight. Image Vis. Comput. 2019, 89, 236–257.
  35. Liao, T.; Chen, J.; Xu, Y. Quality evaluation-based iterative seam estimation for image stitching. Signal Image Video Process. 2019, 13, 1199–1206.
  36. Li, L.; Tu, J.; Gong, Y.; Yao, J.; Li, J. Seamline network generation based on foreground segmentation for orthoimage mosaicking. ISPRS J. Photogramm. Remote Sens. 2019, 148, 41–53.
  37. Eden, A.; Uyttendaele, M.; Szeliski, R. Seamless image stitching of scenes with large motions and exposure differences. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06), New York, NY, USA, 17–22 June 2006; Volume 2, pp. 2498–2505.
  38. Kerschner, M. Seamline detection in colour orthoimage mosaicking by use of twin snakes. ISPRS J. Photogramm. Remote Sens. 2001, 56, 53–64.
  39. Davis, J. Mosaics of scenes with moving objects. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Santa Barbara, CA, USA, 23–25 June 1998; pp. 354–360.
  40. Gracias, N.; Mahoor, M.; Negahdaripour, S.; Gleason, A. Fast image blending using watersheds and graph cuts. Image Vis. Comput. 2009, 27, 597–607.
  41. Bay, H.; Ess, A.; Tuytelaars, T.; Van Gool, L. Speeded-up robust features (SURF). Comput. Vis. Image Underst. 2008, 110, 346–359.
  42. Von Gioi, R.G.; Jakubowicz, J.; Morel, J.M.; Randall, G. LSD: A line segment detector. Image Process. Line 2012, 2, 35–55.
  43. Li, N.; Liao, T.; Wang, C. Perception-based seam cutting for image stitching. Signal Image Video Process. 2018, 12, 967–974.
  44. Zhang, Y.; Wan, Z.; Jiang, X.; Mei, X. Automatic stitching for hyperspectral images using robust feature matching and elastic warp. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 3145–3154.
Figure 1. Simplified representation of our proposed method.
Figure 2. Specific simplified process of seam searching algorithm.
Figure 3. Specific simplified process of proposed seam searching. (a) is the brief process of expansion. The start point is red, the end point is green, and the yellow region is the search region R. (b) The red region is the final seam path.
Figure 4. Stitching results among various popular methods. The first line is the input images and the position of our seam. The second, third, fourth, fifth and sixth lines are the results of AutoStitch, APAP, AANAP, REW and our seam algorithm. The red boxes highlight some details. The percentage of overlap between two images in this case is 62.98%.
Figure 5. Stitching results among various popular methods. The first line is the input images and the position of our seam. The second, third, fourth, fifth and sixth lines are the results of AutoStitch, APAP, AANAP, REW and our seam algorithm. The red boxes highlight some details. The percentage of overlap between two images in this case is 52.74%.
Figure 6. Stitching results among various popular methods. The first line is the input images and the position of our seam. The second, third, fourth, fifth and sixth lines are the results of AutoStitch, APAP, AANAP, REW and our seam algorithm. The red boxes highlight some details. The percentage of overlap between two images in this case is 81.40%.
Figure 7. Seam location compared to other seam algorithms. The first and second lines are the input images. The third, fourth, fifth and sixth lines are the results of FARSE, PSC, QEISE and our seam algorithm. Seams are indicated in red. Blue boxes highlight some special structural objects. The percentage of overlap between two images in these four cases is 52.74%, 62.98%, 78.08%, 76.07%, respectively.
Figure 8. Quality curve comparison of seams with different seam algorithms.
Figure 9. The comparative experiment between our seam algorithm and FARSE in the case of dense houses. The red line indicates the seam. Red and blue boxes highlight some special structural objects. The percentage of overlap between two images in these two cases is 74.43%, 86.62%, respectively.
Figure 10. Shape preserving effect of half-projective warp combined with seam position. The first line is the input images. The second line is the location of our seam. The third line is the images without shape preservation. The fourth line is the shape preserved images. The red line indicates the seam. Blue boxes highlight some special structural objects. The percentage of overlap between two images in these three cases is 68.04%, 72.03%, 75.79%, respectively.
Figure 11. The comparative experiment between SPHP combined with our seam algorithm and our method. Blue boxes highlight some special structural objects.
Figure 12. The comparative experiment between AHREW and our method on hyperspectral images. The red line indicates the seam. Red boxes highlight angle distortion areas on the results of AHREW.
Table 1. $Q_{PSNR}$ of seams with different algorithms (dB). Bold indicates the best results.

          FARSE     PSC       QEISE     Ours
Case 1    51.9395   46.2676   52.4685   53.1513
Case 2    49.0071   35.1541   49.4543   51.8483
Case 3    47.0600   34.7118   48.9986   50.3483
Case 4    51.1492   49.1601   50.5044   53.3715
Table 2. $Q_{SSIM}$ of seams with different algorithms. Bold indicates the best results.

          FARSE    PSC      QEISE    Ours
Case 1    0.0176   0.0276   0.0141   0.0133
Case 2    0.0079   0.2391   0.0131   0.0073
Case 3    0.0088   0.2039   0.0135   0.0080
Case 4    0.0186   0.0253   0.0244   0.0154
Table 3. Time consumed with different algorithms (s). Bold indicates the best results.

          FARSE   PSC     QEISE   Ours
Case 1    3.12    12.26   20.24   7.24
Case 2    2.70    7.16    9.05    6.31
Case 3    2.59    5.24    5.33    4.14
Case 4    2.36    7.46    6.535   4.68
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
