Ghost Elimination via Multi-Component Collaboration for Unmanned Aerial Vehicle Remote Sensing Image Stitching

Abstract: Ghosts are a common phenomenon in unmanned aerial vehicle (UAV) remote sensing image stitching that seriously affects the naturalness of stitching results. To effectively remove ghosts and produce visually natural results, we propose a novel image stitching method that identifies and eliminates ghosts through multi-component collaboration, without object distortion, segmentation or repetition. Our main contributions are as follows. First, we propose a ghost identification component that locates potential ghosts in the stitching area and detects significantly moving objects in the two stitched images. Because of the characteristics of UAV imaging, the objects in UAV remote sensing images are small and the image quality is poor; we therefore propose a mesh-based image difference comparison method to identify ghosts, and use an object tracking algorithm to accurately match each ghost pair. Second, we design an image information source selection strategy to generate the ghost replacement region, which replaces the located ghost while avoiding object distortion, segmentation and repetition. Third, a ghost elimination component produces natural mosaic images by removing the ghosts introduced by the initial blending using the selected image information source. We validate the proposed method on the VIVID data set and compare it with Homo, ELA, SPW and APAP using the peak signal-to-noise ratio (PSNR) evaluation indicator.


Introduction
Unmanned aerial vehicles (UAVs) are widely used in image collection because of their convenient operation and low cost. However, a UAV's low flying altitude limits the field of view of each image. To obtain more comprehensive regional information, it is necessary to stitch multiple images into one image covering a large area. UAV remote sensing image stitching is widely used in many real-world applications, for example, environment monitoring, disaster assessment and management [1]. An important feature of UAV remote sensing image acquisition is scanning capture, where moving objects appear in the imaged scene. This makes creating a mosaic from UAV remote sensing images quite challenging: ghosts usually appear in the mosaic results and are difficult to remove.
The process of image stitching comprises two stages: image registration and image blending. Image registration calculates the relative transformation between two stitched images so that they can be transformed into a unified coordinate system. Image blending forms a panoramic image with a large field of view using the overlapping regions. Ghosts usually occur during image blending: when a moving object appears in one of the adjacent stitched images, most image mosaic methods inevitably produce ghosts, because the blended image regions are inconsistent. The ghost phenomenon affects not only the naturalness of stitching results, but also their downstream applications. In this work, our main goal is to mosaic UAV remote sensing images and eliminate ghosts, to obtain natural stitching results and a comprehensive image of the scene.
The existing image stitching methods mainly improve registration accuracy by designing a good transformation function. For UAV remote sensing image mosaics, additional information is usually needed, such as camera calibration parameters, position and rotation data obtained from the GPS/IMU, ground control points or a reference map, to achieve accurate mosaic results. However, these additional parameters do not address the ghost phenomenon caused by moving objects, and ghosts cannot be eliminated by a transformation function alone. Seam-driven image stitching methods can alleviate these problems to a certain extent, but they still have limitations in dealing with ghosts, for the following reasons: first, when there is a moving object in the stitched image, the seam may cut or duplicate the object; second, a seam-driven method selects one of multiple candidate seams as the final seam, and if there are fast-moving objects in the stitched images, seam finding becomes challenging.
In short, seam-driven methods cannot handle all the challenges of eliminating ghosts, so ghost elimination remains an urgent problem. In particular, images used for general-purpose stitching are usually captured from the same position or with only a small rotation. When a UAV's bird's-eye-view camera captures images, however, the important factors include not only translation and rotation but also the distance between the camera and the objects and the level of camera shake. As a result, the objects in UAV remote sensing images are small and the image quality is poor, which further increases the difficulty of image stitching and ghost removal.
In this paper, we propose a multi-component collaboration method to stitch UAV remote sensing images while eliminating ghosts without a seam finding process. Intuitively, we first identify the ghost region, then select the image information source, and finally replace the ghost region with the selected image information source.
In summary, our main contributions are as follows: • We propose a method based on multi-component collaboration to stitch UAV remote sensing images, which can achieve a natural stitching performance; • We design a ghost identification component that can identify the ghost area of small objects in UAV remote sensing images and accurately identify multiple ghosts; • We realize a UAV remote sensing image stitching method without requiring additional information and seam optimization that can produce natural stitched images by using selected image sources to eliminate ghosts in the initial blending result.

Related Work
This paper focuses on eliminating ghosts in UAV remote sensing image stitching. Therefore, in this section, we introduce the current mainstream work from two aspects: UAV remote sensing image stitching and ghost elimination.

UAV Remote Sensing Image Stitching
The existing image mosaic methods focus on two aspects: registration accuracy and distortion reduction. Registration accuracy is an important research problem in image stitching. After more than 10 years of research, some progress has been made in this area.
The key to improving registration accuracy is to design a good transformation function. Early image stitching calculated a global transform function to register two stitched images; this approach, called homography, is used as the baseline stitching method. In 2007, AutoStitch [2] was proposed to stitch multiple images; in its registration stage, a global transform is also used. These methods work well for images with only translation or small rotation, but are insufficient for images with large rotation; that is, they cannot solve the parallax problem. To address parallax, multiple transformations are used to register images rather than a single global homography. In 2011, Gao et al. [3] divided a stitched image into two planes and calculated two homography matrices to register the images; because some images cannot be clearly divided into two planes, this method is not applicable to all images. Lin et al. [4] used multiple affine transformation matrices to improve the accuracy of image registration. In 2014, APAP [5] was proposed, a classic image stitching method: it divides an image into several grids of fixed size and computes multiple homography matrices to register the image, achieving good results for images with large parallax. Liu et al. [6] proposed a shape-optimizing and illumination-smoothing image stitching method with an automatic grid generation strategy. Lee et al. [7] proposed the new concept of the warping residual and used it to stitch large-parallax images. Zheng et al. [8] used the normal vector orientations of local regions and the projection errors of aligned images to segment an image and improve registration accuracy. Li et al. [9] proposed an image stitching method based on robust elastic warping, which can stitch large-parallax images and achieve accurate registration.
In addition, for low texture image stitching, Xiang et al. [10] used the line-guided local warping method to achieve accurate registration. For large parallax video stitching, Lee and Sim [11] used the epipolar geometry method to stitch.
Through various registration methods, the overlapping areas of two images can be well aligned, but the non-overlapping areas often suffer serious distortion. Chang et al. [12] proposed the SPHP algorithm, which corrects the shape of the stitched image and reduces projection distortion. Lin et al. [13] proposed a homography linearization method, also a shape-correction approach, which further improves the naturalness of stitching results compared with SPHP. Chen et al. [14] proposed the global similarity prior (GSP) to obtain natural stitching results, using line alignment constraints as constraint terms. Liao et al. [15] used two single-perspective warps to register images and produce natural stitching results. Li et al. [16] proposed a novel quasi-homography that solves the line blending problem between the homography transformation and the similarity transformation by linearly scaling the horizontal component of the homography, creating a more natural panorama.
UAV image mosaic usually needs additional information, such as camera calibration parameters, and position and rotation data from the GPS/IMU, ground control points or a reference map, to achieve accurate mosaic results [17][18][19][20][21][22][23][24]. When the GPS/IMU data are not sufficiently accurate, the stitching may be affected. Xue et al. [24] proposed a novel image mosaic method without the requirements of camera calibration parameters, camera poses, or any 3D reconstruction procedure. This method can obtain visually satisfactory mosaic results automatically using only the raw 2D imagery captured from a UAV. Liu et al. [25] considered mosaic for low overlap UAV aerial images and found that the commonly used methods and software were not suitable for this type of data. Then, a novel adjustment model for mosaic low-overlap sweeping images was proposed. Cai et al. [26] proposed a robust transform estimator based on residual analysis and its application to UAV aerial images. Estimating the transformation between two images from the same scene is a fundamental step for image registration, image stitching and 3D reconstruction.

Ghost Elimination
At present, there are few studies on ghost elimination. Kakli et al. [27] noted that the presence of moving objects in a stitched image causes blurring or ghosts, and proposed minimizing parallax artifacts in video stitching for moving foregrounds. Davis et al. [28] proposed a stitching method for scenes with moving objects: first, the stitched image is divided into disjoint regions, and the pixels of each region are sampled from a single source image; then, the intensity difference image of the two images is calculated and Dijkstra's algorithm is used to find the seam. The work in [29] used the seam-driven method to eliminate ghosts; the purpose of that study was to remove pedestrians from Google Street View images, obtaining pedestrian positions with a pedestrian detection method.
These methods all use the seam-driven method to eliminate ghosts. The seam-driven method can avoid and eliminate ghosts to a certain extent, but it cannot deal with all the challenges of eliminating ghosts. For example, when a seam passes through a moving object, the moving object is cut. When a seam is placed between two moving objects, the moving objects are copied. Therefore, it is difficult to eliminate the ghosts by using the seam-driven method. Our method can effectively deal with ghosts produced by moving objects in UAV remote sensing mosaic without using the seam-driven method.

Overview
An image mosaic stitches two images with overlapping areas into a panorama with a large field of view. Given two images I_1 and I_2, where I_1 is the reference image, the image stitching process generally consists of the following steps: first, we perform image registration and calculate the homography matrix H of I_2 relative to I_1. Second, we transform I_2 through the homography matrix: Î_2 = warp(I_2, H). Finally, we blend I_1 and Î_2.
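As a minimal sketch (not the authors' code), the warping step can be illustrated by mapping the corners of I_2 through H to find where the warped image lands in I_1's coordinate frame; in practice a library routine such as OpenCV's warpPerspective performs the full pixel warp, but the corner mapping below shows the underlying projective transform:

```python
import numpy as np

def warp_corners(h_img, w_img, H):
    """Map the four corners of an h_img x w_img image through the 3x3
    homography H, as done when computing the canvas that holds I_1 and
    warp(I_2, H) before blending."""
    corners = np.array([[0, 0, 1],
                        [w_img, 0, 1],
                        [w_img, h_img, 1],
                        [0, h_img, 1]], dtype=float).T
    p = H @ corners
    p = p[:2] / p[2]   # perspective divide back to 2D coordinates
    return p.T         # 4x2 array of warped corner positions
```

The bounding box of these warped corners, together with I_1's own corners, gives the size of the shared canvas onto which both images are composited.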
In real-world applications, the two images are usually captured by the same camera at two different times in the same scene, with different viewing angles or small changes in camera position, or they are key frames selected from a video. Since the two images are not taken at the same time, objects move through the scene, producing ghosts in the stitching result; this is a major challenge for image stitching. UAV remote sensing image stitching poses two further challenges: (1) due to the large shooting distance between the UAV and the scene, the objects in UAV remote sensing images are very small; (2) the image quality of UAV remote sensing images is poor due to the shaking of the UAV's camera equipment.
In order to deal with the above challenges, we focus on the image blending process and propose a multi-component cooperation method for ghost identification and elimination, which is used for UAV remote sensing image mosaic. Intuitively, we first identify potential ghost regions in the blending region, and then explore how to eliminate them using appropriate image information sources, which are selected from two input images in the same ghost environment. Finally, the image information source is used to replace the ghost area to form a natural mosaic result. The above process has the following difficulties: (1) It is difficult to identify the ghost region because of the variety of motion patterns and the different types of ghost. In order to deal with this problem, we propose the ghost identification component in Section 3.2 for effective ghost identification; (2) It is also a challenge to choose the image information source for the ghost's replacement. We should ensure that the objects in the final result are not copied or cut. To deal with this challenge, we propose an image information source selection component to select the appropriate image information source. In Section 3.3, we design three rules to avoid object distortion, cutting and repetition; (3) Using the identified ghost region and image information source, we naively perform region substitution, which easily causes a serious seam in the ghost region. In order to avoid this problem, we propose a ghost elimination component in Section 3.4 that can effectively eliminate ghosts and enhance the naturalness of stitching results.
The proposed multi-component cooperative ghost identification and elimination method is divided into three components: ghost identification, image information source selection and ghost elimination. The three components cooperate to complete the ghost identification and elimination in UAV remote sensing image mosaic to produce natural mosaic results. We show the collaboration flow of the three components in Figure 1.

Ghost Identification Component
Different ghosts are generated according to different motion trajectories of objects. As shown in Figure 2, the boxes with the same color represent the same object, correspondingly, the boxes with different colors represent different objects, and the dotted line describes the moving track of the object.

The performance of ghosts is diverse in image mosaics. Because the appearance and motion modes of objects differ between cases, it is difficult to identify the corresponding ghosts accurately. Intuitively, only objects in the overlapping region may produce ghosts, which lead to different or non-aligned objects in the overlapping area of the two images, as shown in Figure 3. The region marked by the red box is the overlapping region. We observe the object marked by the green box. In Figure 3a, the object is in the overlapping region; in Figure 3b, the object is in the non-overlapping region. In the final stitching result, the object in the overlapping region appears as a ghost, because the corresponding region in Figure 3b contains no object, while the object in the non-overlapping region remains a real object.
Based on the above analysis, ghosts are produced because there are non-aligned objects in the overlapping region. To correctly identify the ghost region, we first find the overlapping region of the stitched images and then identify the non-aligned object regions (ghost regions) within it.

Specifically, we identify ghosts with the following steps: (1) The overlapping region of the stitched images is calculated according to the homography transformation matrix; Ω_1 and Ω_2 denote the overlapping regions of I_1 and Î_2, respectively. (2) Because the objects in UAV remote sensing images are small and the image quality is poor, it is difficult to obtain object information directly with an object detection method. Therefore, we propose a mesh-based ghost identification method: meshing is performed on the overlapping regions of the two images (i.e., Ω_1 and Ω_2), and the resulting grids are denoted as M_1 = {M_1^i} and M_2 = {M_2^i}, where M_1^i and M_2^i are the i-th meshes in Ω_1 and Ω_2, respectively, as shown in Figure 4. (3) For each mesh pair, we compute the average pixel difference diff_i = (1 / |M_1^i|) Σ_{p ∈ M_1^i} |I_1(p) − Î_2(p)| between the i-th mesh in I_1 and Î_2. Intuitively, if diff_i is large (i.e., larger than a specified threshold, diff_i ≥ t_1), the blended regions are not the same and the blended result tends to be a ghost in the final result. (4) In a real UAV remote sensing image, there are usually multiple objects, so it is challenging to distinguish each ghost; we therefore utilize a state-of-the-art object tracker, KCF [30], to perform object tracking and accurately match each ghost pair.
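The mesh-difference test in steps (2) and (3) can be sketched as follows. This is a simplified illustration assuming grayscale overlap regions and a hypothetical mesh size of 8 pixels (the paper does not specify the cell size):

```python
import numpy as np

def ghost_meshes(ov1, ov2, mesh=8, t1=15):
    """Mesh-based ghost identification: grid the two overlap regions and
    flag every mesh cell whose mean absolute pixel difference diff_i
    meets or exceeds the threshold t1."""
    h, w = ov1.shape[:2]
    flagged = []
    for y in range(0, h - h % mesh, mesh):
        for x in range(0, w - w % mesh, mesh):
            c1 = ov1[y:y + mesh, x:x + mesh].astype(np.float64)
            c2 = ov2[y:y + mesh, x:x + mesh].astype(np.float64)
            diff_i = np.abs(c1 - c2).mean()   # average pixel difference
            if diff_i >= t1:
                flagged.append((y, x))        # top-left corner of ghost cell
    return flagged
```

Adjacent flagged cells would then be grouped into candidate ghost regions before the tracking step.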

Image Information Source Selection Component
In order to obtain a natural stitching result, we choose the appropriate image information source to eliminate ghosts. When we choose the image information source, we should avoid selecting distorted objects as much as possible. Because the objects in the warped image will have different degrees of distortion, we should choose the objects in the reference image as the image information source as far as possible. When there are two non-aligned objects in the overlapping region, we choose one object region and the corresponding region of another object as the image information source. When there is an object on the boundary of the overlapping region, we must select the object as the image information source.
In order to deal with the above problems, we designed three rules for the selection of image information sources: • Shape-stabilized rule: This rule makes the final stitching result more natural by keeping objects as undistorted as possible. When an object appears in both images, we usually take the left image as the reference while the right image undergoes transformation before the two images are blended; correspondingly, objects in the right image are deformed. Therefore, we should choose the objects in the reference image as the image information source; • Completeness rule: This rule mainly handles ghosts at the edge of the overlapping region. When a complete ghost object and a ghost at the edge of the overlapping region exist simultaneously in the final stitching result, we should select the object at the edge of the overlapping region as the image information source, to ensure the completeness of the object; • Uniqueness rule: This rule ensures that the same object appears only once in the final stitching result. It avoids the repetition of objects and ensures that the scene information is complete.
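A hypothetical sketch of how the first two rules could be encoded is given below; the `complete` field and the dictionary layout are illustrative assumptions, not the paper's implementation. The uniqueness rule is enforced implicitly by replacing exactly one region per ghost pair:

```python
def select_source(ghost_pair):
    """Pick the image information source for one ghost pair.
    ghost_pair records, for the reference and warped image, whether the
    object is complete (i.e., not cut by the overlap boundary)."""
    ref = ghost_pair["reference"]
    warped = ghost_pair["warped"]
    # Completeness rule: an object cut by the overlap edge must be taken
    # from the image where its remainder lies, so it stays whole.
    if not warped["complete"]:
        return "warped"
    if not ref["complete"]:
        return "reference"
    # Shape-stabilized rule: otherwise prefer the undistorted reference image.
    return "reference"
```

For example, when both objects are complete, the reference image wins by the shape-stabilized rule; when the warped image's object straddles the overlap boundary, that image is chosen so the object is not cut.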

Ghost Elimination Component
This section focuses on the elimination of ghosts. The basic principle is to ensure that each moving object in the stitched image comes from a single image. Specifically, we first obtain an initialized blended image I by naively blending I_1 and Î_2, averaging the two images over the overlapping region. Then, we combine the initialized blending result with the image information source selected from I_1 or Î_2 to generate a blended image I_S by seamless cloning [31]: I_S = SeamlessClone(I, obj), where obj is the selected image information source. Seamless cloning is a method based on the Poisson equation for seamless image blending; it operates in the gradient domain of the image, and using color gradients instead of color intensities produces a more realistic effect. Our seamless blending is guided by the gradient field of the source image block: the difference between the target scene and the source image on the blending boundary is smoothly diffused into the blended image block, so that the block blends seamlessly into the target scene and its hue, color and illumination remain consistent with it. Specifically, for the initialized blended image I and the image information source obj, we interpolate obj into I by using the gradient of obj as the guidance gradient and solving the Poisson equation.
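OpenCV exposes this operation as cv2.seamlessClone. As a self-contained illustration of the underlying idea (a toy solver under simplifying assumptions, not the authors' implementation), the function below relaxes the discrete Poisson equation so that the pasted patch keeps the source gradients while matching the surrounding blend on the boundary:

```python
import numpy as np

def poisson_blend(src, dst, mask, iters=800):
    """Gradient-domain (Poisson) blending on single-channel float images.
    Pixels where mask is True are solved for using the source's Laplacian
    as the guidance field; boundary values come from dst, so the seam
    difference diffuses smoothly into the patch. mask must not touch the
    image border."""
    src = src.astype(np.float64)
    out = dst.astype(np.float64).copy()
    # Guidance field: discrete Laplacian of the source patch
    lap = (4 * src - np.roll(src, 1, 0) - np.roll(src, -1, 0)
                   - np.roll(src, 1, 1) - np.roll(src, -1, 1))
    inside = mask.astype(bool)
    for _ in range(iters):  # Jacobi-style relaxation of the Poisson equation
        nb = (np.roll(out, 1, 0) + np.roll(out, -1, 0)
              + np.roll(out, 1, 1) + np.roll(out, -1, 1))
        out[inside] = (nb[inside] + lap[inside]) / 4.0
    return out
```

A production implementation would use a direct sparse solver rather than fixed-point iteration, and process each color channel separately, but the boundary-diffusion behavior described above is the same.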

Implementation Details
In this section, we first show the entire flow of our proposed algorithm, as shown in Figure 5, and then elaborate on the details of implementing our method. We used the SIFT [32] feature point detection algorithm from VLFeat [33] to obtain the feature points of the images, and used RANSAC [34] to remove mismatches and calculate the transformation matrix.

For the average pixel difference of the corresponding mesh, we set the threshold t_1 to 15 to select meshes with a large difference. The threshold t_1 determines whether a region is a ghost region: the larger the difference between the two regions, the more likely the region is a ghost. The threshold was determined experimentally, with the results shown in Figure 6. We tested t_1 between 10 and 25 with a step size of 5. When t_1 was 10, the rate of correctly identified ghosts was 87.5%; when t_1 was 15, it was 100%; when t_1 was 20, it was 62.5%; and when t_1 was 25, it was 50%. Therefore, in this paper, t_1 was set to 15.

If the value of t_1 is too small, objects cannot be distinguished, as shown in Figure 7. In Figure 7a, two ghost regions are marked in red and blue; the first row of Figure 7b,c shows the result with a threshold of 15, which allowed the two regions to be correctly identified, while the second row shows the result with a threshold of 10, which did not allow the two regions to be distinguished. If the value of t_1 is too large, the ghost region cannot be identified, as shown in Figure 8: Figure 8b shows the result for a threshold of 15, with which three ghost regions could be identified, while Figure 8c shows the result for a threshold of 20, where only two regions could be identified. In the case of an inappropriate threshold, the ghost region cannot be identified, and our method reverts to standard stitching.
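The registration step above pairs SIFT features and rejects mismatches with RANSAC before estimating the transformation. As a self-contained numeric sketch of the RANSAC step only (with an affine model standing in for the homography; all names are illustrative assumptions):

```python
import numpy as np

def ransac_affine(src, dst, iters=200, thresh=2.0, rng=None):
    """Estimate a 2D affine transform from noisy point correspondences,
    rejecting mismatches (outliers). src and dst are (N, 2) arrays of
    matched feature locations; returns the 3x2 affine matrix M such that
    [x, y, 1] @ M approximates the destination point, plus the inlier mask."""
    rng = np.random.default_rng(rng)
    n = len(src)
    src_h = np.hstack([src, np.ones((n, 1))])   # homogeneous coordinates
    best_inliers = np.zeros(n, dtype=bool)
    for _ in range(iters):
        idx = rng.choice(n, 3, replace=False)   # minimal sample for affine
        try:
            M = np.linalg.solve(src_h[idx], dst[idx])
        except np.linalg.LinAlgError:
            continue                            # degenerate (collinear) sample
        residual = np.linalg.norm(src_h @ M - dst, axis=1)
        inliers = residual < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refit on all inliers for the final estimate
    M, *_ = np.linalg.lstsq(src_h[best_inliers], dst[best_inliers], rcond=None)
    return M, best_inliers
```

The homography case is analogous but needs four-point samples and a direct linear transform (DLT) fit instead of the linear solve.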


Experimental Evaluation
In this section, we show the results of our proposed method and compare it with the following state-of-the-art methods to demonstrate its effectiveness: homography (Homo), APAP [5], ELA [9] and SPW [15]. Homo calculates a global transform function to register two stitched images. APAP divides the image into several grids of fixed size and computes multiple homography matrices to register the image; this method achieves good results for images with large parallax. ELA is an image stitching method based on robust elastic warping, which can stitch large-parallax images and achieve accurate registration. SPW uses two single-perspective warps to register images and produce natural stitching results; in particular, SPW has two blending modes: average and linear. We obtained the stitching results using the source code provided by the authors. We also verified the effectiveness of the proposed mesh-based ghost identification method. All experiments were conducted on the VIVID data set. We evaluate the proposed method in terms of processing time, quantitative comparison, qualitative comparison and a user study.

Effectiveness of Mesh-Based Ghost Identification Method
In the ghost identification process, using an object detection method directly to obtain information about objects in the scene has limitations, such as poor image quality and small objects. Therefore, we propose a mesh-based ghost identification method. To verify its effectiveness, we conducted a comparative experiment against an object detection method, taking the SSD [35] object detector as an example. The comparative results are shown in Table 1.

Table 1. Comparison between the SSD and the proposed mesh-based ghost identification method.

In Table 1, "Proposed" stands for the proposed method, "T" indicates that the ghost object could be identified successfully, "F" indicates that the object detection failed and "E" indicates that the object detection was incorrect.
It can be seen from Table 1 that the object detection method does have limitations in our experimental data. However, the mesh-based ghost identification method could correctly identify the ghost objects. Figure 9 shows an error case of object detection. Figure 9a shows the input images, and Figure 9b shows the detected result of SSD. "cat:63%" means that the probability that the detected object was a cat was 63%. The region marked with a blue rectangle is the object detected by SSD. Obviously, this region was not a single object, which proves that the detection result of SSD was incorrect.
The ghost identification method based on object detection is limited by the object; for example, it only works on objects in the training set, content in the background of the mosaic image cannot be detected, and a registration ghost may be produced due to inaccurate registration caused by large parallax. However, the proposed method can successfully eliminate the registration ghosts in the background, as shown in Figure 10.

In Figure 10, we compare the Homo, APAP, ELA and SPW methods. The red box and the green box represent two different cars. It can be seen that only our results guarantee the uniqueness of each car: in the final result, each car appears only once, and it is a real object rather than a ghost. The yellow box marks a ghost produced by content in the background, which could not be handled by the object-detection-based ghost identification method. The mesh-based ghost identification method could identify this ghost, and the proposed method could correct it very well.


Processing Time Comparison
We compared the processing times of Homo, APAP, ELA, SPW and the proposed method; the results are shown in Table 2. Among the five methods, APAP had the longest processing time overall. The SPW method failed to stitch two groups of data, and its processing time was also relatively long. Although the processing times of Homo and ELA were lower than that of the proposed method, Homo and ELA do not account for moving objects, so ghosts remained in their stitching results. The processing time of the proposed method was lower than those of APAP and SPW.

Quantitative Comparison
We compared the proposed method with Homo, APAP, ELA and SPW using the peak signal-to-noise ratio (PSNR). Since there are moving objects in the two stitched images, the comparison involved calculating the PSNR between the overlapping region of the stitching result and the overlapping region of I1, and likewise between the overlapping region of the stitching result and the overlapping region of I2, and then taking the average of the two values as the final result. PSNR is calculated as shown in Equation (3):

PSNR = 10 · log10( MAX^2 / MSE ),  MSE = (1/(m × n)) Σ_i Σ_j [R_IS(i, j) − R_I(i, j)]^2,  (3)

where R_IS is the overlapping region of the final stitching result, R_I is the overlapping region of the stitched image (i.e., I1 or I2), the size of the overlapping region is m × n and the pixel values are represented by B-bit binary numbers, so MAX = 2^B − 1. The comparison results are shown in Table 3; the PSNR value of the proposed method outperforms those of the other methods.
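The averaged-PSNR comparison described above can be sketched as follows; the function and parameter names are illustrative, and the images are assumed to be 8-bit grayscale regions already cropped to the common overlap.

```python
import math

def psnr(region_ref, region_test, bits=8):
    """PSNR between two equally sized overlap regions (2-D lists of pixels).
    MAX = 2**bits - 1, matching the B-bit definition in the text."""
    m, n = len(region_ref), len(region_ref[0])
    mse = sum((region_ref[i][j] - region_test[i][j]) ** 2
              for i in range(m) for j in range(n)) / (m * n)
    if mse == 0:
        return float("inf")  # identical regions
    max_val = 2 ** bits - 1
    return 10 * math.log10(max_val ** 2 / mse)

def stitching_psnr(result_overlap, overlap_i1, overlap_i2):
    """Average of PSNR against each input image's overlap, as used for the
    comparison because moving objects make either input alone unreliable."""
    return (psnr(overlap_i1, result_overlap)
            + psnr(overlap_i2, result_overlap)) / 2
```

Averaging against both inputs prevents a method from scoring well merely by copying one input verbatim while ghosting the other.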

Qualitative Comparison
In this section, in order to show the effectiveness of the proposed method, we make a qualitative comparison with four state-of-the-art methods: Homo, APAP, ELA and SPW.

Figures 11 and 12 show the comparison results for the eg02 and eg03 data in the VIVID data set. There are multiple ghosts in each pair of stitched images. Different colored boxes represent different objects; correspondingly, boxes of the same color represent the same object. In the results of the compared methods, the same-colored box appears many times; that is, the object is displayed repeatedly, which violates the principle of object uniqueness. A dotted box indicates that the object is a ghost, and a solid box indicates a real object. In the result of the proposed method, the ghosts are eliminated, ensuring that each object appears only once in the final mosaic result, and no object is segmented.

Figure 11. Comparison between the proposed method and SOTA methods on eg02 data.

Figures 13 and 14 show the stitching results for eg01 and eg05 in VIVID, compared with the Homo, APAP and ELA methods. It was not possible to stitch these two sets of data with the SPW method; therefore, only the results of four methods are shown in these figures. As above, different colored boxes represent different objects, dotted boxes represent ghosts, and solid boxes represent real objects. Our results still exhibit advantages: in our final mosaic results, the objects are not repeated; each object appears only once, and it is a real object rather than a ghost.

In particular, our method is also effective for dim images, and the results are shown in Figures 15-17. These stitching results show another advantage of our method; that is, stitching dim images and removing ghosts.
In Figures 11-17, we present experiments on eight groups of data from the VIVID data set and compare four state-of-the-art methods. The results show that the proposed method is superior to the other methods. Moreover, our method can not only mosaic UAV remote sensing images and eliminate ghosts but is also effective with multiple small objects in the scene, especially for low-resolution and dim images.

User Study
In order to investigate whether our method would be popular with users, we performed a user study comparing it with four state-of-the-art methods: Homo, APAP, ELA and SPW. We invited 16 participants to vote on the results for eight groups of data, all from the VIVID data set, as shown above. The voting results are shown in Table 4, where "S" denotes the state-of-the-art methods (Homo, APAP, ELA, SPW) and "P" denotes the proposed method. We calculated a score from the voting results of the 16 participants according to Equation (5), where λ is the weight coefficient of voting, n is the number of people who voted (n = 16) and v is the number of votes. This user study shows that the mosaic results of our proposed method won the favor of most users.
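The exact form of Equation (5) is not reproduced in this excerpt; assuming it takes a weighted vote-fraction form, score = λ · v / n, the tally could be sketched as follows. Both the formula and the defaults are assumptions, labeled as such below.

```python
def score(votes, lam=1.0, n=16):
    """Score for one method from the user study tally.

    ASSUMPTION: Equation (5) is taken here as the weighted vote fraction
    score = lambda * v / n; the paper's actual equation is not reproduced
    in this excerpt.

    votes: number of votes v the method received (v <= n).
    lam: weight coefficient lambda of voting (assumed 1.0).
    n: number of participants (16 in the user study).
    """
    return lam * votes / n
```

Under this assumption a method preferred by all 16 participants scores 1.0 and one preferred by half scores 0.5, which makes scores comparable across the eight data groups.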

Conclusions
In this paper, we have proposed a novel ghost removal method for UAV remote sensing image mosaics that can jointly identify ghosts and eliminate ghost areas without distortion, segmentation or repetition. Our three main contributions to the final performance are as follows: first, in the ghost identification component, we proposed a mesh-based image difference method to detect significantly moving objects as potential ghosts in the two stitched images, and accurately matched the ghost pairs using an object tracking method; second, we designed an image information source selection strategy in the image information source selection component to generate appropriate image information sources to replace the located ghosts; finally, we designed a method based on Poisson fusion in the ghost elimination component to further optimize the image block and produce a natural mosaic image. We validated the proposed method on the VIVID data set and compared it with Homo, ELA, SPW and APAP using the PSNR evaluation indicator. The comparative results show that the proposed method is superior to the other methods. The proposed method has some limitations in threshold selection; in future work, an adaptive threshold adjustment method will be designed to select an appropriate threshold for each group of test images to improve the robustness of the method.
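Poisson fusion, as named in the ghost elimination component, can be sketched with a minimal Jacobi iteration: inside the replacement mask the solution matches the source patch's gradients while the boundary stays fixed to the target mosaic. This is a generic textbook sketch of the technique, not the paper's actual implementation; all names and defaults are illustrative.

```python
def poisson_blend(target, source, mask, iters=500):
    """Minimal Jacobi-iteration sketch of Poisson image fusion.

    target, source: equally sized 2-D grayscale images (lists of rows).
    mask: 2-D list, 1 inside the region to replace (must be strictly
    interior, i.e., not touching the image border), 0 elsewhere.
    Returns a new image: target outside the mask; inside the mask, pixels
    whose Laplacian matches the source patch (seamless replacement).
    """
    h, w = len(target), len(target[0])
    f = [row[:] for row in target]  # initial guess: the target values
    offs = ((-1, 0), (1, 0), (0, -1), (0, 1))
    for _ in range(iters):
        for i in range(1, h - 1):
            for j in range(1, w - 1):
                if not mask[i][j]:
                    continue
                total = 0.0
                for di, dj in offs:
                    # neighbour value plus the source gradient on this edge
                    total += f[i + di][j + dj]
                    total += source[i][j] - source[i + di][j + dj]
                f[i][j] = total / 4.0
    return f
```

Because only gradients of the source are enforced, the pasted region inherits the target's boundary intensities, which is what removes the visible seam that a direct copy-paste of the replacement region would leave.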