
This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/).

High-quality inner FoV (field of view) stitching is currently a prerequisite step for the photogrammetric processing and application of image data acquired by spaceborne TDI CCD cameras. After reviewing the technical development of this issue, we present an inner FoV stitching method based on sensor geometry and a projection plane in object space, in which the geometric sensor model of spaceborne TDI CCD images is used to establish image point correspondence between the stitched image and the TDI CCD images, using an object-space projection plane as the intermediary. In this study, first, the rigorous geometric sensor model of the TDI CCD images is constructed. Second, the principle and implementation of the stitching method are described. Third, panchromatic high-resolution (HR) images of the ZY-1 02C satellite and triple linear-array images of the ZY-3 satellite are utilized to validate the correctness and feasibility of the method. Fourth, the stitching precision and geometric quality of the generated stitched images are evaluated. All the stitched images reached sub-pixel stitching precision, and their geometric models can be constructed with no loss of geometric precision. Experimental results demonstrate that the method introduces only small image distortion when on-orbit geometric calibration of the satellite sensors is available. Overall, the new method provides a novel solution for inner FoV stitching of spaceborne TDI CCD images, in which all the sub-images are projected to object space based on the sensor geometry, performing indirect image geometric rectification along and across the target trajectory. At present, this method has been successfully applied in the daily processing systems for the ZY-1 02C and ZY-3 satellites.

Time delay and integration (TDI) CCD is a new type of photosensitive device featuring high sensitivity and a high signal-to-noise ratio, and it has been widely adopted in the design of small-aperture, lightweight, high-resolution spaceborne optical cameras [

It is necessary to obtain a comparatively wide image swath by enlarging the field of view of a spaceborne camera that works in linear-array pushbroom imaging mode. Since the number of CCD detectors on each single sensor line is technically restricted by the manufacturers, three or more sensor lines are combined to constitute a large field of view (FoV) for a camera. However, it would be unwise to place multiple sensor lines on the focal plane end-to-end to directly form a complete sensor line, due to the restrictions of installation precision, swath width in the cross-track direction, and the physical structure of the outer cover of each sensor line, especially in the case of the small array structure of each TDI CCD sensor [

Currently, two innovations are often adopted by spaceborne TDI CCD cameras for a wide FoV. The first one is to place the multiple sensor lines on the focal plane in a non-collinear way, as illustrated in

Available results indicate that, for TDI CCD cameras, several factors, such as the position of sensor lines on the focal plane, the lens distortion of the optical system, drift angle deviation, satellite platform jitter, the integral time variation of scanlines, large geographic relief,

In a previous work, we analyzed the geometric characteristics of CBERS-02B HR camera with the inner correlation of three sub-images acquired, and summarized two major technical routes,

The image-space-oriented route aims to establish a stitching model from the intrinsic characteristics of TDI CCD images,

However, the object-space-oriented route aims to establish a corresponding relationship between the pixels of a stitched image and the pixels of TDI CCD images from the perspective of sensor geometry, with the object space as an intermediary [

Under these circumstances, we present a novel inner FoV stitching method based on sensor geometry and a projection plane in object space, where the geometric sensor model of spaceborne TDI CCD images is used to associate the stitched image with the TDI CCD images, using the object-space projection plane as the intermediary. In this method, the geometric sensor model of the original images is established first; then the principle, error sources, and workflow of the method are described in detail. In addition, triple linear-array images of the ZY-3 satellite and panchromatic HR images of the ZY-1 02C satellite are utilized to validate the correctness and feasibility of the method. Finally, the stitching precision and geometric quality of the generated stitched images are assessed, including a comparison between the method of this paper and the virtual CCD line based method, another mainstream object-space-oriented method. In general, this method provides a new solution for inner FoV stitching of spaceborne TDI CCD images. The key of the method is to project the original image to object space based on the sensor geometry, performing an indirect image geometric rectification along and across the direction of the target trajectory. It can be applied to spaceborne TDI CCD images whether the focal plane layout of the multiple sensor lines is collinear or not.

The ZY-3 satellite carries three high-resolution panchromatic TDI CCD cameras: a forward-viewing, a nadir-viewing, and a backward-viewing camera. Both the forward and backward cameras are formed by four CCDs, and the nadir camera is formed by three CCDs (

ZY-1 02C satellite carries two identical high-resolution panchromatic TDI CCD cameras (HR1 and HR2) with the same focal plane assembly as CBERS-02B HR camera. As illustrated in

The geometric characteristics of panchromatic high-resolution TDI CCD cameras onboard ZY-3 and ZY-1 02C are listed in

As we know, to meet the requirement of high-precision geometric processing and applications of high-resolution satellites, it is a key technical issue to establish a rigorous geometric sensor model for the raw image products [

For each scanline of a sub-image, the position and orientation parameters at the exposure moment can be interpolated from the provided auxiliary data of time, ephemeris, and attitude [ ], and the look direction of each detector is described by its two pointing angles (ψ_x, ψ_y).
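The interpolation of position (and, analogously, attitude) at each scanline's exposure time can be sketched as below. This is a minimal linear-interpolation sketch with hypothetical data layout and function names, not the paper's implementation; operational systems typically use higher-order (e.g. Lagrange) interpolation.

```python
import bisect

def interpolate_ephemeris(times, positions, t):
    """Linearly interpolate the satellite position at exposure time t.

    times: sorted sample times (s); positions: matching (X, Y, Z) tuples
    in the WGS84 frame.  Data layout here is hypothetical.
    """
    i = bisect.bisect_right(times, t)
    i = min(max(i, 1), len(times) - 1)      # clamp to a valid segment
    t0, t1 = times[i - 1], times[i]
    w = (t - t0) / (t1 - t0)                # interpolation weight in [0, 1]
    p0, p1 = positions[i - 1], positions[i]
    return tuple((1 - w) * a + w * b for a, b in zip(p0, p1))
```

The same routine can be reused for attitude samples when they are stored as interpolable parameters.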

In practice, based on the auxiliary data provided, the rigorous geometric sensor model is constructed in the camera coordinate system O_c-X_cY_cZ_c, where the look vector of a detector with pointing angles (ψ_x, ψ_y) is expressed as (tan ψ_y, tan ψ_x, -1)^T. According to the installation position of each detector, the two pointing angles are calculated in the camera coordinate system under central projection geometry as

As previous studies indicate, it is acceptable to replace the on-orbit calibration of interior orientation parameters (IOP) by directly modeling the pointing angles [ ]. For the j-th CCD segment, the pointing angles of detector s are modeled as third-order polynomials, ψ_x(s) = a_0j + a_1j·s + a_2j·s² + a_3j·s³ and ψ_y(s) = b_0j + b_1j·s + b_2j·s² + b_3j·s³, where a_0j, a_1j, a_2j, a_3j, b_0j, b_1j, b_2j, b_3j are the third-order polynomial coefficients.
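Evaluating the third-order pointing-angle polynomials for a given detector number is straightforward; the sketch below uses hypothetical coefficient values purely for illustration.

```python
def pointing_angles(s, a, b):
    """Evaluate the third-order pointing-angle model for detector number s
    of one CCD segment.  a and b are the coefficient tuples
    (a0, a1, a2, a3) and (b0, b1, b2, b3); values in use are hypothetical.
    """
    psi_x = a[0] + a[1] * s + a[2] * s**2 + a[3] * s**3
    psi_y = b[0] + b[1] * s + b[2] * s**2 + b[3] * s**3
    return psi_x, psi_y
```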

An original image is formed by directly aligning and combining the sub-images according to the imaging time,

We propose here an inner FoV stitching method for spaceborne TDI CCD images based on sensor geometry and a projection plane in object space. The ground surface is approximated by an object-space projection plane at the mean elevation, and the ground cover of a stitched image is denoted as the area of interest (AOI). The AOI is segmented into regular grid units along and across the direction of the target trajectory, as shown in
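The segmentation of the AOI into regular grid units can be sketched as below, assuming (as the method prescribes) a grid size equal to the ground sample distance; the function name and parameters are our own.

```python
import math

def segment_aoi(extent_along, extent_across, gsd):
    """Segment the AOI into regular grid units whose size equals the GSD.

    extent_along / extent_across: AOI extents (m) along and across the
    target trajectory.  Returns (rows, cols): the numbers of grid units,
    which also become the row/column counts of the stitched image.
    """
    rows = int(math.ceil(extent_along / gsd))
    cols = int(math.ceil(extent_across / gsd))
    return rows, cols
```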

Theoretically speaking, this method and the virtual CCD line based method (

Accordingly, two factors shall be considered regarding the error sources of such a method: one is the inconsistent geometric accuracy of adjacent sub-images; the other is the height difference between the actual ground surface and the projection plane.

Here, we again take the typical non-collinear TDI CCD layout as an instance, to give a more visually distinct illustration.

The first factor is associated with sensor geometric constraints between the adjacent sub-images that are the guarantees of a strict and high-precision inner FoV stitching process. As illustrated in

In practice, various observation errors in the auxiliary data would incur the inconsistent geometric precision of adjacent sub-images. By experience, these errors can be compensated to a large extent through on-orbit geometric calibration of cameras periodically [

The second factor, the height difference between the actual ground surface and the projection plane, is analyzed as follows: ψ_y1 and ψ_y2 represent the nominal pointing angles of CCD1 and CCD2 along the sensor pushbroom direction, respectively,

For the HR camera of ZY-1 02C, according to the focal plane assembly (

Apparently, the stitching error is proportional to the difference between the tangent values of the adjacent pointing angles under the same terrain condition. Δ
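The proportionality just described can be sketched numerically. The function below computes the displacement (in pixels) caused by a height difference between terrain and projection plane for two adjacent CCDs; the angle and GSD values in the test are hypothetical, not the actual ZY-1 02C camera parameters.

```python
import math

def stitching_error_px(dh, psi_y1, psi_y2, gsd):
    """Stitching displacement (pixels) in the overlap of two adjacent CCDs
    caused by a height difference dh (m) between the actual terrain and
    the object-space projection plane.  psi_y1, psi_y2: nominal pointing
    angles (radians) along the pushbroom direction; gsd: ground sample
    distance (m).  Error grows linearly with dh and with the difference
    of the tangents of the two pointing angles.
    """
    return abs(dh * (math.tan(psi_y1) - math.tan(psi_y2))) / gsd
```

For a fixed pair of pointing angles, halving the height difference halves the error, which is why a mean-elevation projection plane keeps the error sub-pixel over moderate relief.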

As illustrated in

To simplify our description, the coordinate transformations between object space and image space are denoted as follows: F_1 and F_2 denote the direct and inverse forms of the rigorous geometric sensor model of a spaceborne TDI CCD image, respectively; (x_o, y_o)^T denotes the pixel coordinates in the original image; F_3 and F_4 represent the transformation functions from geodetic coordinates to 3-D Cartesian coordinates in the WGS84 frame, and vice versa, respectively; F_5 and F_6 represent the transformation functions from geodetic coordinates to Universal Transverse Mercator (UTM) coordinates in the WGS84 frame, and vice versa, respectively.
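As a concrete instance of the F_3-style transform, the standard closed-form conversion from geodetic coordinates to 3-D Cartesian WGS84 coordinates can be sketched as follows (function and variable names are our own, not from the paper's implementation):

```python
import math

# WGS84 ellipsoid constants
A = 6378137.0              # semi-major axis (m)
E2 = 6.69437999014e-3      # first eccentricity squared

def geodetic_to_ecef(lat_deg, lon_deg, h):
    """Transform geodetic (B, L, H) to 3-D Cartesian WGS84 coordinates."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    # prime vertical radius of curvature at this latitude
    n = A / math.sqrt(1.0 - E2 * math.sin(lat) ** 2)
    x = (n + h) * math.cos(lat) * math.cos(lon)
    y = (n + h) * math.cos(lat) * math.sin(lon)
    z = (n * (1.0 - E2) + h) * math.sin(lat)
    return x, y, z
```

The inverse (F_4) is usually computed iteratively or with Bowring's closed-form approximation.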

Define the object-space projection plane O-X_utmY_utm in the UTM coordinate system at the average elevation;

Choose the pixels at the two ends of the first column of the original image, and determine the positions of their ground points on the projection plane by

Suppose O-X_utm′Y_utm′ is obtained by rotating O-X_utmY_utm so that the X_utm′ axis runs along the direction of the target trajectory; a planar point (X_utm, Y_utm)^T is transformed into O-X_utm′Y_utm′ by the rotation matrix R;
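The rotation into the along-track frame is a plain 2-D coordinate rotation; a minimal sketch, with a hypothetical azimuth parameter describing the trajectory direction, is:

```python
import math

def rotate_to_along_track(x_utm, y_utm, azimuth):
    """Rotate UTM plane coordinates so the new X' axis runs along the
    target trajectory.  azimuth: trajectory direction (radians) measured
    in the original X-Y frame (hypothetical parameterization).
    """
    c, s = math.cos(azimuth), math.sin(azimuth)
    x_p = c * x_utm + s * y_utm    # along-track coordinate
    y_p = -s * x_utm + c * y_utm   # across-track coordinate
    return x_p, y_p
```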

Find the coverage of the original image,

Segment the AOI into regular grid units with a uniform size almost identical to the ground sample distance (GSD) of the original image; the grid units are then associated with the pixels of the stitched image one by one. That is, the numbers of image rows and columns equal the numbers of grid units along and across the direction of the target trajectory, respectively;

Suppose (x_1, y_1)^T denotes the pixel coordinates in the stitched image of the grid unit whose planar coordinates are (X_utm′, Y_utm′)^T in O-X_utm′Y_utm′;

Thus, the coordinate transformation relation between the pixels of a stitched image and the grid units of the AOI is established by

Since for a certain grid unit with ground coordinates (X_utm, Y_utm)^T
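The one-to-one pixel-to-grid correspondence described above amounts to a linear mapping; a small sketch, assuming row indices increase along the trajectory and the functions and origin parameters are our own, is:

```python
def pixel_to_ground(row, col, x0, y0, gsd):
    """Map a stitched-image pixel (row, col) to the center of its grid
    unit in the rotated ground frame; (x0, y0) is the AOI origin and gsd
    the grid size (assumed equal to the image GSD)."""
    x = x0 + (row + 0.5) * gsd
    y = y0 + (col + 0.5) * gsd
    return x, y

def ground_to_pixel(x, y, x0, y0, gsd):
    """Inverse mapping from ground coordinates back to pixel indices."""
    return int((x - x0) / gsd), int((y - y0) / gsd)
```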

Note that, when the geographic relief is large enough to affect the sub-pixel stitching precision, as analyzed in Section 3.2.2, additional geometric rectification shall be done in the image-space transformation model for any local parts of the stitched image [

Let (x_1, y_1)^T denote the pixel coordinates of a pixel in the stitched image and (X_utm, Y_utm)^T the ground coordinates of its corresponding grid unit;

Perform a back-projection calculation to obtain the corresponding pixel coordinates (x_o, y_o)^T in the original image;

In regard to the obtained pixel coordinates (x_o, y_o)^T in the original image,
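Since the back-projected location (x_o, y_o)^T is generally non-integer, a gray-value resampling step is needed. A minimal bilinear-interpolation sketch (one common choice; the paper does not mandate a specific kernel) is:

```python
def bilinear_sample(img, x, y):
    """Resample the gray value of an image (list of rows) at a
    non-integer back-projected location (x = row, y = column) by
    bilinear interpolation of the four surrounding pixels."""
    i0, j0 = int(x), int(y)
    i1 = min(i0 + 1, len(img) - 1)       # clamp at the image border
    j1 = min(j0 + 1, len(img[0]) - 1)
    dx, dy = x - i0, y - j0              # fractional offsets
    top = (1 - dy) * img[i0][j0] + dy * img[i0][j1]
    bot = (1 - dy) * img[i1][j0] + dy * img[i1][j1]
    return (1 - dx) * top + dx * bot
```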

Set up virtual control points in object space using the strict geometric model, and calculate the coefficients of the RFM by least-squares adjustment [
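The "terrain independent" least-squares fit can be illustrated in miniature. The sketch below fits a first-order rational function (denominator constant fixed to 1) to virtual control points by linearizing the rational form into one linear system; real RPCs use third-order polynomials with 78 coefficients, so this is a simplified stand-in, not the paper's solver.

```python
import numpy as np

def fit_rfm_1d(obj, img):
    """Fit img = P(X,Y,Z) / Q(X,Y,Z) with first-order P, Q and Q's
    constant term fixed to 1, by linear least squares.
    obj: (n, 3) normalized object-space coords; img: (n,) normalized
    image coordinate.  Returns (a0, a1, a2, a3, b1, b2, b3)."""
    obj = np.asarray(obj, float)
    img = np.asarray(img, float)
    ones = np.ones(len(img))
    # linearized observation:  P - img * (Q - 1) = img
    A = np.column_stack([ones, obj, -img[:, None] * obj])
    coef, *_ = np.linalg.lstsq(A, img, rcond=None)
    return coef

def eval_rfm_1d(coef, X, Y, Z):
    """Evaluate the fitted rational function at one object point."""
    num = coef[0] + coef[1] * X + coef[2] * Y + coef[3] * Z
    den = 1.0 + coef[4] * X + coef[5] * Y + coef[6] * Z
    return num / den
```

In the full RFM, one such rational function is fitted for each of the line and sample coordinates.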

Thus, the strict geometric model of a stitched image, which relates its pixel coordinates (x_1, y_1)^T to object-space coordinates, is established.

To validate the correctness, feasibility, and advantages of the proposed stitching method, we carried out experiments on triple linear-array images of the ZY-3 satellite and HR images of the ZY-1 02C satellite. We chose two scenes of image data captured by the ZY-3 triple linear-array panchromatic camera, covering the Dengfeng and Anyang districts of Henan Province, each with different geographical conditions. Two corresponding scenes captured by the ZY-1 02C HR camera were also selected. In addition, a digital aerial orthophoto map (DOM) and a digital elevation model (DEM) were used as references to automatically extract the reference points used to evaluate the geometric quality of the stitched images.

To evaluate the stitching precision, a visual check is carried out on the generated stitched images. A direct approach is to observe the continuity of ground features (roads, buildings, and

To further validate the advantages of our method, in this section, geometric quality of the stitched images is assessed from two aspects: one is about fitting precision of RFM compared with strict geometric model, the other is the internal geometric accuracy of each stitched image.

As mentioned in Section 3.3.4, the RFM parameters of a stitched image are calculated in a “terrain independent” manner [

For each image data, the precision was satisfactory. Taking the Dengfeng image data as an example, the fitting precision of RFM reached 0.01 pixel level or better (shown as

In addition, the geometric accuracy of each stitched image is assessed by the following steps:

First, a number of evenly distributed reference points are extracted from the stitched image by automatic image matching against the high-resolution reference data provided.

Second, all the reference points are used as check points to evaluate the geometric accuracy of the stitched image. RFM-based back-projection is performed to determine the pixel coordinates of the reference points; the differences from their actual pixel coordinates then give the image location errors, which are analyzed statistically.

Third, based on the control points, error compensation for RFM is performed, during which the image-space affine transformation model [
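The image-space affine compensation and the error statistics in the steps above can be sketched as follows; function names and the residual statistic are our own, and the planimetric RMSE shown here is one common way to summarize location errors.

```python
import numpy as np

def affine_compensation(pred, actual):
    """Estimate an image-space affine correction dx = e0 + e1*x + e2*y
    (and the analogue for y) from control points, then return the
    compensated pixel coordinates.
    pred: (n, 2) RFM back-projected coords; actual: (n, 2) measured."""
    pred = np.asarray(pred, float)
    actual = np.asarray(actual, float)
    A = np.column_stack([np.ones(len(pred)), pred])   # design matrix [1, x, y]
    # one least-squares fit per coordinate of the residual
    coef, *_ = np.linalg.lstsq(A, actual - pred, rcond=None)
    return pred + A @ coef

def planimetric_rmse(residuals):
    """Root-mean-square of the 2-D point displacement magnitudes."""
    r = np.asarray(residuals, float)
    return float(np.sqrt((r ** 2).sum(axis=-1).mean()))
```

Evaluating `planimetric_rmse` on the check-point residuals before and after `affine_compensation` reproduces the kind of before/after comparison reported in the tables.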

The statistical standard deviations of the image location errors before and after RFM compensation are listed in

High-quality inner FoV (field of view) stitching is currently a prerequisite for the photogrammetric processing and application of image data acquired by spaceborne TDI CCD cameras. In this paper, we presented a novel inner FoV stitching method based on sensor geometry and a projection plane in object space, where all TDI CCD images are projected to object space based on the sensor geometry, in the manner of indirect image geometric rectification along and across the direction of the target trajectory. Experiments with image data of the triple linear-array camera onboard the ZY-3 satellite and the HR cameras onboard the ZY-1 02C satellite were performed to demonstrate the correctness, feasibility, and advantages of our method. The stitching precision and geometric quality of the generated stitched images were objectively assessed; moreover, a comparison was made between the method of this paper and the virtual CCD line based method, another mainstream object-space-oriented method. The experimental results show that: (1) sub-pixel stitching precision can be reached for a seamless visual effect; (2) in regard to the geometric model of a stitched image, the RFM (rational function model) can be adopted as a good replacement for the strict geometric model because a high fitting precision is attainable; (3) the method of this paper has great potential to eliminate systematic image geometric distortion during the stitching process, so that the stitched images preserve good geometric quality with relatively small internal distortion; (4) compared with the virtual CCD line based method, our method is not only able to achieve the same level of stitching quality, but is also more straightforward in its processing strategy, since it directly utilizes the projection plane in object space instead of the virtual CCD line on the focal plane. Overall, this method provides a new solution for high-precision inner FoV stitching of spaceborne TDI CCD images.
It has already been applied in the daily processing systems of the ZY-1 02C and ZY-3 satellites and shows promise for wider practical use.

The authors thank the editors and the reviewers for their constructive and helpful comments for substantial improvement of this paper. This research is financially supported by the National Basic Research Program of China (The 973 Program) (No. 2014CB744201, 2012CB719901, and 2012CB719902); the National High Technology Research and Development Program of China (No. 2011AA120203); the Natural Science Foundation of China (Nos. 41271394, 41371430, and 40901209); the State Science and Technology Support Program (Nos. 2011BAB01B02, 2011BAB01B05, and 2012BAH28B04-04); and a Foundation for the Author of National Excellent Doctoral Dissertation of PR China (FANEDD, No. 201249).

The first author conceived the study and designed the experiments; the second author developed technical flow of the method and wrote the program; the third author helped optimize the program and perform the experiments; the fourth author helped perform the experiments; the fifth author helped perform the analysis with constructive discussions; the sixth author contributed to manuscript preparation.

The authors declare no conflict of interest.

Layout of the multiple sensor lines in the field of TDI CCD cameras. (

Sensor geometry for the TDI CCD images. (

Two ways of establishing pixel corresponding relations between the stitched image and the original image. (

Analysis on the error sources: (

Workflow of the stitching method based on sensor geometry and projection plane in object space.

Relation between the pixels of a stitched image and grid units of AOI.

Visual check of the stitching results. (

Geometric parameters of high-resolution panchromatic TDI CCD cameras onboard ZY-3 and ZY-1 02C satellites.

| Parameter | ZY-3 Triple Linear-Array Camera | ZY-1 02C HR Camera |
| Pixel Size (μm) | Nadir: 7; Forward: 10; Backward: 10 | 10 |
| Focal Length (mm) | 1700 | 3300 |
| Ground Sample Distance (m) | Nadir: 2.1; Forward: 3.5; Backward: 3.5 | 2.36 |
| No. of CCD Detectors | Nadir: 8192 × 3; Forward: 4096 × 4; Backward: 4096 × 4 | 4096 × 3 |
| Swath Width (km) | Nadir: 51; Forward: 52; Backward: 52 | 27 |

Specific information about the experimental data.

| Sensor Name | ZY-3 Triple linear-array | ZY-3 Triple linear-array | ZY-1 02C HR1/HR2 | ZY-1 02C HR1/HR2 |
| Image Size (Pixels) | Nadir: 24,576 × 24,576 (8192 × 3); Forward: 16,384 × 16,384 (4096 × 4); Backward: 16,384 × 16,384 (4096 × 4) | (same) | 17,575 × 12,288 (4096 × 3) | (same) |
| Location | Dengfeng, Henan, China | Anyang, Henan, China | Dengfeng, Henan, China | Anyang, Henan, China |
| Range | 51 km × 51 km | 51 km × 51 km | 27 km × 27 km | 27 km × 27 km |
| Acquisition Date | 23 March 2012 | 4 June 2013 | 7 April 2013 | 15 March 2013 |
| Terrain Type | Mountainous and hilly (mean altitude: 340 m; max. altitude: 1450 m) | Plain | Mountainous and hilly (mean altitude: 340 m; max. altitude: 1450 m) | Plain |
| Reference Data: DOM | GSD: 1 m; planimetric accuracy (RMSE): 1 m | GSD: 1 m; planimetric accuracy (RMSE): 0.5 m | GSD: 1 m; planimetric accuracy (RMSE): 1 m | GSD: 1 m; planimetric accuracy (RMSE): 0.5 m |
| Reference Data: DEM | GSD: 5 m; height accuracy (RMSE): 2 m | GSD: 2 m; height accuracy (RMSE): 0.5 m | GSD: 5 m; height accuracy (RMSE): 2 m | GSD: 2 m; height accuracy (RMSE): 0.5 m |

The fitting precision of RFM compared with strict geometric model (in pixel).

| ZY-3 Forward | 0.000311 | 0.000081 | 0.010009 | 0.001687 |
| ZY-3 Nadir | 0.000274 | 0.000128 | 0.000306 | 0.000151 |
| ZY-3 Backward | 0.000323 | 0.000081 | 0.006011 | 0.001076 |
| ZY-1 02C HR1 | 0.000255 | 0.000114 | 0.000285 | 0.000134 |

Standard deviations of image location errors before and after RFM compensation (in pixel).

| 1 | ZY-3 Forward | 0 | 165 | 0.54 | 1.87 |
| | | 85 | 80 | | |
| | ZY-3 Nadir | 0 | 157 | 1.37 | 1.19 |
| | | 74 | 83 | | |
| | ZY-3 Backward | 0 | 170 | 0.49 | 0.46 |
| | | 87 | 83 | | |
| 2 | ZY-3 Forward | 0 | 110 | 0.63 | 0.86 |
| | | 50 | 55 | | |
| | ZY-3 Nadir | 0 | 83 | 1.54 | 1.40 |
| | | 44 | 39 | | |
| | ZY-3 Backward | 0 | 56 | 0.55 | 0.78 |
| | | 32 | 24 | | |
| 3 | ZY1-02C HR1 | 0 | 209 | 3.75 | 2.17 |
| | | 102 | 107 | | |
| 4 | ZY1-02C HR2 | 0 | 52 | 1.46 | 2.56 |
| | | 27 | 25 | | |
| | ZY-3 Forward | 0 | 146 | 0.56 | 0.50 |
| | | 70 | 76 | | |
| | ZY-3 Nadir | 0 | 172 | 0.66 | 0.63 |
| | | 90 | 82 | | |
| | ZY-3 Backward | 0 | 142 | 0.49 | 0.58 |
| | | 60 | 82 | | |
| | ZY-3 Forward | 0 | 105 | 0.63 | 0.83 |
| | | 50 | 55 | | |
| | ZY-3 Nadir | 0 | 89 | 0.76 | 1.07 |
| | | 40 | 49 | | |
| | ZY-3 Backward | 0 | 115 | 0.78 | 1.70 |
| | | 60 | 55 | | |
| | ZY1-02C HR1 | 0 | 197 | 3.84 | 2.02 |
| | | 90 | 107 | | |
| | ZY1-02C HR2 | 0 | 41 | 3.52 | 2.91 |
| | | 20 | 21 | | |