Article

Epipolar Resampling of Cross-Track Pushbroom Satellite Imagery Using the Rigorous Sensor Model

by
Mojtaba Jannati
*,
Mohammad Javad Valadan Zoej
and
Mehdi Mokhtarzade
Faculty of Geodesy and Geomatics, K. N. Toosi University of Technology, Tehran 19667-15433, Iran
*
Author to whom correspondence should be addressed.
Sensors 2017, 17(1), 129; https://doi.org/10.3390/s17010129
Submission received: 16 October 2016 / Revised: 14 December 2016 / Accepted: 29 December 2016 / Published: 11 January 2017
(This article belongs to the Section Remote Sensors)

Abstract:
Epipolar resampling aims to eliminate the vertical parallax of stereo images. Due to the dynamic nature of the exterior orientation parameters of linear pushbroom satellite imagery and the complexity of reconstructing the epipolar geometry using rigorous sensor models, no epipolar resampling approach based on these models has been proposed so far. In this paper, it is shown for the first time that the orientation of the instantaneous baseline (IB) of conjugate image points (CIPs) in linear pushbroom satellite imagery can be modeled with high precision in terms of the rows- and columns-numbers of the CIPs. Taking advantage of this feature, a novel approach is then presented for epipolar resampling of cross-track linear pushbroom satellite imagery. The proposed method is based on the rigorous sensor model. As the instantaneous position of the sensors remains fixed, the digital elevation model of the area of interest is not required in the resampling process. Experimental results obtained from two pairs of SPOT and one pair of RapidEye stereo imagery with different terrain conditions show that the proposed epipolar resampling approach achieves superior accuracy, as the remaining vertical parallaxes of all CIPs in the normalized images are close to zero.

1. Introduction

The parallax values of the conjugate image points (CIPs) along the baseline of stereo images (horizontal parallax) reveal valuable information about the height of objects, whereas the vertical parallaxes convey no useful information and can even disrupt the stereo-viewing process. The main objective of the epipolar resampling of stereo images is to generate normalized images in which the CIPs have no vertical parallax and their horizontal parallaxes are linearly proportional to the heights of the corresponding points in the object space. These two main properties of epipolar geometry (EG) enable stereo viewing, automatic image matching, digital elevation model (DEM) generation, and stereo measurements [1,2].
The EG of frame images is well known, and there are well-established procedures for epipolar resampling [3,4]: given the relative orientation parameters (ROP) of the stereo images, the vertical parallaxes of the CIPs can be eliminated by modifying their attitude parameters. To this end, the ROP of the stereo imagery are modified so that all conjugate epipolar lines in the stereo images are simultaneously parallelized with the baseline of the imagery in a common plane [3]. In contrast, linear pushbroom images have a more complicated geometry, where each scan line has its own exterior orientation parameters (EOP) [5]. The availability of GPS/INS information can positively affect the georeferencing of linear pushbroom imagery; however, the dynamic nature of the EOP of these images makes the reconstruction of the EG much more complicated [5,6]. The EG of satellite pushbroom imagery has been investigated in various studies using both rigorous and empirical sensor models [6,7,8,9,10,11,12,13,14,15].
Among the rigorous model-based studies, Gupta and Hartley [7], Kim [5], and Habib et al. [8] investigated the EG of linear array images by applying the Multiple Projection Centers (MPC) model [9]. In these studies, the trajectory of the sensor was assumed to be straight (i.e., the simplest case). The only difference between the model employed by Kim [5] and the models used in the two other studies lies in the way the sensor's orientation parameters were modeled. However, these three studies reached two important common results: (1) rigorously derived epipolar lines in scenes captured by linear array scanners are not straight; rather, their shape is hyperbola-like; and (2) each point on a rigorously defined epipolar curve has its own specific epipolar curve in the other image (i.e., no conjugate epipolar curves exist). In another study, Lee and Park [10] derived the equation of epipolar lines using a simplified pushbroom sensor model, which is in line with the results of the previous studies. Notably, no epipolar resampling method based on rigorous sensor models has been proposed for linear pushbroom scenes to date.
Among the empirical model-based studies, Morgan et al. [11] used the Parallel Projection (PP) model for epipolar resampling of linear pushbroom imagery, while Oh et al. [12], Wang et al. [13], and Koh and Yang [14] employed the Rational Function Model (RFM). In the former study, for better compliance of the imaging geometry with the PP model, the perspective projection of the cross-track imaging component of the original images must first be transformed to a parallel one [15]. Such a transformation assumes a flat terrain or a DEM, as well as knowledge of the scanner roll angle [11,16,17,18]. Thereafter, transformation parameters from the original to the normalized images can be estimated using a PP model based on a set of CIPs. The three latter studies are based on two features of epipolar curves, namely approximate straightness and local conjugacy [13,19]. Oh et al. [12] determined the direction of approximately conjugate epipolar lines (ACEL) in the stereo image space. Because the sampling distance is kept fixed in the image space, the horizontal parallaxes are linearly proportional to the ground heights, but there is no guarantee that the ground axes will be orthogonal [14]. In contrast, Wang et al. [13] determined the direction of the ACEL on a virtual horizontal plane in the object space. Because the sampling distance is kept fixed in the object space, the ground axes are orthogonal, but the horizontal parallaxes may not be proportional to the ground heights [14]. Koh and Yang [14] determined the direction of the ACEL in the image space and kept the sampling distance fixed in the ground space; this model therefore combines the advantages of the two previous models.
In comparison with the Morgan et al. [11] model, the main advantage of RFM-based models is that they require no CIPs. However, the lack of any physical or geometrical interpretation of the rational polynomial coefficients (RPCs) hinders theoretical investigation of the nature of the EG. Besides, directly determined RPCs are subject to biases in the scanner's IOP or EOP [20,21,22], and many ground control points (GCPs) are required to determine these parameters indirectly [23], which limits the use of rational functions. In contrast, the PP parameters have a geometrical interpretation, and therefore this model can explain the EG of linear pushbroom imagery more effectively. However, the PP model does not fully comply with the imaging geometry of linear pushbroom imagery, and only rigorous models describe the scene formation as it actually happens. Therefore, rigorous models are the most accurate ones [8,24] and have been adopted in a variety of applications [25,26,27,28]. Consequently, it is theoretically and practically significant to develop an approach for epipolar resampling of linear pushbroom imagery based on rigorous sensor models.
In this paper, a novel epipolar resampling approach for cross-track linear pushbroom satellite imagery is proposed based on the MPC model. In the proposed method, the vertical parallaxes of the CIPs are eliminated by modifying the instantaneous attitude of the stereo imagery. The required new attitude parameters are computed based on the direction of the IB of each pair of CIPs. Because the instantaneous position of the sensors remains unchanged, no DEM of the ground coverage is needed in the resampling process.

2. Theoretical Background

2.1. Linear Pushbroom Imaging

In the case of a frame camera, the imaging process is performed by a 2D array of detectors (CCD or CMOS) arranged within its focal plane. In contrast, linear pushbroom scanners have only a long row of detectors within their focal plane; consequently, they capture a 1D (linear) image in a given exposure [29]. A sequence of linear images in the flying direction can be captured by moving the sensor, which forms the 2D image frame [29,30]. Therefore, every 1D image is associated with the specific positional and orientation parameters of the sensor at the time of exposure; that is, each 1D image has a distinct set of EOP. This different imaging geometry leads to different mathematical modeling of linear pushbroom imagery. To this end, a wide variety of mathematical models have been developed, which generally fall into two main categories: rigorous and empirical models [31,32]. Reconstruction of the geometry of an image at the time of imaging is the backbone of rigorous models. Two well-known approaches in this area are the Orbital Parameters model [27,33,34] and the Multiple Projection Centers model [6,7,8,9,35]. In the first case, the trajectory of the sensor is modeled based on Keplerian orbital parameters, and time-dependent (temporal) polynomials are used to estimate the attitude parameters. In the second type of rigorous model, both the sensor trajectory and the attitude parameters are estimated using temporal polynomials. In contrast to these models, no physical or geometrical interpretation stands behind empirical models; they estimate the unknown parameters of the model by fitting it to the rigorous sensor model or to a set of GCPs. Numerous research studies investigating empirical models have been carried out in the last decade [36,37,38,39].

2.2. MPC Model

In the MPC model, each scanline of linear pushbroom imagery is treated as a central perspective image with a set of unique EOP. Therefore, each scanline has a distinct Image Reference Frame (IRF), whose x and y axes are directed toward the satellite's instantaneous velocity vector and along the array of the sensor's detectors, respectively. The origin of the IRF is the principal point in the image space, which is usually considered to be at the middle of the scanline (Figure 1a). In this model, any arbitrary Ground Reference Frame (GRF) can be used to express the object-space coordinates. For convenience in later computations, it is usually adopted according to a map coordinate system (e.g., UTM).
In order to establish the relation between the IRF and the GRF, two mediator coordinate systems are defined: the Sensor Reference Frame (SRF) and the Platform Reference Frame (PRF). The SRF is a pseudo-3D coordinate system with its origin at the exposure station. Its x and y axes coincide with the corresponding axes of the IRF, and its z axis passes through the instantaneous exposure station (Figure 1b). The relationship between the IRF and the SRF is established based on the IOP of the sensor. The PRF shares its origin with the SRF, and its axes are locked to the body of the platform (Figure 1c).
As in aerial photogrammetry, the sensor is assumed to be locked to the body of the platform in the MPC model. Therefore, the axes of the SRF and the PRF coincide during the acquisition time, and the 3D rotation matrix required to transform from the SRF to the PRF is an identity matrix, given by Equation (1):
\[ [R_{\mathrm{Attitude}}] = I_3 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}, \tag{1} \]
where [RAttitude] is a 3D rotation matrix from the SRF to the PRF.
Transformation from the PRF to the GRF is performed using a 3D rotation matrix through modeling the orientation parameters of the sensor (Equation (2)):
\[ [R_{\mathrm{Orientation}}] = R_1(\omega_t) \times R_2(\phi_t) \times R_3(\kappa_t), \tag{2} \]
where ωt, ϕt, and κt are the orientation angles of the sensor around the x, y and z axes of the GRF, respectively. Rr (for r = 1, 2, 3) is a 3D rotation matrix around the r-th axis of the GRF. Finally, [ROrientation] is a 3D rotation matrix from the PRF to the GRF. In practice, the orientation angles ωt, ϕt, and κt are modeled as time-dependent polynomials, given by Equation (3):
\[
\begin{aligned}
\omega_t &= \omega_0 + \omega_1 t + \omega_2 t^2 + \cdots \\
\phi_t &= \phi_0 + \phi_1 t + \phi_2 t^2 + \cdots \\
\kappa_t &= \kappa_0 + \kappa_1 t + \kappa_2 t^2 + \cdots
\end{aligned} \tag{3}
\]
where t is the time parameter which can be considered equivalent to the rows-number of the image points. Obviously, the type and number of essential terms included in the polynomials depend on the perturbations that occurred during image formation [29].
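Equation (3) is simply a polynomial in the scan-line time; as a minimal sketch (the function name and the coefficient values are hypothetical):

```python
def attitude_angle(t, coeffs):
    """Evaluate one time-dependent attitude polynomial of Eq. (3),
    e.g. omega_t = omega_0 + omega_1*t + omega_2*t**2 + ...
    `coeffs` lists the coefficients from the constant term upward."""
    return sum(c * t**r for r, c in enumerate(coeffs))

# Hypothetical coefficients for a nearly constant roll angle (radians):
omega_t = attitude_angle(100.0, [0.02, 1e-6, 0.0])
```

The same routine serves for the pitch and yaw polynomials with their own coefficient lists.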
Given the known relation between the SRF, the PRF and the GRF, the overall structure of the collinearity equation can be expressed using Equation (4):
\[ \begin{pmatrix} X - X_S \\ Y - Y_S \\ Z - Z_S \end{pmatrix}_{\mathrm{GRF}} = \lambda\, [R_{\mathrm{Orientation}}] \times [R_{\mathrm{Attitude}}] \times \begin{pmatrix} x = 0 \\ y \\ -c \end{pmatrix}_{\mathrm{SRF}}, \tag{4} \]
where (x = 0, y) is the coordinate of points in the SRF, c is the principal distance of the sensor, λ is the scale factor, (X, Y, Z) is the object space coordinate in the GRF, and finally, (XS, YS, ZS) is the instantaneous exposure station in the GRF which is modeled by the time-dependent polynomials [29] given by Equation (5):
\[ S_t = \begin{pmatrix} X_S \\ Y_S \\ Z_S \end{pmatrix} = \begin{pmatrix} X_0 \\ Y_0 \\ Z_0 \end{pmatrix} + \begin{pmatrix} X_1 \\ Y_1 \\ Z_1 \end{pmatrix} t + \begin{pmatrix} X_2 \\ Y_2 \\ Z_2 \end{pmatrix} t^2, \tag{5} \]
where Xr, Yr and Zr (for r = 0, 1, 2) are the unknown coefficients of the polynomials, and t is the time parameter which can be considered equivalent to the rows-number of the image points.
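The building blocks of Eqs. (2) and (5) can be sketched as follows. The signs in the elementary rotation matrices follow one common photogrammetric convention and are an assumption here, since the paper does not spell out its convention:

```python
import numpy as np

def R1(a):
    """Rotation about the x axis (assumed sign convention)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0],
                     [0.0,   c,   s],
                     [0.0,  -s,   c]])

def R2(a):
    """Rotation about the y axis."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[  c, 0.0,  -s],
                     [0.0, 1.0, 0.0],
                     [  s, 0.0,   c]])

def R3(a):
    """Rotation about the z axis."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[  c,   s, 0.0],
                     [ -s,   c, 0.0],
                     [0.0, 0.0, 1.0]])

def R_orientation(omega_t, phi_t, kappa_t):
    """Eq. (2): [R_Orientation] = R1 @ R2 @ R3 for one scan line."""
    return R1(omega_t) @ R2(phi_t) @ R3(kappa_t)

def exposure_station(t, S0, S1, S2):
    """Eq. (5): second-order polynomial trajectory of the sensor."""
    S0, S1, S2 = (np.asarray(v, float) for v in (S0, S1, S2))
    return S0 + S1 * t + S2 * t**2
```

Together, `R_orientation(...)` and `exposure_station(...)` give the instantaneous EOP of the scan line at time t.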

2.3. The EG of Linear Pushbroom Images

The main difference between frame-type cameras and linear pushbroom sensors is that each scanline of linear imagery has a distinct perspective center. As a result, there are several epipolar planes on the second scene for a given point p in the first scene [1], and the hyperbola-shaped epipolar curves (instead of epipolar lines) of linear imagery result from the non-coplanarity of these planes [1,8] (Figure 2a). For cross-track stereo coverage, the necessary and sufficient condition for the coplanarity of all epipolar planes and the formation of straight epipolar lines is that the scanlines containing the CIPs be parallel to their IB in a common plane (Figure 2b).
Figure 2b shows that, for a given pair of CIPs, this condition can be established by modifying the attitude parameters of the scanlines containing the CIPs. In this way, the epipolar curves of these CIPs become straight Conjugate Epipolar Lines (CEL), and the plane containing all possible epipolar planes of these points intersects any horizontal plane of the object space in a straight line parallel to their IB (Figure 2b). Hereafter, this direction is called the CEL's direction. Because the two sensors move during the capture of the stereo scenes, changing the rows-number of image points changes the direction of the IB of the CIPs, and hence their CEL's direction (Figure 3a). Moreover, given two distinct points such as p1 and p2 in the i-th scanline of the first scene, their conjugate points will not necessarily lie in a single scanline of the second scene (Figure 3b). Thus, changing the columns-number of image points also changes the direction of the IB of the CIPs and their CEL's direction. From now on, these two changes in the CEL's direction, due to the change of the rows- and columns-numbers of the CIPs, will be called the temporal and spatial variations of the IB of the CIPs, respectively.
The necessary and sufficient condition for eliminating the vertical parallaxes of all CIPs throughout the linear cross-track stereo imagery is that the epipolar lines of all image points be parallel to each other and aligned along the scene rows. Therefore, once the attitude parameters of the stereo scenes have been modified, the vertical parallaxes of all CIPs can be eliminated by applying a complementary rotation. Due to the temporal and spatial variations of the direction of the IB of the CIPs, the required rotation angle is obviously a function of both the rows- and columns-numbers of the CIPs.
Since satellite scenes are usually acquired in a very short time and benefit from a fairly stable attitude [8,29], the temporal and spatial variations of the IB's direction of the CIPs can be modeled using polynomials in both the rows- and columns-numbers of the CIPs. To validate this theoretical consideration, the IB's direction for some pairs of CIPs from a real dataset (the left scene of the Isfahan dataset, introduced in Section 4) was calculated in terms of the three orientation parameters roll (ω), pitch (ϕ), and yaw (κ) of the sensor. The scatter plots of the computed parameters with respect to the rows- and columns-numbers of the CIPs are illustrated in Figure 4.
According to Figure 4, none of the attitude parameters ω, ϕ, and κ can be modeled exactly in terms of only one of the rows- or columns-numbers of the CIPs. Therefore, polynomials in both the rows- and columns-numbers of the CIPs were used to estimate the computed attitude parameters. The scatter plot of the computed and estimated attitude parameters of the IB of the CIPs is shown in Figure 5, which illustrates the quality of the estimation.
For a quantitative assessment, the coefficients of the polynomials fitted to each of the attitude parameters are provided in Table 1, along with their significance values (p-value), standard errors (std-err), coefficients of determination (Pseudo-R²), and the root mean squares of the residuals from the fitted straight lines ($\hat{\sigma}_0$), which confirm the high accuracy of the modeling.
According to Table 1, since the IB's direction of the CIPs behaves systematically, the complementary rotation angle required for parallelizing the epipolar lines of the CIPs, as well as the new attitude parameters required for parallelizing the scanlines containing the CIPs with their IBs, can be modeled using polynomials in both the rows- and columns-numbers of the CIPs.

3. Proposed Method

Suppose two conjugate image points p and p′ lying in the i-th and j-th scanlines of the first and second images, respectively. The proposed epipolar resampling method for cross-track linear imagery consists of three steps:
  • parallelizing scanlines containing the CIPs with their IBs to produce straight epipolar lines,
  • parallelizing the epipolar lines of all CIPs with each other to eliminate the vertical parallax of CIPs, and,
  • correcting the scale of the normalized scenes.
These steps are explained in more detail in the following subsections.

3.1. Producing Straight Epipolar Lines

In the first step, the y axes of the scanlines containing the CIPs should be parallelized with their IB in a common plane by modifying the attitude parameters of the sensor (Figure 2b). To this end, the direction of the IB of the CIPs must be computed first. Given the EOP of the stereo imagery, the exposure stations of the i-th scanline of the first scene and the j-th scanline of the second scene are calculated using Equation (5). Then, the IB of points p and p′ is computed using Equation (6):
\[ \begin{pmatrix} B_X \\ B_Y \\ B_Z \end{pmatrix}_{\mathrm{GRF}} = B_{\mathrm{GRF}} = S_j - S_i, \tag{6} \]
where Si and Sj are the exposure stations of the i-th scan line of the first scene and the j-th scan line of the second scene, respectively; and BGRF is the IB of conjugate points p and p′ in the GRF.
Since the calculated IB is expressed in the GRF, while the new attitude parameters for parallelizing the SRF's y axis of the scanlines are required in the PRF, the vector $B_{\mathrm{GRF}}$ must first be transformed to the PRF of each scanline, see Equation (7):
\[ \begin{pmatrix} B_X^i \\ B_Y^i \\ B_Z^i \end{pmatrix}_{\mathrm{PRF}} = B_{\mathrm{PRF}}^i = [R_{\mathrm{Orientation}}]_i^T \times B_{\mathrm{GRF}}, \tag{7} \]
where $[R_{\mathrm{Orientation}}]_i$ is the rotation matrix $[R_{\mathrm{Orientation}}]$ evaluated with the EOP of the i-th scanline of the first image, T is the transpose operator, $B_{\mathrm{PRF}}^i$ is the baseline of the points p and p′ in the PRF of the i-th scanline of the first image, and $B_X^i$, $B_Y^i$ and $B_Z^i$ are the components of the vector $B_{\mathrm{PRF}}^i$ along the x, y and z axes of the PRF, respectively.
Given $B_{\mathrm{PRF}}^i$, the attitude parameters required for parallelizing the SRF's y axis of the i-th scanline of the first image with the IB of the CIPs can be calculated using Equations (8) and (9):
\[ \omega_n^i = \tan^{-1}\!\left( \frac{B_Z^i}{B_Y^i} \right), \tag{8} \]
\[ \kappa_n^i = \tan^{-1}\!\left( \frac{B_X^i}{\sqrt{(B_Y^i)^2 + (B_Z^i)^2}} \right), \tag{9} \]
where $\omega_n^i$ and $\kappa_n^i$ are the new attitude parameters around the PRF's x and z axes of the i-th scanline of the first image, respectively.
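Equations (6)–(9) can be sketched as a single routine, assuming a rotation matrix for scan line i is already available; the quadrant handling of the arctangents follows the equations literally and should be checked against the adopted rotation convention:

```python
import numpy as np

def new_attitude_angles(S_i, S_j, R_orient_i):
    """Eqs. (6)-(9): instantaneous baseline of a pair of CIPs, rotated
    into the PRF of scan line i, and the angles that parallelize the
    scan line's y axis with it."""
    B_grf = np.asarray(S_j, float) - np.asarray(S_i, float)  # Eq. (6)
    Bx, By, Bz = R_orient_i.T @ B_grf                        # Eq. (7)
    omega_n = np.arctan(Bz / By)                             # Eq. (8)
    kappa_n = np.arctan(Bx / np.hypot(By, Bz))               # Eq. (9)
    return omega_n, kappa_n

# If the baseline already lies along the PRF's y axis, both angles vanish:
w, k = new_attitude_angles([0, 0, 0], [0, 10, 0], np.eye(3))
```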
Moreover, for the coplanarity of the i-th scanline of the first image and the j-th scanline of the second image, it is sufficient that their new attitude parameters around the PRF's y axis be equal. In order to simultaneously account for the off-nadir viewing effect of the sensor, it is recommended that this parameter be set equal to zero, as indicated by Equation (10):
\[ \phi_n^i = 0, \tag{10} \]
where $\phi_n^i$ is the new attitude parameter around the PRF's y axis of the i-th scanline of the first image.
Given the new attitude parameters, the rotation matrix required for parallelizing the SRF's y axis of the i-th scanline of the first image with the IB of the conjugate points p and p′ (i.e., $[R_{\mathrm{Attitude}}^n]_i$) is given by Equation (11):
\[ [R_{\mathrm{Attitude}}^n]_i = R_1(\omega_n^i) \times R_2(\phi_n^i) \times R_3(\kappa_n^i). \tag{11} \]
In practice, the two parameters $\omega_n^i$ and $\kappa_n^i$ are first calculated for a set of CIPs; polynomials of a proper order in both the rows- and columns-numbers of the CIPs are then fitted to them and used in the structure of the rotation matrix $[R_{\mathrm{Attitude}}^n]_i$, Equation (12):
\[
\begin{aligned}
\omega_n(i, l) &= \omega_0^n + \omega_1^n i + \omega_2^n l + \cdots \\
\kappa_n(i, l) &= \kappa_0^n + \kappa_1^n i + \kappa_2^n l + \cdots
\end{aligned} \tag{12}
\]
where i and l are the rows- and columns-numbers of the CIPs, respectively, and $\omega_r^n$ and $\kappa_r^n$ (for r = 0, 1, 2, …) are the unknown coefficients of the polynomials.
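The first-order case of Eq. (12) is an ordinary least-squares problem; a sketch with synthetic, hypothetical data (higher-order terms would simply add columns to the design matrix):

```python
import numpy as np

def fit_plane(rows, cols, values):
    """First-order fit of Eq. (12): value = a0 + a1*i + a2*l,
    solved by least squares over a set of CIPs."""
    rows, cols, values = (np.asarray(v, float) for v in (rows, cols, values))
    A = np.column_stack([np.ones_like(rows), rows, cols])
    coeffs, *_ = np.linalg.lstsq(A, values, rcond=None)
    return coeffs

# Synthetic check with known (hypothetical) coefficients:
i = np.array([0.0, 1.0, 2.0, 3.0])
l = np.array([0.0, 1.0, 0.0, 1.0])
omega_n = 0.01 + 2e-5 * i + 3e-5 * l
a = fit_plane(i, l, omega_n)   # recovers approximately [0.01, 2e-5, 3e-5]
```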
Given the second image's EOP, the rotation matrix required for parallelizing the SRF's y axis of the j-th scanline of the second image with the IB of the conjugate points p and p′ (i.e., $[R_{\mathrm{Attitude}}^n]_j$) can similarly be calculated using Equations (8)–(12).
By applying the rotation matrix $[R_{\mathrm{Attitude}}^n]_i$, the point's coordinates in the aligned SRF (i.e., the SRF after its y axis has been parallelized with the IB of the CIPs) are as indicated by Equation (13):
\[ \begin{pmatrix} x_a \\ y_a \\ -c_a \end{pmatrix}_{\mathrm{SRF}_a} = \frac{\lambda}{\lambda_a}\, [R_{\mathrm{Attitude}}^n]^T \times [R_{\mathrm{Orientation}}]^T \times [R_{\mathrm{Orientation}}] \times [R_{\mathrm{Attitude}}] \times \begin{pmatrix} x = 0 \\ y \\ -c \end{pmatrix}_{\mathrm{SRF}}, \tag{13} \]
where $(x_a, y_a, -c_a)$ is the point's coordinate in the aligned SRF ($\mathrm{SRF}_a$), and $\lambda_a$ is the scale factor for transferring from the GRF to $\mathrm{SRF}_a$. Since $[R_{\mathrm{Attitude}}] = I_3$ and $[R_{\mathrm{Orientation}}]^T \times [R_{\mathrm{Orientation}}] = I_3$, Equation (13) can be simplified to Equation (14):
\[ \begin{pmatrix} x_a \\ y_a \\ -c_a \end{pmatrix}_{\mathrm{SRF}_a} = k \times [R_{\mathrm{Attitude}}^n]^T \times \begin{pmatrix} x = 0 \\ y \\ -c \end{pmatrix}_{\mathrm{SRF}}, \tag{14} \]
where $k = \lambda / \lambda_a$ is the scale factor for transferring from the SRF to $\mathrm{SRF}_a$.
As previously mentioned, in $\mathrm{SRF}_a$ the epipolar curves become straight epipolar lines aligned along the IB of the CIPs.
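Equation (14) is a single rotation of the scan-line image vector; a minimal sketch, with the scale factor k left as an explicit parameter:

```python
import numpy as np

def to_aligned_srf(y, c, R_att_n, k=1.0):
    """Eq. (14): map an image point (x = 0, y, -c) from the SRF of its
    scan line into the aligned SRF (SRFa)."""
    return k * (R_att_n.T @ np.array([0.0, y, -c]))

# With an identity rotation the coordinates are unchanged:
p_a = to_aligned_srf(12.0, 50.0, np.eye(3))  # -> (0, 12, -50)
```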

3.2. Eliminating the Vertical Parallax of CIPs

Due to the temporal and spatial variations of the IB's direction of the CIPs, the CEL's direction varies throughout the linear stereo imagery as well. Therefore, in order for the vertical parallaxes of all CIPs to be eliminated at once, all the CELs must be parallelized by applying a complementary rotation around the z axis of $\mathrm{SRF}_a$ (Equation (15)):
\[ \begin{pmatrix} x_p \\ y_p \\ -c_p \end{pmatrix}_{\mathrm{SRF}_p} = k \times R_3(\theta_n) \times \begin{pmatrix} x_a \\ y_a \\ -c_a \end{pmatrix}_{\mathrm{SRF}_a} = k \times \begin{pmatrix} x_a \cos(\theta_n) + y_a \sin(\theta_n) \\ y_a \cos(\theta_n) - x_a \sin(\theta_n) \\ -c_a \end{pmatrix}, \tag{15} \]
where $(x_p, y_p, -c_p)$ is the point's coordinate in the parallelized SRF (i.e., $\mathrm{SRF}_a$ after the complementary rotation has been applied), and $\theta_n$ is the complementary rotation angle for parallelizing the CELs throughout the stereo imagery; in order to account for the temporal and spatial variations of the IB's direction of the CIPs, it is modeled by a polynomial in both the rows- and columns-numbers of the CIPs (Equation (16)):
\[ \theta_n = \theta_0^n + \theta_1^n i + \theta_2^n l + \cdots, \tag{16} \]
where $\theta_r^n$ (for r = 0, 1, 2, …) are the unknown coefficients of the polynomial.
Given the new attitude parameter of the second imagery, the coordinate of any image point in its parallelized-SRF (SRFp) can similarly be calculated using Equations (13)–(16).
The fundamental condition for the parallelism of all epipolar lines throughout the stereo imagery is that the vertical parallaxes of all CIPs be equal to a constant value. To compute the vertical parallax of the CIPs, their x-ordinates in $\mathrm{SRF}_p$ must first be calculated. By substituting Equation (14) into Equation (15) and dividing the first row of Equation (15) by its third row, the x-ordinate of any image point in its $\mathrm{SRF}_p$ can be computed as Equation (17):
\[ x_p^i = c_p\, \frac{x_a \cos(\theta_n) + y_a \sin(\theta_n)}{c_a}, \tag{17} \]
where $x_p^i$ is the point's x-ordinate in the $\mathrm{SRF}_p$ of the i-th scanline of the first image.
Since each scanline of linear imagery has a unique SRF, Equation (17) can only relate the SRF of each image point to its $\mathrm{SRF}_p$. In order to relate the $\mathrm{SRF}_p$ of consecutive scanlines, a pseudo-2D coordinate system is constructed by arranging the scanlines in a sequential manner (Equation (18)):
\[
\begin{aligned}
x_{\mathrm{psud}} &= x_p^i + (i - 1) \times D_{\mathrm{CCD}} \\
{x'}_{\mathrm{psud}} &= {x'}_p^{j} + (j - 1) \times D_{\mathrm{CCD}}
\end{aligned} \tag{18}
\]
where i and j are the rows-numbers of the CIPs in the first and second imagery, respectively; $D_{\mathrm{CCD}}$ is the dimension of the sensor's CCD; and, finally, $x_{\mathrm{psud}}$ and ${x'}_{\mathrm{psud}}$ are the x-ordinates of the CIPs in the pseudo-2D coordinate systems of the first and second imagery, respectively. In this way, the condition equation for obtaining parallelized epipolar lines can be written as Equation (19):
\[ {x'}_{\mathrm{psud}} - x_{\mathrm{psud}} = d, \tag{19} \]
where d is the vertical parallax of the CIPs, which should be a constant value.
By substituting Equations (14)–(18) into Equation (19), the final formula for computing the rotation angle $\theta_n$ can be written as Equation (20):
\[ (j - i) \times D_{\mathrm{CCD}} - \left( c_p\, \frac{x_a \cos(\theta_n) + y_a \sin(\theta_n)}{c_a} - {c'}_p\, \frac{{x'}_a \cos({\theta'}_n) + {y'}_a \sin({\theta'}_n)}{{c'}_a} \right) - d = 0, \tag{20} \]
The above equation is non-linear in the unknowns (i.e., $\theta_n$, ${\theta'}_n$ and d); after differentiation and linearization, it can be solved using a set of CIPs. By applying the complementary rotation, the CIPs will have no vertical parallax in their $\mathrm{SRF}_p$. However, due to the variations of the IB's length of the CIPs, the flying height, and the sensor's roll angle throughout the stereo imagery, the scale of the generated model will be variable.
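The quantities entering Eqs. (17)–(19) can be sketched as follows; the dictionary-based argument layout is purely illustrative, and the order of the difference follows Eq. (19) (second image minus first):

```python
import numpy as np

def x_pseudo(x_a, y_a, c_a, c_p, theta_n, row, D_ccd):
    """Eqs. (17)-(18): x-ordinate in the parallelized SRF, then shifted
    into the pseudo-2D coordinate system of the whole scene."""
    x_p = c_p * (x_a * np.cos(theta_n) + y_a * np.sin(theta_n)) / c_a  # Eq. (17)
    return x_p + (row - 1) * D_ccd                                     # Eq. (18)

def vertical_parallax(first, second, D_ccd):
    """Eq. (19): difference of pseudo-2D x-ordinates of a pair of CIPs.
    `first`/`second` are dicts holding the per-point quantities."""
    return x_pseudo(D_ccd=D_ccd, **second) - x_pseudo(D_ccd=D_ccd, **first)
```

In the adjustment of Eq. (20), this parallax is driven to a constant d over all CIPs by solving for the rotation angles.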

3.3. Correcting the Scale of the Normalized Scenes

In the third step, the model's scale is corrected by shifting the CIPs along the y axis of their $\mathrm{SRF}_p$ (Equation (21)):
\[ \begin{pmatrix} x_n \\ y_n \\ -c_n \end{pmatrix}_{\mathrm{SRF}_n} = k \times \begin{pmatrix} x_p \\ y_p \\ -c_p \end{pmatrix}_{\mathrm{SRF}_p} + \begin{pmatrix} 0 \\ \Delta y \\ 0 \end{pmatrix}, \tag{21} \]
where $(x_n, y_n, -c_n)$ is the point's coordinate in the normalized SRF ($\mathrm{SRF}_n$), and $\Delta y$ is the shift required to correct the scale for the point p in the first image.
Due to the temporal and spatial variations of the IB of the CIPs, the sensor's roll angle, and the flying height, the required shift is obviously a function of the rows- and columns-numbers of the CIPs, Equation (22):
\[ \Delta y = \Delta y_0 + \Delta y_1\, i + \Delta y_2\, l + \Delta y_3\, (i \times l) + \cdots, \tag{22} \]
where $\Delta y_r$ (for r = 0, 1, 2, …) are the unknown coefficients of the polynomial.
To compute this parameter, the points' y-ordinates must first be calculated in their $\mathrm{SRF}_p$ (Equation (23)):
\[ y_p^i = c_p\, \frac{y_a \cos(\theta_n) - x_a \sin(\theta_n)}{c_a}, \tag{23} \]
where $y_p^i$ is the point's y-ordinate in the $\mathrm{SRF}_p$ of the i-th scanline of the first image.
The necessary condition for correcting the model's scale is that, after the CIPs are shifted along the y axis of their $\mathrm{SRF}_p$, the horizontal parallaxes of the CIPs be linearly proportional to their heights in the object space, as indicated by Equation (24):
\[ ({y'}_p^{j} + \Delta y') - (y_p^i + \Delta y) = aZ + b, \tag{24} \]
where $\Delta y$ and $\Delta y'$ are the shifts required to correct the scales of the first and second images, respectively, Z is the height of the CIPs in the GRF, and a and b are the coefficients of the linear relation between the horizontal parallaxes and the heights. If b = 0 and a is set equal to the average imaging scale, Equation (24) can be rewritten as Equation (25):
\[ \Delta y' - \Delta y = \frac{c}{H_m}\, Z - \left( {y'}_p^{j} - y_p^i \right), \tag{25} \]
where c is the sensor's principal distance and $H_m$ is the average flying height.
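With b = 0 and a = c/H_m, the right-hand side of Eq. (25) can be evaluated per point; a minimal sketch:

```python
def shift_difference(c, H_m, Z, y_p_i, y_p_j):
    """Eq. (25): required difference of the scale-correcting shifts,
    Delta_y' - Delta_y, for one pair of CIPs.
    c      : principal distance
    H_m    : average flying height
    Z      : point height from space intersection
    y_p_i, y_p_j : y-ordinates of the CIPs in their parallelized SRFs."""
    return (c / H_m) * Z - (y_p_j - y_p_i)
```

Stacking one such equation per CIP, together with the polynomial form of Eq. (22) for each image, gives the linear system from which the shift coefficients are estimated.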
Finally, by substituting Equations (21)–(24) into Equation (25) and forming a system of equations for a set of CIPs, the required shifts are obtained as a function of both the rows- and columns-numbers of the CIPs. It should be noted that, given the EOP of the stereo imagery and the image coordinates of the CIPs, the points' heights in the object space can be calculated via the space intersection of the two images. Therefore, no additional GCPs are needed to solve Equation (25).
Given the new attitude parameters, the complementary rotation angle, and the shift required to correct the model's scale, the final formula for transferring from the SRF to $\mathrm{SRF}_n$ is given by Equation (26):
\[ \begin{pmatrix} x_n \\ y_n - \Delta y \\ -c_n \end{pmatrix}_{\mathrm{SRF}_n} = k \times R_3(\theta_n) \times [R_{\mathrm{Attitude}}^n]^T \times \begin{pmatrix} x = 0 \\ y \\ -c \end{pmatrix}_{\mathrm{SRF}}, \tag{26} \]
where $(x_n, y_n, -c_n)$ is the point's coordinate in $\mathrm{SRF}_n$.
In practice, the inverse form of Equation (26) is used in the epipolar resampling process to transfer image points from the normalized image space to the original image space, enabling indirect resampling of the linear stereo scenes. In this way, the epipolar resampling procedure can be conducted in a point-wise manner, similar to the digital rectification process.
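Putting the pieces together, Eq. (26) composes the complementary rotation with the new attitude rotation; a self-contained sketch under the same assumed rotation convention:

```python
import numpy as np

def R3(a):
    """Rotation about the z axis (assumed sign convention)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]])

def srf_to_srfn(y, c, R_att_n, theta_n, delta_y, k=1.0):
    """Eq. (26): transform an image point from its SRF to the normalized
    SRF (SRFn); returns (x_n, y_n, -c_n)."""
    v = k * (R3(theta_n) @ R_att_n.T @ np.array([0.0, y, -c]))
    # Eq. (26) gives (x_n, y_n - delta_y, -c_n), so add the shift back:
    return np.array([v[0], v[1] + delta_y, v[2]])
```

For resampling, the inverse of this mapping is applied to the normalized-grid points, as the text notes.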

4. Study Area and Data Used

Three different datasets were used in this article: two SPOT-1A stereo imagery sets covering the Isfahan and Zanjan provinces, and one RapidEye stereo imagery set covering the Fars province, all in Iran. The first and second datasets cover foothill rural areas with rare urbanized regions. There is a lake at the bottom-right corner of the second dataset, and a little cloud coverage can be seen in this area. The third dataset covers a semi-mountainous foothill area, including two fairly large cities almost in the center of the image; a little cloud coverage can be observed in the middle-left area of the stereo overlap. Cloud percentages, geometric information, and the number of available GCPs for each dataset are shown in Table 2, along with their elevation relief.
The GCPs of the Isfahan and Zanjan datasets were measured using a double-frequency GPS with sub-meter accuracy, and their corresponding image coordinates were measured with an approximate precision of 0.5 pixel. For the Fars dataset, the GCPs were extracted from 1:2000-scale digital topographic maps produced by the National Cartographic Center of Iran, with 0.6 m planimetric and 0.5 m altimetric accuracies, respectively. The points are distinct features such as buildings, pool corners, walls, and road junctions; their corresponding image coordinates were measured with an approximate precision of one pixel. The distribution of the GCPs is illustrated in Figure 6. The scarcity of GCPs in the Zanjan and Fars datasets is due to their land coverage.

5. Results

According to Section 3, no GCPs are needed if the EOP of the stereo imagery are available, and the proposed method can be solved using only a few CIPs. In this study, the EOP of the imagery were computed using the GCPs. At the space resection stage, different structures of the rotation matrix $[R_{\mathrm{Orientation}}]$ were investigated in a trial-and-error manner in order to structure it optimally. From the available GCPs for each dataset, 15 points were used at the space resection stage, while the remaining points were used as independent check points in the accuracy assessment process. The accuracies obtained from the space resection of each image are reported in Table 3 in the image space, along with the accuracies obtained from the space intersection of the stereo imagery in the object space.
Then, 40 pairs of well-distributed CIPs were manually extracted from each stereo imagery set. These points were employed in two different roles. One part of the CIPs, called Control Conjugate Points (CoCP) in this paper, was used to compute the new attitude parameters, the complementary rotation angles, and the required shifts. The other part, called Check Conjugate Points (ChCP), was used to assess the model's accuracy. In order to evaluate how the accuracy of the proposed method is affected by the number of CoCP used, several experiments were performed on each dataset. In each experiment, the coordinates of the ChCP were transformed to the SRFn using the parameters computed from the CoCP. The mean and maximum absolute values of the residual vertical parallax (PV) of the ChCP are provided in Table 4, along with the square root of the estimated variance component from a straight-line fit of the horizontal parallaxes (PH) of the ChCP against their heights in the object space. The elevations of the ChCP were calculated using the space intersection of their image coordinates in the original stereo imagery.
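The accuracy figures of Table 4 can be reproduced from any set of check points with a short script. The following is a hedged sketch, in which `pv`, `ph` and `z` are assumed to be arrays holding the residual vertical parallaxes, the horizontal parallaxes, and the intersected heights of the ChCP.

```python
import numpy as np

def parallax_stats(pv):
    """Mean and maximum absolute residual vertical parallax (pixels)."""
    pv = np.asarray(pv, dtype=float)
    return np.mean(np.abs(pv)), np.max(np.abs(pv))

def sigma0_line_fit(ph, z):
    """Square root of the estimated variance component of the straight-line
    fit z = a*ph + b, i.e. sqrt(v'v / (n - 2)) with residuals v in meters."""
    ph = np.asarray(ph, dtype=float)
    z = np.asarray(z, dtype=float)
    A = np.stack([ph, np.ones_like(ph)], axis=1)
    x, *_ = np.linalg.lstsq(A, z, rcond=None)
    v = z - A @ x  # height residuals of the line fit
    return np.sqrt(v @ v / (len(z) - 2))
```

A sigma0 close to zero indicates that the horizontal parallaxes of the normalized scenes are almost perfectly linear in terrain height.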
According to Table 4, the proposed method has a high ability to eliminate the vertical parallaxes of CIPs even when only a few CoCP are used; increasing the number of CoCP from 10 to 16 (i.e., Experiments 3, 4, 7, 8, 11 and 12) yields no significant improvement in the results. Finally, the normalized scenes produced in Experiments 2, 6 and 10 were overlaid to generate the corresponding stereo anaglyphs (Figure 7).
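Generating a red-cyan anaglyph like those in Figure 7 amounts to combining the normalized left and right scenes into a single RGB image. A minimal sketch, assuming both inputs are co-registered 8-bit grayscale arrays of equal size:

```python
import numpy as np

def make_anaglyph(left, right):
    """Red channel from the left normalized scene, green and blue from
    the right, so the pair can be viewed with red-cyan glasses."""
    left = np.asarray(left, dtype=np.uint8)
    right = np.asarray(right, dtype=np.uint8)
    if left.shape != right.shape:
        raise ValueError("normalized scenes must have identical size")
    return np.stack([left, right, right], axis=-1)
```

Because epipolar resampling removes the vertical parallax, conjugate details in the anaglyph differ only by a horizontal (red-cyan) offset proportional to terrain height.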

6. Discussion

In order to assess the accuracy of the proposed method, two factors were examined: (1) its ability to eliminate the vertical parallaxes of CIPs; and (2) its ability to provide a linear relationship between the horizontal parallaxes of CIPs and their heights in the object space. Before applying the proposed epipolar resampling method, the mean values of the vertical parallax of CIPs for the Isfahan, Zanjan, and Fars datasets were 174, 184.4, and 106.7 pixels, respectively. According to Table 4, in all experiments the maximum absolute values of the residual vertical parallax of the normalized scenes are close to zero; the proposed method therefore has a high ability to eliminate the vertical parallaxes of CIPs. The residual vertical parallaxes of CIPs in Experiments 2, 6, and 10 are illustrated in Figure 8, where the horizontal axis shows the point numbers ordered by increasing residual parallax, and the vertical axis shows the value of the vertical parallax.
Moreover, in order to examine the pattern of the vertical parallaxes on the normalized scenes, scatter plots of the vertical parallaxes of CIPs in Experiments 2, 6, and 10 are shown in Figure 9. In this figure, the CoCP and ChCP are illustrated with triangle and circle markers, respectively. The vertical parallaxes of CIPs are drawn with a magnification factor of 10,000.
According to Figure 9, the scatter plot of the residual vertical parallax of ChCP shows no systematic behavior. The proposed method has a low sensitivity to the number of CoCP used: increasing this number from 8 to 16 produces no appreciable variation in the results. Generally, the minimum number of required CoCP varies with the number of parameters used in the polynomials of Equations (12), (16) and (22). However, the distribution of these points over the stereo imagery is of great importance. Repeated experiments show that selecting the CoCP around the outer boundary of the stereo images' common coverage plays a key role in the accuracy of the model's parameters (Figure 9).
As a final note on Table 4, the proposed model has a high ability to provide a linear relationship between the horizontal parallax of the ChCP and their heights in the object space. Since the heights of the ChCP have been calculated using the space intersection process, and the EOP of the original stereo imagery have been directly estimated using GCPs, the accuracy of the computed ChCP heights depends on the accuracy and distribution of the GCPs, along with the accuracy of the image coordinates of both the CoCP and ChCP.
The scatter plots of the horizontal parallax of the ChCP against their heights in Experiments 2, 6, and 10 are illustrated in Figure 10. According to this figure, the proposed method provides an admissible linear relationship between the horizontal parallax of the ChCP and their heights in the object space.

7. Conclusions

This paper presents a novel epipolar resampling approach for cross-track linear pushbroom images. The resampling process is based on the rigorous sensor model, with no approximations or assumptions. The epipolar curves of linear imagery are generally hyperbola-shaped, but if the scanlines containing the CIPs are parallelized with their IB, these curves are converted into straight lines. However, due to the temporal and spatial variation of the IB direction of CIPs, these epipolar lines will not be parallel to each other. Thus, in order to perform epipolar resampling of linear imagery by means of the rigorous sensor model, the attitude parameters of the stereo images should be modified and a complementary rotation should then be applied. Since the new attitude parameters and the complementary rotation angle depend on the IB direction of the CIPs, polynomials in both the row and column numbers of the CIPs have been proposed for modeling these parameters. As the instantaneous position of the sensors remains fixed, the DEM of the area of interest is not required in the resampling process. Given the EOP of the stereo imagery, no GCPs are needed, and the proposed method can be performed using only a few CIPs. Experimental results obtained from two pairs of SPOT (level 1A) and one pair of RapidEye stereo imagery with different terrain conditions proved the feasibility and success of the proposed method. One disadvantage of the model is its need for geometrically raw stereo images, which are not offered by some providers. However, for archived imagery, and for images that are not supplied with reliable RPCs, the proposed method can be very effective and is currently almost the only available option.
Future studies will focus on the feasibility of eliminating the need for CIPs in the normalization procedure. Moreover, the proposed method will be extended to along-track stereo imaging systems, as well as to images captured by manned and unmanned aerial platforms. Finally, DEM and orthophoto generation based on the normalized scenes will be considered.

Author Contributions

All the authors listed contributed equally to the work presented in this paper.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
IB: Instantaneous Baseline
CIPs: Conjugate Image Points
EG: Epipolar Geometry
DEM: Digital Elevation Model
ROP: Relative Orientation Parameters
EOP: Exterior Orientation Parameters
IOP: Interior Orientation Parameters
RFM: Rational Function Model
ACEL: Approximate Conjugate Epipolar Lines
RPCs: Rational Function Coefficients
MPC: Multiple Projection Centers
PP: Parallel Projection
CCDs: Charge-Coupled Devices
CMOS: Complementary Metal-Oxide Semiconductor
GCPs: Ground Control Points
IRF: Image Reference Frame
GRF: Ground Reference Frame
SRF: Sensor Reference Frame
PRF: Platform Reference Frame
CEL: Conjugate Epipolar Lines
CoCP: Control Conjugate Points
ChCP: Check Conjugate Points

Figure 1. The coordinate systems used in the MPC: (a) The IRF; (b) The SRF; (c) The PRF.
Figure 2. The EG of cross-track linear pushbroom imagery: (a) General case; (b) Ideal case.
Figure 3. Variations in the direction of the IB of CIPs in the linear stereo scenes: (a) Temporal variations; (b) Spatial variations.
Figure 4. Scatter plot of the direction of IBs with respect to the rows- and columns-number of CIPs: (a) ω vs. the rows-number; (b) ϕ vs. the rows-number; (c) κ vs. the rows-number; (d) ω vs. the columns-number; (e) ϕ vs. the columns-number; (f) κ vs. the columns-number.
Figure 5. Scatter plot of computed and estimated attitude parameters of the IB of CIPs: (a) Attitude parameter ω; (b) Attitude parameter ϕ; (c) Attitude parameter κ.
Figure 6. Distribution of GCPs throughout the stereo imagery: (a) Isfahan dataset; (b) Zanjan dataset; (c) Fars dataset.
Figure 7. Stereo anaglyphs of the normalized scenes through the Experiments 2, 6, and 10: (a) Isfahan dataset; (b) Zanjan dataset; (c) Fars dataset.
Figure 8. Vertical parallax of CIPs on the normalized scenes in Experiments 2, 6, and 10.
Figure 9. The scatter plot of the vertical parallaxes on the normalized imagery in Experiments 2, 6, and 10 with a magnification factor of 10,000: (a) Isfahan dataset; (b) Zanjan dataset; (c) Fars dataset.
Figure 10. The scatter plot of the horizontal parallax of ChCP and their heights in Experiments 2, 6, and 10: (a) Isfahan dataset; (b) Zanjan dataset; (c) Fars dataset.
Table 1. Statistics of the polynomials fitted to the attitude parameters of the IB of CIPs.

| Coefficient | ω Value | ω p-Value | ω Std-Err | ϕ Value | ϕ p-Value | ϕ Std-Err | κ Value | κ p-Value | κ Std-Err |
|---|---|---|---|---|---|---|---|---|---|
| β0 | 0.449 | 4.4 × 10^-201 | 1.0 × 10^-7 | 0.140 | 1.5 × 10^-105 | 2.1 × 10^-5 | 0.061 | 2.4 × 10^-105 | 9.3 × 10^-6 |
| β1 | -1.6 × 10^-7 | 9.3 × 10^-99 | 3.9 × 10^-11 | -4.8 × 10^-10 | 9.5 × 10^-4 | 7.9 × 10^-9 | -2.1 × 10^-8 | 7.7 × 10^-7 | 3.5 × 10^-9 |
| β2 | 1.2 × 10^-8 | 1.6 × 10^-55 | 5.5 × 10^-11 | 2.6 × 10^-6 | 2.6 × 10^-56 | 1.1 × 10^-8 | 1.1 × 10^-6 | 2.7 × 10^-56 | 4.9 × 10^-9 |
| β3 | 3.6 × 10^-14 | 1.6 × 10^-7 | 5.6 × 10^-15 | -1.0 × 10^-11 | 3.8 × 10^-11 | 1.1 × 10^-12 | -5.1 × 10^-12 | 5.0 × 10^-12 | 5.0 × 10^-13 |
| β4 | 6.0 × 10^-13 | 1.3 × 10^-44 | 5.6 × 10^-15 | 4.4 × 10^-12 | 4.6 × 10^-5 | 1.1 × 10^-12 | 2.0 × 10^-12 | 2.9 × 10^-5 | 5.0 × 10^-13 |
| β5 | 1.0 × 10^-13 | 3.4 × 10^-15 | 7.5 × 10^-15 | 1.8 × 10^-11 | 5.3 × 10^-14 | 1.5 × 10^-12 | 8.6 × 10^-12 | 1.2 × 10^-14 | 6.7 × 10^-13 |
| R² | 1.00 | | | 1.00 | | | 1.00 | | |
| σ0 (s) | 9.0 × 10^-8 | | | 1.8 × 10^-6 | | | 8.1 × 10^-7 | | |

Functional form: Y = β0 + β1·i + β2·l + β3·i·l + β4·i² + β5·l²
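The functional form above can be estimated by ordinary least squares. The following sketch (illustrative only; variable names are assumptions) fits Y = β0 + β1·i + β2·l + β3·i·l + β4·i² + β5·l² to samples of an attitude parameter, given the row numbers `i` and column numbers `l` of the CIPs, and also reports the coefficient of determination R².

```python
import numpy as np

def fit_attitude_polynomial(i, l, y):
    """Least-squares fit of y = b0 + b1*i + b2*l + b3*i*l + b4*i**2 + b5*l**2.

    i, l : row and column numbers of the CIPs
    y    : sampled attitude parameter (e.g. omega) of the IB
    Returns (beta, r_squared).
    """
    i = np.asarray(i, dtype=float)
    l = np.asarray(l, dtype=float)
    y = np.asarray(y, dtype=float)
    # Design matrix with the six monomial terms of the functional form
    A = np.stack([np.ones_like(i), i, l, i * l, i**2, l**2], axis=1)
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    ss_res = resid @ resid
    ss_tot = np.sum((y - y.mean()) ** 2)
    return beta, 1.0 - ss_res / ss_tot
```

With an R² of 1.00 as in Table 1, the residual standard deviation of such a fit is at the level reported in the σ0 row.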
Table 2. Specifications of the datasets used.

| Dataset | Isfahan | Zanjan | Fars |
|---|---|---|---|
| Platform | SPOT-1 | SPOT-3 | RapidEye-2 |
| Sensor | HRV | HRV | Green Band |
| Acquisition date (Scene 1 / Scene 2) | August 1987 / January 1987 | July 1993 / July 1993 | March 2010 / March 2010 |
| Pointing angle (Scene 1 / Scene 2) | 24.7° W / 20.84° E | 19.01° W / 16.66° E | 19.64° W / 7.09° E |
| Ground resolution | 10 m | 10 m | 6.5 m |
| Base-to-height ratio | 0.974 | 0.737 | 0.534 |
| Elevation relief | 687.2 m | 654.2 m | 842.6 m |
| Cloud percentage | 0% | >5% | >1% |

Note: the bands of the RapidEye raw image data are not co-registered.
Table 3. Accuracy assessment of space resection and intersection of the stereo imagery.

| Dataset | Scene | # Ctrls | # Chks | δr (pix) | δc (pix) | δrc (pix) | δX (m) | δY (m) | δXY (m) | δZ (m) |
|---|---|---|---|---|---|---|---|---|---|---|
| Isfahan | Scene 1 | 15 | 20 | 0.65 | 0.49 | 0.81 | 6.11 | 6.58 | 8.97 | 6.73 |
| | Scene 2 | 15 | 20 | 0.76 | 0.54 | 0.93 | | | | |
| Zanjan | Scene 1 | 15 | 16 | 0.73 | 0.62 | 0.95 | 5.66 | 6.12 | 8.33 | 5.57 |
| | Scene 2 | 15 | 16 | 0.56 | 0.70 | 0.89 | | | | |
| Fars | Scene 1 | 15 | 19 | 0.52 | 0.40 | 0.65 | 3.31 | 4.37 | 5.48 | 4.21 |
| | Scene 2 | 15 | 19 | 0.55 | 0.38 | 0.67 | | | | |

The space intersection accuracies (δX, δY, δXY and δZ) refer to each stereo pair as a whole.
Table 4. Experimental results obtained from the proposed epipolar resampling method.

| Dataset | Experiment | # CoCP | # ChCP | Mean \|PV\| (pix) | Max \|PV\| (pix) | σ̂0 of PH-Z line fit (m) |
|---|---|---|---|---|---|---|
| Isfahan | 1 | 8 | 32 | 0.03 | 0.17 | 5.72 |
| | 2 | 10 | 30 | 0.02 | 0.09 | 1.93 |
| | 3 | 12 | 28 | 0.00 | 0.00 | 1.47 |
| | 4 | 16 | 24 | 0.00 | 0.00 | 0.76 |
| Zanjan | 5 | 8 | 32 | 0.02 | 0.06 | 3.55 |
| | 6 | 10 | 30 | 0.01 | 0.04 | 2.15 |
| | 7 | 12 | 28 | 0.01 | 0.03 | 1.46 |
| | 8 | 16 | 24 | 0.00 | 0.02 | 1.31 |
| Fars | 9 | 8 | 32 | 0.03 | 0.08 | 6.31 |
| | 10 | 10 | 30 | 0.02 | 0.06 | 5.44 |
| | 11 | 12 | 28 | 0.02 | 0.07 | 4.97 |
| | 12 | 16 | 24 | 0.01 | 0.05 | 3.83 |

Share and Cite

MDPI and ACS Style

Jannati, M.; Valadan Zoej, M.J.; Mokhtarzade, M. Epipolar Resampling of Cross-Track Pushbroom Satellite Imagery Using the Rigorous Sensor Model. Sensors 2017, 17, 129. https://doi.org/10.3390/s17010129