Calculating Viewing Angles Pixel by Pixel in Optical Remote Sensing Satellite Imagery Using the Rational Function Model

Abstract: In studies extracting surface physical parameters from optical remote sensing satellite imagery, the sun-sensor geometry must be known, especially the sensor viewing angles. However, while pixel-by-pixel acquisition of sensor viewing angles is of critical importance to many studies, currently available algorithms for calculating them focus only on the center-point pixel, or are complicated and not well known. This study therefore aims to provide a simple and general method to estimate the sensor viewing angles pixel by pixel. The Rational Function Model (RFM) is already widely used with high-resolution satellite imagery, and thus a method is proposed for calculating the sensor viewing angles from the space-vector information for the observed light implied in the RFM. This method calculates the sensor viewing angles independently, in a pixel-by-pixel fashion, regardless of the specific form of the geometric model, even for geometrically corrected imagery. The experiments reveal that the calculated values differ by approximately 10⁻⁴° for the Gaofen-1 (GF-1) Wide-Field-View-1 (WFV-1) sensor, and by approximately 10⁻⁷° for the Ziyuan-3 (ZY3-02) panchromatic nadir (NAD) sensor, when compared to the values calculated using the Rigorous Sensor Model (RSM); the discrepancy is analyzed. Generally, the viewing angles for each pixel in imagery are calculated accurately with the proposed method.


Introduction
With the development of remote sensing theory and techniques, and the advances in remote sensing applications, the field of remote sensing has evolved from predominantly qualitative interpretations to a field that enables quantitative analysis. In terms of quantitative analysis, most surface parameters in quantitative remote-sensing models, such as the leaf-area index, are based on the surface albedo. The albedo depends on the Bidirectional Reflectance Distribution Function (BRDF), and directly characterizes the surface-energy balance, making it critical for monitoring of global climate change [1,2]. The BRDF of a particular target represents the reflectance at all possible illumination and viewing angles [3]. The albedo of most real-world surfaces is anisotropic, and, thus, the viewing angles can have significant effects on the measured albedo [4]. Therefore, it is increasingly important to measure viewing angles precisely, given the development of quantitative remote sensing techniques.
The viewing angles from sensors can change substantially from one overpass to the other, and data collected by wide-swath sensors can produce different results for different pixels. Several researchers

Rational Function Model Expression
The RFM, based on the ratio of two cubic polynomials with 80 rational polynomial coefficients (RPCs), has been provided as a basic geometric model together with satellite imagery since its successful application to SPOT5 [23], IKONOS, QuickBird, ZiYuan-3, and the Gaofen series of satellites. The RFM models the relationship between object space and image space. Taking the RFM expression into account, the upward rational function (RF), which maps the image coordinates (x_pixel, y_pixel) and an elevation H to a ground position, can be expressed as:

Lat = P1(x_pixel, y_pixel, H) / P2(x_pixel, y_pixel, H),
Lon = P3(x_pixel, y_pixel, H) / P4(x_pixel, y_pixel, H)    (2)

where P1-P4 are cubic polynomials in the normalized coordinates. In addition, the relationship between the coordinates [X_S, Y_S, Z_S] in the geocentric Cartesian coordinate system and the coordinates [Lat, Lon, H] in the geographical coordinate system is:

X_S = (N + H) cos(Lat) cos(Lon),
Y_S = (N + H) cos(Lat) sin(Lon),
Z_S = (N (1 − e²) + H) sin(Lat)    (3)

where N is the radius of curvature in the prime vertical and e is the first eccentricity of the WGS84 ellipsoid.

As shown in Figure 1, a pixel on a given scan line is first considered, where the point O is the projection center, and [X(t), Y(t), Z(t)]_WGS84 indicates the position of O with respect to the WGS84 geocentric system. The irregular three-dimensional (3-D) object in the figure is the real surface. Here, S is the pixel under study, and the vector OS, i.e., [x, y, z]_WGS84 in the WGS84 geocentric system, is the viewing direction. Two elevation surfaces, with elevations H1 and H2 with respect to the geographical coordinate system, are formed. The vector OS intersects the elevation plane H1 at ground point A, and the elevation plane H2 at ground point B. In this paper, H1 is the highest elevation and H2 is the lowest elevation. The position of ground point A is [X_S1, Y_S1, Z_S1] and that of ground point B is [X_S2, Y_S2, Z_S2] in the WGS84 geocentric system. From three-dimensional geometry, we obtain the following:

[X_S1, Y_S1, Z_S1] = [X(t), Y(t), Z(t)] + m1 [x, y, z],
[X_S2, Y_S2, Z_S2] = [X(t), Y(t), Z(t)] + m2 [x, y, z]    (5)

where m1 and m2 are scale parameters. In Equation (5), the first formula is subtracted from the second to give:

[X_S2 − X_S1, Y_S2 − Y_S1, Z_S2 − Z_S1] = (m2 − m1) [x, y, z]    (6)

From Equation (6), the viewing direction in the WGS84 geocentric system is proportional to the vector AB.
Taking S as an example, the viewing direction OS intersects elevation planes H1 and H2. The coordinates of A and B are (Lat1, Lon1, H1) and (Lat2, Lon2, H2) in the geographical coordinate system, respectively. As the pixel S is common to ground points A and B, the image coordinates (x_pixel, y_pixel) are the same, and (Lat1, Lon1) and (Lat2, Lon2) can be calculated according to Equation (2).
Then, the coordinates of A and B in the WGS84 geocentric system, [X_S1, Y_S1, Z_S1] and [X_S2, Y_S2, Z_S2], can each be derived from Equation (3):

[X_S1, Y_S1, Z_S1] = f(Lat1, Lon1, H1),
[X_S2, Y_S2, Z_S2] = f(Lat2, Lon2, H2)    (7)

where f denotes the geographical-to-geocentric conversion of Equation (3). Subsequently, Equation (7) is substituted into Equation (6), with the following result:

[x, y, z] ∝ [X_S2 − X_S1, Y_S2 − Y_S1, Z_S2 − Z_S1]    (8)

Thus, the viewing direction of the current pixel in the WGS84 geocentric system is obtained. When solving for the viewing direction in the WGS84 geocentric system, the elevation values H1 and H2 merit attention. In general, the lowest and highest elevations are obtained in accordance with the RFM. Nevertheless, whether or not this choice is optimal requires experimental verification.
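The derivation above can be sketched in Python. The helper `rfm_image_to_ground` stands for the upward RF of Equation (2) (pixel coordinates plus an elevation yield latitude/longitude); its implementation from the RPC file is assumed to be available and the name is hypothetical. The geographical-to-geocentric conversion follows Equation (3):

```python
import math

# WGS84 ellipsoid constants
WGS84_A = 6378137.0           # semi-major axis (m)
WGS84_E2 = 6.69437999014e-3   # first eccentricity squared

def geodetic_to_ecef(lat_deg, lon_deg, h):
    """Equation (3): geographic (Lat, Lon, H) -> WGS84 geocentric (X, Y, Z)."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    n = WGS84_A / math.sqrt(1.0 - WGS84_E2 * math.sin(lat) ** 2)
    x = (n + h) * math.cos(lat) * math.cos(lon)
    y = (n + h) * math.cos(lat) * math.sin(lon)
    z = (n * (1.0 - WGS84_E2) + h) * math.sin(lat)
    return x, y, z

def viewing_direction_ecef(rfm_image_to_ground, x_pixel, y_pixel, h1, h2):
    """Intersect the line of sight of one pixel with the two elevation planes
    H1 and H2 and difference the ground points, as in Equations (6)-(8).

    rfm_image_to_ground(x_pixel, y_pixel, h) -> (lat, lon) is the assumed
    upward RF mapping of Equation (2)."""
    lat1, lon1 = rfm_image_to_ground(x_pixel, y_pixel, h1)
    lat2, lon2 = rfm_image_to_ground(x_pixel, y_pixel, h2)
    a = geodetic_to_ecef(lat1, lon1, h1)   # ground point A on plane H1
    b = geodetic_to_ecef(lat2, lon2, h2)   # ground point B on plane H2
    # the viewing direction is proportional to vector AB
    return tuple(ai - bi for ai, bi in zip(a, b))
```

For a nadir-viewing pixel the two intersection points share the same latitude and longitude, so the resulting vector is purely radial, which offers a quick sanity check of the sketch.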


Transformation between the WGS84 Geocentric System and the Local Cartesian Coordinate System
The viewing zenith and azimuth angles are defined in the local Cartesian coordinate system. Thus, it is necessary to transform the viewing vector from the WGS84 geocentric system to the local Cartesian coordinate system. Figure 2 provides the definition of the local Cartesian coordinate system. The point O is the surface point that one sensor pixel observes, and it is regarded as the point of origin. The coordinates of point O are [X_O, Y_O, Z_O] in the WGS84 geocentric system, and they can be transformed into [lat, lon, H] in the geographic coordinate system. The local Cartesian coordinate system is expressed in the Cartesian form, such that the Y-axis lies on the tangent plane in the direction of true north, and the Z-axis is normal to the tangent plane pointing away from the earth (with the X-axis completing a right-handed triad, as usual). The conversion matrix from the WGS84 geocentric system to the local Cartesian coordinate system comprises two simple rotations: (1) a rotation about the Earth's axis (through the angle lon + 90°) from the Greenwich meridian to the meridian of the local ground; and (2) a rotation about an axis perpendicular to the meridian plane of the local ground (through the angle 90° − lat) from the North Pole to the local ground. The conversion matrix can be expressed as follows:

R = R_X(90° − lat) R_Z(lon + 90°)
  = [ −sin(lon)            cos(lon)            0
      −sin(lat) cos(lon)  −sin(lat) sin(lon)   cos(lat)
       cos(lat) cos(lon)   cos(lat) sin(lon)   sin(lat) ]    (9)

Taking into consideration Equations (9) and (10), the viewing vector (x, y, z) in the local Cartesian coordinate system can be described as follows:

[x, y, z]_local = R [x, y, z]_WGS84
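The two rotations can be sketched as a small Python function; the expressions below are simply the rows of the conversion matrix applied to a geocentric direction vector (a minimal sketch, not tied to any particular library):

```python
import math

def ecef_vector_to_local(vx, vy, vz, lat_deg, lon_deg):
    """Rotate a direction vector from the WGS84 geocentric frame into the
    local Cartesian (east-north-up) frame at (lat, lon): a rotation of
    lon + 90 deg about the Earth's axis followed by 90 deg - lat about an
    axis perpendicular to the local meridian plane."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    # row 1 of R: east component
    e = -math.sin(lon) * vx + math.cos(lon) * vy
    # row 2 of R: north component (the local Y-axis)
    n = (-math.sin(lat) * math.cos(lon) * vx
         - math.sin(lat) * math.sin(lon) * vy
         + math.cos(lat) * vz)
    # row 3 of R: up component (the local Z-axis)
    u = (math.cos(lat) * math.cos(lon) * vx
         + math.cos(lat) * math.sin(lon) * vy
         + math.sin(lat) * vz)
    return e, n, u
```

At lat = lon = 0 the geocentric Z-axis maps to local north and the geocentric X-axis maps to local up, which matches the two-rotation construction described above.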


Calculating the Viewing Zenith and Azimuth Angles
Once the viewing vector in the local Cartesian coordinate system is obtained, the viewing zenith and azimuth angles can be calculated. As shown in Figure 3, the coordinates are in the local Cartesian coordinate system, in which the point of origin is O and S denotes the sensor. Thus, the vector SO is the viewing vector, proportional to the vector (x, y, z). Line SA crosses S and is perpendicular to plane XOY at point A. Line AB is perpendicular to plane XOZ and crosses line OX at point B. Line AC is perpendicular to plane YOZ and crosses line OY at point C. Then, ∠SOA is the zenith angle θ, and ∠COA is the azimuth angle φ. As SO is proportional to the vector (x, y, z), the following equation can be derived:

SO = m [x, y, z]

where m is the scale parameter. Then, the zenith angle θ and the azimuth angle φ can be expressed as follows:

θ = arccos( z / sqrt(x² + y² + z²) ),  φ = arctan( x / y )
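Under the local convention used here (Y-axis north, Z-axis up), the two angle formulas can be sketched as a short Python function. `atan2` is used in place of the plain arctangent so that every quadrant of the azimuth is handled; the `atan2(hypot(x, y), z)` form is equivalent to the arccosine expression for the zenith angle:

```python
import math

def zenith_azimuth(x, y, z):
    """Viewing zenith and azimuth angles (degrees) from the local
    Cartesian vector (x, y, z) pointing from the ground toward the
    sensor; azimuth is measured clockwise from north (the Y-axis)."""
    zenith = math.degrees(math.atan2(math.hypot(x, y), z))
    azimuth = math.degrees(math.atan2(x, y)) % 360.0
    return zenith, azimuth
```

For example, a vector tilted 45° toward north gives a zenith angle of 45° and an azimuth of 0°, while the same tilt toward east gives an azimuth of 90°.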


Workflow
The process workflow is illustrated in Figure 4, and incorporates steps that can be summarized as follows: (1) The RFM is obtained, and the viewing direction in the WGS84 geocentric system is calculated based on the space-vector information of the observed light implied by the RFM. (2) The viewing direction is transformed from the WGS84 geocentric system to the local Cartesian coordinate system. (3) The viewing zenith and azimuth angles are calculated in the local Cartesian coordinate system. (4) If there are pixels that have not been calculated, the first three steps are repeated for the next pixel. Finally, the zenith and azimuth angles for all pixels are output.
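The four steps can be combined into one self-contained sketch. As before, `img_to_ground` is a hypothetical stand-in for the upward RF of the RFM; the helper names and the simple nested loop over the image grid are illustrative assumptions, not the authors' implementation:

```python
import math

A, E2 = 6378137.0, 6.69437999014e-3  # WGS84 semi-major axis, eccentricity^2

def to_ecef(lat, lon, h):
    """Geographic (degrees, m) -> WGS84 geocentric coordinates."""
    lat, lon = math.radians(lat), math.radians(lon)
    n = A / math.sqrt(1.0 - E2 * math.sin(lat) ** 2)
    return ((n + h) * math.cos(lat) * math.cos(lon),
            (n + h) * math.cos(lat) * math.sin(lon),
            (n * (1.0 - E2) + h) * math.sin(lat))

def pixel_angles(img_to_ground, col, row, h1, h2):
    """Steps (1)-(3) for one pixel: viewing direction in the WGS84 frame,
    rotation to the local frame at the ground point, zenith/azimuth in deg."""
    lat1, lon1 = img_to_ground(col, row, h1)   # intersection with plane H1
    lat2, lon2 = img_to_ground(col, row, h2)   # intersection with plane H2
    a, b = to_ecef(lat1, lon1, h1), to_ecef(lat2, lon2, h2)
    vx, vy, vz = (ai - bi for ai, bi in zip(a, b))  # ground-to-sensor direction
    lat, lon = math.radians(lat2), math.radians(lon2)
    e = -math.sin(lon) * vx + math.cos(lon) * vy
    n = (-math.sin(lat) * math.cos(lon) * vx
         - math.sin(lat) * math.sin(lon) * vy + math.cos(lat) * vz)
    u = (math.cos(lat) * math.cos(lon) * vx
         + math.cos(lat) * math.sin(lon) * vy + math.sin(lat) * vz)
    zen = math.degrees(math.atan2(math.hypot(e, n), u))
    az = math.degrees(math.atan2(e, n)) % 360.0
    return zen, az

def all_angles(img_to_ground, width, height, h1, h2):
    """Step (4): repeat for every pixel and collect the angle grids."""
    return [[pixel_angles(img_to_ground, c, r, h1, h2)
             for c in range(width)] for r in range(height)]
```

Because each pixel is solved independently, the outer loop parallelizes trivially, which is consistent with the pixel-by-pixel character of the method.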

Results and Discussion
To verify the correctness and stability of the proposed method, experiments were performed with GF-1's WFV-1 sensor and ZY3-02's NAD sensor data. In this section, the two test datasets and the experimental results will be discussed.

Test Datasets and Evaluation Criterion
The first dataset is from GF-1's WFV-1 Level-1 imagery, specifically its RFM file. The second dataset is from ZY3-02's NAD Level-1 imagery, specifically the RFM file taken on 9 March 2017 in Fuzhou, Fujian Province, China. The image size is 24,576 × 24,576 pixels. The FOV of ZY3-02's NAD sensor is approximately 6°, and the elevation of the area varies from 0 m to 950 m.
In order to verify the correctness and stability of the proposed method, the RSM parameters were obtained from the original Level-0 products. The angles calculated with the proposed method were then compared with the angles calculated using the real parameters from the original Level-0 products, and this comparison served as the evaluation criterion. The comparison proceeds as follows: (1) Establish the rigorous geometric model based on the corresponding original Level-0 products, including the interior and exterior parameters. (2) Calculate the viewing angles of each pixel both with the rigorous geometric model established in step (1) and with the proposed method, and compute the deviation between them. (3) Compile the deviations of all pixels to calculate the minimum, maximum, and root mean square (RMS).
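Step (3) of the comparison can be sketched as a small routine over the per-pixel deviations (a minimal sketch; `angles_rfm` and `angles_rsm` are hypothetical flat lists of the angles computed by the two models):

```python
import math

def deviation_stats(angles_rfm, angles_rsm):
    """Minimum, maximum, and RMS of the per-pixel deviations between the
    angles calculated from the RFM and from the rigorous sensor model."""
    devs = [a - b for a, b in zip(angles_rfm, angles_rsm)]
    rms = math.sqrt(sum(d * d for d in devs) / len(devs))
    return min(devs), max(devs), rms
```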

Minimum and Maximum Elevation Plane for Calculation
In general, the lowest and highest elevations can be obtained from the RFM file. However, whether this is the best choice must be established by experimentation. In these experiments, the influence of the minimum and maximum elevations can be determined by examining the differences between the angles calculated with the proposed method and the angles calculated from the real parameters of the original Level-0 products.

Table 1 shows the precision of GF-1's WFV-1 viewing angles calculated from the RFM. Item 1 is based on the lowest and highest elevations obtained from the RFM file, for comparison with the other circumstances. Item 2 shows that the results are very good when there is little distance between the lowest and highest elevations. Items 3 and 5 reveal that the precision is lower when the maximum elevation increases. Item 4 is included in order to examine the precision when the lowest elevation drops. As shown in Table 1, the precision is very high and stable when the elevation is between −10,000 m and +10,000 m. Therefore, the method is not sensitive to the elevations, and shows good stability in this dataset.

In the second dataset, the elevation varies from 0 m to 950 m, as obtained from the corresponding RFM. Thus, different minimum and maximum elevations are set under different circumstances (see Table 2), and the precision can be obtained. Item 1 in Table 2 is based on the lowest and highest elevations obtained from the RFM, for comparison with the other circumstances. Item 2 shows that the results are highly precise when there is little distance between the lowest and highest elevations. Items 3 and 5 reveal that the precision decreases as the maximum elevation increases. Item 4 is included in order to examine the precision as the lowest elevation drops.
As with the first dataset, Table 2 shows highly precise and stable results for elevations between −10,000 m and +10,000 m. Therefore, the method is not sensitive to the elevations and is stable in this dataset. Moreover, Table 2 also suggests that setting the elevations in accordance with the minimum and maximum elevations yielded by the RFM is the optimal scheme for this dataset.
The above two experiments demonstrate that the method is insensitive to elevation. The differences are especially small and stable when the elevation is between −10,000 m and +10,000 m, and the results are precise when the distance between the lowest and highest elevations is small. In general, the minimum and maximum elevation values are set in accordance with the RFM file, because the RFM uses the minimum and maximum elevations for fitting.

Precision of RFM-Calculated Viewing Angles
In order to verify the accuracy of the proposed method, the precision can be assessed by examining the differences between the angles calculated using the proposed method and the angles calculated using the real parameters from the original Level-0 products. Thus, checkpoints were established on the image plane at 10-pixel intervals. Two groups of viewing angles were calculated at every checkpoint, after which the two groups were compared in order to examine the differences between them and thus verify the accuracy of the proposed method. The precision distribution for the viewing angles from GF-1's WFV-1 sensor on the image plane is shown in Figure 5.
The precision of the viewing angles from GF-1's WFV-1 sensor is approximately 10⁻⁴°. However, the precision distribution is relatively irregular. The precision of the azimuth angle, as calculated from the RFM, is high in the middle of the plane, but drops moving away from the center. In the top-right corner of the plane, there is a sudden change from negative to positive. As shown in Figure 6, the precision of the zenith angle, as calculated from the RFM, is roughly similar along the track, repeating a transformation from negative to positive across the track. Overall, the precision of the viewing angles from GF-1's WFV-1 sensor meets the requirements in terms of precision and stability.
The experiment with ZY3-02's NAD sensor dataset was performed using the same steps, and the precision distribution for the viewing angles from ZY3-02's NAD sensor on the image plane is shown in Figures 7 and 8. The precision of the viewing angles for ZY3-02's NAD sensor is approximately 10⁻⁷°. This time, however, the precision distribution is relatively regular. The precision of the azimuth angle calculated from the RFM is roughly similar along the track and across the track in its transformation from negative to positive. The precision of the zenith angle calculated from the RFM is high in the middle of the plane, and drops in the direction away from the center. In the top-left and bottom-right corners, the precision of the zenith angle is positive, whereas in the top-right and bottom-left corners it is negative. Overall, the precision of the viewing angles from ZY3-02's NAD sensor meets the actual requirements both in terms of stability and precision.
Theoretically, the viewing angles calculated from the RFM should be strictly equal to the real parameters from the original Level-0 products. However, because of the residual fitting error of the RFM arising from attitude jitters, time jumps, or higher-order aberrations when the RFM is generated, differences emerge between the angles calculated from the RFM and the real parameters of the original Level-0 products. From the described experiments, it is apparent that the precision distribution of the angles calculated from GF-1's WFV-1 sensor RFM is more irregular than that of the angles calculated from ZY3-02's NAD sensor RFM, and the precision of the angles from ZY3-02's NAD is three orders of magnitude higher than that of the angles from GF-1's WFV-1. The FOV of GF-1's WFV-1 is approximately 16.9°, whereas the FOV of ZY3-02's NAD is approximately 6°. A wide field of view means higher distortion at the edge of the field, and the residual fitting error of the RFM for a wide field of view is relatively larger than for a narrow field of view. Thus, the precision distribution of GF-1's WFV-1 is more irregular and less precise.
In order to examine the statistical properties related to precision, we calculated the RMS together with the maximum and minimum deviations (Table 3). For GF-1's WFV-1 sensor, the precision is similar between the azimuth angle and the zenith angle. The RMS for the azimuth angle is 0.00020°, whereas for the zenith angle it is 0.00032°. The maximum deviation of the azimuth angle is 0.00065°, while that of the zenith angle is 0.00056°. Thus, the precision is approximately 10⁻⁴°, and the maximum deviation is approximately two or three times the RMS. These results confirm that the method as employed for the GF-1's WFV-1 dataset is accurate and stable.
For ZY3-02's NAD sensor, the precision is similar between the azimuth and zenith angles. The RMS of the azimuth angle is 0.00000024°, while that of the zenith angle is 0.000000028°. The maximum deviation of the azimuth angle is 0.00000085°, whereas that of the zenith angle is 0.000000145°. Thus, the precision is approximately 10⁻⁷°, and the maximum deviation is several times the RMS. Again, these results demonstrate that the method employed for the ZY3-02's NAD dataset is both accurate and stable.
The accuracy and stability of the method have been verified through the experiments described above, which demonstrate its feasibility for practical applications. The precision is approximately 10⁻⁴° for the GF-1's WFV-1 sensor data, and approximately 10⁻⁷° for the ZY3-02's NAD sensor data.

Conclusions
Relying on the RFM commonly used with high-resolution satellite imagery, this study proposed a new method for determining viewing angles based on the space-vector information from the observed light implied by the RFM. In deriving the method, we observed that it does not require a specific form of the geometric model. Indeed, any geometric model that can be written as an implicit or explicit function describing the geometric relationship between an object point and its image coordinates can use this method to calculate the viewing angles in satellite imagery. Moreover, with the proposed method, the viewing angles for each pixel are calculated independently; that is, the proposed method calculates viewing angles pixel by pixel. Even when a geometric transformation is applied to the imagery (e.g., geometric correction or re-projection), it is possible to calculate the viewing angles in a pixel-by-pixel fashion, provided that the geometric model is available.
In the proposed method, only the minimum and maximum elevation planes are uncertain, and only they will influence the results. The experiments that were conducted here demonstrate that the method is not sensitive to elevation. The differences are noticeably small and stable when the elevation is between −10,000 m and +10,000 m, and the results are more precise when there is little distance between the lowest and the highest elevation. In general, the optimal elevations are in agreement with the minimum and maximum elevations that are offered by the RFM, because the RFM uses the minimum and maximum elevations for fitting.
Finally, the experiments showed that the precision of the method relative to the real data is approximately 10⁻⁴° for the GF-1's WFV-1 sensor and approximately 10⁻⁷° for the ZY3-02's NAD sensor. The FOV of GF-1's WFV-1 is approximately 16.9°, whereas that of ZY3-02's NAD is approximately 6°. The wide field of view results in higher distortion at the edge of the field, and the residual fitting error of the RFM for the wide field of view is relatively larger than that for the narrow field of view. Thus, the precision distribution of GF-1's WFV-1 is more irregular and less precise than that of ZY3-02's NAD. Overall, the method meets the actual demands in terms of precision and stability for both the GF-1's WFV-1 sensor and the ZY3-02's NAD sensor. It is expected that the method will also meet the requirements of other sensors not included in our experiments, with a comparable level of precision. Despite the promising results achieved by comparison with the results acquired from the RSM, the proposed method has not yet been applied to quantitative remote sensing in practice. Thus, the implementation of pixel-by-pixel viewing-angle calculation using the RFM for the extraction of surface physical parameters requires further research.