Relative Radiometric Calibration Using Tie Points and Optimal Path Selection for UAV Images

Abstract: As the use of unmanned aerial vehicle (UAV) images rapidly increases, so does the need for precise radiometric calibration. For UAV images, relative radiometric calibration is required in addition to the traditional vicarious radiometric calibration due to the small field of view. For relative radiometric calibration, some UAVs install irradiance sensors, but most do not. For UAVs without them, an intelligent scheme for relative radiometric calibration must be applied. In this study, a relative radiometric calibration method is proposed to improve the quality of a reflectance map without irradiance measurements. The proposed method, termed relative calibration by the optimal path (RCOP), uses tie points acquired during geometric calibration to define optimal paths. A calibrated image from RCOP was compared to validation data calibrated with irradiance measurements. As a result, the RCOP method produces seamless mosaicked images with uniform brightness and reflectance patterns. Therefore, the proposed method can be used as a precise relative radiometric calibration method for UAV images. Remaining issues will be studied further in subsequent research, along with precise band-to-band alignment.


Introduction
Remote sensing is a science that acquires information without contacting targets. Terrestrial, airborne, and spaceborne sensors have been used. Rapidly advancing technologies are making sensors smaller, more precise, and more popular, and the supply of unmanned aerial vehicle (UAV) images is increasing in various remote sensing fields. UAV images have attracted user attention as a means of efficiently observing limited-access areas [1] and are used to acquire three-dimensional spatial information and biophysical information. In agriculture and forestry, they are used to detect or monitor vegetation conditions, such as vigor, growth, yield, and the effects of disasters and disease [2][3][4][5][6][7][8][9]. Recently, UAV images were used to analyze surface conditions by using artificial intelligence, such as deep learning [10,11].
In order to acquire location and attribute information from remote sensing images, preprocessing, such as geometric and radiometric calibration, is required. In particular, radiometric calibration is required to convert digital numbers (DNs) into spectral reflectance, which is an important step in obtaining biophysical information from images. For radiometric calibration, vicarious calibration and the radiative transfer (RT) model are widely used [12][13][14][15][16][17]. Vicarious calibration converts DNs into spectral reflectance by installing ground reference panels with known reflectance on the ground and estimating conversion coefficients using the DNs of the targets and their reflectance [12,18]. The RT model converts DNs into reflectance based on a mathematical model. It does not require ground reference panels, but it has limitations such as complicated parameters and low accuracy [19][20][21][22][23][24].
UAV images are taken at a relatively low altitude, compared to aerial or satellite images, and they may not possess significant radiometric distortions. On the other hand, their small field of view makes mosaicking a necessary procedure. Each image may experience different turbulence, a different incidence angle, different illumination, or different signal processing chains. As a result, radiometric properties of UAV images may vary significantly. Relative radiometric properties among UAV images have to be adjusted so that the mosaic image can have a consistent spectral reflectance.
Therefore, for UAV images, we need relative radiometric calibration in addition to vicarious calibration, unless we install ground reference panels within the field of view of each image. Images with ground reference panels are calibrated through vicarious calibration. Images without ground reference panels are calibrated relative to the DNs of the images with ground reference panels and are then calibrated through vicarious calibration [25,26]. Some UAV cameras include an irradiance sensor to measure the amount of sunlight for each image and use the irradiance measurements for relative radiometric calibration [27]. For UAV cameras without an irradiance sensor, conversion coefficients have been estimated through regression analysis between the DNs of pixels in overlapping regions between two images [28]. However, this method is prone to geometric distortions and pixels with radiometric anomalies [29], and it often results in visual discontinuity between adjacent scenes [30]. Precise relative radiometric calibration of UAV images is still in high demand.
This paper proposes a new method of relative radiometric calibration for UAV images acquired without an irradiance sensor. The proposed method uses tie points obtained during a geometric calibration process for relative radiometric calibration. Tie points refer to image points from different images, which correspond to the same ground points. Using DNs of tie points recorded in different images, coefficients of relative radiometric calibration between two images are estimated. DNs of images without ground reference panels are then converted to equivalent DNs of a reference image with ground reference panels. Some images may not overlap the reference image and do not have tie points corresponding directly with it. In this case, their DNs cannot be converted to the equivalent DNs of the reference image with a single conversion. Therefore, we need to link such images to the reference image in a cascade manner. The proposed method establishes an image network using tie points and finds an optimal path from one image to the reference image. The proposed method was verified by comparing its results with those from a calibrated image that uses an irradiance sensor.

UAV Images
UAV images were acquired on 15 May 2019 from 15:32 h to 15:36 h at 50 degrees solar elevation under a clear sky. The UAV flew seven courses at an altitude of 100 m and acquired 108 images at a spatial resolution of 3 cm. Figure 1 shows the study area and locations of the images acquired. The area is above the campus of the National Institute of Agricultural Sciences in Wanju-gun, Jeollabuk-do, Korea. It includes trees, grass, soil, and sidewalk blocks as its major cover types. A fixed-wing UAV (Figure 2a) was used for the experiment, equipped with a RedEdge-M camera installed onboard (Figure 2b). The camera captures images in five spectral bands: blue, green, red, red-edge, and near-infrared. It also has an irradiance sensor, a downwelling light sensor (DLS), for radiometric calibration. The measured irradiance from the DLS was used to produce validation data for this study.

Reflectance of Ground Reference Panels
Seven ground reference panels were deployed on the grass of a soccer field for vicarious radiometric calibration of the UAV images (Figure 3a). The panels were made of a specially coated fabric sized 1.2 m × 1.2 m to maintain constant reflectance over a spectral range of 435 nm to 1100 nm. The spectral reflectance of each panel was measured using a FieldSpec-3 spectroradiometer (Figure 3b). The panels used for the experiment provided 3%, 5%, 11%, 22%, 33%, 44%, and 55% reflectance.

Validation Data
For validation, a reference reflectance map was generated using the measured irradiance data and the reference panels [31]. An image map with intensity as its pixel values was generated first through standard processing procedures of tie point extraction, bundle adjustment, digital surface model generation, ortho-rectification, and image resampling. A reflectance map with reflectance as its pixel values was then generated by converting DNs of the mosaicked image into reflectance values. For the conversion, the measured irradiance data and the DNs on the ground reference panels were used to calculate the conversion coefficients. We used a commercial software (SW), Pix4D mapper 4.1 (Pix4D S.A., Switzerland), to produce the reflectance map. Figure 4 shows the reflectance map generated as validation data as natural and pseudo-infrared color composites, respectively. In the pseudo-infrared color composite, the near-infrared (NIR), red, and green bands are assigned to red, green, and blue, respectively, for color display. The mean difference between the reflectance map and the ground reference panels was then measured. In this study, only ground reference panels were used, and in-situ reflectance measurements were not available. Table 1 shows the mean differences of the reflectance map for each panel. All values were very small, between 0.001 and 0.101. Visible bands had relatively lower differences than the red-edge and NIR bands.
Table 1. Mean differences of reflectance between the reflectance map and the ground reference panels.

Relative Radiometric Calibration Procedure
This section explains the procedure for relative radiometric calibration of UAV images based on tie points and the minimum distance path method. Figure 5 summarizes the procedure. First, tie points were extracted automatically from UAV images through structure from motion (SfM)-based geometric processing [32][33][34]. Commercial SW or open-source SW can be used for this process. In this study, Pix4D mapper 4.1 was used to extract tie points. Secondly, the image with ground reference panels was selected as the reference image. DN values of the targets within the reference image were measured and coefficients for vicarious calibration were estimated through regression analysis.
Next, an image network is formed by defining each image as a node. When there is a sufficient number of tie points between two images, a link between the two corresponding nodes is defined. After an image network is formed, we can find an optimal path from one image to another by following links between image nodes. In this experiment, we used the Dijkstra algorithm [35,36] to obtain the optimal path.
Next, tie points among the images within the optimal path are processed to estimate coefficients for relative radiometric calibration. DNs of an image are converted to equivalent DNs in the reference image using these relative calibration coefficients and eventually they are converted to reflectance using the vicarious calibration coefficients. Finally, a geometric mosaicking process is carried out on the reflectance images to generate a mosaicked reflectance map.

Vicarious Radiometric Calibration of the Reference Image
DNs of the reference image are converted to spectral reflectance using the vicarious radiometric calibration method. For each ground reference panel, locations of sample pixels were identified manually. To minimize errors, the locations were selected to avoid undulating parts of the reference panels. Figure 6 shows an example of the reference image and the zoomed image-location of the ground targets. The average DN around each location was calculated per spectral band and per reference panel.
DNs of the reference image are converted to reflectance using the following equation:

R = a_abs × DN + b_abs, (1)

where R is reflectance, DN is the digital number of an image, a_abs is the absolute radiometric gain, and b_abs is the absolute radiometric offset. These coefficients are estimated for each band via linear regression between the DNs and the reflectance of the pixels corresponding to the ground reference panels (Figure 7). All DNs of the reference image were then converted to reflectance using equation (1).
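As an illustrative sketch (not the software actually used in the study), the regression step above can be written in a few lines of Python. The panel DNs below are hypothetical values, and `numpy.polyfit` stands in for the linear regression:

```python
# Minimal sketch of vicarious calibration, equation (1): R = a * DN + b.
# Panel DNs are hypothetical; reflectances follow the seven panels used
# in the experiment (3% to 55%).
import numpy as np

panel_dn = np.array([3100.0, 5200.0, 11400.0, 22500.0, 33100.0, 43800.0, 54200.0])
panel_reflectance = np.array([0.03, 0.05, 0.11, 0.22, 0.33, 0.44, 0.55])

# Least-squares fit of reflectance against DN; returns [gain, offset].
gain, offset = np.polyfit(panel_dn, panel_reflectance, 1)

def dn_to_reflectance(dn, gain, offset):
    """Apply equation (1) to convert DNs to reflectance."""
    return gain * dn + offset
```

With seven panels spanning 3% to 55% reflectance, a single gain/offset pair per band is generally well constrained.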

Optimal Path Selection
Generally, UAV cameras acquire hundreds of images per mission, all with extensive overlaps. It is very difficult to consider all the images for relative radiometric calibration, since there are too many combinations of image pairs. Errors in relative radiometric calibration accumulate with the number of image pairs in a conversion chain. One image may overlap many other images, and there are many paths from one image to another by connecting overlapping images. Therefore, an intelligent scheme to select images for relative radiometric calibration is required to minimize radiometric errors. This paper suggests an optimal path selection method to cover the entire study area. It can minimize error accumulation by reducing the number of steps from the reference image to the last image located at the region boundary.
The Dijkstra algorithm [35] finds a path that minimizes the sum of weights from one starting point to all other points. It has been used in various fields, such as navigation, to search for an optimal path [36]. In this paper, the Dijkstra algorithm is used to select an optimal path to the reference image from an image not overlapping the reference image.
An image network is formed by defining each image as a node. Links between nodes are defined based on the number of tie points. When there is a sufficient number of tie points between two images, a link with a weight of 1 is defined between the two corresponding nodes. When the number of tie points is less than a certain threshold, a link with infinite weight is defined. All links have the number of tie points as an attribute.
For the threshold on the number of tie points, a fixed value of 100 was used. At this stage, the threshold value was not critical to overall performance, since tie points were filtered further based on the DN differences between two images. For valid image pairs, the number of tie points was on the order of 100. We set this threshold only to eliminate false image pairs from consideration.
The Dijkstra algorithm finds an optimal path from one image to the reference image, which is the path with the minimum weight. If there are multiple paths with the same minimum weight, the path with the largest number of tie points is selected. Figure 8a shows a schematic diagram of an image network. Here, S indicates an image for starting the optimal path search, which in our case is an image to be calibrated. F is the final destination of the optimal path search, which in our case is the reference image. The symbols a-d represent images distributed between S and F, and the lines between images mean there are tie points. Figure 8b shows the number of tie points between each image pair. Figure 8c shows the weight of the links based on a threshold of 100.
The sum of the weights equals 2 for the two shortest paths, S-a-F and S-d-F, which are represented in Figure 8a by red and green lines, respectively. The number of tie points is then considered when selecting the optimal path. Figure 9a shows the number of tie points along the two paths. Since the path S-a-F has 400 tie points and path S-d-F has 550, S-d-F is selected as the optimal path, as shown in Figure 9b.
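The path search described above can be sketched in Python as follows. This is a minimal sketch, not the authors' implementation; the per-link tie-point counts are hypothetical, chosen so that the two shortest paths total 400 (S-a-F) and 550 (S-d-F) tie points, as in Figure 9a:

```python
# Dijkstra-style search over the image network: minimize the number of
# links first, and break ties by the largest total number of tie points.
import heapq

# edges: node -> {neighbor: tie_point_count}. Links with fewer tie points
# than the threshold (100) would simply be omitted (infinite weight).
edges = {
    "S": {"a": 150, "b": 120, "d": 300},
    "a": {"S": 150, "F": 250},
    "b": {"S": 120, "c": 180},
    "c": {"b": 180, "F": 200},
    "d": {"S": 300, "F": 250},
    "F": {"a": 250, "c": 200, "d": 250},
}

def optimal_path(edges, start, goal):
    """Fewest links first; among equal-length paths, most tie points."""
    # Heap entries: (hops, -total_tie_points, path). The negated tie-point
    # sum makes the heap prefer paths with more tie points at equal hops.
    heap = [(0, 0, [start])]
    settled = {}
    while heap:
        hops, neg_ties, path = heapq.heappop(heap)
        node = path[-1]
        if node == goal:
            return path
        if settled.get(node, (float("inf"), 0)) <= (hops, neg_ties):
            continue  # already reached this node with a better cost
        settled[node] = (hops, neg_ties)
        for nbr, ties in edges[node].items():
            if nbr not in path:  # avoid cycles
                heapq.heappush(heap, (hops + 1, neg_ties - ties, path + [nbr]))
    return None
```

On this network, the search returns S-d-F: both S-a-F and S-d-F have two links, but S-d-F carries more tie points in total.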
All images along the optimal path are selected as optimum images for relative radiometric calibration of the image of interest. Two successive images along the optimal path are used to find the coefficients for relative radiometric calibration. In Figure 9, optimal images for the relative radiometric calibration of S are d and F. Therefore, coefficients for relative radiometric calibration for S-d and d-F are estimated. The DNs of S are converted to equivalent DNs of d using S-d conversion, and then, to equivalent DNs of F using d-F conversion.
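The cascaded conversion along the optimal path can be illustrated with a short sketch: each link contributes an affine conversion of the form of equation (2), and successive conversions compose into a single gain/offset pair. The coefficients below are made-up examples, not values from the experiment:

```python
# Composing affine DN conversions along the optimal path S -> d -> F.

def compose(ab_outer, ab_inner):
    """Coefficients of applying ab_inner first, then ab_outer."""
    a2, b2 = ab_outer
    a1, b1 = ab_inner
    # a2 * (a1 * dn + b1) + b2 = (a2 * a1) * dn + (a2 * b1 + b2)
    return a2 * a1, a2 * b1 + b2

s_to_d = (0.95, 120.0)   # hypothetical S -> d gain/offset
d_to_f = (1.02, -80.0)   # hypothetical d -> F gain/offset
s_to_f = compose(d_to_f, s_to_d)
# Applying s_to_f to a DN gives the same result as applying
# s_to_d and then d_to_f in sequence.
```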

Relative Radiometric Calibration
As mentioned earlier, tie points are obtained from the geometric calibration process. They are extracted initially from two images using traditional tie point extraction algorithms [37,38]. Initial tie points contain many outliers and many of them are removed by incremental bundle adjustment based on SfM [32,39]. Some tie points that may be geometrically correct may contain radiometric abnormalities due to shadow, saturation, or sun glints. To improve the quality of relative radiometric calibration, tie points undergo radiometric filtering.
In order to remove radiometric outliers among the tie points, it is necessary to define an appropriate rejection threshold. An optimal threshold was defined by checking R² values. The mean and standard deviation of the DN differences between the reference and the adjacent image were calculated for all tie points. We then selected tie points whose DN differences were within n times the standard deviation of the mean (|DN difference − mean| < n × standard deviation), and R² was calculated for the selected tie points. We changed the value of n and checked the variation of R², deciding on the threshold when R² was saturated. Table 2 shows an example of the ratio of selected points to the total number of tie points, together with the number and R² of the selected tie points for each n. When n was 1 and the ratio was 68%, R² was saturated at 0.98 for most images. Figure 10 shows an example of the distribution of DN values for all tie points (black crosses) and the accepted tie points (red crosses). Figure 11 shows an example of accepted tie points (red x's) among all tie points. Using the accepted tie points, coefficients of relative radiometric calibration are estimated using equation (2):

DN′ = a_rel × DN + b_rel, (2)

where DN is the digital number from the original image, DN′ is the digital number from the converted image, a_rel is the relative radiometric gain, and b_rel is the relative radiometric offset. Using the estimated conversion coefficients, DNs of the original image are converted to equivalent DNs of the other image. Using successive image pairs along the optimal path, DNs of the original image are converted sequentially to equivalent DNs of the reference image. They are then converted to reflectance using equation (1).

Validation of the Proposed Method
For validation, a mosaicked image (a reference reflectance map) was generated separately using irradiance measurements. For quantitative validation, reflectance obtained from relative radiometric calibration was compared to the validation data at the same points. Test samples were collected for a total of 200 pixels by random sampling (Figure 12). Error was calculated as the root mean square error (RMSE) using the following equation:

RMSE = sqrt( (1/n) × Σ (r_i − r̂_i)² ),

where n is the number of samples, and r_i and r̂_i represent the reflectance of the calibrated image and the validation data, respectively, for the i-th validation point.
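The RMSE computation is straightforward; a minimal sketch over arrays of reflectance values:

```python
# Root mean square error between calibrated reflectance and the
# reference reflectance map at the validation points.
import numpy as np

def rmse(calibrated, reference):
    calibrated = np.asarray(calibrated, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return float(np.sqrt(np.mean((calibrated - reference) ** 2)))
```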
In order to check the effectiveness of using optimal images for relative calibration, we tested two additional methods. The first applies a series of relative calibrations in the order of image acquisition. The second applies the coefficients of vicarious calibration obtained from the reference image to all other images, without relative calibration. For clarity, we call the proposed method relative calibration by the optimal path (RCOP), the first method relative calibration by acquisition sequence (RCAS), and the second method no relative calibration (NoRC).

Visual Interpretation of Calibration Results
The three relative radiometric calibration methods (RCOP, RCAS, and NoRC) were applied to UAV images without irradiance measurements. Figure 13 shows the reflectance maps as natural color composites (a-c) and pseudo-infrared color composites (d-f) from RCOP, RCAS, and NoRC, respectively. In order to compare colors, each band was stretched based on the same range of reflectance.
Figure 13. Reflectance map by natural color composites (a-c) and pseudo-infrared color composites (d-f) from relative calibration by the optimal path (RCOP) (top), relative calibration by acquisition sequence (RCAS) (middle), and no relative calibration (NoRC) (bottom).
The results from RCOP showed colors similar to the validation data calibrated with irradiance measurements (Figure 4). The mosaic was smooth, without noticeable color differences at the boundaries between images. The result from RCAS showed good calibration in the upper part of the map but became darker towards the lower part; the accumulation of errors along the image acquisition sequence appeared severe. Note that for RCOP, the optimal paths from all images to the reference image had fewer than five links, whereas for RCAS, the path from an image to the reference image grew with the acquisition sequence. For relative calibration, the proposed method could provide an efficient path for cumulative radiometric conversion.
The results from NoRC indicate that applying no relative calibration worked better than relative calibration by acquisition sequence. At a glance, the result looks similar to the validation data. However, it deserves careful attention: the left (west) side of the NoRC result is brighter than other areas. The likely reason is changes in camera exposure. The camera was set to adapt its exposure automatically, which is a common setting in application fields, so the exposure changed depending on the amount of light reaching the camera. An exposure value (EV), which is a function of exposure time, International Organization for Standardization (ISO) value, and focal (F) number, can indicate the degree of exposure [40]. When the EV is high, the amount of exposure is larger and an image is brighter.
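A brightness-oriented exposure measure in this spirit can be sketched as follows. The exact formula here is an assumption for illustration, not taken from [40]; it is chosen only so that a higher value corresponds to more light reaching the sensor, matching the interpretation in the text (note that the standard photographic convention instead defines EV = log2(N²/t) at ISO 100, where a higher EV means a darker setting):

```python
# Illustrative exposure measure (assumed formula, not from the paper):
# longer exposure time, higher ISO, or a wider aperture (smaller
# F-number) all raise the value, i.e., produce a brighter image.
import math

def exposure_value(exposure_time_s, iso, f_number):
    """Higher value -> more light reaching the sensor."""
    return math.log2(exposure_time_s * iso / f_number ** 2)
```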
The whole mosaicked image was divided into 3 × 3 subregions (Figure 14), and the average EVs for each subregion are listed in Table 3. As shown in the table, the EVs of the left subregions are higher than those of the other subregions; the UAV images of those subregions were captured brighter. Therefore, the mosaicked image from NoRC appears brighter than the validation data in those subregions. From visual inspection and the analysis of EV, the proposed RCOP method provided better performance than the other methods.

Quantitative Accuracy Analysis
The results of the three relative calibration methods (RCOP, RCAS, and NoRC) were compared quantitatively with the validation data. Table 4 shows the RMSE of the reflectance from each method per band. The results from RCOP show lower RMSEs compared to RCAS and NoRC, with the exception of the red-edge and NIR bands. For RCOP, the RMSE of the visible (blue, green, red) bands was about 0.03, and for the red-edge and NIR bands about 0.06 and 0.10, respectively. The RMSE for NoRC was larger than for RCOP (except for the red-edge and NIR bands) and smaller than for RCAS. The RMSEs from the three methods agreed well with visual inspection. As shown in Table 4, there was an exception for the red-edge and NIR bands between the proposed RCOP and NoRC methods. There are two possible reasons. The first could be the lower contrast of these bands, which affects the number of tie points extracted for them. In this study, tie points were extracted from each band, and the relative radiometric calibration and mosaicking processes were carried out per band. This was to remove problems associated with band-to-band misalignment, since the camera captures the five spectral bands through separate optical paths. While tie points from the blue, green, and red bands were accurate and large in number, those from the red-edge and NIR bands were less accurate and fewer in number. Figure 15 shows an example of the accepted tie points for the red, red-edge, and NIR bands for one image pair. The numbers of tie points were 3775, 2108, and 1005 for the red, red-edge, and NIR bands, respectively. For the red-edge and NIR bands, the numbers of tie points decreased due to lower contrast. Figure 16 shows the distributions of DNs and the results of regression analysis between two images for the red, red-edge, and NIR bands. The DN distributions of the red-edge and NIR bands showed weaker linearity than that of the red band. R² values decreased from 0.95 for the red band to 0.91 for the red-edge band and 0.76 for the NIR band.
This weak linearity affected the accuracy of the relative radiometric calibration for the red-edge and NIR bands. This issue will be studied further in subsequent research, along with precise band-to-band alignment. The second reason might be error caused by land-cover type. Spectral reflectance depends on surface materials; in particular, the RMSE of the red-edge and NIR bands could be highly affected by the high reflectance of vegetation. The 200 validation points were grouped according to land-cover type, and the RMSE of each land-cover type was compared. Table 5 shows the RMSEs of the validation points by land-cover type. The RMSE of each class in each band was similar among the three relative radiometric calibration methods. This implies that the higher RMSEs of the red-edge and NIR bands are likely due not to spectral characteristics but to the insufficient number of tie points in those bands.
Despite the exceptions for the red-edge and NIR bands, both the visual and quantitative analyses showed that the use of tie points and an optimal path can improve the quality of relative radiometric calibration.

Conclusion
In this study, we proposed a new method for relative radiometric calibration when irradiance measurement is not available during image acquisition. It can improve radiometric calibration quality using tie points and optimal path selection. It showed higher reliability and stability in the calibration results, compared to other methods. Therefore, the proposed method can be used to obtain a precise reflectance map, improving the quality of relative radiometric calibration.
Most UAVs acquire images without irradiance measurements yet are used in applications where precise reflectance retrieval is crucial. The proposed method should contribute to improving the accuracy of biophysical factor estimation or classification using UAV images. In further studies, precise band-to-band alignment will be carried out, and we will examine how the exceptions reported in this paper can be improved. Additional studies will be carried out with images acquired under various weather conditions and exposure settings, with validation against well-distributed in-situ ground reflectance measurements.