Article

Generating Virtual Images from Oblique Frames

Antonio M. G. Tommaselli, Mauricio Galo, Marcus V. A. De Moraes, José Marcato, Jr., Carlos R. T. Caldeira and Rodrigo F. Lopes

1 Department of Cartography, UNESP-Univ Estadual Paulista, Presidente Prudente-SP, 19060-900, Brazil
2 Aerocarta SA, São Paulo-SP, 04566-000, Brazil
* Author to whom correspondence should be addressed.
Current address: Centro de Ci
Remote Sens. 2013, 5(4), 1875-1893; https://doi.org/10.3390/rs5041875
Submission received: 20 February 2013 / Revised: 20 March 2013 / Accepted: 20 March 2013 / Published: 15 April 2013

Abstract

Image acquisition systems based on a multi-head arrangement of digital cameras are attractive alternatives, enabling a larger imaging area than a single frame camera. The calibration of this kind of system can be performed in several steps or by using simultaneous bundle adjustment with relative orientation stability constraints. This paper addresses the steps of the proposed approach: system calibration, image rectification, registration and fusion. Experiments with terrestrial and aerial images acquired with two Fuji FinePix S3Pro cameras were performed. The experiments focused on the assessment of the results of self-calibrating bundle adjustment, with and without relative orientation constraints, and on the effects on registration and fusion when generating virtual images. The experiments have shown that the images can be accurately rectified and registered with the proposed approach, achieving residuals smaller than one pixel.

1. Introduction

Professional digital cameras have a favorable cost/benefit ratio when compared to high-end digital photogrammetric cameras and are also much more flexible for use on different platforms and aircraft. As a consequence, some companies are using professional medium format cameras in mapping projects, mainly in developing countries [1]. However, compared to large format digital cameras, medium format digital cameras have a smaller ground coverage area, increasing the number of images, flight lines and, consequently, the flight costs.
One alternative to augment the coverage area is using two (or more) synchronized oblique cameras. The simultaneously acquired images from the multiple heads can be processed as oblique strips [2] or they can be rectified, registered and mosaicked to generate a larger virtual image [3].
Airborne remote sensing technology also uses similar methods to generate multispectral images from multiple cameras. Hunt et al. [4] modified a digital camera to acquire Near Infrared (NIR), green and blue images, avoiding multiple cameras due to the payload restrictions of the Unmanned Aerial Vehicle (UAV) platform on which the camera was mounted. The modified camera was successfully tested in two variably-fertilized fields of winter wheat. However, the absence of the red channel may restrict the use of this type of camera in several applications that require red-green-blue (RGB) and NIR images for robust classification. UAVs are increasingly used as carriers of lightweight acquisition systems, with several different approaches, varying from single RGB cameras to several combined cameras and other sensors. Examples of systems onboard UAVs for remote sensing applications are presented by Ritchie et al. [5], Chao et al. [6], Schoonmaker et al. [7], Hakala et al. [8], Laliberte et al. [9], D'Oleire-Oltmanns et al. [10] and Grenzdörffer et al. [11]. In many of these systems, two or more cameras are used to improve the coverage area and to introduce more spectral channels.
Ritchie et al. [5] combined two low cost digital cameras in vertical viewing, one RGB and the second one modified to acquire NIR radiation, to produce multispectral images after procedures for radiometric calibration and exposure compensation, converting raw brightness to relative reflectance. The experiments conducted by the authors showed that this low cost system was effective for estimating crop reflectance.
Chao et al. [6] presented a detailed description of several components of a UAV designed for remote sensing applications. To cope with the need for several cameras to augment the coverage area or to acquire more spectral bands, they proposed the use of multiple UAVs in a cooperative way, but this approach requires rigorous synchronization and a more sophisticated flight control system.
Schoonmaker et al. [7] presented the main features and practical results obtained with a multiple-camera and multi-channel imaging (MCI) system that fits into a package compatible with small UAVs. The system was designed and tested in two environments: forest and maritime. Depending on the environment and targets, the spectral filters, camera orientation and frame acquisition rates are changed to obtain optimal results for the specific task.
Hakala et al. [8] used a single consumer camera on board a UAV to estimate the bidirectional reflectance factor (BRF). Comparing the results achieved with a low cost camera with those obtained from ground measurements, the authors concluded that the results are similar and, therefore, that these types of cameras can be used for BRF estimation, provided that the images are correctly calibrated.
Laliberte et al. [9] presented several techniques for efficient processing of multispectral imagery acquired with a lightweight multispectral system, composed of six individual cameras, onboard a UAV. One critical problem in this type of multicamera system is the correction of the misalignment between spectral bands, which was originally performed based on parameters of translation, scale and rotation between the master and slave cameras. These parameters are derived in a sequential procedure, but without a rigorous geometric camera calibration. To cope with the residuals in the image edges after band registration, the authors proposed a new algorithm using a Local Weighted Mean Transform (LWMT) that compensates for local misalignments. The authors studied the complete workflow, including radiometric calibration, classification, orthorectification and mosaicking.
A study on the use of images taken with a single RGB camera onboard a UAV for monitoring soil erosion was presented by D'Oleire-Oltmanns et al. [10]. The authors performed a full photogrammetric workflow and compared solutions with and without ground control points (GCPs). They concluded that very accurate results, comparable to direct field surveying, could be achieved with low flying heights and with GCPs.
Grenzdörffer et al. [11] introduced a multicamera system for UAVs, consisting of one nadir and four oblique cameras, and discussed several issues concerning image acquisition and camera calibration.
Yang [12] presented an airborne multispectral digital imaging system for remote sensing, mentioning several case studies in agriculture. The system carries four medium format digital cameras with bandpass filters in vertical viewing. The registration and fusion of the individual images are performed with linear transformations and polynomials.
Holtkamp and Goshtasby [13] also presented an approach for registration and mosaicking of multiple images taken with a system mounted on a UAV, consisting of an array of six vertical cameras. The authors considered the variations in the Interior Orientation Parameters (IOP) and the lens distortions negligible and applied projective transformations to relate neighboring images through control points.
Existing techniques for the rigorous determination of image parameters aiming at virtual image generation from multiple frames have several steps, requiring laboratory calibration, direct measurement of the perspective center coordinates of each camera and the indirect determination of the mounting angles using a bundle block adjustment [3]. This combined measurement process avoids correlations among unknowns and is reliable, but it requires specialized laboratory facilities that are not easily accessible.
An alternative is the simultaneous calibration of two or more cameras using self-calibrating bundle adjustment with additional constraints. Assuming that the relative position of the cameras is stable during the image acquisition mission, constraints can be imposed stating that the relative rotation matrix and the base components between the camera heads are stable. The main advantage of this approach is that it can be applied with an ordinary terrestrial calibration field and all parameters are simultaneously determined, avoiding specialized direct measurements.
The approach proposed in this paper is to generate larger virtual images from dual-head cameras following four main steps: (1) dual-head system calibration with Relative Orientation Parameters (ROP) constraints; (2) image rectification; (3) image registration; and (4) fusion and brightness adjustment to generate a virtual image.

2. Background

Integrating medium format cameras to produce high-resolution multispectral images is a recognized trend, with several well-known systems that adopted distinct approaches [3,7,9,11–14].
The generation of the virtual image from oblique frames can be done in a sequential process, with several steps, as presented by Zeitler et al. [3] for the Z/I Imaging® Digital Mapping Camera (DMC®), which has four panchromatic and four multispectral heads. The first step is the laboratory geometric and radiometric calibration of each camera head individually. The position of each camera's perspective center within the cone is directly measured, but the mounting angles cannot be measured with the required accuracy. These mounting angles are estimated in a bundle adjustment step, known as platform calibration. This bundle adjustment uses tie points extracted in the overlapping areas of the four panchromatic images by image matching techniques, with the IOP of each head determined in the laboratory calibration. Transformation parameters are then computed to map each single image to the virtual image, and these images are projected to generate a panchromatic virtual image. Finally, the four multispectral images are fused with the high resolution virtual panchromatic image. This process is accurate but requires laboratory facilities to perform the first steps.
Depending on the camera configuration and accuracy requirements of the application, the process of virtual image generation can be based on sets of two dimensional projective transformations, as presented by Holtkamp and Goshtasby [13].
The approach presented in this paper is another option, and it is based on the parameters estimated in a bundle adjustment with relative orientation constraints [15,16]. This approach has some similarities to existing techniques, but with some differences that will be discussed in Section 2.2.

2.1. Camera Calibration

Camera calibration aims to determine a set of IOP (usually, focal length, principal point coordinates and lens distortion coefficients) [17,18]. This process can be carried out using laboratory methods, such as goniometer or multicollimator, or stellar and field methods, such as mixed range field, convergent cameras and self-calibrating bundle adjustment. In the field methods, image observations of points or linear features from several images are used to indirectly estimate the IOP through bundle adjustment using the Least Squares Method (LSM). In general, the mathematical model uses the collinearity equations and includes the lens distortion parameters (Equation (1)).
$F_1 = x_f - x_0 + \delta x_r + \delta x_d + \delta x_a + f\,\dfrac{m_{11}(X - X_0) + m_{12}(Y - Y_0) + m_{13}(Z - Z_0)}{m_{31}(X - X_0) + m_{32}(Y - Y_0) + m_{33}(Z - Z_0)}$
$F_2 = y_f - y_0 + \delta y_r + \delta y_d + \delta y_a + f\,\dfrac{m_{21}(X - X_0) + m_{22}(Y - Y_0) + m_{23}(Z - Z_0)}{m_{31}(X - X_0) + m_{32}(Y - Y_0) + m_{33}(Z - Z_0)}$ (1)
where $x_f$, $y_f$ are the image coordinates and $X$, $Y$, $Z$ the coordinates of the same point in the object space; $m_{ij}$ are the rotation matrix elements; $X_0$, $Y_0$, $Z_0$ are the coordinates of the camera perspective center (PC); $x_0$, $y_0$ are the principal point coordinates; $f$ is the camera focal length; and $\delta x_i$, $\delta y_i$ are the effects of radial and decentering lens distortion and of the affinity model [19].
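For illustration, the evaluation of Equation (1) can be sketched as follows. This is a minimal numerical example in Python/numpy (the CMC implementation mentioned in Section 2.2 is in C/C++); it assumes a Conrady-Brown model with two radial (k1, k2) and two decentering (p1, p2) coefficients, and all names are illustrative.

```python
import numpy as np

def rotation_matrix(omega, phi, kappa):
    """Rotation matrix R = Rz(kappa) Ry(phi) Rx(omega), angles in radians."""
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    Rx = np.array([[1, 0, 0], [0, co, so], [0, -so, co]])
    Ry = np.array([[cp, 0, -sp], [0, 1, 0], [sp, 0, cp]])
    Rz = np.array([[ck, sk, 0], [-sk, ck, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def collinearity_residuals(xf, yf, ground, pc, angles, iop):
    """Evaluate F1 and F2 of Equation (1) for one image observation."""
    f, x0, y0, k1, k2, p1, p2 = iop
    m = rotation_matrix(*angles)
    u = m @ (np.asarray(ground, float) - np.asarray(pc, float))
    x, y = xf - x0, yf - y0                        # reduced image coordinates
    r2 = x * x + y * y
    dx = x * (k1 * r2 + k2 * r2 ** 2) \
        + p1 * (r2 + 2 * x * x) + 2 * p2 * x * y   # radial + decentering
    dy = y * (k1 * r2 + k2 * r2 ** 2) \
        + 2 * p1 * x * y + p2 * (r2 + 2 * y * y)
    return (x + dx + f * u[0] / u[2],
            y + dy + f * u[1] / u[2])

# A point 100 m in front of an untilted camera (f = 28 mm) observed at the
# principal point yields zero residuals:
print(collinearity_residuals(0.0, 0.0, [0, 0, -100], [0, 0, 0],
                             [0, 0, 0], [0.028, 0, 0, 0, 0, 0, 0]))
```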
Using this method, the exterior orientation parameters (EOP), IOP and object coordinates of photogrammetric points are simultaneously estimated by the LSM from image observations and certain additional constraints. Self-calibrating bundle adjustment, which requires at least seven constraints to define the object reference frame, can also be used without any control points [18]. A linear dependence between some EOP and IOP arises when the camera inclination is near zero and the flying height exhibits little variation. Under these circumstances, the focal length (f) and flying height (Z−Z0) are not separable and the system becomes singular or ill-conditioned. In addition to these correlations, the coordinates of the principal point are highly correlated with the perspective center coordinates (x0 and X0; y0 and Y0). To cope with these dependencies, several methods have been proposed, such as the mixed range method [20] and the convergent camera method [17]. The maturity of direct orientation techniques, using Global Navigation Satellite System (GNSS) dual frequency receivers and Inertial Measurement Units (IMU), makes the robust integration of sensor orientation and calibration feasible. The position of the PC and the camera attitude can be considered as observed values and introduced as constraints in the bundle adjustment, aiming at minimizing the correlations previously mentioned. For applications requiring real time response, such as natural disasters or accidents, Choi and Lee [21] presented a method of real time aerial triangulation combining direct orientation data gathered from GNSS receivers and IMU with indirect orientation techniques. The proposed solution considered that the IOP were previously determined. Camera calibration can also be performed considering that the IOP vary for each image, but this option is difficult to handle due to the high correlation between some parameters. In order to make this technique feasible, Nakano and Chikatsu [22] presented a camera calibration technique that combines a consumer grade camera with a LASER distance meter aligned with the camera optical axis, without the need for GNSS data and ground control points.

2.2. Multi-Head Camera Calibration

Stereo or multi-head calibration usually involves two steps: in the first step, the IOP are determined; in the second, the EOP of the pairs are indirectly computed by bundle adjustment and, finally, the ROP are derived [23].
Several previous papers on stereo camera system calibration considered the use of relative orientation constraints. He et al. [24] considered the following equations, which can be used as constraints in the bundle adjustment:
$\dfrac{\Delta R_{13}(i)}{\Delta R_{13}(k)} = \dfrac{\Delta R_{23}(i)}{\Delta R_{23}(k)} = \dfrac{\Delta R_{33}(i)}{\Delta R_{33}(k)}, \qquad \dfrac{\Delta R_{12}(i)}{\Delta R_{12}(k)} = \dfrac{\Delta R_{11}(i)}{\Delta R_{11}(k)}$ (2)
$b_X(i) = b_X(k), \quad b_Y(i) = b_Y(k), \quad b_Z(i) = b_Z(k)$ (3)
with ΔRlm(i) being the elements of the relative rotation matrix for an image pair (i) and ΔRlm(k) the corresponding elements for an image pair (k). The three independent equations in Equation (2) reflect the assumption of relative rotation stability, while the three equations in Equation (3) are based on the assumption that the base components of two different stereopairs should also be the same. He et al. [24] also used the base distance between the cameras' perspective centers, directly measured by theodolites, as an additional weighted constraint, which defines the scale of the local coordinate system. He et al. [24] did not mention how they treated the stochastic properties of the constraints given in Equation (2). In the original formulation presented by He et al. [24], the base components were defined as the differences between the perspective center coordinates of the two cameras and were not pre-multiplied by the rotation matrix, restricting the application of the proposed method. They computed the IOP and the EOP simultaneously.
King [25,26] introduced the concept of model invariance, the term used by the author to describe the fixed relationships between the EOP in a stereo-camera. King [25] approached the invariance property with two models: constraint equations and modified collinearity equations. The first approach considers the base constraint and the convergence constraint, the latter based on the mean convergence angles for the X, Y and Z axes. In this approach, King [25] (page 473, Equation (2)) took into consideration a vector of constraint residuals. The second approach was based on modified collinearity equations for the second camera, introducing its position and orientation with respect to the first camera [25]. King [25] reported experiments using two data sets with previously known IOP and concluded that no significant improvement in the overall accuracy was achieved when introducing the relative orientation constraints. However, with controlled simulated data, King [25] (page 479) concluded that the bundle adjustment with modified collinearity equations produced more accurate results than the conventional bundle adjustment when the uncertainty of the observations is relatively high. King [26] presented more details on his proposed techniques, in which the constraint values are based on mean values for the camera base and axis convergence angles computed from the a priori exterior orientation.
El-Sheimy [27] also considered the use of relative orientation constraints for VISAT, a mobile mapping system. Tommaselli and Alves [28], working with a stereoscopic video system, presented a method for system calibration that considered both IOP and EOP, including additional parameters for relative orientation.
Tommaselli et al. [29] presented an approach for dual-head camera calibration introducing constraints in the bundle adjustment based on the stability of the relative orientation elements, admitting some random variations for these elements. A directly measured distance between the external nodal points can also be included as an additional constraint. Lerma et al. [30] also introduced a baseline distance constraint as an additional step in a process for self-calibrating multi-camera systems.
Blázquez and Colomina [31] presented the concept of relative position and attitude control observations, introducing the models and discussing the performance of the proposed techniques in light of the experimental data. The authors also extended their approach to introduce relative orientation constraints for a dual-head camera system presenting some experiments with this technique.
Tommaselli et al. [15] used the concept of bundle adjustment with RO constraints to compute parameters for the generation of virtual images from a system with three camera heads, showing that the approach considering random variations in the RO parameters provided good results in image fusion. In previous papers [15,29], camera calibrations were performed in a terrestrial field, using the known coordinates of a set of targets, which were determined with topographic intersection techniques with a standard deviation of 3 mm.
The basic mathematical models for calibration of the dual-head system are the collinearity equations (Equation (1)) and additional constraint equations based on the stability of the Relative Orientation Parameters (ROP) [29]. In [29], the constraints imposed were the Relative Rotation Matrix Stability Constraints (RRMSC) and the Base Length Stability Constraint (BLSC).
The Relative Orientation (RO) matrix can be calculated as a function of the rotation matrices of both cameras:
$R_{RO} = R_{C1}\,(R_{C2})^{-1}$ (4)
where $R_{RO}$ is the RO matrix; $R_{C1}$ and $R_{C2}$ are the rotation matrices of cameras 1 and 2, respectively. Other elements that can be considered stable during the acquisition are the Euclidean distance $D$ between the cameras' perspective centers (the base length) or the base components.
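As a small sketch (Python/numpy, illustrative names), Equation (4) and the base length follow directly from the EOP of the two cameras; for a rotation matrix, the inverse is simply the transpose:

```python
import numpy as np

def relative_rotation(R_c1, R_c2):
    """Equation (4): R_RO = R_C1 (R_C2)^(-1) = R_C1 R_C2^T."""
    return R_c1 @ R_c2.T

def base_length(pc1, pc2):
    """Euclidean distance D between the two perspective centers."""
    return float(np.linalg.norm(np.asarray(pc2, float) - np.asarray(pc1, float)))
```

The drift of these two quantities between acquisition instants is what the stability constraints of Equations (5) and (6) below bound.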
Considering $R_{RO}(t)$ as the RO matrix and $D^2(t)$ as the squared distance between the cameras' perspective centers at instant t and, analogously, $R_{RO}(t+1)$ and $D^2(t+1)$ at instant t+1, it can be assumed that the RO matrix and the distance between the perspective centers are stable, admitting some random variations. Based on these assumptions, the following equations can be written:
$R_{RO}(t) - R_{RO}(t+1) = 0$ (5)

$D^2(t) - D^2(t+1) = 0$ (6)
Considering Equations (5) and (6), based on the EOP of both cameras at consecutive instants (t and t+1), four constraint equations can be written. The first three constraints are derived from the lower triangular part of the resulting matrix of Equation (5), because only three equations are linearly independent; the fourth equation is derived from the difference of squared distances in Equation (6).
$G_1 = \left(r_{21}^{c1} r_{11}^{c2} + r_{22}^{c1} r_{12}^{c2} + r_{23}^{c1} r_{13}^{c2}\right)_{(t)} - \left(r_{21}^{c1} r_{11}^{c2} + r_{22}^{c1} r_{12}^{c2} + r_{23}^{c1} r_{13}^{c2}\right)_{(t+1)} = 0 + \nu_1^c$ (7)
$G_2 = \left(r_{31}^{c1} r_{11}^{c2} + r_{32}^{c1} r_{12}^{c2} + r_{33}^{c1} r_{13}^{c2}\right)_{(t)} - \left(r_{31}^{c1} r_{11}^{c2} + r_{32}^{c1} r_{12}^{c2} + r_{33}^{c1} r_{13}^{c2}\right)_{(t+1)} = 0 + \nu_2^c$ (8)
$G_3 = \left(r_{31}^{c1} r_{21}^{c2} + r_{32}^{c1} r_{22}^{c2} + r_{33}^{c1} r_{23}^{c2}\right)_{(t)} - \left(r_{31}^{c1} r_{21}^{c2} + r_{32}^{c1} r_{22}^{c2} + r_{33}^{c1} r_{23}^{c2}\right)_{(t+1)} = 0 + \nu_3^c$ (9)
$G_4 = \left[(X_0^{c2} - X_0^{c1})^2 + (Y_0^{c2} - Y_0^{c1})^2 + (Z_0^{c2} - Z_0^{c1})^2\right]_{(t)} - \left[(X_0^{c2} - X_0^{c1})^2 + (Y_0^{c2} - Y_0^{c1})^2 + (Z_0^{c2} - Z_0^{c1})^2\right]_{(t+1)} = 0 + \nu_4^c$ (10)
in which the 0 value can be considered as a pseudo-observation with a certain variance, calculated by covariance propagation from the values admitted for the variations in the RO parameters, and $\nu_i^c$ is a residual in the constraint equation.
The base component elements, relative to camera 1, can be derived from the EOP with Equation (11).
$\begin{bmatrix} b_x \\ b_y \\ b_z \end{bmatrix} = R_{C1} \begin{bmatrix} X_0^{C2} - X_0^{C1} \\ Y_0^{C2} - Y_0^{C1} \\ Z_0^{C2} - Z_0^{C1} \end{bmatrix}$ (11)
The base components can also be considered stable during the acquisition, leading to the three equations of Equation (12), which can be used as the Base Components Stability Constraints (BCSC) instead of just one base length equation:
$\begin{bmatrix} b_x \\ b_y \\ b_z \end{bmatrix}_{(t)} - \begin{bmatrix} b_x \\ b_y \\ b_z \end{bmatrix}_{(t+1)} = 0$ (12)
Thus, for two pairs of images collected at consecutive stations, the RO constraints can be written using Equations (7)–(9) for the rotations and Equation (10) or (12) for base length or base components stability, respectively. A sketch of how these residuals can be assembled is given below.
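The following sketch (Python/numpy, illustrative names) assembles the constraint residuals for two consecutive instants; it exploits the fact that element (i, j) of $R_{C1} R_{C2}^T$ is exactly the sum $r_{i1}^{c1} r_{j1}^{c2} + r_{i2}^{c1} r_{j2}^{c2} + r_{i3}^{c1} r_{j3}^{c2}$ appearing in G1–G3:

```python
import numpy as np

def rotation_stability(R1_t, R2_t, R1_t1, R2_t1):
    """Equations (7)-(9): differences of the three independent
    lower-triangular elements of R_RO between instants t and t+1."""
    d = R1_t @ R2_t.T - R1_t1 @ R2_t1.T
    return np.array([d[1, 0], d[2, 0], d[2, 1]])      # G1, G2, G3

def base_components(R1, pc1, pc2):
    """Equation (11): base vector expressed in the frame of camera 1."""
    return R1 @ (np.asarray(pc2, float) - np.asarray(pc1, float))

def base_stability(R1_t, pc1_t, pc2_t, R1_t1, pc1_t1, pc2_t1):
    """Equation (12), BCSC: base components at t minus those at t+1."""
    return (base_components(R1_t, pc1_t, pc2_t)
            - base_components(R1_t1, pc1_t1, pc2_t1))
```

In the adjustment, these residual vectors are treated as pseudo-observations of zero with variances derived from the admitted variations, as described next.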
The mathematical models corresponding to the self-calibrating bundle adjustment and the mentioned constraints were implemented in C/C++ in the CMC (Calibration of Multiple Cameras) program, which uses the least squares combined model with constraints [32].
Collinearity equations combine adjusted observations (La) and unknowns (Xa) in an implicit model, which can be represented as F(La, Xa) = 0. The linearized model takes the following form [32], with some minor changes in notation:
$BV + AX + W = 0$ (13)
where A is the design matrix related to the unknown parameters, B is the design matrix related to the adjusted observations, V is the vector of residuals, X is the vector of corrections to the approximated parameters and W is a misclosure vector given by:
$W = -\left[B(L_b - L_0) + F(L_0, X_0)\right]$ (14)
Assuming that the additional constraint equations take the form Fc(Lac, Xa) = 0, the corresponding linearized equations are:
$B_c V_c + CX + W' = 0$ (15)
where Lac are “pseudo-observations”, Bc is the design matrix of functions Fc with respect to the pseudo-observations, C is the design matrix of functions Fc with respect to the parameters and W′ is also a misclosure vector:
$W' = -\left[B_c(L_c - L_{c0}) + F_c(L_{c0}, X_0)\right]$ (16)
The vector of corrections X to the approximated parameters X0, which is updated iteratively, is given by:
$X = -(N + N_c)^{-1}(U + U_c)$ (17)
where $N = A^T(BQB^T)^{-1}A$, $U = A^T(BQB^T)^{-1}W$, $N_c = C^T(B_cQ_cB_c^T)^{-1}C$ and $U_c = C^T(B_cQ_cB_c^T)^{-1}W'$. In Equation (17), $N_c$ and $U_c$ reflect the effects of the constraints.
Q is the a priori cofactor matrix of the observations and QC the a priori cofactor matrix of the “pseudo-observations”, which is derived by covariance propagation from the expected variations in the relative orientation parameters.
If the value of a parameter is known with a certain accuracy, for example, the coordinates of control points, then a weighted constraint can be applied similarly with Equations (15)–(17).
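One iteration of Equations (13)–(17) can be sketched as follows, treating all matrices as dense (Python/numpy; a real implementation such as CMC would exploit the block sparsity of the normal equations):

```python
import numpy as np

def combined_lsm_step(A, B, Q, W, C, Bc, Qc, Wc):
    """Corrections X = -(N + Nc)^(-1) (U + Uc) of Equation (17)."""
    M = B @ Q @ B.T                     # cofactor of the misclosure W
    Mc = Bc @ Qc @ Bc.T                 # cofactor of the constraint misclosure
    N = A.T @ np.linalg.solve(M, A)     # A^T (B Q B^T)^-1 A
    U = A.T @ np.linalg.solve(M, W)
    Nc = C.T @ np.linalg.solve(Mc, C)   # contribution of the constraints
    Uc = C.T @ np.linalg.solve(Mc, Wc)
    return -np.linalg.solve(N + Nc, U + Uc)
```

The corrections X update the approximate parameters X0 and the process is iterated until the corrections become negligible.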

3. Methodology

The approach used in this paper to generate larger images from dual-head oblique cameras follows four main steps, as previously presented in [15,16]: (1) dual-head system calibration with RO constraints; (2) image rectification; (3) image registration with translations and a scale check; and (4) fusion and brightness adjustment to generate a large image. The calibration step has now been changed to consider the Base Components Stability Constraints (BCSC) instead of the Base Length Stability Constraint. In the registration step, a further scale check was introduced to assess the need for a differential scale change to compensate for small differences in the camera positions.

3.1. Camera Calibration with RO Constraints

Self-calibrating bundle adjustment is performed with a minimum set of seven constraints, which are defined by the coordinates of three neighboring points and the RO constraints. The distance between two of these points must be accurately measured to define the scale of the photogrammetric network. After estimating the IOP, EOP and object coordinates of all photogrammetric points, a quality check is performed with distances between these points. This approach eliminates the need for accurate surveying of control points, which is difficult to achieve with the required accuracy. In the proposed approach, the estimated IOP were the camera focal length, the coordinates of the principal point, and the radial and decentering distortion parameters (Conrady-Brown model). Since the affine distortion of the cameras used in this work is not significant, these parameters were not considered.

3.2. Image Rectification

The second step is the rectification of the images with respect to a common reference system, using the EOP and the IOP computed in the calibration step. The EOP used for rectification were derived empirically from the terrestrial calibration data: from the existing pairs of EOP, the one for which the resulting fused image was nearly parallel to the calibration field was selected.
Firstly, the dimensions and the corners of the rectified image are defined using the inverse collinearity equations. Then, the pixel size is defined and the relations between the rectified image and the tilted image are computed with the collinearity equations. The RGB values of each pixel of the rectified image are interpolated at the projected position in the tilted image. The value used for the projection plane distance is the focal length of camera 1 (Figure 1(a,b)).
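Since both heads share (approximately) the same perspective center, the rectification reduces to a rotation-only mapping between image planes, which the following sketch implements (Python/numpy; principal point at the image center, nearest-neighbour sampling and a common focal length in pixels are simplifying assumptions, whereas the actual procedure interpolates the RGB values):

```python
import numpy as np

def rectify(tilted, R_tilt, R_rect, f_pix, out_shape):
    """Resample a tilted image onto the plane of a rectified camera that
    shares its perspective center (rotation-only homography)."""
    h, w = out_shape
    ht, wt = tilted.shape[:2]
    K = np.array([[f_pix, 0.0, w / 2.0], [0.0, f_pix, h / 2.0], [0.0, 0.0, 1.0]])
    Kt = np.array([[f_pix, 0.0, wt / 2.0], [0.0, f_pix, ht / 2.0], [0.0, 0.0, 1.0]])
    # map each rectified pixel direction into the tilted camera frame
    H = Kt @ (R_tilt @ R_rect.T) @ np.linalg.inv(K)
    v, u = np.mgrid[0:h, 0:w]
    p = H @ np.stack([u.ravel(), v.ravel(), np.ones(u.size)])
    uc = np.rint(p[0] / p[2]).astype(int)
    vc = np.rint(p[1] / p[2]).astype(int)
    ok = (uc >= 0) & (uc < wt) & (vc >= 0) & (vc < ht)   # inside the frame?
    out = np.zeros((h, w) + tilted.shape[2:], dtype=tilted.dtype)
    out.reshape(h * w, -1)[ok] = tilted[vc[ok], uc[ok]].reshape(ok.sum(), -1)
    return out
```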

3.3. Image Registration

The third step is the registration of the rectified images using tie points located in the overlap area with subpixel precision, using area based matching refined with Least Squares Matching (LSM). Ideally, the coordinates of these points should be the same but, owing to the different camera locations and to uncertainties in the EOP and IOP, discrepancies are unavoidable. The average values of the discrepancies can be introduced as translations in rows and columns to generate a virtual image by resampling the rectified right image (step 4). The standard deviations of these discrepancies can be used to assess the overall quality of the fusion process, and standard deviations smaller than 1–2 pixels can be obtained without significant discrepancies along the seam-line.
When the standard deviations of the discrepancies in the tie point coordinates are higher than a predefined threshold (e.g., 2 pixels), a scale factor can be computed from two corresponding tie points at the limits of the overlap area. This scale factor is used to compute a new projection plane distance, and the right image is rectified again. The registration process is then repeated to compute new discrepancies in the tie point coordinates and to check their standard deviations.
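A sketch of this registration logic (Python/numpy, illustrative names; tie points given as (row, column) pairs in the left and right rectified images):

```python
import numpy as np

def register(tie_left, tie_right, threshold=2.0):
    """Average tie point discrepancies give the registration translation;
    their standard deviation triggers the scale check (Section 3.3)."""
    d = np.asarray(tie_left, float) - np.asarray(tie_right, float)
    shift = d.mean(axis=0)                 # (row, column) translation
    sigma = d.std(axis=0, ddof=1)          # registration quality
    return shift, sigma, bool((sigma > threshold).any())

def scale_factor(p1_left, p2_left, p1_right, p2_right):
    """Scale from two corresponding tie points at the limits of the overlap,
    used to update the projection plane distance of the right image."""
    dl = np.linalg.norm(np.subtract(p2_left, p1_left))
    dr = np.linalg.norm(np.subtract(p2_right, p1_right))
    return dl / dr
```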

3.4. Images Fusion

The fourth step is the image fusion, in which the virtual images are generated (Figure 1(c,d)). The average discrepancy values in rows and columns of the tie points are used to correct the coordinates of each pixel, assigning RGB values to the pixels of the final image. This step requires resampling the right image, because the translation computed in step 3 has subpixel precision. Brightness adjustment is also applied, based on the differences in the R, G and B values in the tie point areas. A feathering technique can be applied to smooth transitions along the seamline, but this option was not used in the experiments.
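The subpixel translation and the brightness adjustment might be implemented as follows (Python/numpy sketch; bilinear resampling is one possible choice, and the per-channel gain model is an illustrative assumption):

```python
import numpy as np

def shift_bilinear(img, dr, dc):
    """Resample an RGB image by a subpixel (row, column) translation."""
    h, w = img.shape[:2]
    r = np.arange(h)[:, None] + dr         # source rows, shape (h, 1)
    c = np.arange(w)[None, :] + dc         # source columns, shape (1, w)
    r0 = np.clip(np.floor(r).astype(int), 0, h - 2)
    c0 = np.clip(np.floor(c).astype(int), 0, w - 2)
    fr, fc = (r - r0)[..., None], (c - c0)[..., None]
    img = img.astype(float)
    top = img[r0, c0] * (1 - fc) + img[r0, c0 + 1] * fc
    bot = img[r0 + 1, c0] * (1 - fc) + img[r0 + 1, c0 + 1] * fc
    return top * (1 - fr) + bot * fr

def brightness_gain(left_area, right_area):
    """Per-channel gain from mean R, G, B values of tie point areas;
    the right image is multiplied by this gain before fusion."""
    return left_area.mean(axis=(0, 1)) / right_area.mean(axis=(0, 1))
```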

4. Experimental Assessment

Two Fuji FinePix S3Pro RGB cameras, with a nominal focal length of 28 mm, were used in the experiments. The dual camera system is depicted in Figure 2 and the camera technical data are given in Table 1.
Firstly, the system was calibrated in a terrestrial test field consisting of a wall with signalized targets (Figure 3(a)). Several experiments were conducted to assess the results of distinct approaches, to check their effects on the rectified and fused virtual images, and to define optimal values for the RO constraints.
Forty images, from five distinct camera stations, were acquired and analyzed. The image coordinates of circular targets in the selected images were extracted with subpixel accuracy using an interactive tool that computes the centroid after automatic threshold estimation. At each exposure station, eight images were captured (four for each camera), with the dual-mount rotated by 0°, 90°, 270° and 180°. After eliminating images with weak point distribution, 37 images were used: 18 taken with camera 1 and 19 with camera 2. Of this set, 8 image pairs were acquired at the same instant by the dual system. For these 8 pairs, the RO constraint equations were introduced in the system of linearized equations. The remaining images were treated as isolated frames, and their EOP were also computed in the solution, along with the IOP of cameras 1 and 2 and the ground coordinates of the photogrammetric points.
In the experiments reported in previous papers [15,29], the calibrations were performed with known object space coordinates measured with topographic intersection methods. However, the accuracy of this set of coordinates (σ ≈ 3 mm) was not sufficient to allow analysis of the effects of the RO constraints. To avoid the use of these data, self-calibrating bundle adjustment was used to compute the parameters with a minimum set of seven absolute constraints. The 3D coordinates of one target in the object space, the X and Z coordinates of a second one and the Y and Z coordinates of a third one were introduced as absolute constraints. The X coordinate of the second point was measured with a precision calliper with an accuracy of 0.1 mm.
A further set of 131 distances between signalized targets (Figure 3(c)) was measured with a precision calliper with an accuracy of 0.1 mm, and these distances were used to check the results of the calibration process. After bundle adjustment, the distances between two targets can be computed from the estimated 3D coordinates and compared to the directly measured distances.
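This check can be sketched as follows (Python/numpy; target coordinates stacked row-wise and check pairs given as index tuples — illustrative names):

```python
import numpy as np

def distance_rmse(xyz, pairs, measured):
    """RMSE between distances derived from the adjusted 3D coordinates of
    the targets and the directly measured check distances."""
    xyz = np.asarray(xyz, float)
    i, j = np.asarray(pairs).T
    estimated = np.linalg.norm(xyz[i] - xyz[j], axis=1)
    return float(np.sqrt(np.mean((estimated - np.asarray(measured, float)) ** 2)))
```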
To assess the proposed methodology with real data, seven experiments were carried out, without and with different weights for the RO constraints. The experiments were carried out with the RRMSC (Relative Rotation Matrix Stability Constraints—Equations (7)–(9)) and the BCSC (Base Components Stability Constraints—Equation (12)), varying the weights of the constraints. The two cameras were also calibrated in two separate runs (Experiment A) and in the same bundle system but without RO constraints (Experiment B). In experiments C to G, RO constraints were introduced with different weights, considering different variations admitted for the angular elements. Table 2 summarizes the characteristics of each experiment.
Figure 4 presents the RMSE (Root Mean Squared Error) of the discrepancies in the 131 check distances, in the object space, for all the experiments. It can be seen that the errors in the check distances were slightly higher in the experiments with RO constraints. This result can be explained by the restriction imposed by the RO constraints, which enforce a solution that does not fit well for all the object space points.
In Figure 5, the estimated standard deviations of some of the IOP (f, x0 and y0) of both cameras are presented for each experiment. It can be seen that similar estimated standard deviations were achieved in all experiments, except when the cameras were calibrated independently (Experiment A1).
The base components were then computed from the estimated EOP (Equation (11)) for those pairs of images that received stability constraints (six pairs). The average values and their standard deviations were computed to assess how consistently these values were estimated. Figure 6 depicts the standard deviations of the base components for all experiments. It can be seen that precise values are achieved when introducing the base components stability constraints, without significantly affecting the estimation of the IOP or the overall accuracy of the bundle adjustment.
The relative rotation matrices of the same six image pairs were also computed for all experiments with Equation (4). The average values of the angles and their standard deviations were then computed; they are presented in Figure 7. It can be seen that the standard deviations of the RO angles are compatible with the admitted variations imposed with the Relative Rotation Matrix Stability Constraints (RRMSC). Experiments A and B, without constraints, presented high standard deviations, as expected.
The second part of the experiments was performed with aerial images taken with the same dual camera arrangement at a flying height of 1,520 m, resulting in a GSD (Ground Sample Distance) of 24 cm (see Figure 8). The IOP and EOP computed for each experiment were then used to produce virtual images from the dual frames acquired in this flight. Five image pairs in a single strip were selected with the aim of analyzing the registration and image fusion processes (Figure 8(a)).
Firstly, the image pairs were rectified using the IOP estimated in the self-calibration process with terrestrial data, for each group of experiments. Then, tie points were located in the overlap area of the rectified image pairs using area based correspondence methods (a minimum of 20 points for each pair). The average values of the discrepancies and their standard deviations were then computed for each image pair. Figure 9 presents the standard deviations of the discrepancies in the tie point coordinates (columns and rows) between the rectified image pairs. These deviations show the quality of the matching process when mosaicking the dual images to generate a virtual image. It can be seen that images fused using parameters generated by self-calibration without constraints (experiments A and B) present residuals with a standard deviation around 1.3 pixels in columns (σc) and 1 pixel in rows (σr). The matching of the rectified image pairs is better when using parameters generated by self-calibration with RO constraints, mainly in experiments C and D (in which angular variations of 1″ and 10″ were considered, respectively). The effects of varying the weight of the base components constraints were not assessed in these experiments.
The distances between tie points in the overlap area were used to compute the scale factor and to generate new rectified images for the right camera. The process of measuring tie points and computing discrepancies and their standard deviations was repeated, and the average values of the standard deviations are shown in Figure 10. The results were not significantly improved, but it is possible to observe that the fusion is still better when the parameters are generated with ROP constraints.
Virtual images generated for all experiments were then used in a bundle block adjustment, and the results were assessed with independent check points. From the set of virtual images generated, three images were selected (Figure 8(b)), with six control points and 23 independent check points (Figure 11). The ground coordinates of the control and check points were determined with GNSS receivers in relative mode, achieving an accuracy within 2 cm. These points were measured interactively with LPS (Leica Photogrammetric Suite) in a reference project. Then, the image points were transferred automatically with image correlation to all images of the experiments, ensuring that the same points were used. The RMSE of the check point coordinates, obtained from the differences between the values estimated by the bundle adjustment and the ground truth, are presented in Figure 12(a,b). Figure 12(a) presents the results for the images generated without scale correction of the right image, whilst Figure 12(b) presents the RMSE values for the images generated with the right images corrected with a scale change.
The RMSE in the check point coordinates was around 1 GSD (X and Y) and 2 GSD (Z) for experiments D and G. The RMSE values in Z were higher in the other experiments (around 3 GSD). In general, it can be concluded that the proposed process works successfully, achieving results similar to those of a conventional frame camera with a single sensor.

5. Conclusions

In this paper, a set of techniques for dual-head camera calibration and virtual image generation was presented and experimentally assessed. Experiments were performed with Fuji FinePix S3Pro RGB cameras. The experiments have shown that the images can be accurately rectified and registered with the proposed approach, with residuals smaller than one pixel, and that they can be used in photogrammetric projects. The calibration step was assessed with distinct strategies, without and with constraints considering the stability of the relative orientation between the cameras. In comparison with the approach presented in previous papers, two improvements were introduced: constraints on the base components rather than on the base length, and the use of self-calibration with an independent distance check. An infrared channel can also be registered to the image by using a third camera, as was shown in [15]. This technique can be widely used in airborne remote sensing, producing geometrically accurate images.
The advantage of the proposed approach is that an ordinary calibration field can be used and no specialized facilities are required. The same approach can be applied to other tasks, such as the generation of panoramas, a suggestion that can be assessed in future work. Additionally, the proposed approach is suitable for lightweight cameras installed on UAVs, improving the area covered in each shot and enabling the integration of other cameras with additional spectral channels, which can be useful in several airborne remote sensing applications.
The focus of this work was the assessment of the influence of the RO constraints on the virtual image generation. However, it is important to mention that some further implementations are required to achieve a fully automatic processing chain for operational purposes.
Another suggestion for future work is the comparison of the proposed technique with the modified collinearity equations in which the EOPs of one camera are replaced by a function of the ROP and the EOP of the master camera.

Acknowledgments

The authors would like to acknowledge the support of FAPESP (Fundação de Amparo à Pesquisa do Estado de São Paulo) with Grant No. 07/58040-7. The authors are also thankful to CNPq for supporting the project with Grants 472322/04-4, 481047/04-2, 478782/09-8 and 305111/10-8.

References

  1. Ruy, R.S.; Tommaselli, A.M.G.; Galo, M.; Hasegawa, J.K.; Reis, T.T. Accuracy Analysis of Modular Aerial Digital System SAAPI in Projects of Large Areas. Proceedings of EuroCOW 2012—International Calibration and Orientation Workshop, Castelldefels, Spain, 8–10 February 2012.
  2. Mostafa, M.M.R.; Schwarz, K.-P. A multi-sensor system for airborne image capture and georeferencing. Photogramm. Eng. Remote Sensing 2000, 66, 1417–1423.
  3. Zeitler, W.; Doerstel, C.; Jacobsen, K. Geometric Calibration of the DMC: Method and Results. Proceedings of the ISPRS Commission I/Pecora 15 Conference, Denver, CO, USA, 10–15 November 2002; pp. 324–333.
  4. Hunt, E.R.; Hively, W.D.; Fujikawa, S.J.; Linden, D.S.; Daughtry, C.S.T.; McCarty, G.W. Acquisition of NIR-green-blue digital photographs from unmanned aircraft for crop monitoring. Remote Sens. 2010, 2, 290–305.
  5. Ritchie, G.L.; Sullivan, D.G.; Perry, C.D.; Hook, J.E.; Bednarz, C.W. Preparation of a low-cost digital camera system for remote sensing. Appl. Eng. Agric. 2008, 24, 885–896.
  6. Chao, H.; Jensen, A.M.; Han, Y.; Chen, Y.; McKee, M. AggieAir: Towards Low-Cost Cooperative Multispectral Remote Sensing Using Small Unmanned Aircraft Systems. In Advances in Geoscience and Remote Sensing; Jedlovec, G., Ed.; InTech: Rijeka, Croatia, 2009.
  7. Schoonmaker, J.; Podobna, Y.; Boucher, C.; Saggese, S.; Runnels, D. Multichannel imaging in remote sensing. Proc. SPIE 2009.
  8. Hakala, T.; Suomalainen, J.; Peltoniemi, J.I. Acquisition of bidirectional reflectance factor dataset using a micro unmanned aerial vehicle and a consumer camera. Remote Sens. 2010, 2, 819–832.
  9. Laliberte, A.S.; Goforth, M.A.; Steele, C.M.; Rango, A. Multispectral remote sensing from unmanned aircraft: Image processing workflows and applications for rangeland environments. Remote Sens. 2011, 3, 2529–2551.
  10. D'Oleire-Oltmanns, S.; Marzolff, I.; Peter, K.D.; Ries, J.B. Unmanned Aerial Vehicle (UAV) for monitoring soil erosion in Morocco. Remote Sens. 2012, 4, 3390–3416.
  11. Grenzdörffer, G.; Niemeyer, F.; Schmidt, F. Development of Four Vision Camera System for a Micro-UAV. Proceedings of the XXII ISPRS Congress, Melbourne, Australia, 25 August–1 September 2012; pp. 369–374.
  12. Yang, C. A high-resolution airborne four-camera imaging system for agricultural remote sensing. Comput. Electron. Agric. 2012, 88, 13–24.
  13. Holtkamp, D.J.; Goshtasby, A.A. Precision registration and mosaicking of multicamera images. IEEE Trans. Geosci. Remote Sens. 2009, 47, 3446–3455.
  14. Petrie, G. Systematic oblique aerial photography using multiple digital frame cameras. Photogramm. Eng. Remote Sensing 2009, 75, 102–107.
  15. Tommaselli, A.M.G.; Galo, M.; Marcato, J., Jr.; Ruy, R.S.; Lopes, R.F. Registration and Fusion of Multiple Images Acquired with Medium Format Cameras. Proceedings of the Canadian Geomatics Conference 2010 and Symposium of Commission I, Calgary, AB, Canada, 15–18 June 2010.
  16. Tommaselli, A.M.G.; Moraes, M.V.A.; Marcato Junior, J.; Caldeira, C.R.T.; Lopes, R.F.; Galo, M. Using Relative Orientation Constraints to Produce Virtual Images from Oblique Frames. Proceedings of the XXII ISPRS Congress, Melbourne, Australia, 25 August–1 September 2012; pp. 61–66.
  17. Brown, D. Close-range camera calibration. Photogramm. Eng. 1971, 37, 855–866.
  18. Clarke, T.; Fryer, J. The development of camera calibration methods and models. Photogramm. Rec. 1998, 16, 51–66.
  19. Habib, A.F.; Morgan, M.F. Automatic calibration of low-cost digital cameras. Opt. Eng. 2003, 42, 948–955.
  20. Merchant, D.C. Analytical Photogrammetry: Theory and Practice; Ohio State University: Columbus, OH, USA, 1979.
  21. Choi, K.; Lee, I. A sequential aerial triangulation algorithm for real-time georeferencing of image sequences acquired by an airborne multi-sensor system. Remote Sens. 2013, 5, 57–82.
  22. Nakano, K.; Chikatsu, H. Camera-variant calibration and sensor modeling for practical photogrammetry in archeological sites. Remote Sens. 2011, 3, 554–569.
  23. Zhuang, H. A self-calibration approach to extrinsic parameter estimation of stereo cameras. Robot. Auton. Syst. 1995, 15, 189–197.
  24. He, G.; Novak, K.; Feng, W. Stereo camera system calibration with relative orientation constraints. Proc. SPIE 1994.
  25. King, B.A. Methods for the photogrammetric adjustment of bundles of constrained stereopairs. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 1994, 30, 473–480.
  26. King, B.A. Bundle adjustment of constrained stereopairs—mathematical models. Geomat. Res. Austr. 1995, 63, 67–92.
  27. El-Sheimy, N. The Development of VISAT—A Mobile Survey System for GIS Applications. Ph.D. Thesis, University of Calgary, Calgary, AB, Canada, 1996.
  28. Tommaselli, A.M.G.; Alves, A.O. Calibração de uma Estereocâmara Baseada em Vídeo. In Séries em Ciências Geodésicas—30 Anos de Pós-Graduação em Ciências Geodésicas no Brasil; Universidade Federal do Paraná: Curitiba, PR, Brazil, 2001; Volume 1, pp. 199–213.
  29. Tommaselli, A.; Galo, M.; Bazan, W.; Ruy, R.; Junior, J.M. Simultaneous Calibration of Multiple Camera Heads with Fixed Base Constraint. Proceedings of the 6th International Symposium on Mobile Mapping Technology, Presidente Prudente, SP, Brazil, 21–24 July 2009.
  30. Lerma, J.L.; Navarro, S.; Cabrelles, M.; Seguí, A.E. Camera calibration with baseline distance constraints. Photogramm. Rec. 2010, 25, 140–158.
  31. Blázquez, M.; Colomina, I. Fast AT: A simple procedure for quasi direct orientation. ISPRS J. Photogramm. 2012, 71, 1–11.
  32. Mikhail, E.M.; Ackermann, F.E. Observations and Least Squares; University Press of America: New York, NY, USA, 1983.
Figure 1. Resulting rectified images of dual cameras: (a) left image from camera 2; (b) right image from camera 1; (c) resulting fused image from the two rectified images after registration; and (d) fused image cropped without the borders.
Figure 2. Dual head system with two Fuji S3 Pro cameras.
Figure 3. (a) Image of the calibration field; (b) origin of the arbitrary object reference system; and (c) existing targets and distances directly measured with a precision calliper for quality control.
Figure 4. Root Mean Squared Error (RMSE) of the check distances.
Figure 5. Estimated standard deviations of f, x0 and y0 for both cameras.
Figure 6. Standard deviations of the computed base components.
Figure 7. Standard deviations of rotation elements of the Relative Rotation matrix computed from estimated exterior orientation parameters (EOP).
Figure 8. (a) Set of virtual images used in the fusion experiments; (b) reduced set used in the bundle block adjustment.
Figure 9. Average values for the standard deviations of discrepancies in tie point coordinates of five rectified image pairs with different sets of Interior Orientation Parameters (IOP) and Relative Orientation Parameters (ROP).
Figure 10. Average values for the standard deviations of discrepancies in tie point coordinates of five rectified image pairs with different sets of IOP and ROP, after scale change in the right image.
Figure 11. Distribution of ground control points (triangles) and check points (circles) in the experiment with bundle adjustment.
Figure 12. RMSE in the check point coordinates obtained in a bundle adjustment with three virtual images generated with parameters obtained in the experiments: (a) without and (b) with scale correction in the right image.
Table 1. Technical details of the camera used.

Camera:             Fuji S3 Pro
Sensor:             CCD, 23.0 × 15.5 mm
Number of pixels:   4,256 × 2,848 (12 MP)
Pixel size (mm):    0.0054
Focal length (mm):  28.4
Table 2. Characteristics of the seven experiments with real data.

Experiment                            | A1 and A2            | B | C  | D   | E   | F   | G
RO constraints                        | Single camera calib. | N | Y  | Y   | Y   | Y   | Y
Variation of the RO angular elements  | -                    | - | 1″ | 10″ | 15″ | 30″ | 1′
Variation of the base components (mm) | -                    | - | 1  | 1   | 1   | 1   | 1
