Open Access Article

GNSS-Assisted Integrated Sensor Orientation with Sensor Pre-Calibration for Accurate Corridor Mapping

1 LaSTIG, IGN, ENSG, University Paris-Est, F-94160 Saint-Mande, France
2 CACOH, Compagnie Nationale du Rhône, 69007 Lyon, France
* Author to whom correspondence should be addressed.
This paper is an extended version of our paper published in Zhou, Y.; Rupnik, E.; Faure, P.; Pierrot-Deseilligny, M. GNSS-assisted accurate corridor mapping with small UAV. In Proceedings of the Conférence Française de Photogrammétrie et Télédétection (CFPT), Paris, France, 25–28 June 2018.
Sensors 2018, 18(9), 2783; https://doi.org/10.3390/s18092783
Received: 15 June 2018 / Revised: 8 August 2018 / Accepted: 21 August 2018 / Published: 24 August 2018
(This article belongs to the Section Remote Sensors, Control, and Telemetry)

Abstract

With the development of unmanned aerial vehicles (UAVs) and global navigation satellite systems (GNSS), accurate camera positions at exposure can be determined, making the GNSS-assisted bundle block adjustment (BBA) approach possible for integrated sensor orientation (ISO). This study employs the ISO approach for camera pose determination, with the objective of investigating the impact of a good sensor pre-calibration on a poor acquisition geometry. Within the presented work, several flights were conducted over a dike by a small UAV carrying a metric camera and a GNSS receiver. The multi-lever-arm estimation within the BBA procedure makes it possible to merge image blocks of different configurations, such as nadir and oblique images, without physical constraints on the camera and GNSS antenna positions. The merged image block achieves a better accuracy and the sensor is well self-calibrated. The resulting sensor calibration is then applied to a less preferable acquisition configuration and the accuracy is significantly improved. For a corridor acquisition scene of about 600 m, a centimetric accuracy is reached with one GCP. With the provided sensor pre-calibration, an accuracy of 3.9 cm is achieved without any GCP.
Keywords: photogrammetry; sensor calibration; GNSS; integrated sensor orientation

1. Introduction

For traditional airborne photogrammetry with unmanned aerial vehicles (UAVs), camera poses (position and attitude) are determined indirectly using the well-known bundle block adjustment (BBA) method. BBA is effective and widely employed for photogrammetric production when the scene is well textured and allows automated tie point extraction.
Photogrammetric accuracy is strongly dependent on the acquisition geometry, on the number of ground control points (GCPs) and on their distribution within the image block [1,2]. Among the different configurations of UAV acquisitions, corridor mapping is of particular interest, for instance in dike surveillance, highway planning and power line surveys [3]. Nevertheless, it is challenging in many respects. The challenges come mainly from the special network geometry, and from the fact that a well-distributed GCP network is not easy to ensure. The acquired images often form long strips, which makes it difficult to obtain satisfying cross-track overlaps and thus results in a less accurate camera pose determination. While the employment of a large number of GCPs prevents stereo model distortion [4], the field work of establishing GCPs can be substantially expensive and time-consuming.
With the appearance and development of global navigation satellite systems (GNSS), it is possible to know the accurate camera projection center positions at exposure and thus to reduce, or even eliminate, the need for GCPs within the image block [5,6]. Several GCPs are still used to improve redundancy and to identify possible biases in GNSS positioning [7,8]. When coupling this GNSS-assisted bundle block adjustment approach with an inertial navigation system (INS), camera poses can be determined directly for each image, without the traditional BBA procedure. That said, for small UAVs, accurate position and attitude aerial control remains challenging due to the limited payload, power and volume available.

1.1. Related Work

Different studies have been carried out to mitigate the errors in INS/GNSS systems. The linear lever-arm offset can be determined indirectly by computing the difference between the GNSS-derived positions of the antenna reference point and the camera projection centers issued from bundle block adjustment [9]. This offset can also be estimated as an additional parameter during the bundle block adjustment; however, the accuracy is then limited due to its correlation with the camera interior orientation parameters [10,11,12]. While the linear lever-arm offset can be determined with classical methods (by a calliper or by photogrammetric approaches), boresight calibration cannot reach sufficient accuracy in the same way. Accordingly, several boresight calibration methods have been proposed [13,14,15,16]. Calibration can be performed either with a “one-step” method (i.e., estimation within the BBA process) or with a “two-step” method (i.e., comparison of the INS/GNSS-derived attitude with that obtained by the BBA process). Moreover, taking into account the remaining temporal correlations within the navigation system can lead to a more realistic estimation [17]. Despite these possibilities for improving the accuracy of an INS/GNSS system, the INS demands more effort for error reduction and often needs accurate GNSS positions as constraints for error control. Given the limited UAV payload capability, this demand for high-quality INS/GNSS sensors can be difficult to meet.
Therefore, for small UAV photogrammetric acquisitions, the position aerial control of GNSS systems outperforms the attitude aerial control of INS systems thanks to its better accuracy, lower cost and lower payload requirements. Bundle block adjustment assisted with GNSS data, tie points and GCPs is thus a more interesting approach; it is also referred to as integrated sensor orientation (ISO). On the other hand, due to the substantial inconvenience and cost of GCP establishment, especially in areas with difficult access, the number of GCPs should be reduced to a minimum.
In 2002, a multi-site test investigating sensor orientation was carried out by the European Organization for Experimental Photogrammetric Research (OEEPE) [18] using the state-of-the-art GNSS/IMU technology of 1999. It showed that direct sensor orientation can reach an accuracy of 5–10 cm in planimetry and 10–15 cm in altimetry, which is worse than the standard photogrammetric result by a factor of 2–3. The additional introduction of tie points into the BBA procedure without GCPs improves in particular the accuracy in height, and an accuracy of 5–10 cm is reached. If a minimum number of GCPs is introduced, one can achieve an accuracy in object space very similar to that of conventional photogrammetry. Although direct sensor orientation decreases time and cost significantly, integrated sensor orientation is preferable when very high accuracy is indispensable.
In [19], an aerial acquisition in block configuration is performed over a 250 m × 300 m block with an on-board GNSS-inertial system integrated with a Sony a7R camera. Eight north–south strips and one cross strip are flown at 80 m, resulting in a GSD of 1 cm. With aerial control issued from the GNSS-inertial system and one GCP included in the BBA procedure, an accuracy of 3 cm is obtained on both horizontal axes and of 11 cm on the vertical component.
For a corridor mapping, two flight strips are flown at 135 m [20]. Data from a high-grade dual-frequency GPS receiver are introduced into the BBA procedure as aerial position control. With a pre-calibrated camera and without GCPs, accuracies of 12, 11 and 64 cm are reached in the northing, easting and height components, respectively.
With the aid of precise aerial position control together with a pre-calibrated camera, two case studies for accurate mapping without GCPs are presented in [5]. The first one is a block configuration, where a self-calibration is performed within the BBA procedure without GCPs; the obtained accuracy is 4.3 cm horizontally and 4 cm vertically. The second case study is a corridor configuration of 1200 × 180 m with a pre-calibrated camera. Without GCPs in the BBA procedure, accuracies of 5.9, 3.3 and 7.0 cm are reached on the x, y and z components, respectively.
A new approach based on relative aerial control is proposed for integrated sensor orientation in [21]. To some extent, the procedure is simpler since the boresight matrix vanishes from the model; moreover, a high-grade dual-frequency GNSS receiver can be replaced by a single-frequency carrier-phase GNSS receiver, since relative aerial control is more resistant to GNSS biases. A block configuration is flown with relative aerial control in [6]. With one GCP and partially biased GNSS data, the accuracy is maintained compared to the bias-free case and amounts to 2.9, 2.2 and 3.8 cm on the x, y and z components, respectively.
A dynamic network approach that tightly integrates GNSS/INS measurements into the BBA procedure is proposed in [22]. For a two-flight-strip acquisition in corridor configuration with pre-calibrated boresight, lever-arm and camera intrinsics, accuracies of 2.5, 1.5 and 1.2 cm on the easting, northing and vertical components, respectively, are obtained without GCPs.

1.2. Paper Structure

Section 2 gives a brief introduction of the employed UAV system and sensors. Section 3 details the methodological aspects. In Section 4, the nature of the acquisition field is described and the conducted flights are presented. Section 5 is devoted to the data processing. Finally, the results of the conducted acquisitions are presented and the photogrammetric accuracy is evaluated. A discussion of the results is given in Section 6, together with recommendations for further investigations.

2. System Design

2.1. UAV

The chosen UAV is a Copter 1B from SURVEY Copter (see Figure 1). It has a wingspan of 1.82 m and a length of 1.66 m. Powered by a gasoline engine, the UAV has a maximal payload capacity of 4.1 kg and an endurance of up to 60 min. Its nominal flying altitude is 150 m and its maximal flight speed is 10 m/s. The UAV communicates by radio with its command station; given a pre-set flight plan registered in the command station, the flight can be performed automatically, so a steady longitudinal/lateral overlap can be assured. An aluminium base mounted on the UAV was adopted for rigid camera installation and cable fixation.

2.2. Camera

The camera chosen for data acquisition is CamLight, an in-house metric camera (see Figure 1) designed by the Loemi team (Laboratoire d’Opto-éléctronique, de Métrologie et d’Instrumentation) of the LaSTIG laboratory, IGN (Institut National de l’Information Géographique et Forestière), to meet the needs of photogrammetric UAV acquisitions [23]. The compact camera body (without lens) has a low mass of 160 g and is compatible with most commercially available lenses. The camera is equipped with a full-frame sensor of 5120 × 3840 pixels and a 35 mm lens (140 g). During acquisition, the camera is powered by the on-board power supply and is triggered by an intervalometer every 2.5 s.

2.3. GNSS Module

The GNSS module integrated into the camera system is a u-blox Neo M8T chip with an L1 GPS antenna. The GPS time of each exposure is provided by the GPS module and is registered in the header file of each image. During the acquisition, the GPS sampling rate was set to 1 Hz.

3. Methodology

Figure 2 depicts the global data processing workflow. With the aid of a priori information coming from the GNSS trajectories, tie points were extracted solely on overlapping images with the SIFT (Scale Invariant Feature Transform) algorithm [24]. A first BBA procedure including only tie points was performed to recover the observed scene. At this step, information on the camera attitudes becomes available and the tie point set is completed by taking it into account. A second BBA procedure including the complete tie points was performed to estimate camera poses at a relative scale. GNSS-derived camera positions were then used to transform the estimated relative camera poses into absolute ones by estimating a spatial similarity. Afterwards, the tie points, the GCPs, the GNSS-derived camera positions and the image measurements were used as observations for an absolute BBA. Finally, the accuracy was evaluated with CPs.
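The relative-to-absolute step amounts to estimating a 7-parameter spatial similarity (scale, rotation, translation) mapping the relative camera centers onto the GNSS-derived positions. A minimal sketch using the closed-form (Umeyama-style) solution, with hypothetical coordinates rather than the actual MicMac implementation:

```python
import numpy as np

# Sketch: estimate the spatial similarity (scale s, rotation R,
# translation t) such that dst ~ s * R @ src + t, via the closed-form
# SVD solution. Coordinates below are hypothetical.
def similarity_fit(src, dst):
    mu_s, mu_d = src.mean(0), dst.mean(0)
    A, B = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(B.T @ A / len(src))
    D = np.eye(3)
    if np.linalg.det(U @ Vt) < 0:      # guard against reflections
        D[2, 2] = -1.0
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / (A**2).sum() * len(src)
    t = mu_d - s * R @ mu_s
    return s, R, t

# Relative camera centers and their simulated absolute counterparts
rel = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0.2]])
abs_pos = 2.0 * rel + np.array([100.0, 200.0, 50.0])  # s=2, R=I, known t
s, R, t = similarity_fit(rel, abs_pos)
print(round(s, 3), np.round(t, 3))
```

Once (s, R, t) is known, every relative pose can be mapped into the absolute frame before the absolute BBA refines it.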
The bundle block adjustment was carried out with bootstrap solutions: performing direct algorithms on a single image, a pair or a triplet of images, the global orientation was deduced sequentially starting from a seed image pair [25]. The camera was self-calibrated during the relative BBA process with a two-step procedure. First, the center of distortion (CD) and the radial symmetric distortion (RSD) were estimated (the latter with a polynomial up to R^15). Then, the RSD and CD parameters were fixed, and the asymmetric radial distortion was estimated. The objective of this second phase was to model the sensor and lens discrepancies that have no physical explanation. The refined camera model, and especially the high-degree polynomial for radial distortion correction, significantly mitigates the recurrent “dome effect” without over-parameterizing [4,26]. The estimated parameters of the camera model then served as input for an absolute bundle block adjustment and can be either re-estimated or fixed within that procedure. This absolute BBA procedure takes tie points, GNSS-derived camera positions, GCP coordinates and image measurements as observations. The aforementioned observations being redundant, the cost function was minimized with the least squares method.
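A radial symmetric distortion model of this kind can be sketched as an odd polynomial in the distance to the distortion center, up to degree 15. The distortion center and coefficients below are hypothetical, purely for illustration:

```python
import numpy as np

# Sketch (hypothetical coefficients): radial symmetric distortion (RSD)
# about a distortion center (CD), with radial terms up to r^15 as in the
# first calibration phase described above.
def apply_rsd(pts, center, coeffs):
    """pts: (N, 2) image points; coeffs: k1..k7 weighting r^3, r^5, ..., r^15."""
    d = pts - center
    r = np.linalg.norm(d, axis=1, keepdims=True)
    # radius scaling factor 1 + k1*r^2 + k2*r^4 + ... + k7*r^14
    factor = 1.0 + sum(k * r**(2 * (i + 1)) for i, k in enumerate(coeffs))
    return center + d * factor

center = np.array([2560.0, 1920.0])        # distortion center (px), hypothetical
coeffs = [1e-9, -1e-16, 0, 0, 0, 0, 0]     # hypothetical k1..k7
pts = np.array([[3000.0, 2000.0],          # off-center point: displaced outward
                [2560.0, 1920.0]])         # point at the CD: unchanged
out = apply_rsd(pts, center, coeffs)
print(out)
```

In the second phase the analogous asymmetric terms would be estimated with this factor held fixed.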
The mathematical model of the performed bundle block adjustment is presented as follows. The cost function to minimize is:

$$
\begin{aligned}
E = {} & \sum_{l=1}^{L}\sum_{m=1}^{M}\left\|\frac{p_{l,m}-\zeta\big(\pi\big(R_m(P_l-C_m)\big)\big)}{\sigma_{imA}}\right\|^2 \\
&+\sum_{z=1}^{Z}\sum_{k=1}^{K}\left\|\frac{R_k(C_k-C_{gnss,k})-\theta_z}{\sigma_{gnss}}\right\|^2 \\
&+\sum_{n=1}^{N}\left\|\frac{P_n-P_{gcp,n}}{\sigma_{gcp}}\right\|^2 \\
&+\sum_{n=1}^{N}\sum_{m=1}^{M}\left\|\frac{p_{n,m}-\zeta\big(\pi\big(R_m(P_{gcp,n}-C_m)\big)\big)}{\sigma_{imH}}\right\|^2
\end{aligned}
$$
where:
  • l is the index of tie points;
  • m is the image index;
  • z is the index of image blocks grouped by lever-arm;
  • k is the index of images with GNSS measurements;
  • n is the index of GCPs;
  • ζ is the camera model;
  • π is the projection function;
  • p_{l,m} is the 2D position of tie point l in image m;
  • (R_m, C_m) is the pose of image m;
  • P_l is the 3D position of tie point l;
  • C_k is the camera projection center of image k;
  • C_{gnss,k} is the phase center of the GNSS antenna for image k;
  • R_k is the world-to-camera rotation;
  • θ_z is the lever-arm of image block z;
  • P_n is the pseudo-intersection position of GCP n;
  • P_{gcp,n} is the ground measurement of GCP n;
  • p_{n,m} is the image measurement of GCP n in image m;
  • σ_{imA} is the weight of tie points in images;
  • σ_{gnss} is the weight of GNSS measurements;
  • σ_{gcp} is the weight of GCPs; and
  • σ_{imH} is the weight of image measurements of GCPs.
The global cost function is composed of four parts: (1) the cost of tie points; (2) the cost of GNSS measurements; (3) the cost of GCPs; and (4) the cost of image measurements of GCPs. Three strategies were applied to determine the observation weights. The first strategy weighted observations by their true standard deviation known a priori (similar to the Gauss–Markov model). The second strategy controlled the observation weights of each category, which avoids overweighting a single category solely due to its abundance (e.g., tie point observations are abundant compared to GCP observations). The third strategy handled robustness: a higher weight was given to observations having small residuals during the BBA procedure. The minimization problem was solved with the Levenberg–Marquardt (L-M) method. L-M is in essence the Gauss–Newton method enriched with a damping factor to handle rank-deficient Jacobian matrices [27].
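The per-category weighting can be illustrated on a toy problem: residuals from each observation category are divided by that category's a priori standard deviation before being stacked and minimized with Levenberg–Marquardt. Everything below (the point, the sigmas, the observation counts) is hypothetical, not data from the paper:

```python
import numpy as np
from scipy.optimize import least_squares

# Toy problem: estimate a 3D point X from two observation categories
# with different a priori standard deviations, mimicking the
# per-category weighting of the BBA cost function.
rng = np.random.default_rng(0)
X_true = np.array([10.0, -4.0, 2.5])

obs_a = X_true + rng.normal(0, 0.05, size=(50, 3))   # abundant, noisy (tie-point-like)
sigma_a = 0.05
obs_b = X_true + rng.normal(0, 0.005, size=(1, 3))   # scarce, precise (GCP-like)
sigma_b = 0.005

def residuals(X):
    # Dividing by the category sigma prevents the abundant noisy
    # category from overwhelming the scarce precise one.
    r_a = ((obs_a - X) / sigma_a).ravel()
    r_b = ((obs_b - X) / sigma_b).ravel()
    return np.concatenate([r_a, r_b])

sol = least_squares(residuals, x0=np.zeros(3), method="lm")  # Levenberg-Marquardt
print(np.round(sol.x, 3))
```

The robustness strategy of the paper would additionally down-weight large residuals between iterations, which this sketch omits.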
Typically, a single lever-arm was estimated per image block. Here, to deal with an image block of multiple flight configurations (i.e., nadir and oblique camera-looking flights), the cost function was extended to take into account multiple lever-arms in one image block.
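The extended GNSS term simply selects the lever-arm of the block an image belongs to. A minimal sketch with hypothetical rotation, positions and lever-arms:

```python
import numpy as np

# Sketch (hypothetical values): GNSS residual of the extended cost
# function, with one lever-arm theta_z per flight-configuration block.
def gnss_residual(R_k, C_k, C_gnss_k, theta, block_id):
    """R_k (C_k - C_gnss_k) - theta[block_id], per the extended BBA term."""
    return R_k @ (C_k - C_gnss_k) - theta[block_id]

theta = {0: np.array([0.00, 0.10, 0.35]),   # nadir-block lever-arm (m)
         1: np.array([0.00, 0.28, 0.22])}   # oblique-block lever-arm (m)
R_k = np.eye(3)                             # world-to-camera rotation
C_k = np.array([100.0, 200.0, 50.0])        # camera projection center
C_gnss = C_k - R_k.T @ theta[1]             # antenna position consistent with block 1
r = gnss_residual(R_k, C_k, C_gnss, theta, block_id=1)
print(r)
```

The residual vanishes when the correct block's lever-arm is used; in the BBA all θ_z are estimated jointly with the poses.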

4. Data Acquisition

4.1. Acquisition Field

On 4 October 2017, several flights were conducted in Culoz, France. The surveyed object of interest is a north–south oriented dike about 1.2 km long with a turn at the north end. The scene has a corridor configuration with little height difference. Thirty-one ground points were regularly placed along the corridor and surveyed, to be used either as ground control points (GCPs) or check points (CPs).

4.2. Flight Design

According to the flight authorization issued by the DGAC (Direction Générale de l’Aviation Civile), the distance between the telepilot and the employed UAV could not exceed 600 m during the acquisition. Therefore, the acquisition field was divided into two segments of 600 m that were surveyed separately. The first segment covers the south part of the dike, while the second segment covers the rest of the dike, including the turn at the north end.
The first segment was surveyed with the routine acquisition configuration, a nadir flight of 3 strips at 50 m (denoted s1-n50). This configuration is simple and economical, but not preferable when high photogrammetric accuracy is demanded: it introduces correlations among the camera focal length, the camera height and the lever-arm, so these parameters cannot be accurately estimated. The second segment was covered by three flights: a nadir flight of 3 strips at 50 m (denoted s2-n50), an oblique flight of 3 strips at 50 m (denoted s2-o50), and a nadir flight of 2 strips with the first strip at 70 m and the second at 30 m (denoted s2-n3070). When mixing oblique and nadir images acquired from different flight heights, the above-mentioned correlations can be significantly mitigated and these parameters are better estimated. This configuration of multiple acquisition angles and multiple flight heights is desirable though costly. The objective of the study was to investigate, given a good camera model and a correct lever-arm, the achievable accuracy of a simple but not preferable network configuration. The camera model and lever-arm estimated on the image block of Segment 2 (s2-n50 + s2-o50 + s2-n3070) were considered of high accuracy and were used for sensor pre-calibration of the image block of Segment 1 (s1-n50).
Figure 3 depicts the conducted flights and their characteristics are given in Table 1.

5. Data Processing

5.1. Topographic Data Processing

The measurements of the ground points were carried out with a Leica total station. Seven stations in total were employed in “free” station mode and the measurements were performed over two days. A Leica Viva GNSS receiver was installed on the south end of the dike to provide absolute georeferencing information. Its position was precisely determined by processing a ∼26 h static observation session spanning the two days with the service provided by the IGN. The compensation of the topographic measurements was carried out with Comp3D, a geodetic micro-compensation software developed at IGN.

5.2. GNSS Data Processing

The GNSS raw data were post-processed with the RTKLib open-source software in carrier-phase differential mode, using the Leica Viva GNSS receiver mentioned in Section 5.1 as base station. Table 2 presents the configuration of the main RTKLib parameters used for the GNSS trajectory processing. The ratio of epochs with a fixed solution to the total epochs of the GNSS trajectory solution is 99.17%, and the mean standard deviations along the x, y and z axes amount to 8.4, 5.5 and 9.6 mm, respectively.

5.3. Synchronization of GNSS and Camera Modules

With a GPS sampling rate of 1 Hz, a position is calculated every second. However, the frequency of image acquisition is 0.4 Hz, which introduces a desynchronization between the GNSS and camera systems. To determine the position of the camera at each exposure, a parabolic interpolation was carried out taking into account the GNSS-derived position accuracy and velocity. Figure 4 depicts the desynchronization between the two systems; blue circles represent GNSS-derived positions and pyramids represent camera frames.
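The idea of a parabolic interpolation can be sketched as fitting a degree-2 polynomial per coordinate axis to the GNSS fixes bracketing the exposure epoch. The trajectory values below are hypothetical, and this simple sketch ignores the per-epoch accuracy weighting used in the paper:

```python
import numpy as np

# Sketch (hypothetical values): interpolate the antenna position at an
# exposure epoch from 1 Hz GNSS fixes with a per-axis parabola.
t_gnss = np.array([0.0, 1.0, 2.0, 3.0])       # GNSS epochs (s)
pos = np.array([[0.0,  0.0, 50.00],
                [8.0,  0.1, 50.02],
                [16.1, 0.1, 50.05],
                [24.0, 0.2, 50.04]])          # positions (m), ~8 m/s flight

def interpolate_exposure(t_exp, t, p, window=3):
    # Use the `window` epochs nearest the exposure time and fit a
    # degree-2 polynomial (parabola) to each coordinate axis.
    idx = np.argsort(np.abs(t - t_exp))[:window]
    idx.sort()
    return np.array([np.polyval(np.polyfit(t[idx], p[idx, ax], 2), t_exp)
                     for ax in range(3)])

p_exp = interpolate_exposure(1.4, t_gnss, pos)  # exposure 0.4 s after an epoch
print(np.round(p_exp, 3))
```

With 0.4 Hz image acquisition, each exposure falls between two 1 Hz fixes, which is exactly the situation this interpolation handles.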

5.4. Photogrammetric Data Processing

The photogrammetric data processing is performed with MicMac, a free, open-source photogrammetric software suite developed at IGN and ENSG (Ecole Nationale des Sciences Géographiques) since 2003 [25].
To reduce memory requirements and processing time, a tie point reduction is performed on the SIFT-extracted tie points while maintaining tie point multiplicity, using a stand-alone tool of MicMac [28]. Figure 5 and Figure 6 depict the distribution and multiplicity of the reduced tie points.
Image measurements of GCPs/CPs are performed manually. Figure 7 depicts the image measurement error on GCPs/CPs ordered according to their position along the south–north direction (point A: 10th, point B: 24th).

6. Results

The photogrammetric data processing results are reported in Table 3. The flights of Segment 2 (s2-n50, s2-o50, and s2-n3070) were used for camera and lever-arm calibration. The camera model and lever-arm were considered unknown and were estimated during the BBA procedure. The accuracy was evaluated with CPs, and the root mean square (RMS) of the residuals on CPs was used as the accuracy criterion (the GCPs included in the BBA procedure were not used as CPs for accuracy evaluation). The camera model and lever-arm that gave the better accuracy were then used as a priori information for Flight s1-n50. For each image block, the photogrammetric accuracy and estimation results are given for two cases: without GCPs and with one GCP used in the BBA procedure. The internal photogrammetric accuracy is measured by σ, the standard deviation of all image measurement residuals for the entire block; the lower σ, the less tension within the BBA solution. Typically, a σ in the range of 4–7 μm (0.6–1 pixel) indicates a high-quality internal block adjustment. The external photogrammetric accuracy is measured by the RMS calculated on the check point 3D coordinates (a GCP used during the BBA is excluded). Estimated lever-arms are given in the camera frame (the origin is the camera optical center). Figure 8 depicts the relative position between the GNSS receiver antenna and the camera body, which corresponds to the lever-arm vector.
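The external accuracy criterion can be sketched as follows, with hypothetical coordinates: the RMS is computed on the differences between the BBA-estimated and surveyed check point coordinates.

```python
import numpy as np

# Sketch (hypothetical values): external accuracy as the RMS of the
# residuals on check points (estimated minus surveyed coordinates, m).
est = np.array([[1.002, 2.001,  9.98],
                [4.003, 5.000, 10.02],
                [7.001, 8.002, 10.01]])
ref = np.array([[1.0, 2.0, 10.0],
                [4.0, 5.0, 10.0],
                [7.0, 8.0, 10.0]])
res = est - ref
rms_per_axis = np.sqrt(np.mean(res**2, axis=0))    # RMS on x, y, z
rms_3d = np.sqrt(np.mean(np.sum(res**2, axis=1)))  # 3D point RMS
print(np.round(rms_per_axis, 4), round(rms_3d, 4))
```

Note that the 3D RMS is the quadratic sum of the per-axis values, which is how the "RMS of 3D point residuals" quoted later relates to the per-component figures.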
Given the little elevation difference along the dike, a strong correlation exists between the camera focal length, the height of the camera and the z component of the lever-arm. The correlation coefficients of these parameters are given for each configuration.
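Such correlation coefficients are read off the a posteriori covariance of the adjusted parameters. A minimal sketch with a hypothetical 3×3 covariance block for the focal length f, the camera height C_z and the lever-arm z component LA_z:

```python
import numpy as np

# Sketch: correlation coefficients from a (hypothetical) a posteriori
# covariance block of the BBA solution, rows/cols: f, C_z, LA_z.
cov = np.array([[4.00e-6, 1.50e-6, 1.20e-6],
                [1.50e-6, 9.00e-6, 8.46e-6],
                [1.20e-6, 8.46e-6, 8.10e-6]])
d = np.sqrt(np.diag(cov))
corr = cov / np.outer(d, d)        # delta_{i,j} = cov_ij / (s_i * s_j)
print(np.round(corr, 2))
```

A coefficient near 1, as between C_z and LA_z in this made-up block, signals that the two parameters cannot be separated by the observations alone.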
In the following, the results are discussed in more detail, focusing on the influence of the adopted acquisition geometry on the final accuracy in object space.

6.1. Influence of Oblique Images

Two datasets were processed for camera and lever-arm calibration. The first dataset combines the nadir and oblique images of Flights s2-n50 and s2-o50. The inclusion of oblique images eases the correlation between the focal length and the lever-arm, as well as between the focal length and the camera height (δ_{LAz,f} = 0.18 and δ_{Cz,f} = 0.18). However, when no GCP is included in the BBA procedure, the z component of the lever-arm remains highly correlated with the camera height (δ_{Cz,LAz} = 0.99). By including one GCP (Point B, as shown in Figure 3), the z component of the lever-arm is better decorrelated from the camera height (δ_{Cz,LAz} = 0.80). With the added information, more constraints are applied within the BBA procedure and the correlations between the focal length and the two other parameters increase (δ_{LAz,f} = 0.84, δ_{Cz,f} = 0.68). The internal photogrammetric accuracy is also slightly improved (σ decreases by 0.3 μm). Moreover, the RMS of the residuals on CPs decreases considerably, especially along the z axis. This gain is mainly due to the proper estimation of the z component of the lever-arm: the RMS of the residuals on CPs decreases by 7.6 cm in the vertical direction, while the estimated lever-arm vector increases by 7 and 6.1 cm on the z axis for the nadir and oblique flights, respectively.

6.2. Influence of Multiple Flight Heights

In the second dataset, nadir images of different flight heights (s2-n3070) are added to the first dataset (s2-n50, s2-o50). Without any GCP, the addition of images of different flight heights eases the correlation of the focal length with the two other parameters (δ_{LAz,f} = 0.07 and δ_{Cz,f} = 0.07), while the z component of the lever-arm is still strongly correlated with the camera height (δ_{Cz,LAz} = 0.99). The RMS of the 3D point residuals is improved by 2.4 cm and the lever-arm estimate of the nadir images is reduced by 2 cm. With one GCP included, the addition of images of different flight heights largely reduces the correlations among these three parameters (δ_{LAz,f} decreases from 0.84 to 0.55, δ_{Cz,LAz} from 0.80 to 0.62, and δ_{Cz,f} from 0.68 to 0.33). The internal and external photogrammetric accuracies are further improved and the RMS of the 3D point residuals decreases to 1.0 cm. For the camera model and lever-arm calibration, all GCPs of Segment 2 are included to compute an optimal BBA solution. The parameter estimates issued from this BBA procedure are then used as a priori information for Flight s1-n50.

6.3. Basic Flight Configuration

For a simple and economical but not preferable flight configuration such as s1-n50, with neither a priori information on the camera model and the lever-arm nor any GCP included in the BBA procedure, a large residual is observed on the z component. The camera height and the lever-arm are strongly correlated (δ_{Cz,LAz} = 0.99). The inclusion of one GCP (Point A, as shown in Figure 3) improves the photogrammetric accuracy to a large extent, especially on the z axis. However, due to the poor geometry of the acquisition network, the correlation between the lever-arm and the camera height remains high. Although the residuals on the z axis are small, the z component of the lever-arm is far from correct. The accuracy is less satisfying than when oblique images are also taken into account.
Given a good calibration of the camera model and the lever-arm, estimated on the image block of Segment 2 (s2-n50 + s2-o50 + s2-n3070) with all GCPs, the photogrammetric accuracy is further improved. Without any GCP, the RMS of the 3D point residuals on CPs equals 3.9 cm. It is worth noting that, given a well-calibrated camera and lever-arm, the results obtained with a nadir flight of 3 strips (s1-n50 + 0 GCP) outperform those obtained with the second dataset (s2-n50 + s2-o50 + s2-n3070 + 0 GCP). The inclusion of one GCP does not improve the external photogrammetric accuracy on z, which can be explained by the fact that the camera model and lever-arm are already well calibrated and fixed during the BBA procedure. However, even with well-calibrated parameters, the accuracy of the image block s1-n50 does not exceed that of the image block s2-n50 + s2-o50 + s2-n3070 when one GCP is employed. This is possibly due to the better quality (multiplicity and distribution) of the tie points extracted from the latter image block.

7. Discussion

Four flights were conducted on two segments of a 1.2 km long dike and about 1100 images were acquired. In this study, unlike most commercially available photogrammetric software packages, we performed an estimation of multiple lever-arms within the BBA procedure. This liberates us from the physical constraint between the camera and the GNSS antenna and makes it possible to merge image blocks of different configurations. As for the camera calibration strategy, instead of performing a laboratory calibration in a close-range configuration, we proposed an in-flight calibration which reflects the true acquisition conditions and gives a more accurate camera calibration. Despite the poor geometry of the dike, our approach obtained a centimetric accuracy with one GCP, and an accuracy of 3.9 cm with 0 GCP when the in-flight calibration of the camera and lever-arm is provided. These results outperform those of the majority of works in the literature.
Based on the experiments presented in this article, we suggest that the best acquisition geometry for corridor mapping includes nadir and oblique images at different flight heights. The mapping accuracy can reach 1 cm when good GNSS data and at least one GCP are given. For acquisitions with lower accuracy demands and a limited budget, one nadir flight is sufficient to achieve an accuracy of 4 cm with a well-calibrated camera and lever-arm. When a well-calibrated camera and lever-arm are unavailable, the combination of one nadir and one oblique flight can reach an accuracy of 8–10 cm without any GCPs.

Author Contributions

Y.Z. and M.P.-D. provided the initial idea for this study; Y.Z. conceived and designed the experiments; Y.Z. analyzed the data; Y.Z. wrote the main manuscript; E.R. reviewed the manuscript; and P.-H.F. provided the funding for the acquisitions.

Funding

This research received no external funding.

Acknowledgments

This study is part of Ph.D. thesis work that is funded by the Compagnie Nationale du Rhône.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
GNSS: Global navigation satellite system
BBA: Bundle block adjustment
ISO: Integrated sensor orientation
UAV: Unmanned aerial vehicle
GCP: Ground control point
INS: Inertial navigation system
IMU: Inertial measurement unit
CP: Check point
Loemi: Laboratoire d’Opto-éléctronique, de Métrologie et d’Instrumentation
IGN: Institut National de l’Information Géographique et Forestière
SIFT: Scale Invariant Feature Transform
DGAC: Direction Générale de l’Aviation Civile
ENSG: Ecole Nationale des Sciences Géographiques
RMS: Root mean square

References

  1. Remondino, F.; Barazzetti, L.; Nex, F.; Scaioni, M.; Sarazzi, D. UAV photogrammetry for mapping and 3d modeling-current status and future perspectives. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2011, 38, C22. [Google Scholar] [CrossRef]
  2. Vallet, J.; Panissod, F.; Strecha, C.; Tracol, M. Photogrammetric performance of an ultra light weight swinglet UAV. In Proceedings of the Unmanned Aerial Vehicle in Geomatics Conference, Zürich, Switzerland, 14–16 September 2011. [Google Scholar]
  3. Tournadre, V.; Pierrot-Deseilligny, M.; Faure, P.H. UAV photogrammetry to monitor dykes-calibration and comparison to terrestrial lidar. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2014, 40, 143. [Google Scholar] [CrossRef]
  4. James, M.R.; Robson, S. Mitigating systematic error in topographic models derived from UAV and ground-based image networks. Earth Surf. Process. Landf. 2014, 39, 1413–1420. [Google Scholar] [CrossRef][Green Version]
  5. Rehak, M.; Skaloud, J. Fixed-wing micro aerial vehicle for accurate corridor mapping. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2015, 2, 23. [Google Scholar] [CrossRef]
  6. Skaloud, J.; Rehak, M.; Lichti, D. Mapping with MAV: Experimental Study on the Contribution of Absolute and Relative Aerial Position Control. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2014, 40, 123. [Google Scholar] [CrossRef]
  7. Ackermann, F. Operational rules and accuracy models for GPS-aerotriangulation. Arch. ISPRS 1992, 1, 691–700. [Google Scholar]
  8. Heipke, C.; Jacobsen, K.; Wegmann, H. Integrated Sensor Orientation: Test Report and Workshop Proceedings; Bundesamt für Kartographie und Geodäsie: Frankfurt am Main, Germany, 2002. [Google Scholar]
  9. Ellum, C.; El-Sheimy, N. Inexpensive Kinematic Attitude Determination from MEMS-Based Accelerometers and GPS-Derived Accelerations. Navigation 2002, 49, 117–126. [Google Scholar] [CrossRef]
  10. Skaloud, J.; Vallet, J. High accuracy handheld mapping system for fast helicopter deployment. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2002, 34, 614–619. [Google Scholar]
  11. Lichti, D.; Skaloud, J.; Schaer, P. On the calibration strategy of medium format cameras for direct georeferencing. In Proceedings of the International Calibration and Orientation Workshop EuroCOW, Castelldefels, Spain, 30 January–1 February 2008. [Google Scholar]
  12. Daakir, M.; Pierrot-Deseilligny, M.; Bosser, P.; Pichard, F.; Thom, C.; Rabot, Y.; Martin, O. Lightweight UAV with on-board photogrammetry and single-frequency GPS positioning for metrology applications. ISPRS J. Photogramm. Remote Sens. 2017, 127, 115–126. [Google Scholar] [CrossRef]
  13. Skaloud, J.; Cramer, M.; Schwarz, K. Exterior orientation by direct measurement of camera position and attitude. Int. Arch. Photogramm. Remote Sens. 1996, 31, 125–130. [Google Scholar]
  14. Kruck, E. Combined IMU and sensor calibration with BINGO-F. In Proceedings of the OEEPE Workshop Integrated Sensor Orientation, Hannover, Germany, 17–18 September 2001. [Google Scholar]
  15. Cramer, M.; Stallmann, D. System calibration for direct georeferencing. Int. Arch. Photogramm. Remote Sens. 2002, 34, 79–84. [Google Scholar]
  16. Mostafa, M. Camera/IMU boresight calibration: New advances and performance analysis. In Proceedings of the ASPRS Annual Meeting, Washington, DC, USA, 19–26 April 2002; pp. 21–26. [Google Scholar]
  17. Skaloud, J.; Schaer, P. Towards A More Rigorous Boresight Determination: Theory, Technology and Realities of Inertial/GPS Sensor Orientation. In Proceedings of the ISPRS, WG 1/5 Platform and Sensor Integration Conference, Munich, Germany, 17–19 September 2003. [Google Scholar]
  18. Heipke, C.; Jacobsen, K.; Wegmann, H. Analysis of the results of the OEEPE test Integrated Sensor Orientation. In OEEPE Integrated Sensor Orientation Test Report and Workshop Proceedings; Citeseer: State College, PA, USA, 2002. [Google Scholar]
  19. Mian, O.; Lutes, J.; Lipa, G.; Hutton, J.; Gavelle, E.; Borghini, S. Direct georeferencing on small unmanned aerial platforms for improved reliability and accuracy of mapping without the need for ground control points. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2015, 40, 397. [Google Scholar] [CrossRef]
  20. Jozkow, G.; Toth, C. Georeferencing experiments with UAS imagery. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2014, 2, 25. [Google Scholar] [CrossRef]
  21. Blázquez, M.; Colomina, I. Relative INS/GNSS aerial control in integrated sensor orientation: Models and performance. ISPRS J. Photogramm. Remote Sens. 2012, 67, 120–133. [Google Scholar] [CrossRef]
  22. Cucci, D.A.; Rehak, M.; Skaloud, J. Bundle adjustment with raw inertial observations in UAV applications. ISPRS J. Photogramm. Remote Sens. 2017, 130, 1–12. [Google Scholar] [CrossRef]
  23. Martin, O.; Meynard, C.; Pierrot Deseilligny, M.; Souchon, J.P.; Thom, C. Réalisation d’une caméra photogrammétrique ultralégère et de haute résolution. In Proceedings of the Colloque Drones et Moyens Légers Aéroportés D’observation, Montpellier, France, 24–26 June 2014; pp. 24–26. [Google Scholar]
  24. Lowe, D.G. Object recognition from local scale-invariant features. In Proceedings of the Seventh IEEE International Conference on Computer Vision, Corfu, Greece, 20–27 September 1999; Volume 2, pp. 1150–1157. [Google Scholar]
  25. Rupnik, E.; Daakir, M.; Deseilligny, M.P. MicMac—A free, open-source solution for photogrammetry. Open Geospat. Data Softw. Stand. 2017, 2, 14. [Google Scholar] [CrossRef]
  26. Tournadre, V. Métrologie par Photogrammétrie Aéroportée Légère: Application au Suivi D’évolution de Digues. Ph.D. Thesis, Université Paris-Est, Saint-Mande, France, 2015. [Google Scholar]
  27. Wright, S.; Nocedal, J. Numerical Optimization; Springer: Berlin, Germany, 1999; Volume 35, p. 7. [Google Scholar]
  28. Martinez-Rubi, O. Improving Open-Source Photogrammetric Workflows for Processing Big Datasets; Netherlands eScience Center: Amsterdam, The Netherlands, 2016. [Google Scholar]
Figure 1. CamLight (a); camera set-up on UAV (b); and UAV (c).
Figure 2. Data processing workflow.
Figure 3. An illustration of the conducted flights.
Figure 4. An illustration of the desynchronization between GNSS (blue circles) and camera (pyramid) modules.
Figure 5. Multiplicity histogram of SIFT-reduced tie points.
Figure 6. Distribution of SIFT-reduced tie points with multiplicity represented by grey scale.
Figure 7. Image measurement error of GCPs/CPs.
Figure 8. An illustration of the relative position between the camera and the GNSS receiver antenna.
Table 1. Details on the conducted flights.
| Flight                   | s1-n50 | s2-n50 | s2-n3070 | s2-o50  |
|--------------------------|--------|--------|----------|---------|
| Nb of images             | 395    | 315    | 200      | 323     |
| Height (m)               | 50     | 50     | 30, 70   | 50      |
| Orientation              | nadir  | nadir  | nadir    | oblique |
| Nb of strips             | 3      | 3      | 2        | 3       |
| GSD (mm)                 | 10     | 10     | 6, 14    | 10      |

All flights: overlap 80% forward, 70% side; GCP accuracy 1.3 mm horizontal, 1 mm vertical; camera focal length 35 mm.
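The GSD column of Table 1 follows the standard pinhole relation GSD = pixel pitch × flying height / focal length. A minimal consistency check, where the ~7 µm pixel pitch is inferred from the table values themselves (10 mm GSD at 50 m with the 35 mm lens) rather than taken from the paper:

```python
# GSD (ground sampling distance) check for Table 1.
# pixel_pitch_um = 7.0 is back-computed from the table, not a
# published sensor specification.
def gsd_mm(height_m, focal_mm=35.0, pixel_pitch_um=7.0):
    # Convert pixel pitch to mm and height to mm, then project.
    return (pixel_pitch_um * 1e-3) * (height_m * 1e3) / focal_mm

for h, expected in [(50, 10.0), (30, 6.0), (70, 14.0)]:
    print(f"{h} m -> GSD {gsd_mm(h):.1f} mm (table: {expected} mm)")
```

All three flying heights reproduce the tabulated GSDs exactly, which supports reading the "6, 14" entry as the two GSDs of the 30 m and 70 m strips of flight s2-n3070.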
Table 2. RTKLib parameterization of GNSS trajectory processing.
| Parameter             | Value     | Parameter                    | Value        |
|-----------------------|-----------|------------------------------|--------------|
| Positioning mode      | Kinematic | Troposphere correction       | Saastamoinen |
| Frequencies           | L1        | Satellite ephemeris          | Broadcast    |
| Filter type           | Combined  | Navigation system            | GPS          |
| Elevation mask        | 15°       | Integer ambiguity resolution | Fix and hold |
| Ionosphere correction | Broadcast | Min ratio to fix ambiguity   | 3.0          |
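The parameters of Table 2 correspond to a plain RTKLib options file (as read by rnx2rtkp or RTKPOST). The fragment below is an illustration of that mapping, not the authors' actual configuration; key spellings follow the RTKLIB 2.4.x format and should be checked against the version used.

```ini
# RTKLIB processing options mirroring Table 2 (illustrative only).
pos1-posmode    =kinematic      # positioning mode
pos1-frequency  =l1             # single-frequency L1
pos1-soltype    =combined       # forward/backward combined filter
pos1-elmask     =15             # elevation mask (deg)
pos1-ionoopt    =brdc           # broadcast ionosphere correction
pos1-tropopt    =saas           # Saastamoinen troposphere model
pos1-sateph     =brdc           # broadcast ephemeris
pos1-navsys     =1              # GPS only
pos2-armode     =fix-and-hold   # integer ambiguity resolution
pos2-arthres    =3.0            # min ratio to fix ambiguity
```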
Table 3. Results of photogrammetric data processing.
| GCP | Parameter liberation | σ [μm] | RMS [cm] xy / z / 3D | Lever-arm [cm] xy / z / 3D | δLAz,f | δCz,LAz | δCz,f |
|---|---|---|---|---|---|---|---|
| **Calibration: s2-n50 + s2-o50** | | | | | | | |
| 0 | camera model, lever-arm | 5.2 | 1.9 / 8.3 / 8.5 | na: 51.4 / 34.8 / 62.1; ob: 63.2 / 12.7 / 64.5 | 0.18 | 0.99 ± 0.00 | 0.18 ± 0.00 |
| 1 (pt B) | camera model, lever-arm | 4.9 | 0.9 / 0.6 / 1.1 | na: 51.0 / 41.8 / 65.9; ob: 66.2 / 18.8 / 68.8 | 0.84 | 0.80 ± 0.06 | 0.68 ± 0.05 |
| **Calibration: s2-n50 + s2-o50 + s2-n3070** | | | | | | | |
| 0 | camera model, lever-arm | 5.0 | 1.9 / 5.9 / 6.2 | na: 51.9 / 36.8 / 63.7; ob: 64.8 / 14.6 / 66.4 | 0.07 | 0.99 ± 0.00 | 0.07 ± 0.00 |
| 1 (pt B) | camera model, lever-arm | 4.7 | 0.7 / 0.8 / 1.0 | na: 51.7 / 42.3 / 66.8; ob: 66.5 / 19.4 / 69.3 | 0.55 | 0.62 ± 0.09 | 0.33 ± 0.06 |
| 14 (all) | camera model, lever-arm | 4.7 | / | na: 51.7 / 42.8 / 67.2; ob: 66.6 / 19.9 / 69.5 | 0.85 | 0.44 ± 0.09 | 0.39 ± 0.08 |
| **Acquisition: s1-n50** | | | | | | | |
| 0 | camera model, lever-arm | 4.8 | 3.3 / 15.0 / 15.3 | na: 53.8 / 47.7 / 71.9 | 0.28 | 0.99 ± 0.00 | 0.28 ± 0.00 |
| 1 (pt A) | camera model, lever-arm | 4.8 | 3.3 / 0.9 / 3.4 | na: 54.1 / 33.0 / 63.4 | 0.92 | 0.98 ± 0.02 | 0.91 ± 0.01 |
| 0 | N/A, given | 5.2 | 3.3 / 2.1 / 3.9 | na: 51.7 / 42.8 / 67.2 | / | / | / |
| 1 (pt A) | N/A, given | 5.1 | 3.4 / 2.1 / 4.0 | na: 51.7 / 42.8 / 67.2 | / | / | / |
σ is the sigma naught of the bundle adjustment; δLAz,f is the correlation coefficient between the nadir-flight lever-arm on the z-axis and the focal length; δCz,LAz is the correlation coefficient between the camera height and the nadir-flight lever-arm on the z-axis; δCz,f is the correlation coefficient between the camera height and the focal length; N/A means the camera model and the lever-arm are not estimated but given as a priori information.
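The xy / z / 3D triplets in Table 3 are internally consistent, in both the RMS and lever-arm columns: each 3D value equals sqrt(xy² + z²) up to rounding of the printed digits. A quick check on a few rows:

```python
import math

# Internal-consistency check on Table 3: 3D = sqrt(xy^2 + z^2),
# within the rounding of the published one-decimal values.
rows = [
    (1.9, 8.3, 8.5),     # RMS, s2-n50 + s2-o50, 0 GCP
    (3.3, 15.0, 15.3),   # RMS, s1-n50, 0 GCP
    (51.4, 34.8, 62.1),  # nadir lever-arm, s2-n50 + s2-o50, 0 GCP
    (63.2, 12.7, 64.5),  # oblique lever-arm, same row
]
for xy, z, d3 in rows:
    assert abs(math.hypot(xy, z) - d3) < 0.1, (xy, z, d3)
print("all 3D values consistent")
```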