Abstract
The paper presents the results of research on assessing the accuracy of angular position measurement relative to the sea horizon using a camera mounted on an unmanned bathymetric surveying vehicle of the Unmanned Surface Vehicle (USV) or Unmanned Aerial Vehicle (UAV) type. The first part of the article presents the essence of the problem. The rules for taking the angular position of the vehicle into account in bathymetric surveys and the general concept of a two-camera tilt compensator are described. The second part presents a mathematical description of the measures characterizing the resolution and the mean error of measurements made on the basis of the horizon line image recorded with an optical system with a Complementary Metal-Oxide Semiconductor (CMOS) matrix. The phenomenon of the horizon line curvature in the image projected onto the matrix, which appears as the camera height increases, is also characterized. The third part contains an example of a detailed analysis of selected cameras mounted on UAVs manufactured by DJI, carried out using the proposed measures. The obtained results, including the single-pixel measurement resolutions and the mean errors of the horizon line slope measurement, are presented in the form of tables and charts with extensive commentary. The final part presents the general conclusions from the performed research and proposes directions for its further development.
1. Introduction
An important factor affecting the accuracy of bathymetric surveys performed with sonar, an echosounder or Light Detection and Ranging (LiDAR) is the spatial offset between the location of the positioning system antenna mounted on the surveying vehicle (a surface vehicle, e.g., a USV, or an aerial vehicle, e.g., a UAV) and the reflection point of the sound or light wave (depending on the type of sensor) on the seabed [1]. Under ideal weather and propagation conditions, the acoustic or light ray travels along a straight line with a constant direction relative to the vertical. This makes it possible to link the measured depth with the location of the positioning system antenna using constant coordinate-rotation parameters which transform the coordinate system of the measuring sensor into the coordinate system connected with the Earth, most often WGS-84 [2].
However, in real conditions, due to sea waves and wind, it is necessary to take the pitch and roll of the surveying vehicle into account in each epoch of the bathymetric measurement. Only on their basis is it possible to determine the variable deflections of the acoustic or light rays from the vertical, and then the coordinates (e.g., WGS-84) of the point at which the sound/light wave is reflected from the seabed [3]. The problem can be solved by applying appropriate methods of pitch and roll compensation. Currently, most of them use information about the spatial orientation angles obtained from Inertial Navigation Systems (INSs), which determine pitch and roll using Global Navigation Satellite System (GNSS) receivers and Inertial Measurement Units (IMUs) of the Microelectromechanical Systems (MEMS) type. An IMU comprises tri-axial accelerometers and gyroscopes and is typically coupled with a magnetic flux sensor. Sensor fusion of the IMU readings allows measuring 3D orientation with respect to a fixed system of coordinates. Therefore, when an IMU is firmly attached to a UAV body, it is possible to obtain an estimate of its absolute orientation [4]. However, MEMS IMU outputs are corrupted by significant sensor errors, so the navigation errors of a MEMS-based INS accumulate very quickly over time. This requires aiding from other sensors, such as GNSS [5]. IMU errors can be classified into two types: deterministic errors and random errors. Major deterministic error sources include constant bias, scale-factor errors and misalignment; they can be removed by calibration and compensation. The random constant bias (turn-to-turn bias) and random noises are the main error sources in the orientation-finding system [6,7]. In the case of light surface bathymetric surveying vehicles of the USV type, the following INSs can be found [8]:
- Ekinox, determining roll & pitch with RMS = 0.05° (RTK outage of 30 s),
- Apogee, determining roll & pitch with RMS = 0.012° (RTK outage of 60 s),
- Horizon, determining roll & pitch with RMS = 0.01° (RTK outage of 60 s).
In the case of light aerial bathymetric surveying vehicles of the UAV type, the following INS series are used:
- Trimble Direct Mapping Solution (DMS), mounted for example on RIEGL's BathyCopter UAV, determining roll & pitch with an RMS in the range of 0.015°–0.2° [9,10],
- Ellipse 2, cooperating with ASTRALiTe's edge LiDAR mounted on the DJI Matrice 600 Pro UAV, determining roll & pitch with RMS = 0.1° [11,12].
However, bearing in mind the rapid development of sea bathymetry, aimed at making more accurate measurements using UAVs and USVs autonomously [13,14,15], a search for new methods of determining the spatial orientation angles can be expected.
An interesting and promising solution is a method based on observing the slope of the horizon line in the camera image [16,17]. A tilt compensator of this type could be built from two cameras recording images (mounted perpendicular to each other on the vehicle, with their optical axes directed towards the horizon) and a microcomputer that processes the images into roll and pitch angles, measured as the angles between the horizontal edge of the image and the extracted horizon line (Figure 1).
Figure 1.
The idea of measuring the horizon line slope: (a) Recording images with a two-camera tilt compensator mounted on a UAV; (b) Processing the images to obtain the slope angle.
Since many effective methods of detecting the horizon line in an image have been developed [18,19] and a very wide range of light, high-resolution cameras is available, it seems advisable to determine with what accuracy the angular position of a UAV or USV can be measured using camera observations of the horizon line.
It should additionally be noted that the literature lacks information on this subject. Such information may be very important when making decisions about undertaking further research and development (R&D) work to raise the technology readiness level of new solutions based on the image of the sea horizon. These may include not only solutions used in bathymetric surveying but also, e.g., stabilizing the spatial orientation of flight, positioning objects floating on the sea using UAVs, matching 3D seabed imaging maps for comparative purposes, etc.
Therefore, this article attempts to describe a method of assessing the accuracy of measuring the horizon line slope using a camera. It is supplemented with an example of its practical application to the accuracy analysis of selected cameras mounted on UAVs manufactured by DJI [20].
2. Methods
Figure 2 presents the idea of measuring the horizon line slope with a CMOS matrix camera.
Figure 2.
The idea of measuring the horizon line slope with a CMOS matrix camera.
The accuracy of measuring the horizon line slope using a CMOS (Complementary Metal-Oxide Semiconductor) matrix camera can be characterized by two parameters: the mean error and the resolution. Both depend mainly on the focal length of the optical system and on the pixel size of the matrix. The focal length determines the "focusing power" of the lens: the shorter the focal length, the more strongly the lens refracts the rays, which reduces the image scale of distant objects and thus the measurement accuracy. The typical size of a single pixel in modern CMOS sensors is from 1.7 to 14 micrometers; a smaller pixel ensures better reproduction of image details and thus increases the measurement accuracy.
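The combined influence of the pixel size and the focal length can be expressed through the angle subtended by a single pixel in a pinhole model, arctan(p/f). The short sketch below is our own illustration (not a formula taken from the paper) showing how this angular value changes over the quoted pixel-size range for a short and a long focal length.

```python
import math

def pixel_angular_size_deg(p, f):
    """Angle subtended by one pixel of size p for a lens of focal length f (pinhole model)."""
    return math.degrees(math.atan(p / f))

# Pixel sizes spanning the quoted CMOS range (1.7-14 um), for a short and a long focal length.
for p_um in (1.7, 5.0, 14.0):
    print(p_um,
          round(pixel_angular_size_deg(p_um * 1e-6, 4e-3), 4),    # f = 4 mm
          round(pixel_angular_size_deg(p_um * 1e-6, 35e-3), 4))   # f = 35 mm
```

As expected, a larger pixel or a shorter focal length increases the angle represented by one pixel and therefore degrades the angular measurement accuracy.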
The mean error of the horizon line slope measurement can be determined by applying the law of mean error propagation formulated by C. F. Gauss. Knowing the mean errors m_f, m_w, m_α1 and m_α2 of the independent variables f, w, α1 and α2 of a single measurement result function:

\beta = F(f, w, \alpha_1, \alpha_2),    (1)

the mean error equation can easily be written:

m_\beta = \sqrt{\left(\frac{\partial F}{\partial f}\right)^2 m_f^2 + \left(\frac{\partial F}{\partial w}\right)^2 m_w^2 + \left(\frac{\partial F}{\partial \alpha_1}\right)^2 m_{\alpha_1}^2 + \left(\frac{\partial F}{\partial \alpha_2}\right)^2 m_{\alpha_2}^2},    (2)

which, after determining the partial derivatives of the adopted measurement function, takes the explicit form of Equation (3), where:
- m_f—mean error of the focal length measurement,
- m_w—mean error of the CMOS matrix width measurement,
- m_α1, m_α2—mean errors of the α1 and α2 angle measurements on the CMOS matrix.
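The explicit form of Equation (3) depends on the adopted measurement function F. As a minimal illustration of the propagation law (2), the sketch below applies it numerically to a placeholder slope function; the function slope_beta, the nominal values and the mean errors are illustrative assumptions, not the model used in the paper.

```python
import math

def propagate_mean_error(func, values, mean_errors, eps=1e-9):
    """General Gauss propagation: m_beta = sqrt(sum((dF/dx_i * m_i)^2)).

    func        -- measurement function F(x_1, ..., x_n) returning the slope angle [rad]
    values      -- nominal values of the independent variables
    mean_errors -- mean errors m_i of those variables
    """
    total = 0.0
    for i, m_i in enumerate(mean_errors):
        shifted = list(values)
        shifted[i] += eps
        dF_dxi = (func(*shifted) - func(*values)) / eps  # numerical partial derivative
        total += (dF_dxi * m_i) ** 2
    return math.sqrt(total)

# Placeholder slope model (an assumption, NOT the paper's Equation (1)): the slope is
# taken as the angle of the line joining the points where the horizon crosses the left
# and right edges of the matrix, a1 and a2 being the corresponding edge angles.
def slope_beta(f, w, a1, a2):
    return math.atan((math.tan(a2) - math.tan(a1)) * f / w)

# Example: f = 14 mm, w = 6.16 mm, edge angles 1 deg and 3 deg, illustrative mean errors.
values = (0.014, 0.00616, math.radians(1.0), math.radians(3.0))
errors = (10e-6, 2e-6, math.radians(0.001), math.radians(0.001))
print(math.degrees(propagate_mean_error(slope_beta, values, errors)))
```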
Using the similarity of right-angled triangles (Figure 3), the formula for the measurement resolution of a single pixel representing the horizon line on the matrix can easily be derived. It takes the following form:

\Delta_p = \frac{p}{f}\, d,    (4)

where:
Figure 3.
Graphic interpretation of the measurement resolution of a single-pixel representing the horizon line on the matrix.
- p—pixel size (calculated as the ratio of the matrix height in units of length to the matrix height in pixels),
- f—focal length,
- d—distance to the horizon line.
Where, according to [21,22]:

d = \sqrt{2Rh},    (5)

or, taking the phenomenon of light refraction in the Earth's atmosphere into account:

d = \sqrt{\frac{2Rh}{1-k}},    (6)

where:
- R—length of the Earth's radius,
- h—camera height above sea level (a.s.l.),
- k—Earth's refraction coefficient (depending on the state of the atmosphere: pressure, temperature and humidity) [21,22,23,24,25].
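A minimal numerical sketch of the relations reconstructed above, Equations (4) and (6), is given below. The camera values used in the example (matrix width 6.16 mm, 4000 pixels per row, focal lengths of 4 mm and 14 mm) are those quoted for the DJI FC350 in Section 3; the function and variable names are ours.

```python
import math

R = 6_378_000.0   # Earth's radius [m]
K = 0.16          # refraction coefficient adopted for the Baltic Sea

def horizon_distance(h, R=R, k=K):
    """Distance to the horizon line for a camera at height h a.s.l. (Equation (6))."""
    return math.sqrt(2.0 * R * h / (1.0 - k))

def single_pixel_resolution(p, f, h):
    """Length covered at the horizon by one pixel (Equation (4)): (p / f) * d."""
    return (p / f) * horizon_distance(h)

# Example with the DJI FC350 values quoted in Section 3:
p = 6.16e-3 / 4000                                      # pixel size: 6.16 mm over 4000 pixels
print(round(horizon_distance(10.0)))                    # ~12,300 m at h = 10 m
print(round(horizon_distance(200.0)))                   # ~55,100 m at h = 200 m
print(round(single_pixel_resolution(p, 4e-3, 200.0)))   # ~21 m for f = 4 mm
print(round(single_pixel_resolution(p, 14e-3, 200.0)))  # ~6 m for f = 14 mm
```

The printed values agree with the distances and single-pixel resolutions reported in Section 3.1 (about 12,000 m at h = 10 m, about 55,000 m at h = 200 m, and 21 m versus 6 m for f = 4 mm and f = 14 mm).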
Then, based on the known single-pixel measurement resolution Δ_p and the horizontal field of view of the camera (γ), the length of the horizon line segment visible in the image and, from it, the measurement resolution of the horizon line slope can be calculated (Equations (7) and (8); an illustrative numerical sketch is given after Figure 4). In Equation (7) it was assumed that the part of the horizon line (with the length L_γ) visible in the horizontal field of view of the camera (γ) has the shape of a straight line, not an arc (Figure 4).
Figure 4.
Part of the horizon line (with the length L_γ) in the horizontal field of view (γ) of the camera.
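Since the explicit forms of Equations (7) and (8) are not reproduced above, the sketch below shows one plausible formulation consistent with their description: the visible horizon segment is treated as a straight chord of length 2·d·tan(γ/2), and a vertical offset at its edge (one pixel or, later, one wave height) is mapped to a slope change through an arctangent. Both the formulation and the names used are assumptions, not necessarily the exact equations of the paper.

```python
import math

def horizon_segment_length(h, hfov_deg, R=6_378_000.0, k=0.16):
    """Length of the horizon segment seen in the horizontal field of view,
    treated as a straight line rather than an arc (assumption of Equation (7))."""
    d = math.sqrt(2.0 * R * h / (1.0 - k))
    return 2.0 * d * math.tan(math.radians(hfov_deg) / 2.0)

def slope_resolution_deg(vertical_step, h, hfov_deg):
    """Slope change caused by a vertical offset (one pixel, or one wave height)
    at the edge of the visible horizon segment -- an assumed formulation."""
    L = horizon_segment_length(h, hfov_deg)
    return math.degrees(math.atan(vertical_step / L))

# Example: single-pixel step of the DJI FC350 at f = 14 mm, h = 100 m a.s.l.
p, f = 6.16e-3 / 4000, 14e-3
d = math.sqrt(2.0 * 6_378_000.0 * 100.0 / (1.0 - 0.16))
print(slope_resolution_deg(p * d / f, h=100.0, hfov_deg=24.8))
```

With the DJI FC350 parameters at f = 14 mm, this formulation yields a resolution of roughly 0.014°, i.e., of the same order as the values reported for the rated cameras in Table 2.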
The straight-line simplification was adopted because (as already shown in Figure 2) only the extreme pixels of the matrix (which are also the boundary points of the horizon line) are used to calculate the horizon line slope β. However, it should be realized that even for low UAV flight altitudes but large fields of view γ, the values of the horizon sagitta can be significant. Figure 5 presents graphs of the horizon sagitta for several selected values of γ as a function of the camera height a.s.l. h. The calculations were performed using the following relation:
where the radius of the horizon circle is:
Figure 5.
Horizon sagitta as a function of the camera height a.s.l. h (for γ equal to 30°, 60°, 90° and 120°, R = 6,378,000.0 m, k = 0.16).
Figure 6, in turn, shows the horizon sagitta expressed as the number of pixels on the matrix, calculated using the following formula:
where:
- N_H—horizontal resolution of the matrix in pixels.
Figure 6.
Horizon sagitta in pixels as a function of the camera height a.s.l. h (for γ equal to 30°, 60°, 90° and 120°, R = 6,378,000.0 m, k = 0.16, N_H = 4000 pixels).
In Calculation (12) it was assumed that the optical axis of the camera is directed at a point lying on the horizon line, as shown in Figure 2.
The charts presented in Figure 5 testify to the high values of the horizon sagitta. However, these values do not translate directly into the shape of the horizon line seen in the image. Due to the large linear distortions of the image that occur at small angles between the camera's optical axis and the horizon plane, the arc seen in the image will always appear flatter ("straightened") than the real one. This is clearly seen in Figure 6, where, depending on h and γ, the sagitta corresponds to between a single pixel and eight pixels on the matrix.
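The metric sagitta of Figure 5 can be approximated with elementary geometry. In the sketch below the radius of the horizon circle is taken as approximately equal to the distance to the horizon d, and the sagitta of the arc spanned by the field of view γ is computed as r·(1 − cos(γ/2)); this is an assumed reconstruction for illustration, not necessarily the exact relation behind Figures 5 and 6.

```python
import math

def horizon_sagitta(h, hfov_deg, R=6_378_000.0, k=0.16):
    """Sagitta of the visible horizon arc for a camera at height h a.s.l.

    The horizon circle radius is approximated by the distance to the horizon d,
    and the arc spanned by the horizontal field of view gamma has sagitta
    s = r * (1 - cos(gamma / 2)).  Assumed reconstruction, for illustration only.
    """
    r = math.sqrt(2.0 * R * h / (1.0 - k))        # radius of the horizon circle ~ d
    return r * (1.0 - math.cos(math.radians(hfov_deg) / 2.0))

# Sagitta in metres for the field-of-view values used in Figure 5, at h = 100 m.
for gamma in (30.0, 60.0, 90.0, 120.0):
    print(gamma, round(horizon_sagitta(100.0, gamma), 1))
```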
3. Research and Discussion
Four different cameras mounted on UAVs manufactured by DJI were evaluated. They were chosen primarily because they differ in optical parameters, including focal lengths as well as matrix sizes and resolutions. Their most important technical parameters taken into account in the calculations are presented in Table 1.
Table 1.
Technical parameters of evaluated UAV cameras.
The measures of assessment were the measurement resolution and the mean error of measurement calculated using the formulas presented in Section 2, for:
- R = 6,378,000.0 m,
- k = 0.16 (average value of the refraction coefficient for the Baltic Sea),
- h = (0 m, 200 m),
- f = 4 mm (DJI FC350), 5 mm (DJI FC220), (4 mm, 14 mm) (DJI FC550RAW), 35 mm (SONY ILCE-7RM2);
and:
where:
- N_H—number of pixels in a row of the matrix.
3.1. Measurement Resolution Analysis
To better illustrate the problem of the horizon line moving away from the camera, Figure 7 presents a graph of the distance to the horizon line d as a function of the camera height a.s.l. h.
Figure 7.
Distance to the horizon line d as a function of the camera height a.s.l. h.
The graph in Figure 7 clearly shows that when the camera reaches a height of 10 m, the distance to the horizon line is already about 12,000 m, and with a further increase in the camera height it continues to grow (in proportion to the square root of the height), reaching approximately 55,000 m at h = 200 m.
3.1.1. Measurement Resolution of a Single-Pixel
Figure 8 presents graphs of the single-pixel measurement resolution of the horizon line obtained with the cameras DJI FC350, DJI FC220, DJI FC550RAW and SONY ILCE-7RM2.
Figure 8.
Single-pixel measurement resolution of the horizon line as a function of the camera height a.s.l. h.
The graphs in Figure 8 show that the single-pixel measurement resolutions deteriorate noticeably after the cameras reach a height of 8 m a.s.l., although they still remain within a fairly narrow range (1.5 m, 4 m). The same cannot be said after the cameras reach a height of 200 m a.s.l., when this range becomes wide (7 m, 21 m); it should be remembered that the distance to the horizon is then up to 55,000 m. The graphs also allow the cameras to be ranked from best to worst in terms of the obtained single-pixel measurement resolution, in the following order: SONY ILCE-7RM2, DJI FC550RAW, DJI FC220, DJI FC350. Figure 9 shows graphs of the single-pixel measurement resolution of the horizon line as a function of focal length from 4 mm to 14 mm for the DJI FC350 camera.
Figure 9.
Single-pixel measurement resolution of the horizon line as a function of the focal length from 4 mm to 14 mm for the DJI FC350 camera (h = 100 m).
The graph in Figure 9 shows that, to obtain a better single-pixel measurement resolution of the horizon line, the focal length should be increased. Therefore, the focal length of 4 mm adopted for the DJI FC350 camera in the previous calculations (see Figure 8) was unfavourable and resulted in the worst measurement resolution. Thus, to show the essence of the indicated problem, Figure 10 presents a graph of the single-pixel measurement resolution of the horizon line obtained with a DJI FC350 camera, but at the maximum focal length setting of f = 14 mm.
Figure 10.
Single-pixel measurement resolution of the horizon line as a function of the DJI FC350 camera height a.s.l. h (f = 14 mm).
Figure 10 shows a significant improvement in the single-pixel measurement resolution of the horizon line. In the case of a measurement made at an altitude of 200 m a.s.l., the resolution improved from 21 m to 6 m. However, it should be remembered that increasing the focal length reduces the horizontal field of view γ. For the FC350 camera with a matrix width of w = 6.16 mm, γ decreases from 75.2° (for f = 4 mm) to 24.8° (for f = 14 mm).
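This trade-off can be checked with the standard pinhole relation γ = 2·arctan(w/(2f)); the short sketch below (our code, not from the paper) reproduces the two values quoted above.

```python
import math

def horizontal_fov_deg(w, f):
    """Horizontal field of view of a pinhole camera with matrix width w and focal length f."""
    return math.degrees(2.0 * math.atan(w / (2.0 * f)))

w = 6.16e-3  # DJI FC350 matrix width [m]
print(round(horizontal_fov_deg(w, 4e-3), 1))   # ~75.2 deg at f = 4 mm
print(round(horizontal_fov_deg(w, 14e-3), 1))  # ~24.8 deg at f = 14 mm
```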
3.1.2. Resolution of the Horizon Slope Measurement
To make the comparison of the rated cameras easier, Table 2 presents their horizon slope measurement resolutions and horizontal fields of view γ calculated using Equations (7) and (8).
Table 2.
Horizon slope measurement resolution and horizontal field of view γ calculated for the cameras DJI FC350, DJI FC220, DJI FC550RAW and SONY ILCE-7RM2.
The results set out in Table 2 show that measurements of the horizon slope can be performed with a resolution of 0.02°. Nevertheless, it should be remembered that the horizon is almost always distorted at the contact with the sea by the rippling of the water surface. Therefore, increasing the single-pixel measurement resolution to values smaller than the height of the sea waves will certainly not increase the measurement resolution of the horizon slope. It is therefore reasonable to check at which camera heights h and wave heights sea waving should be included in the computations. Dependence (9) can easily be used for this purpose; it is enough to assume that the single-pixel resolution value corresponds to the height of the sea waves, further referred to as h_f. The result of the calculations obtained in this way can be treated as a horizon slope measurement resolution, further referred to as Δβ_f, which depends on h and the wave height h_f (as opposed to the pixel-limited resolution Δβ_r of Table 2). Figure 11, Figure 12, Figure 13 and Figure 14 present charts of Δβ_f for four arbitrarily selected wave heights: 0.5 m, 1 m, 1.5 m and 2 m (corresponding to sea states 2–4 of the Douglas Sea Scale [26]).
Figure 11.
Measurement resolution of the horizon slope Δβ_f as a function of the camera height h a.s.l. for the chosen wave heights (γ = 53.91°, SONY ILCE-7RM2).
Figure 12.
Measurement resolution of the horizon slope Δβ_f as a function of the camera height h a.s.l. for the chosen wave heights (γ = 59.94°, DJI FC550RAW).
Figure 13.
Measurement resolution of the horizon slope Δβ_f as a function of the camera height h a.s.l. for the chosen wave heights (γ = 24.81°, DJI FC350).
Figure 14.
Measurement resolution of the horizon slope Δβ_f as a function of the camera height h a.s.l. for the chosen wave heights (γ = 63.27°, DJI FC220).
When comparing the values in Table 2 with the values shown in Figure 11, Figure 12, Figure 13 and Figure 14, it can be stated that Δβ_f may be greater than Δβ_r. Therefore, when speaking about the actual measurement resolution of the horizon slope obtained with a camera, Δβ_f should be used for small camera heights h and Δβ_r for large ones. Table 3 presents the threshold values of the height h above which Δβ_r should be used instead of Δβ_f.
Table 3.
Threshold values of the height h above which Δβ_r should be used instead of Δβ_f.
Figure 15, in turn, presents graphs of the real measurement resolution of the horizon slope for the wave height h_f = 2 m. They were obtained by combining the part of the Δβ_f diagram below the threshold height with the part of the Δβ_r diagram above it.
Figure 15.
Real measurement resolution of the horizon slope for the wave height h_f = 2 m.
Using the resolution Δβ_f below the threshold height and Δβ_r above it allows a better comparative assessment of the cameras used to measure β. For example, based on Table 3 and Figure 15, it can be unequivocally stated that the measurement resolution obtained at low altitudes (below 3.0 m) with the DJI FC550RAW and DJI FC220 cameras is similar to that of the SONY ILCE-7RM2 camera, which is quite different from what the data in Table 2 alone suggest. Another example is that the constant (and also the finest attainable) measurement resolution, equal to Δβ_r, can be obtained with the DJI FC550RAW camera from a height of h = 4.2 m, but with the DJI FC350 camera only after reaching a height of h = 21.7 m.
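The threshold heights in Table 3 can be read as the heights at which the single-pixel resolution (p/f)·d reaches the wave height h_f; solving this condition with d taken from Equation (6) gives the short check below. This is our interpretation rather than a derivation reproduced from the paper, but with the DJI FC350 parameters quoted earlier it closely reproduces the 21.7 m threshold mentioned above.

```python
import math

R, K = 6_378_000.0, 0.16

def threshold_height(h_f, p, f, R=R, k=K):
    """Camera height at which the single-pixel resolution (p/f)*d equals the wave
    height h_f, i.e. above which the pixel size, not the waves, limits the slope
    resolution.  Derived from d = sqrt(2*R*h/(1-k)) -- our reading of Table 3."""
    d = h_f * f / p                      # distance at which one pixel spans h_f
    return d * d * (1.0 - k) / (2.0 * R)

# DJI FC350 at f = 14 mm, pixel size 6.16 mm / 4000 px, wave height 2 m:
p = 6.16e-3 / 4000
print(round(threshold_height(2.0, p, 14e-3), 2))   # ~21.77 m, close to the 21.7 m quoted above
```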
3.2. Analysis of the Mean Error of Measurement
For the calculation of the mean error of the horizon slope measurement made with the assessed cameras, the method described in Section 2 was used (Equation (3)). The results obtained in this way are presented in Figure 16 in the form of four diagrams of the mean error of the horizon slope measurement m_β as a function of the horizon slope β.
Figure 16.
Mean error of measurement m_β as a function of the horizon slope β.
The graphs presented in Figure 16 show that the smallest mean error (not exceeding 0.012°) is obtained for measurements taken with the SONY ILCE-7RM2 camera, while the largest (0.021°) for measurements taken with the DJI FC220 camera. Compared with the other graphs, the diagram for the DJI FC350 camera deserves special attention, as it can be clearly seen that its mean error decreases the most with the increase of the horizon slope β. To explain this phenomenon, Figure 17 and Figure 18 present graphs of the mean error of measurement over the full focal length range of the DJI FC350 camera (from 4 mm to 14 mm), for a horizon slope close to 0° and for a larger slope value, respectively.
Figure 17.
Mean error of measurement m_β as a function of the focal length f of the DJI FC350 camera (horizon slope β close to 0°).
Figure 18.
Mean error of measurement m_β as a function of the focal length f of the DJI FC350 camera (larger value of the horizon slope β).
The graph in Figure 17 shows that the focal length has no major impact on the value of the mean error of measurement m_β for values of the horizon slope β close to zero. However, the graph in Figure 18 shows that, for the larger slope value, an increase in the focal length from 4 mm to 14 mm reduces the value of the mean error by about 25%. To show this phenomenon better, Figure 19 presents diagrams of the mean error of measurement obtained with the DJI FC350 camera as a function of the horizon slope β at focal lengths f of 4 mm, 6 mm, 8 mm, 10 mm, 12 mm and 14 mm (chosen arbitrarily).
Figure 19.
Mean error of measurement for the DJI FC350 camera as a function of the horizon slope β for focal lengths f of 4 mm, 6 mm, 8 mm, 10 mm, 12 mm and 14 mm.
Based on the previous considerations and the analysis of the diagrams presented in Figure 19, a generalized conclusion can be drawn that, in order to measure any value of the horizon slope with the smallest mean error m_β, the focal length should be set to its maximum value and the camera should be rotated about its optical axis so that the horizon line runs along the diagonal of the matrix.
3.2.1. Analysis of the Impact of the Component Mean Errors on m_β
The mean measurement errors of selected elements (parameters) of the camera's optical system, taken into account in the calculation of the mean measurement error m_β, were analyzed. For a better understanding of their significance, they were considered separately, using the values of the individual components of Formula (3). Each component corresponds to the mean measurement error of a particular element of the camera's optical system (e.g., focal length, matrix width). For example, for the analysis of m_f, the m_β value was calculated using only the first component, with the remaining three omitted.
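A component-wise analysis of this kind can be sketched numerically: each term of the propagation formula is evaluated separately and expressed as a share of the total. The slope model, nominal values and mean errors below are the same illustrative assumptions used in the sketch of Section 2, not the paper's Formula (3).

```python
import math

def component_shares(func, values, mean_errors, eps=1e-9):
    """Squared contribution of each variable to m_beta^2 (Gauss propagation), as percentages."""
    contributions = []
    for i, m_i in enumerate(mean_errors):
        shifted = list(values)
        shifted[i] += eps
        dF_dxi = (func(*shifted) - func(*values)) / eps  # numerical partial derivative
        contributions.append((dF_dxi * m_i) ** 2)
    total = sum(contributions)
    return [100.0 * c / total for c in contributions]

# Placeholder slope model (an assumption, as in Section 2): variables f, w, alpha1, alpha2.
def slope_beta(f, w, a1, a2):
    return math.atan((math.tan(a2) - math.tan(a1)) * f / w)

values = (0.014, 0.00616, math.radians(1.0), math.radians(3.0))
errors = (10e-6, 2e-6, math.radians(0.001), math.radians(0.001))
print([round(s, 1) for s in component_shares(slope_beta, values, errors)])
```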
3.2.2. Analysis of m_f
Figure 20 presents the graph of the mean error of measurement m_β as a function of the mean error of the focal length measurement m_f for the horizon slope angle β = 5° (a description of the methods and accuracy of focal length measurement can be found, among others, in [27,28,29,30]).
Figure 20.
Mean error of measurement m_β as a function of m_f for β = 5° (only the first component of Formula (3) was used in the calculations).
Table 4 then presents the mean error of measurement m_β calculated for several values of the horizon slope β, assuming a fixed value of m_f (expressed in μm).
Table 4.
Mean error of measurement m_β calculated for several values of β, assuming a fixed m_f in μm (only the first component of Formula (3) was used).
Based on the graphs presented in Figure 20, a generalized conclusion can be formulated that the accuracy of measuring the focal length has a large impact on the value of m_β. In the case of the DJI FC550RAW and DJI FC350 cameras, for larger values of m_f the value of m_β increases significantly, reaching 0.027°. However, the results of the calculations presented in Table 4 indicate that the value of m_β increases only slightly as a function of the angle β; in the case of the SONY ILCE-7RM2 camera, the value of m_β increased by only 0.0013°.
3.2.3. Analysis of m_w
Figure 21, Figure 22 and Figure 23 present graphs of the mean error of measurement m_β as a function of the mean error of the CMOS matrix width measurement m_w, calculated for three values of the horizon slope β. It was assumed that the value of m_w corresponds to a multiple of the pixel size p, calculated (as in Section 2) as the ratio of the matrix height in units of length to the matrix height in pixels. This made it possible to analyze the impact of m_w on m_β comparatively for all the tested cameras simultaneously, even though they have CMOS matrices of different sizes and resolutions.
Figure 21.
Mean error of measurement m_β as a function of m_w (only the second component of Formula (3) was used in the calculations).
Figure 22.
Mean error of measurement m_β as a function of m_w (only the second component of Formula (3) was used in the calculations).
Figure 23.
Mean error of measurement m_β as a function of m_w (only the second component of Formula (3) was used in the calculations).
The graphs presented in Figure 21, Figure 22 and Figure 23 show that the impact of m_w on m_β depends to a large extent on the size of the angle β being measured: over the considered range of β, the value of m_β increases up to about 2.5 times. However, this value is still not very large; in the case of the DJI FC220, DJI FC550RAW and SONY ILCE-7RM2 cameras it remains within thousandths of a degree.
3.2.4. Analysis of m_α1 and m_α2
Figure 24, Figure 25 and Figure 26 present graphs of the mean error of measurement m_β as a function of the mean errors of the α1 and α2 angle measurements on the CMOS matrix, for three values of the horizon slope β. It was assumed that the values of m_α1 and m_α2 (calculated from Dependence (14)) are equal and correspond to a multiple of their base value. Thanks to this, as in the case of m_w, it was possible to analyze the impact of m_α1 and m_α2 on m_β comparatively for all the cameras simultaneously, even though they have different focal lengths and matrix sizes.
Figure 24.
Mean error of measurement m_β as a function of m_α1 = m_α2 (only the last two components of Formula (3) were used in the calculations).
Figure 25.
Mean error of measurement m_β as a function of m_α1 = m_α2 (only the last two components of Formula (3) were used in the calculations).
Figure 26.
Mean error of measurement m_β as a function of m_α1 = m_α2 (only the last two components of Formula (3) were used in the calculations).
The graphs presented in Figure 24, Figure 25 and Figure 26 indicate that the influence of m_α1 and m_α2 on m_β depends primarily on the size of the angle β being measured, although in this case, contrary to the case of m_w, increasing the angle β results in decreasing the value of m_β. It should also be noted that, in the case of the DJI FC350 and DJI FC550RAW cameras, for doubled values of m_α1 and m_α2 the calculated value of m_β increased markedly. Figure 27, Figure 28, Figure 29 and Figure 30 present the influence of the individual mean errors on m_β as percentage contributions.
Figure 27.
Percentage contribution of the component mean errors to m_β for the SONY ILCE-7RM2 camera: (a) Calculated for a selected value of β; (b) Calculated as a mean value.
Figure 28.
Percentage contribution of the component mean errors to m_β for the DJI FC220 camera: (a) Calculated for a selected value of β; (b) Calculated as a mean value.
Figure 29.
Percentage contribution of the component mean errors to m_β for the DJI FC350 camera: (a) Calculated for a selected value of β; (b) Calculated as a mean value.
Figure 30.
Percentage contribution of the component mean errors to m_β for the DJI FC550RAW camera: (a) Calculated for a selected value of β; (b) Calculated as a mean value.
Based on the graphs presented in Figure 27, Figure 28, Figure 29 and Figure 30, two generalized conclusions can be drawn:
- m_w has the least effect on the value of m_β;
- for small angles β, m_α1 and m_α2 have the greatest impact on the value of m_β, while for larger angles it is m_f.
4. Conclusions
The method of camera evaluation presented in the paper can be considered quite specific, because it is based on an analysis of the optical system in combination with the way it is used to measure the angular position relative to the sea horizon. However, the proposed approach makes it possible to evaluate cameras mounted on a USV or UAV while taking the sea wave height and the camera height into account. This has been confirmed by the evaluation results of cameras mounted on UAVs manufactured by DJI. The results lead to the following general conclusions:
- The image of the sea horizon can be curved already at low camera heights, and this curvature can increase with the camera height and field of view. At a large height and a wide horizontal field of view of the camera, the horizon sagitta can amount to as much as eight pixels (see Figure 6). Therefore, it is necessary to take this phenomenon into account in angular measurements made relative to the sea horizon using a UAV camera (especially those carried out along the entire length of the horizon line in the camera's field of view).
- For a better single-pixel measurement resolution of the horizon line, the maximum focal length should be used (see Figure 8, Figure 9 and Figure 10). Nevertheless, it should be remembered that the horizon is almost always distorted at the contact with the sea by the rippling of the water surface. Therefore, increasing the single-pixel measurement resolution to values smaller than the height of the sea waves will certainly not increase the resolution of the horizon slope measurement.
- The measurement resolution of the horizon slope with a middle-class camera (by today's standards) can be at the level of 0.02° (see Table 2 and the second part of Formula (8)). However, it should be borne in mind that, up to a certain camera height limit, the resolution depends not only on the parameters of the optical system but also on the height of the sea waves (see Figure 15 and Table 3). In the case of the rated cameras, for the wave height h_f = 2 m the threshold value of h was as much as 21.7 m. Therefore, the measurement resolution of such cameras used on USVs should be calculated as the Δβ_f value, taking the wave height into account (using the first equation of Dependence (8)). The switch to the Δβ_r resolution value can take place only after the camera exceeds the threshold height, which means that it will rather concern measurements made from UAVs.
- The mean errors of the horizon slope measurements of the rated cameras were at the level of 0.02° and were similar to the measurement resolution (see Figure 16). The mean error decreases significantly with the increase of the horizon slope β (see Figure 19) and of the focal length f (see Figure 17 and Figure 18). Therefore, to measure the horizon slope with the smallest value of the mean error m_β, the focal length should be set to its maximum and the camera should be rotated about its optical axis so that the horizon line runs along the diagonal of the matrix.
- The mean error of measuring the focal length m_f can have a significant impact on m_β. This applies especially to light cameras with short focal lengths, for which the ratio of m_f to f is relatively high (of the order of thousandths per millimetre). Measurements of the tilt angle β with such cameras, even for a small m_f expressed in μm (which is a high accuracy of focal length measurement), will be made with an m_β increased by a minimum of 0.03°.
- The mean errors m_α1 and m_α2 can also have a significant impact on m_β. This applies in particular to larger cameras with longer focal lengths and matrices with sizes exceeding 4/3″. The error of measurements made with such cameras may depend in up to 70% on m_α1 and m_α2. For small tilt angles β (below 5°), the value of m_β can increase by up to several hundredths of a degree.
- The obtained values of the mean error of the horizon slope measurement for the tested cameras are slightly higher than those of INSs used on USVs, but much smaller than those of INSs used on UAVs [9,10,11,12]. Therefore, measuring the pitch and roll of bathymetric surveying vehicles using a camera can be considered reasonable. It must be realized, however, that there are other important factors affecting the accuracy of optical measurement which were omitted in the tests: lens distortion, diffraction limits, motion artifacts and reduced visibility.
Taking the presented general and detailed conclusions into account, cameras can be considered usable for determining the angular orientation of unmanned bathymetric surveying vehicles on the basis of the sea horizon image. Therefore, further research on this type of tilt compensator is justified and advisable. The activity could be directed at:
- adaptation of the recording and image processing equipment to marine conditions (including various external lighting and reflections from the water surface),
- appropriate selection of image processing methods with emphasis on changes in visibility (caused by fog, rainfall or snowfall),
- conducting several verification tests of the compensator prototype in a real environment (to find, remove or minimize the so-called critical functions of the technology),
- in the longer term, developing the methods for its use, including the stabilization of UAV or USV spatial orientation, relative positioning (e.g., in relation to floating navigational marks or the seabed), or the precise determination of the direction and distance to a floating surface object or an underwater object (optically, as well as supporting the work of navigation devices such as sonar, radar or LiDAR).
Author Contributions
Conceptualization and Writing—Original Draft Preparation, K.N.; Visualization and Resources, Ł.M.; Formal analysis and Methodology, P.S.; Investigation and Writing-review and editing, A.N.
Funding
This work was supported by the Minister of National Defense of Poland as part of the program called “RESEARCH GRANT” supporting the basic research.
Acknowledgments
The article could have been written due to statutory research on marine optical navigation systems conducted in cooperation with the Naval Academy and the Gdansk University of Technology and research conducted in the project entitled “Backscattering of acoustic waves in the aquatic environment” funded by the Minister of National Defense of Poland as part of the program called “RESEARCH GRANT” supporting the basic research.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Naus, K.; Makar, A. Movement disruptions compensation of sounding vessel with three nonlinear points method. Rep. Geod. 2005, 1, 25–32. [Google Scholar]
- NOAA. Department of Defense World Geodetic System 1984, Its Definition and Relationships with Local Geodetic Systems; National Imagery and Mapping Agency: Springfield, VA, USA, 2005; pp. 1–175. [Google Scholar]
- Zhou, Z.; Li, Y.; Zhang, J.; Rizos, C. Integrated Navigation System for a Low-Cost Quadrotor Aerial Vehicle in the Presence of Rotor Influences. J. Surv. Eng. 2016, 143, 05016006. [Google Scholar] [CrossRef]
- Ricci, L.; Taffoni, F.; Formica, D. On the Orientation Error of IMU: Investigating Static and Dynamic Accuracy Targeting Human Motion. PLoS ONE 2016, 11, e0161940. [Google Scholar] [CrossRef] [PubMed]
- Du, S.; Sun, W.; Gao, Y. MEMS IMU Error Mitigation Using Rotation Modulation Technique. Sensors 2016, 16, 2017. [Google Scholar] [CrossRef] [PubMed]
- Li, Y.; Wu, W.; Jiang, O.; Wang, J. A New Continuous Rotation IMU Alignment Algorithm Based on Stochastic Modeling for Cost Effective North-Finding Applications. Sensors 2016, 16, 2113. [Google Scholar] [CrossRef] [PubMed]
- Naus, K.; Makar, A. Usage of Camera System for Determination of Pitching and Rolling of Sounding Vessel. Rep. Geod. 2005, 2, 301–307. [Google Scholar]
- Navsight Marine Solution. Available online: https://www.sbg-systems.com/wp-content/uploads/Navsight_Marine_Leaflet.pdf (accessed on 3 May 2019).
- Positioning and Mapping Solutions for UAVs. Available online: https://www.navtechgps.com/assets/1/7/PrecisionMappingSolutions-UAVs_BRO.pdf (accessed on 3 May 2019).
- RIEGL. Available online: http://www.riegl.com/products/unmanned-scanning/bathycopter/ (accessed on 3 May 2019).
- MATRICE 600. Available online: https://www.dji.com/pl/matrice600?site=brandsite&from=landing_page (accessed on 5 May 2019).
- Ellipse 2 Series. Available online: https://www.sbg-systems.com/wp-content/uploads/Ellipse_Series_Leaflet.pdf (accessed on 5 May 2019).
- Specht, M.; Specht, C.; Wąż, M.; Naus, K.; Grządziel, A.; Iwen, D. Methodology for Performing Territorial Sea Baseline Measurements in Selected Waterbodies of Poland. Appl. Sci. 2019, 9, 3053. [Google Scholar] [CrossRef]
- Stateczny, A.; Motyl, W.; Gronska, D. HydroDron—New step for professional hydrography for restricted waters. In Proceedings of the Baltic Geodetic Congress (Geomatics) 2018, Olsztyn, Poland, 21–23 June 2018. [Google Scholar] [CrossRef]
- Szymak, P. Selection of Training Options for Deep Learning Neural Network Using Genetic Algorithm. Proc. MMAR 2019, in press. [Google Scholar]
- Naus, K.; Wąż, M. Accuracy of measuring small heeling angles of a ship using an inclinometer. Sci. J. Marit. Univ. Szczec. 2015, 44, 25–28. [Google Scholar] [CrossRef]
- Bodnar, T. Wybrane metody przetwarzania obrazów do określania orientacji przestrzennej okrętu. Logistyka 2014, 6, 2100–2107. [Google Scholar]
- Gershikov, E.; Libe, T.; Kosolapov, S. Horizon Line Detection in Marine Images: Which Method to Choose? Int. J. Adv. Intell. Syst. 2013, 6, 79–88. [Google Scholar]
- Praczyk, T. A quick algorithm for horizon line detection in marine images. J. Mar. Sci. Technol. 2018, 23, 164–177. [Google Scholar] [CrossRef]
- DJI. Available online: https://www.dji.com (accessed on 10 May 2019).
- Urbański, J.; Kopacz, Z.; Posiła, J. Maritime Navigation—Part I; Polish Naval Acad.: Gdynia, Poland, 2018; pp. 23–56. [Google Scholar]
- Distance to the Horizon. Available online: https://aty.sdsu.edu/explain/atmos_refr/horizon.html (accessed on 10 May 2019).
- Brocks, K. Die Lichtstrahlkrümmung in Bodennähe. Tabellen des Refraktionskoeffizienten, I. Teil (Bereich des Präzisionsnivellements). Dtsch. Hydrogr. Z. 1950, 3, 241–248. [Google Scholar] [CrossRef]
- Stober, M. Untersuchungen zum Refraktionseinfluss bei der trigonometrischen Höhenmessung auf dem grönländischen Inlandeis. In Festschrift für Heinz Draheim, Eugen Kuntz und Hermann Mälzer; Geodätisches Institut der Universität Karlsruhe: Karlsruhe, Germany, 1995; pp. 259–272. [Google Scholar]
- Kosiński, W. Geodezja. PWN 2010, 6, 212–214. [Google Scholar]
- Wikipedia. Available online: https://en.wikipedia.org/wiki/Sea_state (accessed on 20 May 2019).
- Liao, L.-Y.; Bráulio de Albuquerque, F.C.; Parks, R.E.; Sasian, J.M. Precision focal-length measurement using imaging conjugates. Opt. Eng. 2012, 51, 1–6. [Google Scholar] [CrossRef]
- DeBoo, B.; Sasian, J. Precise focal-length measurement technique with a reflective Fresnel-zone hologram. Appl. Opt 2003, 42, 3903–3909. [Google Scholar] [CrossRef] [PubMed]
- Angelis, M. A new approach to high accuracy measurement of the focal lengths of lenses using a digital Fourier transform. Opt. Commun 1997, 136, 370–374. [Google Scholar] [CrossRef]
- Sriram, K.V.; Kothiyal, M.P.; Sirohi, R.S. Curvature and focal length measurements using compensation of a collimated beam. Opt. Laser Technol. 1991, 23, 241–245. [Google Scholar] [CrossRef]
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).



