Article

Precision Calibration of Omnidirectional Camera Using a Statistical Approach

by Vasilii P. Lazarenko 1, Valery V. Korotaev 2, Sergey N. Yaryshev 2, Marin B. Marinov 3,* and Todor S. Djamiykov 3

1 IT One Digital Technology LLC., Galernaya St. 10A, 190098 St. Petersburg, Russia
2 Engineering Research Faculty, ITMO University, Kronverksky Pr. 49, 197101 St. Petersburg, Russia
3 Department of Electronics, Technical University of Sofia, 1756 Sofia, Bulgaria
* Author to whom correspondence should be addressed.
Computation 2022, 10(12), 209; https://doi.org/10.3390/computation10120209
Submission received: 29 October 2022 / Revised: 24 November 2022 / Accepted: 25 November 2022 / Published: 30 November 2022
(This article belongs to the Section Computational Engineering)

Abstract

Omnidirectional optoelectronic systems (OOES) find applications in many areas where a wide viewing angle is crucial. The disadvantage of these systems is the large distortion of the images, which limits their wider use. The purpose of this study is the development of an algorithm for the precision calibration of an omnidirectional camera using a statistical approach. The calibration approach comprises three basic stages. The first stage is the formation of a cloud of points characterizing the field of view of a virtual perspective camera. In the second stage, a calibration procedure that provides the projection function for the camera is performed. The projection functions of traditional perspective lenses and omnidirectional wide-angle fisheye lenses with a viewing angle of no less than 180° are compared. The construction of the corrected image is performed in the third stage. The developed algorithm makes it possible to obtain an image for part of the field of view of an OOES by correcting the distortion of the original omnidirectional image. Using the developed algorithm, a non-mechanical pivoting camera based on an omnidirectional camera is implemented. The achieved mean squared error of reproducing points from the original omnidirectional image onto the image with corrected distortion is less than a few pixels.

1. Introduction

OOES are used in areas where a large viewing angle is critical. However, these systems introduce large image distortion, which complicates their use for measurement and observation in television systems [1,2,3].
Omnidirectional imaging systems find applications in many areas where a wide viewing angle is crucial. Their main areas of application are mobile robot navigation and surveillance systems. Over the last decade, the use of mobile robots in our society has grown significantly. The robots need sensor systems to extract information from the environment to solve mapping and localization problems. Cameras have become one of the most widespread options in mobile robotics owing to the large amount of information they provide to the robot. For the solution of the complex task of simultaneous localization and mapping (so-called SLAM), these systems are increasingly used alone or in combination with other sensor systems [4].
Some works focus on the use of omnidirectional imaging as the only source of information for solving mapping and localization tasks. One example is the study of Caruso et al. [5], which presents a method for performing visual odometry with a rover. Garcia-Fidalgo et al. [6] presented a survey of mapping and localization methods using visual systems. These allow for various configurations, such as single cameras, stereo cameras, multi-camera systems, catadioptric systems, and more. Catadioptric vision systems consist of a single camera aimed at a convex mirror. This configuration allows image capture with a 360-degree field of view around the mirror axis.
Corke [7] also developed large-scale SLAM using omnidirectional cameras and proposed an approach for efficiently reducing information loss during map creation. Often the visual information is combined with other sources of data such as GPS (global positioning system), IMU (inertial measurement unit), LiDAR, pressure sensors, encoders, and others [4,8]. In [9], a system for the navigation and control of mobile robots is introduced which includes a camera, an IMU, and encoders. Oriolo et al. [9] presented a method for locating robots using a monocular camera, an IMU, encoders, and pressure sensors. In [10,11], systems are proposed for optical 3D object recognition in autonomous driving applications. These systems combine data from a stereo camera system, LiDAR, an IMU, encoders, and GNSS (Global Navigation Satellite System) to incorporate the 3D object information into a mapping framework.
In recent years, various approaches have been presented for robust feature extraction from images taken with catadioptric visual systems [4,10,12]. Surveillance has a wide variety of applications, ranging from applications requiring high security to those used in our daily life, such as healthcare surveillance. Accordingly, many researchers in the field of imaging and video technology have paid great attention to the study and development of highly advanced surveillance systems [11,13].
Despite the wide variety of surveillance products on the market, most systems are limited in the camera's angle of surveillance. Many studies have proposed two main approaches to increase the viewing angle: mechanical and optical. Mechanical approaches typically use a mechanically rotating and moving camera system [14].
Some surveillance products use so-called Pan-Tilt-Zoom (PTZ) cameras. They can pan back and forth and tilt up and down, increasing the viewing area. The main disadvantages of mechanical approaches, however, are the high cost, the need for moving parts, and the need for accurate positioning. Moreover, longer scene scanning and synchronization times are required to obtain an omnidirectional image. Alternatively, an optical approach can be used to capture omnidirectional images. For example, a fisheye lens can provide a 360° viewing angle by refracting the omnidirectional scene onto the camera sensor. However, the fisheye lens distorts the image, which makes it not immediately interpretable due to geometric distortion [15].
Another proposed approach is the use of a one-way mirror, such as a hyperbolic mirror. The 360-degree omnidirectional scene at the surveillance site is reflected into the camera and the image is captured. Examples that use a hyperbolic mirror to extend the viewing angle are: (i) the folded catadioptric cameras of Nayar and Peri [16], (ii) an integrated surveillance system using multiple omnidirectional vision sensors by Ng et al. [17], and (iii) the non-targeted surveillance system proposed in the work of Wong et al. [18]. The latter system consists of a hyperbolic mirror that is attached face-to-face with a webcam in the vertical direction using a mounting bracket. It can capture a 360° area in the horizontal plane at once. A hyperbolic mirror is used because it is cheaper than a fisheye lens while providing almost the same image quality.
Along with the presented main applications, there are many others. Electric power generation forecasting is of particular interest. Electric power load forecasting has been an integral part of managing electrical energy markets and infrastructure for many decades. The cost of generating power from non-traditional energy sources can be reduced through the integration of solar energy into classical energy supply structures. However, such integration has its challenges and costs. The forecasting of distributed photovoltaic (PV) power generation requires both intra-hour and day-ahead forecasting of solar irradiance [19]. For PV systems, the global irradiation (GI) on the inclined surface is required.
For different time horizons, however, different approaches are required.
For very short time horizons (<30 min), a range of ground-based imaging techniques has been developed for GI estimation using information on cloud positioning and deterministic models [20,21].
Hensel et al. [22] describe a systematic approach for a precise short-time cloud coverage prediction based on an optical system. The images are based on a sky imager system with a fish-eye lens optic to cover a maximum area. After a calibration step, the image is rectified to enable linear prediction of cloud movement. In a subsequent step, the clear sky model is estimated based on actual high dynamic range images and combined with a threshold-based approach to segment clouds from the sky. In the final stage, a multi-hypothesis linear tracking framework estimates cloud movement, velocity, and possible coverage of a given photovoltaic power station. A Kalman filter framework, which efficiently operates on the rectified images, is used. The evaluation of real-world data suggests high coverage prediction accuracy above 75%.
The goal of this research is to develop a mathematically valid model and efficient practical approach for calibrating and correcting distortion in fisheye cameras.

2. Methodology

This section explains the technical details of the perspective geometry model and the calibration methods used and sets out the framework for implementing the proposed approach.

2.1. The Perspective Geometric Model

Most recent optical systems can be described with sufficient precision by a perspective geometric model (Figure 1a). In this case, distortion will be considered a deviation from this model. The projection function of such systems is expressed as follows:
$$\tan\theta = \frac{r}{f}, \qquad (1)$$
where r is the image radius, f is the focal length, and θ is the ray incidence angle. Relationship (1) shows that when the ray incidence angle approaches 90°, the image radius becomes infinite.
It follows that ultra-wide-angle fisheye lenses with a viewing angle of at least 180° cannot be described by derivation from this model.
M. M. Rusinov proposed in [23] a projection function for super-wide-angle fisheye lenses in which the image radius is proportional to the ray incidence angle (Figure 1b):
$$\theta = \frac{r}{f}. \qquad (2)$$
In previous papers [24], this projection function was used in the algorithms, as it has been by many other authors [22,25,26]. However, in practice the transfer function varies with the specific lens model and, in general, it is not known in advance.
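To make the difference between the two projection models concrete, the following Python sketch (purely illustrative; the focal length and angles are arbitrary example values, not taken from this paper) evaluates the image radius predicted by Equation (1) and by Equation (2):

```python
import numpy as np

f = 1.4e-3  # example focal length in meters (arbitrary illustrative value)
for deg in (30, 60, 85, 89):
    t = np.deg2rad(deg)
    r_perspective = f * np.tan(t)   # Equation (1): r = f * tan(theta), diverges as theta -> 90 deg
    r_equidistant = f * t           # Equation (2): r = f * theta, stays finite at 90 deg and beyond
    print(f"theta = {deg:2d} deg: perspective r = {r_perspective*1e3:9.3f} mm, "
          f"equidistant r = {r_equidistant*1e3:6.3f} mm")
```

The perspective radius grows without bound near 90°, while the equidistant fisheye radius remains bounded, which is why the two models diverge strongly for wide-angle optics.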
To solve this problem, algorithms have been developed for converting images received by omnidirectional cameras into images with corrected distortion; these images correspond to those obtained with the classical perspective model, using a calibration procedure. By omnidirectional optoelectronic systems (omnidirectional cameras), we mean optoelectronic systems in which the field of view reaches 360° in at least one of the planes (meridional or sagittal).
There are three common types of omnidirectional optical systems:
  • Optical systems with super-wide fisheye lenses with a viewing angle of no less than 180°, capable of capturing at least a hemisphere of the surrounding space.
  • Mirror-lens (catadioptric) optical systems: cameras with a conventional lens and a rotationally symmetric mirror attachment mounted on it. The shape of the mirror surface can vary from a cone to an ellipse.
  • Multi-camera systems, whose large field of view is achieved using several cameras with overlapping fields of view.
Herein, only single-camera optoelectronic systems, i.e., cameras with fisheye lenses and catadioptric cameras, are considered. The main purpose of this work is the development of an algorithm for transforming images obtained from cameras with omnidirectional lenses into classic perspective views with corrected distortion. The goal was to develop a module for the so-called Typhoon program for an optoelectronic surveillance system, which implements, on an omnidirectional camera, the function of a non-mechanical PTZ camera driven by a motion detector.
In this respect, there are additional requirements for the algorithm:
  • It should work with omnidirectional cameras with a fisheye lens, as well as with catadioptric optical systems.
  • The calibration process should be accessible for unqualified users of the system and should not require the use of special technical means.
The algorithm can be deployed in video surveillance systems that use different models of cameras and lenses, although a simple calibration procedure is needed. In addition, the algorithm can be used in various areas of robotics where a wide viewing angle is important but the distortion typical of omnidirectional cameras has to be eliminated [27,28].

2.2. Methods for Calibration of Omnidirectional Optical Systems

The calibration of omnidirectional optical systems is built on the Unifying Theory of Geyer and Daniilidis [29] for central panoramic systems, according to which every central perspective and catadioptric projection can be modeled as a projection onto a sphere followed by a projection from a point on the sphere's axis onto the image plane. Ying and Hu [30] extended this theory to fisheye lenses with a viewing angle of less than 180°.
The unified imaging model provides a suitable framework for considering different camera types, such as standard perspective, catadioptric, and diverse fisheye lens types. The two-step process and the interrelationships are shown in Figure 2.
The first step is a spherical projection of a world point P along a ray onto the surface of the unit sphere, p = (x, y, z).
A two-parameter minimal representation of a point on the sphere, (θ, φ), comprises the colatitude angle θ measured from the North pole, with r = √(x² + y²), and the azimuth φ:
$$\theta = \arcsin r, \qquad \theta \in [0, \pi], \qquad (3)$$
$$\varphi = \arctan\frac{y}{x}, \qquad \varphi \in [-\pi, \pi]. \qquad (4)$$
The viewpoint O is the center of the sphere, which lies on the z-axis (the normal to the image plane) at a distance m from the image plane (Figure 2).
In the second step, the point p = (θ, φ) is projected onto the image plane from the viewpoint F, which is located at a distance l (l ∈ [0, 1]) along the z-axis above O. The polar coordinates of the image plane point are p = (r, φ), where
$$r = \frac{(l+m)\sin\theta}{l + \cos\theta}. \qquad (5)$$
There are two basic parameters, l and m, for the unified imaging model. For m = f (where f is the focal length of the lens) and depending on the value of the parameter l, three types of imaging can be distinguished [7] (illustrated in the sketch after the following list):
  • Stereographic for l = 1 .
  • Perspective for l = 0 .
  • Fisheye for l > 0 .
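As an illustration of the unified model, the following Python sketch (the parameter values are arbitrary examples, not calibration results) evaluates the projection of Equation (5) for the three cases listed above; for l = 0 it reduces to the perspective model r = f·tan θ of Equation (1).

```python
import numpy as np

def unified_projection_radius(theta, l, m):
    """Image radius r for colatitude theta under the unified imaging model, Equation (5)."""
    return (l + m) * np.sin(theta) / (l + np.cos(theta))

f = 1.0                      # example focal length (arbitrary units), so m = f
theta = np.deg2rad(60.0)     # example ray colatitude

print("stereographic (l = 1):  ", unified_projection_radius(theta, l=1.0, m=f))
print("perspective   (l = 0):  ", unified_projection_radius(theta, l=0.0, m=f))
print("fisheye       (l = 0.5):", unified_projection_radius(theta, l=0.5, m=f))
```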
A coordinate system transformation is needed for the relationship identification between the pixel position of the image and the position of the spatial center:
  • Linear transformation to the camera coordinate system from the world coordinate system.
  • Nonlinear transformation from the camera coordinate system to the image plane. To approximate the fisheye lens model a Taylor polynomial is used [7].
The key issue for the image transformation algorithm is setting the lens transfer function which connects the coordinates (in three dimensions) of a point in the object space with the coordinates of its image in the plane of the receiver. To address this task, the omnidirectional optical system is calibrated.
The different methods for calibrating OOES are presented and compared in detail in [7,31].
What is needed is a technique that does not require any special technical equipment and can be employed by unskilled operators. The method of Scaramuzza described in [32] was selected as the easiest to use.
This approach is implemented through the MATLAB "OCamCalib" toolbox. For calibration, it is necessary to take several images of a chessboard test pattern with the optical system being calibrated. What follows is the calculation of the calibration parameters (such as the polynomial coefficients and the center coordinates) for two functions that define the connection between the three-dimensional coordinates of a point in object space and the coordinates of its image in the coordinate system of the image sensor: (u, v) = world2cam(x, y, z) and (x, y, z) = cam2world(u, v). A detailed calibration process is described in [32,33,34].

3. Proposed Geometric Projection Approach for Omnidirectional Optical System Calibration

The omnidirectional camera geometric projection model used for calibration is shown in Figure 3.
Here, (X, Y, Z) is the coordinate system in the object space; (U, V) is the coordinate system in the plane of the image sensor; (x, y, z) are the point coordinates in the object space; p is the point image; (u, v) are the image coordinates of this point in the plane of the image sensor; P is the vector originating from the coordinate system origin and directed at the point in the object space.
The model is based on the following assumptions:
  • The catadioptric camera is a centered optical system; therefore, there is a point at which all the reflected rays intersect. This point [0, 0, 0] is the origin of the coordinate system (X, Y, Z).
  • The optical system focal plane has to coincide with the plane of the image sensor; only minor deviations are permissible.
  • The mirror has rotational symmetry about the optical axis.
  • The distortion of the lens in the model is not considered since the mirror used in an omnidirectional camera requires a long focal length of the lens. Thus, this lens distortion can be neglected. However, in the case of the fisheye lens, the distortion must be included in the calculated projection function.
Since we suppose that the focal plane of the optical system coincides with the image sensor plane, it follows that x and y are proportional to u and v , respectively:
$$\begin{pmatrix} x \\ y \end{pmatrix} = \alpha \begin{pmatrix} u \\ v \end{pmatrix}, \qquad (6)$$
where α is a scaling factor, α > 0 .
The purpose of calibration is to find a function that will describe the correspondence between the point images p and the three-dimensional vector P . Thus,
$$P = \begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} \alpha u \\ \alpha v \\ f(u, v) \end{pmatrix}. \qquad (7)$$
Assigning u := α·u and v := α·v, we then obtain
$$P = \begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} u \\ v \\ f(u, v) \end{pmatrix}. \qquad (8)$$
Because the vector P defines not a point but only the direction toward it, the latter simplification is permissible. Moreover, since the mirror is rotationally symmetric (as is the distortion of the fisheye lens), the function f(u, v) depends only on the distance ρ between the point image and the image center:
$$P = \begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} u \\ v \\ f(\rho) \end{pmatrix}, \qquad (9)$$
where ρ = √(u² + v²).
The coefficients of the polynomial f(ρ) are calculated by the method of least squares:
$$f(\rho) = a_0 + a_1\rho + a_2\rho^2 + \dots + a_n\rho^n. \qquad (10)$$
However, to obtain the coefficients a_0, a_1, a_2, a_3, ..., a_n, we need to consider the distortions caused by the discretization of the image sensor and by the fact that the pixels do not always have a square form. As a result, the border of the circular image takes the form of an ellipse (Figure 4). To account for these distortions, we complement our model with an affine transformation:
$$\begin{pmatrix} u'' \\ v'' \end{pmatrix} = \begin{pmatrix} c & d \\ e & 1 \end{pmatrix}\begin{pmatrix} u' \\ v' \end{pmatrix} + \begin{pmatrix} x_c \\ y_c \end{pmatrix}, \qquad (11)$$
where (u″, v″) are the true coordinates in the coordinate system of the image sensor, (u′, v′) are the coordinates without these distortions, and (x_c, y_c) are the center coordinates of the circular image.
As a result of using the calibration toolbox [33], we obtain all the necessary parameters for Equation (12). This function specifies the relationship between the three-dimensional coordinates of a point in object space and the coordinates of its image in the image sensor coordinate system (hereinafter, we assume that these functions already contain the calculated parameters):
$$\begin{pmatrix} x \\ y \\ z \end{pmatrix} = f(u, v, a_0, a_1, a_2, a_3, \dots, a_n, x_c, y_c, c, d, e) = \mathrm{cam2world}\begin{pmatrix} u \\ v \end{pmatrix}. \qquad (12)$$
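The following Python sketch shows one possible re-implementation of such a cam2world-style mapping built from Equations (9)-(12); the parameter values and the function name are placeholders chosen here for illustration, not the toolbox output or API.

```python
import numpy as np

def cam2world(u_pix, v_pix, a, xc, yc, c, d, e):
    """Map pixel coordinates to a 3D viewing direction using the polynomial model.

    a        : polynomial coefficients a0..an of f(rho), Equation (10)
    (xc, yc) : center of the circular image, Equation (11)
    c, d, e  : affine coefficients of Equation (11)
    """
    # Invert the affine transformation (11): pixel -> ideal sensor coordinates
    A = np.array([[c, d], [e, 1.0]])
    uv = np.linalg.solve(A, np.array([u_pix - xc, v_pix - yc]))
    u, v = uv
    # Evaluate f(rho), Equations (9)-(10)
    rho = np.hypot(u, v)
    z = sum(ai * rho**i for i, ai in enumerate(a))
    P = np.array([u, v, z])
    return P / np.linalg.norm(P)   # P is a direction only, so normalize it

# Example call with placeholder (made-up) calibration parameters
direction = cam2world(900.0, 600.0, a=[-330.0, 0.0, 1.1e-3],
                      xc=580.7, yc=770.7, c=1.0, d=0.0, e=0.0)
print(direction)
```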

4. Algorithm of Image Conversion for Omnidirectional Optoelectronic Systems

The algorithm has three basic parts (Figure 5).
The first part is the formation of a cloud of three-dimensional points in the object space corresponding to the field of view of a virtual perspective camera. We assume that this camera has the desired characteristics: α is the viewing angle of the virtual camera; φ is the rotation angle of the virtual camera around the Z axis; θ is the tilt angle of the virtual camera from the Z axis.
In the second part, the coordinates of the images of these points in the plane of the image sensor are determined. For this purpose, the transfer function of the OOES found using the omnidirectional camera calibration procedure is used.
The third part is the elementwise (pixel) formation of the output image from the original omnidirectional image using the coordinates found in the second part.

4.1. The First Stage: Formation of a Cloud of Points Characterizing the Field of View of the Virtual Perspective Camera

To implement the first stage, we need to set the characteristics of this camera:
  • H_res is the horizontal resolution;
  • K_res is the aspect ratio (for example, 4/3 or 16/9);
  • V_res = H_res/K_res is the vertical resolution;
  • α is the viewing angle of the virtual camera;
  • φ is the rotation angle of the virtual camera around the Z axis;
  • θ is the tilt angle of the virtual camera from the Z axis.
The Z axis coincides with the omnidirectional camera optical axis. First, we calculate the field of view of the virtual camera with φ = 0, θ = 0 (Figure 6). Since information about the distance to the object is lost when passing through the lens, we can place the plane of the virtual camera's field of view at an arbitrary distance. To simplify calculations, a distance of 1 m is assumed.
In this way, the virtual camera field of view will be the rectangular area defined by the points ABCD, located parallel to the O x y plane at a unit distance.
The rectangular area ABCD is represented as an array M of size H_res × V_res containing the coordinates of the three-dimensional points P_{i,j}:
$$M = \begin{pmatrix} P_{0,0} & \cdots & P_{H_{res}-1,0} \\ \vdots & \ddots & \vdots \\ P_{0,V_{res}-1} & \cdots & P_{H_{res}-1,V_{res}-1} \end{pmatrix}, \qquad (13)$$
where
$$P_{i,j} = \begin{pmatrix} x_{i,j} \\ y_{i,j} \\ z_{i,j} \end{pmatrix}. \qquad (14)$$
Thus, point A corresponds to P_{0,0}, B to P_{0,V_res-1}, C to P_{H_res-1,0}, and D to P_{H_res-1,V_res-1}. We calculate the geometric dimensions of the region ABCD as follows:
$$AD = 2\tan\frac{\alpha}{2},\qquad AC = \sqrt{AD^2 - AB^2} = 2\tan\frac{\alpha}{2}\sqrt{\frac{K_{res}^2}{K_{res}^2+1}},\qquad AB = \frac{AC}{K_{res}} = \frac{2\tan\frac{\alpha}{2}}{\sqrt{K_{res}^2+1}}. \qquad (15)$$
Then the coordinates of the vertices A, B, C, D are calculated:
$$A = P_{0,0} = \begin{pmatrix} x_{0,0} \\ y_{0,0} \\ z_{0,0} \end{pmatrix} = \begin{pmatrix} -AC/2 \\ -AB/2 \\ 1 \end{pmatrix},\quad B = P_{0,V_{res}-1} = \begin{pmatrix} x_{0,V_{res}-1} \\ y_{0,V_{res}-1} \\ z_{0,V_{res}-1} \end{pmatrix} = \begin{pmatrix} -AC/2 \\ AB/2 \\ 1 \end{pmatrix},$$
$$C = P_{H_{res}-1,0} = \begin{pmatrix} x_{H_{res}-1,0} \\ y_{H_{res}-1,0} \\ z_{H_{res}-1,0} \end{pmatrix} = \begin{pmatrix} AC/2 \\ -AB/2 \\ 1 \end{pmatrix},\quad D = P_{H_{res}-1,V_{res}-1} = \begin{pmatrix} x_{H_{res}-1,V_{res}-1} \\ y_{H_{res}-1,V_{res}-1} \\ z_{H_{res}-1,V_{res}-1} \end{pmatrix} = \begin{pmatrix} AC/2 \\ AB/2 \\ 1 \end{pmatrix}. \qquad (16)$$
Taking into consideration that x_{i,0} = x_{i,1} = ... = x_{i,V_res-1}, y_{0,j} = y_{1,j} = ... = y_{H_res-1,j}, and z = 1 for all points P_{i,j}, we obtain
$$x_i = -\frac{AC}{2} + \frac{AC}{H_{res}-1}\, i,\qquad y_j = -\frac{AB}{2} + \frac{AB}{V_{res}-1}\, j,\qquad P_{i,j} = \begin{pmatrix} x_i \\ y_j \\ 1 \end{pmatrix}. \qquad (17)$$
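A minimal Python sketch of this stage, assuming NumPy and treating the resolutions and viewing angle as free inputs (the function name and values are illustrative), could look as follows.

```python
import numpy as np

def virtual_camera_grid(h_res, k_res, alpha_deg):
    """Cloud of 3D points for the virtual camera field of view at unit distance (phi = 0, theta = 0)."""
    v_res = int(round(h_res / k_res))
    alpha = np.deg2rad(alpha_deg)
    ad = 2.0 * np.tan(alpha / 2.0)                   # diagonal of the rectangle ABCD, Equation (15)
    ac = ad * np.sqrt(k_res**2 / (k_res**2 + 1.0))   # horizontal side
    ab = ac / k_res                                  # vertical side
    x = np.linspace(-ac / 2.0, ac / 2.0, h_res)      # Equation (17)
    y = np.linspace(-ab / 2.0, ab / 2.0, v_res)
    xx, yy = np.meshgrid(x, y)                       # shape (v_res, h_res)
    zz = np.ones_like(xx)                            # plane at unit distance
    return np.stack([xx, yy, zz], axis=-1)           # array M of points P_{i,j}

points = virtual_camera_grid(h_res=640, k_res=4/3, alpha_deg=90.0)
print(points.shape)   # (480, 640, 3)
```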
Next, we need to transform the field of view ABCD according to the rotation angle φ and the tilt angle θ (Figure 7). First, we tilt the field of view ABCD by the angle θ, while x_i remains unchanged:
$$P'_{i,j} = \begin{pmatrix} x_i \\ y'_j \\ z'_j \end{pmatrix} = \begin{pmatrix} x_i \\ y_j\cos\theta - z\sin\theta \\ y_j\sin\theta + z\cos\theta \end{pmatrix} = \begin{pmatrix} x_i \\ y_j\cos\theta - \sin\theta \\ y_j\sin\theta + \cos\theta \end{pmatrix}. \qquad (18)$$
Next, we rotate ABCD relative to the Z axis by the angle φ (Figure 8), while z'_j remains unchanged:
$$P''_{i,j} = \begin{pmatrix} x''_i \\ y''_j \\ z'_j \end{pmatrix} = \begin{pmatrix} x_i\cos\varphi + y'_j\sin\varphi \\ -x_i\sin\varphi + y'_j\cos\varphi \\ z'_j \end{pmatrix}. \qquad (19)$$
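As a sketch of Equations (18) and (19), the tilt and rotation can be applied to the whole point cloud at once; the function below is an illustration in NumPy (names and the test point are arbitrary).

```python
import numpy as np

def orient_grid(points, theta_deg, phi_deg):
    """Tilt the point cloud by theta, Equation (18), then rotate by phi about the Z axis, Equation (19)."""
    theta, phi = np.deg2rad(theta_deg), np.deg2rad(phi_deg)
    x, y, z = points[..., 0], points[..., 1], points[..., 2]
    # Tilt: x remains unchanged
    y1 = y * np.cos(theta) - z * np.sin(theta)
    z1 = y * np.sin(theta) + z * np.cos(theta)
    # Rotation about the Z axis: z remains unchanged
    x2 = x * np.cos(phi) + y1 * np.sin(phi)
    y2 = -x * np.sin(phi) + y1 * np.cos(phi)
    return np.stack([x2, y2, z1], axis=-1)

# Example: a single point straight ahead, tilted by 30 degrees
test_point = np.array([[[0.0, 0.0, 1.0]]])
print(orient_grid(test_point, theta_deg=30.0, phi_deg=0.0))
```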

4.2. The Second Stage: Search for the Coordinates of the Point Images

To obtain the relationship between the spatial coordinates of the points P''_{i,j}, obtained in Equation (19), and the pixel coordinates p of their images in the coordinate system of the image sensor, we apply the function of direct correspondence between the coordinates (Equation (9)), with the parameters calculated as a result of the calibration:
$$p_{i,j} = \begin{pmatrix} u_{i,j} \\ v_{i,j} \end{pmatrix} = \mathrm{world2cam}(P''_{i,j}) = \mathrm{world2cam}\begin{pmatrix} x_{i,j} \\ y_{i,j} \\ z_{i,j} \end{pmatrix}. \qquad (20)$$
Thus, we obtain an array M′ containing the pixel coordinates of the images of the points for the calculated field of view:
$$M' = \begin{pmatrix} (u_{0,0}, v_{0,0}) & \cdots & (u_{H_{res}-1,0}, v_{H_{res}-1,0}) \\ \vdots & \ddots & \vdots \\ (u_{0,V_{res}-1}, v_{0,V_{res}-1}) & \cdots & (u_{H_{res}-1,V_{res}-1}, v_{H_{res}-1,V_{res}-1}) \end{pmatrix}. \qquad (21)$$
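A Python sketch of this stage is given below; world2cam here stands for whatever direct mapping the calibration provides, and the placeholder body simply substitutes an equidistant fisheye model (Equation (2)) so that the example runs. The function signature and all numeric values are illustrative assumptions, not the calibrated model.

```python
import numpy as np

def world2cam(points, xc, yc, focal_px):
    """Placeholder direct mapping: an equidistant fisheye model standing in for the calibrated one."""
    x, y, z = points[..., 0], points[..., 1], points[..., 2]
    theta = np.arccos(z / np.linalg.norm(points, axis=-1))   # angle from the optical axis
    phi = np.arctan2(y, x)
    rho = focal_px * theta                                   # r = f * theta, Equation (2)
    return np.stack([xc + rho * np.cos(phi), yc + rho * np.sin(phi)], axis=-1)

# Map points of the oriented grid to pixel coordinates (the array M' of Equation (21))
pixel_coords = world2cam(np.array([[[0.1, 0.2, 1.0]]]), xc=580.7, yc=770.7, focal_px=320.0)
print(pixel_coords)
```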

4.3. The Third Stage: Construction of the Corrected Image

The third and final step is to form the resulting image pixel by pixel from the original omnidirectional image:
$$I = \begin{pmatrix} L(u_{0,0}, v_{0,0}) & \cdots & L(u_{H_{res}-1,0}, v_{H_{res}-1,0}) \\ \vdots & \ddots & \vdots \\ L(u_{0,V_{res}-1}, v_{0,V_{res}-1}) & \cdots & L(u_{H_{res}-1,V_{res}-1}, v_{H_{res}-1,V_{res}-1}) \end{pmatrix}, \qquad (22)$$
where L is the signal level in each pixel of the image sensor (or of the original omnidirectional image) with coordinates (u_{i,j}, v_{i,j}). Because the computed coordinates are fractional, the resulting image can be improved by any anti-aliasing method.
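For completeness, a short Python sketch of this lookup with bilinear interpolation (one possible anti-aliasing choice; nearest-neighbor sampling would also satisfy Equation (22)) is shown below; the image sizes and coordinates are arbitrary example values.

```python
import numpy as np

def sample_image(src, pixel_coords):
    """Build the corrected image by bilinear sampling of the source omnidirectional image.

    src          : source image, shape (H, W) or (H, W, channels)
    pixel_coords : array M' of (u, v) coordinates, shape (v_res, h_res, 2)
    """
    u, v = pixel_coords[..., 0], pixel_coords[..., 1]
    u0 = np.clip(np.floor(u).astype(int), 0, src.shape[1] - 2)
    v0 = np.clip(np.floor(v).astype(int), 0, src.shape[0] - 2)
    du, dv = u - u0, v - v0
    if src.ndim == 3:                      # broadcast the weights over color channels
        du, dv = du[..., None], dv[..., None]
    return ((1 - du) * (1 - dv) * src[v0, u0] + du * (1 - dv) * src[v0, u0 + 1]
            + (1 - du) * dv * src[v0 + 1, u0] + du * dv * src[v0 + 1, u0 + 1])

corrected = sample_image(np.random.rand(1200, 1600), np.full((480, 640, 2), 100.25))
print(corrected.shape)   # (480, 640)
```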

5. Experimental Results

The omnidirectional camera was calibrated using the OCamCalib toolkit. For this purpose, nine images of a chessboard test object were taken with the camera being calibrated. In the calibration experiments, it was found that nine images were usually sufficient to stabilize the determined coefficients. A two-megapixel IP camera was used with a super-wide-angle fisheye lens mounted on it (Fujinon FE185C046HA-1: focal length 1.4 mm, iris range F1.4–F16, 1/2″ image format) [35]. This approach can be applied to a catadioptric setup, too.
After calibration and setting of the input parameters for the camera (Figure 5), the following values were obtained:
  • center of the circular image (pixel coordinates): x_c = 580718 × 10⁻³, y_c = 770693 × 10⁻³;
  • standard deviation of re-projection (pixels): 0.7948;
  • affine coefficients: c = 999779 × 10⁻⁶, d = 913037 × 10⁻¹⁰, e = 374671 × 10⁻⁹;
  • polynomial coefficients: a_0 = 338295 × 10⁻⁶, a_1 = 0, a_2 = 106759 × 10⁻¹², a_3 = 755634 × 10⁻¹⁶, a_4 = 1593 × 10⁻¹⁶.
As can be seen, the mean squared error of re-projection (in other words, the error of the direct coordinate-mapping function world2cam) turned out to be less than the size of a pixel; this is shown in Figure 9b. Such accuracy is sufficient both for observation tasks and for most measurement tasks.
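The re-projection statistic quoted above can be reproduced from the detected and re-projected calibration points; the following Python sketch shows the computation on small made-up arrays (the point values are hypothetical, not the paper's data).

```python
import numpy as np

# Hypothetical detected chessboard corners and their re-projections (pixel coordinates)
detected = np.array([[312.4, 401.8], [355.1, 402.3], [398.0, 402.9]])
reprojected = np.array([[312.9, 401.2], [354.6, 402.8], [398.5, 403.4]])

errors = np.linalg.norm(detected - reprojected, axis=1)   # per-point Euclidean error in pixels
print("mean error:", errors.mean())
print("RMS error :", np.sqrt((errors**2).mean()))
```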
Further, the parameters found during the calibration were used in the algorithm implemented in the module for the Typhoon system [36]. The module implements, on the basis of an omnidirectional camera, a non-mechanical PTZ camera that turns to follow moving objects. Figure 10 shows an example of the program operation.
Thus, using the developed algorithm, we implemented a non-mechanical pivoting camera based on an omnidirectional camera, while the mean squared error of reproducing points from the original omnidirectional image onto the image with corrected distortion was less than the pixel size.
The proposed precision calibration using a statistical approach was also applied to a digital camera with a fisheye lens of the type cnAICO Fisheye Lens 1/2″ (focal length 1.4 mm, F1.4 manual iris, 182-degree angle of view) [37].
Figure 11 shows the output wide-angle image, in which the distortions and the test object for calibration are clearly visible.
After calibration, the following results were obtained:
  • input image size: 2592 × 1920 pixels;
  • center of the circular image (pixel coordinates): x_c = 1284.93, y_c = 1114.8;
  • standard deviation of re-projection (pixels): 2.476;
  • average error: 2.476 pixels;
  • polynomial coefficients: a_0 = 6.470 × 10², a_1 = 0, a_2 = 5.652 × 10⁻⁴, a_3 = 8.358 × 10⁻⁸, a_4 = 1.059 × 10⁻¹⁰.
After applying the algorithm for virtual fields of view of 90 and 120 degrees, the images shown in Figure 12a,b are obtained where the distortions are removed.

6. Conclusions

This paper presents a practical approach for calibrating and correcting distortion for fisheye cameras. The only requirement is that the lens can be modeled by a Taylor series expansion of a spherical perspective model.
Based on several assumptions, an algorithm has been developed for converting images received by omnidirectional cameras into images with corrected distortion corresponding to the classical perspective geometric model. This algorithm makes it possible to obtain an image with corrected distortion. It is suitable for both types of omnidirectional cameras: those with catadioptric optical systems and those with fisheye lenses.
The algorithm was successfully applied in a software module for the "Typhoon" optoelectronic observation (CCTV) system [36], implementing a non-mechanical PTZ camera that turns to follow moving objects on the basis of an omnidirectional camera with a fisheye lens.
The algorithm can also be applied in various fields of robotics, where a wide viewing angle is important but the distortion inherent in omnidirectional cameras needs to be eliminated. It can also be used to create 360-degree videos for the YouTube platform; see the example at https://www.youtube.com/watch?v=j34Ut0QHNYA (accessed on 3 October 2022).
The transformation of images from fisheye cameras into an equiangular representation moves the trajectory estimation problem from a nonlinear motion model to a linear one that can be treated with classical algorithms such as the Kalman filter.

Author Contributions

Methodology, V.V.K., S.N.Y. and T.S.D.; software, V.P.L.; writing—original draft, V.P.L. and M.B.M.; writing—review and editing, V.P.L., M.B.M. and T.S.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research is supported by the Bulgarian National Science Fund in the scope of the project “Exploration the application of statistics and machine learning in electronics” under contract number KП-06-H42/1.

Data Availability Statement

Data are contained within the article.

Acknowledgments

The authors would like to thank the Research and Development sector at the Technical University of Sofia for the financial support.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Yarishev, S.; Konyahin, I.; Timofeev, A. Universal optoelectronic measuring modules in distributed measuring systems. In Proceedings of the Fifth International Symposium on Instrumentation Science and Technology, Shenyang, China, 12 January 2009; SPIE—The International Society for Optical Engineering: Bellingham, WA, USA, 2009. [Google Scholar]
  2. Konyahin, I.; Timofeev, A.; Yarishev, S. High precision angular and linear measurements using universal optoelectronic measuring modules in distributed measuring systems. Key Eng. Mater. 2010, 437, 160–164. [Google Scholar]
  3. Korotaev, V.; Konyahin, I.; Timofeev, A.; Yarishev, S. High precision multimatrix optic-electronic modules for distributed measuring systems. In Proceedings of the Sixth International Symposium on Precision Engineering Measurements and Instrumentation, Hangzhou, China, 28 December 2010; SPIE—The International Society for Optical Engineering: Bellingham, WA, USA, 2010. [Google Scholar]
  4. Berenguer, Y.; Payá, L.; Valiente, D.; Peidró, A.; Reinoso, O. Relative Altitude Estimation Using Omnidirectional Imaging and Holistic Descriptors. Remote Sens. 2019, 11, 323. [Google Scholar] [CrossRef] [Green Version]
  5. Caruso, D.; Engel, J.; Cremers, D. Large-scale direct slam for omnidirectional cameras. In Proceedings of the 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, Germany, 28 September–2 October 2015. [Google Scholar]
  6. Garcia-Fidalgo, E.; Ortiz, A. Vision-based topological mapping and localization methods: A survey. Robot. Auton. Syst. 2014, 64, 1–20. [Google Scholar] [CrossRef]
  7. Corke, P. Robotics, Vision and Control: Fundamental Algorithms. In MATLAB, 2nd ed.; Springer International Publishing AG: Cham, Switzerland, 2017. [Google Scholar]
  8. Valiente, D.; Payá, L.; Jiménez, L.M.; Sebastián, J.M.; Reinoso, O. Visual Information Fusion through Bayesian Inference for Adaptive Probability-Oriented Feature Matching. Sensors 2018, 18, 2041. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  9. Oriolo, G.; Paolillo, A.; Rosa, L.; Vendittelli, M. Humanoid odometric localization integrating kinematic, inertial and visual information. Auton. Robot. 2016, 40, 867–887. [Google Scholar] [CrossRef]
  10. Puig, L.; Guerrero, J. Scale-space for central catadioptric systems: Towards a generic camera feature extractor. In Proceedings of the 2011 IEEE International Conference on Computer Vision (ICCV), Barcelona, Spain, 12 January 2012. [Google Scholar]
  11. Wong, W.K.; Fo, J.S.-T. Omnidirectional Surveillance System Featuring Trespasser and Faint Detection for i-Habit. Int. J. Pattern Recognit. Artif. Intell. 2015, 29, 1559012. [Google Scholar] [CrossRef]
  12. Morel, J.-M.; Yu, G. ASIFT: A New Framework for Fully Affine Invariant Image Comparison. SIAM J. Imaging Sci. 2009, 2, 438–469. [Google Scholar] [CrossRef] [Green Version]
  13. Dimitrov, K.; Shterev, V.; Valkovski, T. Low-cost system for recognizing people through infrared arrays in smart home systems. In Proceedings of the 2020 XXIX International Scientific Conference Electronics (ET), Sozopol, Bulgaria, 16–18 September 2020. [Google Scholar]
  14. Onoe, Y.; Yokoya, N.; Yamazawa, K.T.H. Visual surveillance and monitoring system using an omnidirectional video camera. In Proceedings of the Fourteenth International Conference on Pattern Recognition (Cat. No.98EX170), Brisbane, QLD, Australia, 20 August 1998. [Google Scholar]
  15. Shah, S.; Aggarwal, J. A simple calibration procedure for fish-eye (high distortion) lens camera. In Proceedings of the 1994 IEEE International Conference on Robotics and Automation, San Diego, CA, USA, 8–13 May 1994. [Google Scholar]
  16. Nayar, S.; Peri, V. Folded catadioptric cameras. In Proceedings of the 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149), Fort Collins, CO, USA, 23–25 June 1999. [Google Scholar]
  17. Ng, K.C.; Ishiguro, H.; Trivedi, M.; Sogo, T. An integrated surveillance system—Human tracking and view synthesis using multiple omni-directional vision sensors. Image Vis. Comput. 2004, 22, 551–561. [Google Scholar] [CrossRef] [Green Version]
  18. Wong, W.; Liew, J.; Loo, C. Omnidirectional surveillance system for digital home security. In Proceedings of the 2009 International Conference on Signal Acquisition and Processing, Kuala Lumpur, Malaysia, 3–5 April 2009. [Google Scholar]
  19. Lorenz, E.; Hurka, J.; Heinemann, D.; Beyer, H.G. Irradiance Forecasting for the Power Prediction of Grid-Connected Photovoltaic Systems. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2009, 2, 2–10. [Google Scholar] [CrossRef]
  20. Marquez, R.; Gueorguiev, V.G.; Coimbra, C.F.M. Forecasting of Global Horizontal Irradiance Using Sky Cover Indices. J. Sol. Energy Eng. 2012, 135, 011017. [Google Scholar] [CrossRef] [Green Version]
  21. Ghonima, M.S.; Urquhart, B.; Chow, C.W.; Shields, J.E.; Cazorla, A.; Kleissl, J. A method for cloud detection and opacity classification based on ground-based sky imagery. Atmospheric Meas. Technol. 2012, 5, 2881–2892. [Google Scholar] [CrossRef]
  22. Hensel, S.; Marinov, M. Comparison of Algorithms for Short-term Cloud Coverage Prediction. In Proceedings of the 2018 IX National Conference with International Participation (ELECTRONICA), Sofia, Bulgaria, 17–18 May 2018. [Google Scholar]
  23. Rusinov, M. Technical Optics; Librocom: Saint-Petersburg, Russia, 2011. [Google Scholar]
  24. Lazarenko, V.; Yarishev, S. The algorithm for transforming a hemispherical field-of-view image. In Proceedings of the 3rd International Meeting on Optical Sensing and Artificial Vision, OSAV’2012, St. Petersburg, Russia; 2012. [Google Scholar]
  25. Schwalbe, E. Geometric modeling and calibration of fisheye lens camera systems. In Proceedings of the Panoramic Photogrammetry Workshop, Berlin, Germany, 24–25 February 2005. [Google Scholar]
  26. Tsudikov, M. Reduction of the Image from the Type Chamber “Fisheye” to the Standard Television; Tula State University: Tula, Russia, 2011; pp. 232–237. [Google Scholar]
  27. Hensel, S.; Marinov, M.B.; Schwarz, R. Fisheye Camera Calibration and Distortion Correction for Ground-Based Sky Imagery. In Proceedings of the XXVII International Scientific Conference Electronics—ET2018, Sozopol, Bulgaria, 13–15 September 2018. [Google Scholar]
  28. Mei, C.; Rives, P. Single viewpoint omnidirectional camera calibration from planar grids. In Proceedings of the IEEE International Conference on Robotics and Automation, ICRA’07, Rome, Italy, 10–14 April 2007. [Google Scholar]
  29. Geyer, C.; Daniilidis, K. A Unifying Theory for Central Panoramic Systems and Practical Implications. In Computer Vision—ECCV 2000; Springer: Berlin, Heidelberg, 2000. [Google Scholar]
  30. Ying, X.; Hu, Z. Can We Consider Central Catadioptric Cameras and Fisheye Cameras within a Unified Imaging Model? In Computer Vision—ECCV 2004; Springer: Berlin, Heidelberg, 2004. [Google Scholar]
  31. Puig, L.; Bastanlar, Y.; Sturm, P.; Guerrero, J.; Barreto, J. Calibration of Central Catadioptric Cameras Using a DLT-Like Approach. Int. J. Comput. Vis. 2011, 93, 101–114. [Google Scholar] [CrossRef] [Green Version]
  32. Scaramuzza, D.; Martinelli, A.; Siegwart, R. A flexible technique for accurate omnidirectional camera calibration and structure from motion. In Proceedings of the 4th IEEE International Conference on Computer Vision Systems, ICVS’06, New York, NY, USA, 4–7 January 2006. [Google Scholar]
  33. Scaramuzza, D.; Martinelli, A.; Siegwart, R. A toolbox for easily calibrating omnidirectional cameras. In Proceedings of the IEEE International Conference on Intelligent Robots and Systems, IROS 2006, Beijing, China, 9–15 October 2006. [Google Scholar]
  34. Juarez-Salazar, R.; Zheng, J.; Diaz-Ramirez, V.H. Distorted pinhole camera modeling and calibration. Appl. Opt. 2020, 59, 11310–11318. [Google Scholar] [CrossRef] [PubMed]
  35. Fujinon CCTV Lens. 2018. Available online: https://www.fujifilm.com/products/optical_devices/pdf/cctv/fa/fisheye/fe185c046ha-1.pdf (accessed on 3 October 2022).
  36. Golushko, M.; Yaryshev, S. Optoelectronic Observing System “Typhoon” [Optiko-elektronnaya sistema nablyudeniya “Typhoon”]. Vopr. Radioelektron. 2014, 1, 38–42. [Google Scholar]
  37. cnAICO-Fisheye Lens/1/2. Available online: https://aico-lens.com/product/1-4mm-c-fisheye-lens-acf12fm014ircmm/ (accessed on 4 October 2022).
Figure 1. Perspective geometric models: (a) of the lens and (b) of an extra wide-angle fisheye lens.
Figure 2. Geyer and Daniilidis Unified Imaging Model (adapted from [7], p. 344).
Figure 3. Geometric projection models: (a) catadioptric omnidirectional camera, (b) camera with a fisheye lens, and (c) coordinates on the plane of the camera receiver.
Figure 4. Distortions caused by the discretization process (using rectangular pixels) and the displacement of the camera and mirror (lens) axes.
Figure 5. The three main steps of the algorithm.
Figure 6. Virtual camera field of view with φ = 0, θ = 0.
Figure 7. Field of view slope of the virtual camera at an angle θ.
Figure 8. Field of view rotation of the virtual camera at an angle φ relative to the Z axis.
Figure 9. Calibration results using the OCamCalib toolkit. (a) An example of the wrong determination of the calibration parameters [32]; (b) the result of the experimental calibration, in which the calibration points and the re-projected points coincide, confirming the correct determination of the calibration parameters. Yellow crosses denote the detected calibration points of the test object, and red crosses denote the result of projecting the calibration points, with three-dimensional coordinates calculated during the calibration process, back onto the image. The size of each square of the test object is 20 mm. The estimated center coordinates of the circular image are also marked.
Figure 10. The implementation of the algorithm in the "Typhoon" system: (a) the original image; (b) virtual PTZ camera, guided by a motion detector.
Figure 11. Initial image with the test object for calibration.
Figure 12. Images obtained after applying the algorithm to a field of view of (a) 90 degrees and (b) 120 degrees.