Sensors
  • Article
  • Open Access

9 July 2021

Research on Design, Calibration and Real-Time Image Expansion Technology of Unmanned System Variable-Scale Panoramic Vision System

School of Mechatronical Engineering, Beijing Institute of Technology, Beijing 100081, China
Author to whom correspondence should be addressed.
This article belongs to the Section Sensing and Imaging

Abstract

This paper summarizes the research status, imaging models, system calibration, distortion correction, and panoramic expansion of panoramic vision systems, points out existing problems, and outlines prospects for future research. Based on this survey, a single-viewpoint catadioptric panoramic vision system is designed. The system features fast acquisition, low manufacturing cost, fixed single-viewpoint imaging, integrated imaging, and automatic switching of the depth of field. On this basis, an improved nonlinear-optimization polynomial fitting method is proposed to calibrate the monocular HOVS, and the binocular HOVS is calibrated with Aruco tags. This method not only improves the robustness of the calibration results, but also simplifies the calibration process. Finally, a real-time panoramic map expansion method for a multi-function vehicle based on a virtual camera (VCAM) is proposed.

1. Introduction

The key technologies of unmanned systems can be divided into four parts [1]: environment perception, precise positioning, decision making and planning, and control and execution. Among them, environment perception is the prerequisite for, and the most important of, the other key technologies. Within environment perception, common vision systems are widely used as indispensable passive sensors with low cost. However, restricted by the optical principle of the perspective lens, such a camera can only observe a local area of the environment, so the amount of environmental information obtained by the vehicle is very limited. Moreover, the depth information of the environment cannot be obtained by a single common vehicle vision system alone. In order to meet the needs of large-scale, large field-of-view, and integrated imaging for vehicle vision systems, panoramic systems came into being.
The concept of the “panorama” was first put forward in the field of European art, and engineers then gradually explored the advantages of this concept. Since then, the application of the panorama has moved from art to engineering and has gradually been adopted by many fields. Donald W. Rees, an American scholar, proposed the omnidirectional vision sensor (ODVS) [2] in 1970 and applied for a patent. The research on panoramic vision systems was further expanded by Yagi, Hong, and Yamazawa in 1991, 1994, and 1995, respectively [3,4,5]. Panoramic vision systems have the advantages of a large field of view, integrated imaging, imaging symmetry, and rotation invariance [6], and are used especially in visual navigation, panoramic visual SLAM [7], visual odometry, active vision, unmanned systems, high-definition panoramic imaging on the moon, panoramic monitoring, and underwater detection [8]. According to their components, panoramic vision systems can be divided into pan-tilt rotating panoramic vision systems, fisheye-lens panoramic vision systems, multi-camera splicing panoramic vision systems [9], catadioptric panoramic vision systems, and panoramic annular optical vision systems [10]. Compared with other conventional large field-of-view imaging methods, catadioptric panoramic imaging systems have great advantages in miniaturization, structural flexibility, low cost, and real-time acquisition.
Since 2000, IEEE-ICOIM has held seminars on panoramic imaging for many years, mainly focusing on catadioptric panoramic imaging [11]. As a cross-discipline of computer vision and optics, catadioptric panoramic imaging still has many theoretical and technical problems that urgently need to be solved. In particular, the imaging properties, calibration methods, distortion correction of panoramic images, panoramic image expansion, stereo matching of panoramic images, and the theories and methods of stereo reconstruction of single-viewpoint catadioptric panoramic vision systems all require further research. Among them, single-viewpoint hyperboloid catadioptric panoramic vision systems have the advantages of flexible system design, good integrated imaging, a large field of view, and high real-time imaging performance, and they have gradually been applied in the field of unmanned systems.

3. Architecture Design and Theoretical Analysis

3.1. Panoramic Systems Design

The HOVS proposed in this paper includes two single-viewpoint variable-scale hyperboloid-mirror panoramic vision subsystems with the same configuration and two industrial computers. One industrial computer receives the image data collected by the panoramic vision systems in real time, and the other processes the image data in real time. Data are transmitted between the two industrial computers over a 10-Gigabit network cable, and each single-viewpoint variable-scale hyperboloid-mirror panoramic vision subsystem communicates with its industrial computer through a dual-channel Gigabit network cable.

Structure Design and Depth Information Acquisition Theory of HOVS

1. Hyperboloid mirror module and perspective camera module
The HOVS is mainly composed of two single-viewpoint variable-scale hyperboloid-mirror panoramic vision subsystems with the same configuration. Each subsystem consists of a hyperboloid mirror module, a perspective camera module, a mirror mounting plate rotation module, a mirror height adjustment module, and a visual positioning module. According to the number of imaging viewpoints, reflective panoramic vision systems can be divided into single-viewpoint and multi-viewpoint reflective panoramic vision systems. In order to satisfy the single-viewpoint geometric constraint of the hyperboloid mirror, the lower focus of the hyperboloid mirror should coincide with the optical center of the perspective camera, so that, by the mathematical properties of the hyperboloid, any incident ray directed at the upper focus of the mirror is reflected to its lower focus. Therefore, the position and color information of spatial objects reflected by the hyperboloid mirror can be perceived in real time on the image plane, as shown in Figure 1.
Figure 1. Cylindrical coordinate systems and imaging optical path of hyperboloid panoramic vision systems.
Hyperboloid mirror [141] is an indispensable part of HOVS. Its vertical field of view angle and the basic size of the mirror are the design parameters that must be considered. The hyperboloid mirror is located directly above the perspective camera. According to the figure below, the upper boundary of the hyperboloid panoramic vision systems is determined by the basic parameters of the hyperboloid mirror, and the lower boundary is determined by the occlusion range of the perspective camera and the environment perception platform. For the vertical field of view ξ, the formula is as follows:
ξ_min = arctan(2D / d_lens),   ξ_max = arctan( z_max / √(x_max² + y_max²) )   (1)
x_max = d/2,   y_max = 0,   z_max = a   (2)
ξ = ξ_min + ξ_max = arctan(2D / d_lens) + arctan( (2a·√(1 + d²/(4b²)) + √(a² + b²)) / d )   (3)
From Equation (3), the vertical field angle is related to the basic parameters of the hyperboloid mirror, the lens diameter d_lens of the perspective camera, the upper focus of the lens, and the distance D of the focal plane. Under the ideal single-viewpoint condition, D = 2c (where c is the focal distance of the hyperboloid), and the perspective-camera lens diameter is d_lens = 20 mm. The opening diameter d of the hyperboloid mirror is restricted by its processing technology and manufacturing cost: in practice, as the diameter of the mirror increases, its achievable accuracy decreases and its cost rises. In this system, the hyperboloid mirror is produced by a composite-material open-mold forming process, and the optical mirror surface is formed by electroplating a reflective layer onto it. However, the surface area of the mirror strongly affects the uniformity of the electroplated coating, so the opening diameter of the hyperboloid mirror is set to d = 80 mm according to the actual situation of the experimental equipment. The vertical field angle of the hyperboloid panoramic vision system is illustrated in Figure 2.
Figure 2. Schematic diagram of vertical field angle of view of hyperboloid panoramic vision systems.
The numerical relation among the three parameters a, b, and ξ is simulated with MATLAB. From the calculation results (left of Figure 3), the vertical field of view reaches its maximum when b takes its minimum value and a takes its maximum value. From the contours, the influence of parameter b on the vertical field of view is far greater than that of parameter a. The parameters of the GX2750 camera used in the laboratory are as follows: sensor model ICX694, resolution 2200 × 2750, pixel size 4.54 × 4.54 μm; the distance between the panoramic camera and the ground is 1.3 m.
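As a rough cross-check of this kind of sweep, the fragment below evaluates the vertical field of view of Equation (3) over a grid of mirror parameters. It is a minimal NumPy sketch, assuming the reconstruction of Equation (3) above, the design values d = 80 mm and d_lens = 20 mm, and the single-viewpoint condition D = 2c; the grid ranges are illustrative assumptions rather than the values used in the MATLAB simulation.

```python
import numpy as np

# Minimal sketch of the (a, b) parameter sweep; all grid ranges are illustrative.
d = 80.0        # mirror opening diameter [mm]
d_lens = 20.0   # perspective-lens diameter [mm]

def vertical_fov_deg(a, b):
    """Vertical field of view xi of Eq. (3) for mirror parameters a, b [mm]."""
    c = np.sqrt(a ** 2 + b ** 2)      # focal distance of the hyperboloid
    D = 2.0 * c                       # ideal single-viewpoint condition: D = 2c
    xi_min = np.arctan(2.0 * D / d_lens)
    xi_max = np.arctan((2.0 * a * np.sqrt(1.0 + d ** 2 / (4.0 * b ** 2))
                        + np.sqrt(a ** 2 + b ** 2)) / d)
    return np.degrees(xi_min + xi_max)

a_grid, b_grid = np.meshgrid(np.linspace(30.0, 60.0, 31), np.linspace(30.0, 60.0, 31))
xi = vertical_fov_deg(a_grid, b_grid)             # surface comparable to Figure 3 (left)
print("xi(a=48, b=44) =", round(float(vertical_fov_deg(48.0, 44.0)), 1), "deg")
```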
Figure 3. a, b and vertical field angle ξ relationship (left) and the relationship between a, b and ri vertical field angle (right).
The design of hyperboloid mirror parameters should also meet the following conditions:
(a) The projection points of the upper boundary point P_max and the lower boundary point P_min of the vertical field of view should lie within the image plane.
(b) Since the view above the horizontal plane of the hyperbolic reflector mainly contains the reflected sky and is therefore an invalid region, the upper boundary angle of the vertical field of view should be less than 0°.
(c) The field of view of the perspective camera should cover the maximum reflection area of the hyperbolic mirror. Because the field of view of a perspective camera is inversely proportional to its focal length, a lens with a small focal length should be selected. In the experiment, the smallest common one-inch lens has a focal length of 16 mm, which is inexpensive and widely available.
The solution sets of condition (a) and condition (b) are shown in Figure 3, and the solution sets under the condition restrictions are shown in Figure 4. By taking the intersection of the two solution sets shown in Figure 4, the values of the hyperboloid mirror parameters a and b can be obtained. In order to minimize the HOVS, a = 48 and b = 44 are selected, as can be seen from Figure 5; the physical mirror is shown in Figure 5 (right).
Figure 4. Feasible solutions of a, b and vertical field of view angle with constraints (left) and feasible solutions of a, b and ri with constraints (right).
Figure 5. Feasible intersection of two solution spaces (left) and real image of mirror (right).
2. Mirror mounting disk rotation module, mirror height adjustment module, and visual positioning module
In order to rapidly switch between different hyperboloid mirrors and thereby obtain different field-of-view and depth-of-field information, a dedicated mechanism is needed; however, there is currently no special device that automatically switches mirrors, adjusts the vertical field of view, and adjusts the depth of field and image clarity for this experimental requirement. The existing mirror-switching technology for binocular stereoscopic panoramic vision imaging mainly relies on manual methods. Manually switching mirrors has many defects, such as long switching time, low switching efficiency, and poor switching accuracy. In the process of repeatedly switching the reflector, it is also easy to contaminate the mirror and damage the mirror thread, which causes unnecessary resource loss.
Based on the above situation, the authors designed a mirror mounting plate rotation module, a mirror height adjustment module, and a visual positioning module. The mirror mounting plate rotation module carries four hyperboloid mirrors with different parameters and drives the mirrors to rotate. The visual positioning module adjusts the coincidence of the central axis of the mirror with the optical axis of the perspective camera through PID control, based on the position of the label on the mirror mounting plate. The flow chart of the visual positioning module is shown in Figure 6. The mirror height adjustment push-rod module adjusts the distance between the mirror and the principal point of the camera to achieve a single viewpoint and a large depth of field. The structures of the monocular panoramic vision system and the binocular stereo vision system are shown in Figure 7 and Figure 8.
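To make the PID-based alignment idea concrete, the following is a minimal sketch: the pixel offset of the mounting-plate label from the camera's optical axis is fed to a PID controller whose output drives the plate rotation. The gains, loop rate, and the toy plant response are assumptions for illustration only, not the parameters of the actual module.

```python
# Minimal PID sketch for the visual positioning module: drive the detected label
# offset (in pixels) toward zero by rotating the mirror mounting plate.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_err = 0.0, 0.0

    def step(self, err):
        self.integral += err * self.dt
        derivative = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * derivative

pid = PID(kp=0.8, ki=0.02, kd=0.1, dt=0.05)     # assumed gains and loop period
offset_px = 40.0                                # initial label offset from the optical axis
for _ in range(100):
    command = pid.step(offset_px)               # rotation command sent to the mounting plate
    offset_px -= 0.04 * command                 # toy plant response standing in for the hardware
print(f"remaining offset: {offset_px:.2f} px")
```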
Figure 6. Feasible flow chart of visual positioning module.
Figure 7. The physical picture of monocular panoramic vision systems.
Figure 8. Physical map of binocular stereo panoramic vision systems.
3. Principle of depth information acquisition in HOVS
The epipolar geometry and the formula derivation of the horizontal binocular stereoscopic panoramic vision system are very different from those of ordinary perspective imaging. An ordinary perspective camera follows the pinhole imaging model, whereas the panoramic vision system uses a panoramic model; the ordinary perspective camera uses a rectangular coordinate system, whereas the panoramic vision system uses a cylindrical coordinate system. Therefore, the depth information of binocular stereo panoramic vision cannot be obtained with the ordinary perspective-camera formulation. To simplify the calculation, the coordinate systems of the horizontal binocular stereo panoramic vision system in Figure 9 are simplified as in Figure 10. O₁ and O₂ are the upper foci of the two hyperboloid mirrors, and P_w is a world point in the world coordinate system.
Figure 9. Coordinate systems of horizontal binocular stereo panoramic vision systems.
Figure 10. Geometric simplification of binocular stereo panoramic vision.
The horizontal distance between the two single-viewpoint panoramic vision systems is b, and P_w is a world point. Its cylindrical coordinates in the camera_left and camera_right systems are P_w1 = (r_w1, φ_w1, z_w1) and P_w2 = (r_w2, φ_w2, z_w2). The projection points of P_w on the camera_left and camera_right hyperboloids are P_h1 and P_h2 (the intersections of P_wO₁ and P_wO₂ with the left and right hyperboloids), and the projection points on the camera_left and camera_right panoramic image planes are P_i1 and P_i2; r_p is the distance between P_w and the baseline O₁O₂. The plane Π contains the x-axis of the rectangular coordinate system and is perpendicular to the z-axis, and P_w′ is the projection of P_w onto Π. The intersection of P_w′O₁ with the camera_left hyperboloid mirror is P_h1′, the intersection of P_w′O₂ with the camera_right hyperboloid mirror is P_h2′, and the angles between P_w′P_h1′, P_w′P_h2′ and the X axis are φ₁′ and φ₂′. Because the azimuth φ does not change under the nonlinear transformation of the hyperboloid mirror, the azimuth φ_i of a projection point in the panoramic image equals the azimuth φ of the corresponding environment point. Moreover, the plane P_wO₁P_w′ is perpendicular to the plane Π, so the azimuths of P_w, P_w′, and P_i are equal (φ₁ = φ₁′ = φ_i1; φ₂ = φ₂′ = φ_i2).
In △P_w′O₁O₂, according to the law of sines:
|P_w′O₂| / sin φ₁ = |P_w′O₁| / sin φ₂ = |O₁O₂| / sin φ₃   (4)
|P_w′O₁| = b·sin φ_i2 / sin(φ_i2 − φ_i1)   (5)
|P_w′O₂| = b·sin φ_i1 / sin(φ_i2 − φ_i1)   (6)
In △P_wP_w′O₁, since P_w′ is the vertical projection of P_w onto the plane Π:
tan∠P_wO₁P_w′ = |P_wP_w′| / |O₁P_w′|   (7)
The back-projection is performed from the image-plane pixel P_i(x_i, y_i, z_i) to the environment point P_w(x_w, y_w, z_w); the transformation relation between the two points is discussed below.
k = b²·( f·c + a·√(x_i² + y_i² + f²) ) / ( a²·(x_i² + y_i²) − b²·f² )   (8)
From the polar-coordinate back-projection transformation and Equation (8), the world coordinates of the environment point P_w in the left and right camera coordinate systems can finally be calculated; this constitutes the depth information.
camera_left:   r_w1 = b·sin φ_i2 / sin(φ_i2 − φ_i1),   φ_w1 = φ_i1,   z_w1 = r_w1·(2c − f·k) / (k·r_i1)   (9)
camera_right:   r_w2 = b·sin φ_i1 / sin(φ_i2 − φ_i1),   φ_w2 = φ_i2,   z_w2 = r_w2·(2c − f·k) / (k·r_i2)   (10)
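The horizontal part of Equations (9) and (10) is a plain triangulation from the two azimuth measurements. A minimal sketch is given below; the azimuth values and baseline are illustrative, and the vertical coordinate (which additionally needs the back-projection factor k of Equation (8)) is omitted.

```python
import numpy as np

# Horizontal triangulation of Eqs. (5), (6), (9) and (10): recover range and azimuth
# of a world point in each camera frame from its two observed azimuths and the baseline.
def triangulate_horizontal(phi_i1, phi_i2, b):
    """phi_i1, phi_i2: azimuths [rad] in camera_left / camera_right; b: baseline [m]."""
    denom = np.sin(phi_i2 - phi_i1)          # angle at the projected world point P'_w
    r_w1 = b * np.sin(phi_i2) / denom        # range from camera_left,  Eq. (5)
    r_w2 = b * np.sin(phi_i1) / denom        # range from camera_right, Eq. (6)
    return (r_w1, phi_i1), (r_w2, phi_i2)

# Illustrative values only (0.64 m is the calibrated baseline reported in Section 4.2).
left, right = triangulate_horizontal(np.radians(60.0), np.radians(75.0), b=0.64)
print("camera_left  (r, phi):", left)
print("camera_right (r, phi):", right)
```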

3.2. HOVS Calibration

3.2.1. Calibration Principle of Monocular Panoramic Vision Systems for Single-View Hyperboloid Mirror

Since the mathematical model and projection model of hyperboloid mirror panoramic vision systems are completely different from those of an ordinary perspective camera, a new calibration method is adopted, which first calibrates the hyperboloid mirror parameters (external parameters) and then calibrates the perspective camera parameters (internal parameters).
In order to calibrate the basic parameters of the hyperboloid mirror, the hyperboloid equation is used to model the calibration algorithm. To make the calibration algorithm more robust and more generally applicable, a polynomial series is used to fit the shape of the hyperboloid mirror.
The calibration principle of the hyperboloid-mirror panoramic vision system is shown in Figure 11. The point P_w on the chessboard calibration board is transformed nonlinearly to the point P_h on the hyperboloid mirror and then, through perspective transformation, to the point P_i in the image plane. The set of P_w points is denoted N_ij, and the set of P_i points is denoted n_ij. The transformation of the world point P_w gives:
K_ij · [ m_ij,  f(m_ij) ]ᵀ = [ R | t ] · N_ij   (11)
Figure 11. Calibration principle of panoramic vision systems for hyperboloid reflector.
In the transformation P_w → P_h, R is a 3 × 3 rotation matrix, t is a 3 × 1 translation vector, and K_ij is a scalar depth coefficient that relates the world point P_w and the mirror projection point P_h along the same direction. Thus, Equation (11) can be expanded into Equation (12).
K_ij · [ u_ij − c_x,  v_ij − c_y,  f(u_ij − c_x, v_ij − c_y) ]ᵀ = [ r11  r12  r13  t1 ;  r21  r22  r23  t2 ;  r31  r32  r33  t3 ] · [ x_ij,  y_ij,  0,  1 ]ᵀ   (12)
where
r_τ² = u_τ² + v_τ²   (13)
f(u_ij − c_x, v_ij − c_y) = f(u_τ, v_τ) = a₀ + a₁·r_τ + a₂·r_τ² + a₃·r_τ³ + … + a_N·r_τ^N   (14)
x_ij, y_ij are the corner coordinates on the chessboard; u_ij, v_ij are the projection-point coordinates of the chessboard corners in the pixel coordinate system; c_x, c_y are the coordinates of the center of the image plane; and u_τ, v_τ are the coordinates of the projection point in the image-plane coordinate system.
By expanding Equation (12), it can be seen that only three sets of chessboard corner coordinates are needed to solve the external parameter matrix of the hyperboloid mirror. In practice, however, because of environmental noise and acquisition errors, more corner coordinates are used to form a least-squares problem.
By constructing the reprojection residuals S_ij, the iteration proceeds along the descending direction of the gradient until the residual falls below the required accuracy, at which point the iteration stops. The parameters corresponding to the optimal solution then form the intrinsic parameter matrix. The mathematical model of the nonlinear optimization is as follows:
arg min over {R, t, a₀, a₁, …, a_N} of Σ_{i=1}^{m} Σ_{j=1}^{n} ||S_ij||₂²   (15)
S_ij = n_ij − n_ij^τ   (16)
where n_ij is the observed value and n_ij^τ is the reprojected point value.
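A minimal sketch of the robust nonlinear least-squares step in Equations (15) and (16) is given below. It fits only the polynomial coefficients to synthetic radius samples with SciPy's least_squares and a Huber loss; the full method additionally optimizes the board pose (R, t) through the complete catadioptric projection, and all numbers here are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

# Robust fit of the mirror polynomial a_0..a_N (Eqs. 14-16), with synthetic samples
# standing in for the detected chessboard corners.
def poly(coeffs, r):
    return sum(a * r ** k for k, a in enumerate(coeffs))

rng = np.random.default_rng(0)
true_coeffs = np.array([-300.0, 0.0, 1.5e-3, -4.0e-6])        # illustrative a_0..a_3
r_obs = np.linspace(50.0, 900.0, 40)                           # image radii [pixels]
z_obs = poly(true_coeffs, r_obs) + rng.normal(0.0, 0.5, r_obs.size)

def residuals(coeffs):
    # S_ij = n_ij - n_ij^tau: observed minus reprojected values, Eq. (16)
    return z_obs - poly(coeffs, r_obs)

fit = least_squares(residuals, x0=np.zeros(4), loss="huber")   # robust nonlinear optimization
print("estimated coefficients:", fit.x)
```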

3.2.2. Calibration Principle of Binocular Stereo Panoramic Vision Systems with Single-View Hyperboloid Mirror

When calculating the stereo depth of the binocular stereo panoramic vision system, the two monocular panoramic vision systems are assumed to be arranged in strict horizontal alignment. In practice, because of installation accuracy and other factors, the two panoramic vision systems are never strictly aligned horizontally; there is always some deviation, and the baseline B between the two monocular systems cannot be obtained by direct measurement. Therefore, before the binocular stereo panoramic vision system is used, it must be calibrated to determine the relative position relationship and the baseline length. The calibration principle and structure are shown in Figure 12 and Figure 13.
Figure 12. Schematic diagram of binocular stereo panoramic calibration.
Figure 13. Structure diagram of binocular stereo panoramic calibration.
The calibration of the binocular stereo panoramic vision system is easier than that of the monocular panoramic vision system, and its principle can follow the ordinary binocular camera calibration method. Once the monocular panoramic vision systems are calibrated, the exact projection relationship is determined and standard computer vision algorithms can be used normally. The third-party vision library Aruco is used to complete the calibration of the binocular stereo panoramic vision system. Aruco is an open-source augmented reality library that can be used for computer vision tasks such as tracking, recognition, and positioning; this research mainly uses its marker pose estimation function. It should be noted that the camera models supported by the Aruco library do not include panoramic vision systems, so the image detection part of the Aruco front end has to be modified: the panoramic image is first expanded in the region of interest.
The coordinate system relationship of the binocular stereo panoramic vision calibration is shown in Figure 14. The two panoramic vision systems are camera_left and camera_right, and the attitude transformation matrix from camera_right to camera_left is T_c2^c1 = [R_c2^c1, t_c2^c1]. The calibration plate is a plate printed with a specific Aruco pattern. The two panoramic stereo vision systems detect and recognize the same Aruco calibration board at the same time, obtaining T_b^c1 = [R_b^c1, t_b^c1] and T_b^c2 = [R_b^c2, t_b^c2].
T_c2^c1 = T_b^c1 · (T_b^c2)⁻¹ = [ R_b^c1  t_b^c1 ;  0ᵀ  1 ] · [ (R_b^c2)ᵀ  −(R_b^c2)ᵀ·t_b^c2 ;  0ᵀ  1 ]   (17)
Figure 14. Coordinate systems relationship of binocular stereo panoramic calibration.
Equation (17) is the calculation process of the attitude transformation matrix between the two panoramic vision systems. Generally, the optimal T_c2^c1 is obtained by repeatedly calculating multiple pairs of binocular images.
T_c2^c1 is a pose transformation matrix and cannot simply be averaged by addition and subtraction; it therefore has to be computed many times to obtain the optimal solution.
A nonlinear optimization problem is therefore constructed. Let P_ij^c1 be the coordinate of the j-th corner in the i-th image in the camera_left coordinate system, and P_ij^c2 the corresponding coordinate in the camera_right coordinate system. According to the transformation relation, Equation (18) holds.
T_c2^c1 · P_ij^c1 = P_ij^c2   (18)
According to Equation (18), the optimization problem in Equation (19) is constructed.
arg min over T_c2^c1 of Σ_{i=1}^{n} Σ_{j=1}^{k} || T_c2^c1 · P_ij^c1 − P_ij^c2 ||₂²   (19)
In Equation (18), P_ij^c1 and P_ij^c2 are the coordinates of the corner points on the Aruco calibration plate observed by the left and right cameras in their respective coordinate systems. In particular, to avoid the optimization failing because of the orthogonality constraint of the rotation matrix, the rotation part of the pose transformation matrix T_c2^c1 is parameterized by a quaternion during the optimization and converted back to a rotation matrix after the optimal result is obtained. The whole calibration algorithm flow is shown in Algorithm 1.
Algorithm 1 Aruco estimates tag attitude.
  Input: n pairs of left and right panoramic images
  while i < n do
    Binocular stereo panoramic image distortion correction
    Image preprocessing, extracting the Aruco tag
    if the Aruco tag is found in both the left and right images then
      Extract the pose of the Aruco tag and add || T_c2^c1 · P_ij^c1 − P_ij^c2 ||₂² to the nonlinear optimization equations
    end if
    i = i + 1
  end while
  Solve the nonlinear optimization equations with MATLAB
  if the average reprojection error is less than ±5 pixels then
    Calibration successful
  else
    Calibration failed
  end if
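A minimal sketch of the pose composition in Equation (17) is shown below, assuming the panoramic images have already been locally expanded so that OpenCV's ArUco detector can operate on them. The marker size matches the 1.2 m calibration board of Section 4.2, but the intrinsics and dictionary are placeholders, and the exact ArUco API names vary between OpenCV versions.

```python
import numpy as np
import cv2

marker_len = 1.2                                     # calibration-board size [m]
K = np.array([[800.0, 0, 640], [0, 800.0, 360], [0, 0, 1.0]])   # placeholder intrinsics
dist = np.zeros(5)
aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_6X6_250)

def board_pose(img):
    """Return the 4x4 pose T of the ArUco board in this camera's frame, or None."""
    corners, ids, _ = cv2.aruco.detectMarkers(img, aruco_dict)
    if ids is None:
        return None
    rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(corners, marker_len, K, dist)
    T = np.eye(4)
    T[:3, :3] = cv2.Rodrigues(rvecs[0])[0]
    T[:3, 3] = tvecs[0].ravel()
    return T

def relative_pose(T_b_c1, T_b_c2):
    """Eq. (17): T_c2^c1 = T_b^c1 · (T_b^c2)^-1."""
    return T_b_c1 @ np.linalg.inv(T_b_c2)
```

For each image pair, relative_pose(board_pose(left_image), board_pose(right_image)) yields one estimate of T_c2^c1; the final value is refined over all pairs through the nonlinear optimization of Equation (19).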

3.3. HOVS Image Expansion

Panoramic Image Expansion Algorithm Based on VCAM

The panoramic image expansion algorithm is mainly divided into three parts:
The first part: as shown in Algorithm 2 below, the mathematical model of the panoramic vision system is established, and the two-dimensional image points of the panoramic image are transformed into a three-dimensional world point cloud through this model.
The second part: as shown in Algorithm 3 below, the virtual camera model is established, and the three-dimensional world point cloud is transformed into a two-dimensional virtual-view expansion image through the virtual camera model. The panoramic expansion image is then obtained after quadratic image interpolation and mean optimization.
The third part: as shown in Algorithm 4 below, the panorama and the processing time are calculated. The panoramic_view_extract program is initialized; image_topic is published cyclically and subscribed to; OCAM and VCAM are initialized and the mapping file is loaded; the camera parameters and pose are adjusted through the defined DISPLAY and ADJUST macros; and the time t_i for loading the OCAM model and the time t_p for foreground expansion are calculated. Finally, the result is saved in remap_view_exact. The panoramic image expansion flow chart is shown in Figure 15.
Figure 15. Panoramic image expansion flow chart.
The panoramic expansion algorithm comprises Algorithms 2–4. The HOVS expansion synchronously expands the two identical monocular panoramic images to form a 360° two-dimensional panoramic expansion around the vehicle body.
Algorithm 2 Transform 2D panoramic image to 3D world point cloud.
  Input: 2D panoramic image
  Output: 3D world point cloud
  1: Start of OCAM algorithm
  2: Loading OCAM model
  3: Initialize OCAM
  4: Read initial 2D panoramic image
  5: Using polyval() to calculate the Z coordinate of the 3D world points
  6: Using cam2world() to transform 2D image points into 3D world point clouds
  7: Save world point cloud image
  8: End of OCAM algorithm
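A minimal sketch of the cam2world step in Algorithm 2 is given below, following the Scaramuzza-style polynomial model used by the OCAM toolbox: an image point at radius r maps to a 3D direction (u′, v′, f(r)). The coefficients and image centre here are placeholders; the calibrated values of Section 4.1 would be used in the real system.

```python
import numpy as np

ss = np.array([-300.0, 0.0, 1.5e-3, -4.0e-6])        # a_0..a_3 of f(r) (placeholder)
cx, cy = 1000.0, 1300.0                               # image centre [pixels] (placeholder)

def cam2world(u, v):
    """Back-project pixel (u, v) to a unit-norm 3D direction in the mirror frame."""
    x, y = u - cx, v - cy
    r = np.hypot(x, y)
    z = np.polyval(ss[::-1], r)                       # f(r); np.polyval wants highest degree first
    d = np.array([x, y, z])
    return d / np.linalg.norm(d)

print(cam2world(1400.0, 1300.0))                      # direction of a pixel 400 px right of centre
```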
Algorithm 3 Image transformation from 3D world point cloud to panoramic expansion.
  Input: 3D world point cloud
  Output: Panoramic expansion image
  1: VCAM algorithm starts
  2: Loading VCAM model
  3: Setting VCAM parameters
  4: Loading 3D point cloud image
  5: Update VCAM camera parameters
  6: Using MD5 algorithm to verify the integrity of image expansion data transmission
  7: Read image mapping file
  8: Transformation from 3D point cloud image to 2D image points
  9: Calculation of expanded image by bilinear interpolation algorithm
  10: Real-time display of expansion image progress with progress bar
  11: Mean optimization of the re_MAP1 and re_MAP2 images
  12: Save the optimized image
  13: End of VCAM algorithm
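A minimal sketch of the remapping idea in Algorithm 3 is given below. For simplicity it unwraps an annulus of the panorama into a rectangular view with cv2.remap and bilinear interpolation, using a linear radius-versus-elevation mapping in place of the calibrated mirror polynomial and virtual-camera pose; the file path, annulus radii, and field-of-view values are assumptions.

```python
import numpy as np
import cv2

pano = cv2.imread("panorama.png")                # raw panoramic image (placeholder path)
cx, cy = pano.shape[1] / 2.0, pano.shape[0] / 2.0
r_min, r_max = 200.0, 1000.0                     # usable annulus on the panorama [px] (assumed)
H, W = 700, 1200                                 # unfolded-view size (Section 4.3)
fov_h, fov_v = np.radians(90.0), np.radians(60.0)
yaw0 = 0.0                                       # viewing direction of this virtual view

u, v = np.arange(W), np.arange(H)
uu, vv = np.meshgrid(u, v)
azimuth = yaw0 + (uu / (W - 1) - 0.5) * fov_h    # horizontal angle of each virtual pixel
elev = (vv / (H - 1) - 0.5) * fov_v              # vertical angle of each virtual pixel
radius = r_min + (elev - elev.min()) / (elev.max() - elev.min()) * (r_max - r_min)

map_x = (cx + radius * np.cos(azimuth)).astype(np.float32)
map_y = (cy + radius * np.sin(azimuth)).astype(np.float32)
unfolded = cv2.remap(pano, map_x, map_y, interpolation=cv2.INTER_LINEAR)
cv2.imwrite("unfolded.png", unfolded)
```

In the real pipeline the maps are precomputed from the calibrated OCAM/VCAM models and stored, which is consistent with the mapping file loaded in Algorithms 3 and 4 and is what makes the per-frame expansion fast.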
Algorithm 4 Panoramic_view_extract.
  Input: Panoramic image topic, OCAM and VCAM
  Output: Panoramic expansion time and panoramic expansion map
  1: Start of the panoramic_view_extract algorithm
  2: Initialize panoramic_view_extract
  3: Cyclically publish image_topic, image_pub_topic, camera_model, virtual_camera, point_cloud_topic, remap_save_path
  4: Subscribe to image_topic, publish to the point_cloud and point_cloud2 topics
  5: Initialize OCAM and VCAM, load the mapping file
  6: Through the defined DISPLAY and ADJUST macros, create the image_remap and control_panel windows (the camera parameters and pose can be adjusted)
  7: Load the OCAM model into a cv::Mat map, then map the cv::Mat map to vcam::point_map and record its loading time t_i; finally, expand vcam::point_map quickly and record the expansion time t_p
  8: Finally, save the expanded map to remap_save_file
  9: End of the panoramic_view_extract algorithm

4. Experiments and Results

4.1. Calibration Experiment and Experimental Results of Monocular Panoramic Vision Systems with Single-View Hyperboloid Mirror

4.1.1. Calibration Experiment

A 9 × 7 checkerboard with an aluminum-alloy base plate is prepared. In order to avoid the influence of reflections from the aluminum substrate around the checkerboard, the inner 7 × 5 checkerboard area is selected as the calibration pattern. The parameters to be calibrated are R, t, a₀, a₁, …, a_N. The calibration procedure is as follows:
The first step is to collect nine high-quality panoramas, as shown in Figure 16 below.
Figure 16. Read picture names.
The second step is to extract corner points on each panorama, as shown in Figure 17 below.
Figure 17. Extract grid corners.
The third step is to set R_ε = I and t_ε = 0; the pose of the checkerboard calibration board in the camera coordinate system for all panoramic images is shown in Figure 18 below.
Figure 18. Show extrinsic.
In the fourth step, N = 2 is set. From the result of the third step, the pose of the calibration plate, the pixel coordinates of the corners, and the initial estimates R_ε = I and t_ε = 0, the reprojection error equation is constructed.
In the fifth step, nonlinear optimization and robust nonlinear optimization are used to solve the reprojection equation iteratively. If the reprojection error of the optimized result is less than ±5 pixels, the calibration is finished and the result is output; otherwise, the procedure returns to the fourth step with N = N + 1.
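A sketch of this degree-selection loop is given below; it reuses the simplified radial model from the Section 3.2.1 sketch, increasing N until the robust fit reaches the acceptance threshold, with synthetic samples standing in for the real chessboard residuals.

```python
import numpy as np
from scipy.optimize import least_squares

# Increase the polynomial degree N and re-run the robust fit until the residual
# drops below a threshold (standing in for the +/-5 pixel criterion). Synthetic data.
rng = np.random.default_rng(1)
r_obs = np.linspace(50.0, 900.0, 60)
z_obs = -300.0 + 1.5e-3 * r_obs**2 - 4.0e-6 * r_obs**3 + rng.normal(0.0, 1.0, r_obs.size)

def residuals_for_degree(N):
    # S_ij of Eq. (16): observed minus model values for a degree-N polynomial
    return lambda coeffs: z_obs - np.polyval(coeffs[::-1], r_obs)

for N in range(1, 6):
    fit = least_squares(residuals_for_degree(N), x0=np.zeros(N + 1), loss="huber")
    rms = np.sqrt(np.mean(fit.fun ** 2))
    print(f"N = {N}: RMS residual = {rms:.2f}")
    if rms < 5.0:
        break        # calibration accepted at this degree
```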
The reprojection errors in the calibration results before optimization are as follows:
  • Average reprojection error computed for each chessboard (pixels):
5.12 ± 4.02; 4.37 ± 2.97; 4.94 ± 3.82; 3.64 ± 2.95; 3.46 ± 2.83; 3.37 ± 2.78; 3.49 ± 2.79; 4.93 ± 3.94; 3.67 ± 2.85
Average error [pixels]: 4.108346. Sum of squared errors: 11985.065455
ss = −315.37231721852 0 − 0.00000141751 0.000000017656 − 0.00000000001
  • The results of nonlinear optimization are as follows:
ss = −215.68454874302 0 − 0.00002217486 0.00000006384 − 0.00000000004
Root mean square [pixel]: 2.315967
  • The results of robust nonlinear optimization are as follows
The average reprojection error of each image for N = 1, 2, 3, 4, and 5 is recorded in Table 1, and the corresponding sums of squared reprojection errors and average reprojection errors over all images are recorded in Table 2. It can be seen from these tables that when N = 1, f(·) degenerates into a linear function and fits the single-viewpoint hyperboloid very poorly, resulting in a large reprojection error. When N = 5, the optimizer cannot reach an optimal result and the calibration fails. When N = 4, the reprojection error of the calibration is the smallest; the reprojection of the calibration-plate corners is shown in Figure 19 below.
Table 1. Average reprojection error table of optimized images when n = 1, 2, 3, 4, and 5.
Table 2. Overall reprojection error table of all images to be marked when n = 1, 2, 3, 4, and 5.
Figure 19. Reproject on images.

4.1.2. Calibration Result

As shown in Figure 19 and Figure 20, the checkerboard can be correctly projected into the image, and the average reprojection error is about 3 pixels, so the calibration results can be considered effective. The calibration results of the monocular vision system with the single-viewpoint hyperboloid mirror are shown in Figure 21, and the final calibration parameters are as follows:
N = 4,   f(r) = 219.7939 + 2.1487771 × 10⁻³·r² − 6.348362 × 10⁻⁶·r³ + 4.161875 × 10⁻⁹·r⁴
Figure 20. Reprojection error map.
Figure 21. Calibration result chart.
The coordinates of the central pixel of the mirror: (X_c, Y_c) = (993.103303, 1337.177544).
Mirror position deviation: R_ε = (0.996229, 0.013996, 0.012452); t_ε = (993.103303, 1337.177544, 0.0).
After calibration, the coordinate of the rotation center of the mirror in the image plane is marked with a red circle, which is close to the image center of the mirror. The rotation vector of the mirror position deviation is R_ε = (0.996229, 0.013996, 0.012452); the first term is close to 1 and the second and third terms are close to 0, which indicates that the alignment error between the mirror and the panoramic vision system is very small.

4.2. Calibration Experiment and Experimental Results of Binocular Stereo Panoramic Vision Systems with Single-View Hyperboloid Mirror


4.2.1. Calibration Experiment

After the monocular panoramic vision systems are calibrated, the horizontal binocular stereo panoramic vision system can be calibrated. Firstly, the calibration environment is built, and the equipment used is as follows:
1. Two monocular panoramic vision systems, as shown in Figure 22.
Figure 22. Binocular stereo panoramic systems to be calibrated.
2. One Aruco calibration board, as shown in Figure 23.
Figure 23. Aruco calibration plate.
The two monocular panoramic vision systems are pre-calibrated and aligned on the same base. The Aruco calibration plate is 1200 × 1200 mm in size, and the Aruco marker-map pattern is used as the Aruco tag to improve the stability of the Aruco detection algorithm.
The specific calibration steps are as follows:
1. Collect images from the two panoramic vision systems at the same time, keeping the Aruco calibration board visible in the images as much as possible. A total of 12 pairs of images are collected.
2. According to the calibration results of the panoramic vision systems and the back-projection transformation, the collected images are partially expanded and the distortion is corrected. Figure 24 shows the partially expanded image of the panoramic vision system after the back-projection transformation.
    Figure 24. Twelve pairs of images collected at the same time (capture).
3. The position and posture of the Aruco calibration plate in each image are obtained by using the Aruco detection algorithm (see Figure 25 and Figure 26).
Figure 25. Extracted matching points and pose (1–6).
Figure 26. Extracted matching points and pose (7–12).
4. According to the position and pose of the Aruco calibration board in each image, the coordinates P_ij^c1 and P_ij^c2 of each corner on the calibration board in the respective camera coordinate systems are calculated.
5. According to Equation (23), the nonlinear optimization problem is constructed and solved with MATLAB.
6. Calculate the reprojection error; the calibration is considered successful when the average error is within 10 pixels.
After MATLAB calculation, the final calibration results are as follows:
  • Equation (23) is the rotation matrix and translation vector between two panoramic vision systems.
  • The baseline length between the two panoramic vision systems is 0.64 m, the deviation in Y direction is −0.007 m, and the deviation in Z direction is −0.014 m.
T_c2^c1 =
[ 0.999   0.020   0.012    0.641
  0.020   0.999   0.015   −0.006
  0.011   0.016   0.999   −0.013
  0       0       0        1    ]   (23)
It can be seen that the diagonal values of the rotation matrix in Equation (23) are close to 1, which indicates that the attitudes of the two panoramic vision systems are basically aligned, as shown in Figure 27.
Figure 27. Spatial relationship of two panoramic vision systems.

4.2.2. Calibration Result

Finally, the reprojection errors of the 64 image pairs are shown in Figure 28, where the camera index denotes the serial number of the image pair. The left part of Figure 28 shows the distribution of the projection positions of the 64 image pairs, with coordinates in pixels; the right part shows the superposition of the average reprojection errors of the 64 image pairs. It can be seen that the Y-direction reprojection error of every pair is within ±5 pixels and that most of the X-direction errors are also within ±5 pixels; thus, the calibration result is considered valid.
Figure 28. Reprojection error.

4.3. Experiments and Results of Image Expansion in HOVS

  • Experimental setting: Xishan campus of Beijing Institute of Technology.
  • Experimental equipment: Unmanned vehicle (Modified by BAIC EC180) and perception platform, HOVS systems (GX2750 camera resolution 2700 × 2200), computer, and other necessary equipment. Unmanned systems experimental platform as shown in Figure 29.
    Figure 29. Unmanned systems experimental platform.
  • Experimental evaluation index: the new HOVS can expand the panoramic image in real time with low distortion, as shown in Figure 30 below.
    Figure 30. Six-direction real-time deployment rendering.
  • Experimental method: By driving the vehicle in Xishan campus, the collected panoramic image is expanded in real time with small distortion.
  • Panorama resolution: 2700 × 2200 for a single panoramic vision system.
  • Resolution of the unfolded image: three directions per panoramic vision system, each 1200 × 700 (after quadratic interpolation), giving six unfolded directions in total.
  • Algorithm efficiency: using a 2.3 GHz processor with multithreaded parallel processing, the six direction images are expanded simultaneously, averaging 4 to 5 ms per frame (the panoramic video frame rate is 10 fps), which achieves real-time panoramic expansion.
  • Algorithm effect: Using the mathematical model of panoramic calibration to interpolate the missing pixels, the image distortion after interpolation is significantly reduced (does not affect the typical feature extraction). Effect of partial expansion is as shown in Figure 31.
    Figure 31. Effect of partial expansion.
It can be seen from the local panoramic expansion effect pictures above that the interpolation algorithm based on the mathematical model of the panoramic vision system can maintain the texture features and lighting conditions of the environment at a relatively high resolution, even at the edge of the panoramic image where the distortion is large.

5. Conclusions and Future Work

In this paper, using the 360° imaging characteristics of the single-viewpoint hyperbolic catadioptric panoramic vision system, two such systems are placed symmetrically on the unmanned-systems perception platform to construct a new type of binocular stereo panoramic perception system. This system is particularly suitable for real-time 360° obstacle sensing and detection for military unmanned systems. With this requirement as the starting point, the imaging principle of the single-viewpoint hyperbolic catadioptric panoramic vision system, the design of the hyperbolic mirror, the mirror mounting disc rotation module, the mirror height adjustment module, and the visual positioning module are studied. The principle of HOVS depth information acquisition, the calibration principles of the monocular and binocular single-viewpoint hyperboloid-mirror panoramic vision systems, and the VCAM-based panoramic image expansion algorithm are also presented.
The HOVS systems proposed in this paper can automatically switch mirrors and auto-focus to obtain scenes in different depths of field and different vertical angles of view. Aiming at the problem of panoramic camera calibration, a non-linear optimization method combined with the polynomial fitting method of the mirror is proposed to complete the calibration. For the calibration problem of the binocular stereo panoramic camera, the Aruco vision library is used to simplify the calibration process and improve the robustness and precision. At the edge of the panoramic image with larger distortion, the interpolation algorithm based on the VCAM mathematical model interpolates the missing pixels. The interpolated image can still maintain the texture characteristics and lighting conditions of the environment at a higher resolution. The panorama expansion algorithm can expand the picture in six directions at the same time, and each frame averages 4 to 5 ms (the panoramic video shooting frame rate is 10 fps), which can fully achieve the real-time panorama expansion effect.
Our future work will mainly focus on the integration, miniaturization, and generalization of the system. After the optimization of the system, the authors will also focus on panoramic vision slam and panoramic obstacle dynamic monitoring.

Author Contributions

This work was carried out in collaboration among all authors. X.G. investigated and conceived of the work. Z.Z. provided funding acquisition and validation. X.G. carried out the theoretical analysis and mechanical design. W.Z. and Z.W. carried out the formal analysis. X.G. wrote the paper. All authors have read and agreed to the published version of the manuscript.

Funding

This work is funded by the National Natural Science Foundation of China (General Program) under Grant 61773059 and by the National Defense Technology Foundation Program of China under Grant 202020230028.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. AMiner. Automatic Driving Research Report of Artificial Intelligence. Available online: https://static.aminer.cn/misc/article/selfdriving.pdf (accessed on 5 January 2018).
  2. Rees, D.W. Panoramic Television Viewing Systems. U.S. Patent 350546, 7 April 1970. [Google Scholar]
  3. Yagi, Y.; Kawato, S.; Tsuji, S. Real—Time omnidirectional image sensor (COPIS) for vision—Guided navigate. IEEE Trans. Robot. 1994, 1, 11–27. [Google Scholar] [CrossRef]
  4. Hong, J. Image Based Homing. In Proceedings of the IEEE International Conference on Robotics and Automation, Sacramento, CA, USA, 22–25 April 1991; pp. 620–625. [Google Scholar]
  5. Yamazawa, K.; Yagi, Y.; Yachida, M. Obstacle detection with omnidirectional image sensor Hyper Omni Vision. In Proceedings of the 1995 IEEE International Conference on Robotics & Automation, Aichi, Japan, 21–27 May 1995; pp. 1062–1067. [Google Scholar]
  6. Zhang, F. Research on Panoramic Vision Image Quality Optimization Method. Ph.D. Thesis, Harbin Engineering University, Harbin, China, 2010. [Google Scholar]
  7. Hang, Y.; Huang, F. Panoramic Visual SLAM Technology for Spherical Images. Sensors 2021, 21, 705. [Google Scholar]
  8. Negahdaripour, S.; Zhang, H.; Firoozfam, P.; Oles, J. Utilizing Panoramic Views for Visually Guided Tasks in Underwater Robotics Applications. In Proceedings of the 2001 MTS/IEEE Conference and Exhibition (OCEANS 2001), Honolulu, HI, USA, 5–8 November 2001; pp. 2593–2600. [Google Scholar]
  9. Chen, J.; Xu, Q.; Luo, L.; Wang, Y.; Wang, S. A Robust Method for Automatic Panoramic UAV Image Mosaic. Sensors 2019, 19, 1898. [Google Scholar] [CrossRef] [PubMed]
  10. Wang, J.; Zhang, Z.H.; Li, K.J.; Tao, X.; Shi, Z.; Zhang, D.; Shao, H.; Liang, Z. Development and Application of Panoramic Vision Systems. Comput. Meas. Control. 2018, 22, 1664–1666. [Google Scholar]
  11. Zeng, J.; Su, Y. Panoramic imaging systems of refraction reflection. Laser J. 2004, 25, 62–64. [Google Scholar]
  12. Zhang, Z. Flexible camera calibration by viewing a plane from unknown orientations. In Proceedings of the Seventh IEEE International Conference on Computer Vision, Kerkyra, Greece, 20–27 September 1999; pp. 666–673. [Google Scholar]
  13. Grossberg, M.D.; Nayar, S.K. A general imaging model and a method for finding its parameters. In Proceedings of the Eighth IEEE International Conference on Computer Vision, Vancouver, BC, Canada, 7–14 July 2001; pp. 108–115. [Google Scholar]
  14. Tsai, R. A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses. IEEE J. Robot. Auto. 1987, 4, 323–344. [Google Scholar] [CrossRef]
  15. Zhang, Z. In A Flexible New Technique for Camera Calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2001, 22, 1330–1334. [Google Scholar] [CrossRef]
  16. Li, M. Research on Camera Calibration Technology. Master’s Thesis, Nanchang Aeronautical University, Nanchang, China, June 2006. [Google Scholar]
  17. Gu, X.; Wang, X.; Liu, J. Camera self-calibration method based on Kruppa equation. J. Dalian Univ. Technol. 2003, 43, 82–85. [Google Scholar]
  18. Geyer, C.; Daniilidis, K. A Unifying Theory for Central Panoramic Systems and Practical Implications. In Proceedings of the 2000 European Conference on Computer Vision, Dublin, Ireland, 26 June–1 July 2000; pp. 445–461. [Google Scholar]
  19. Kang, S.B. Catadioptric self-calibration. In Proceedings of the 2000 IEEE Conference on Computer Vision and Pattern Recognition, Hilton Head, SC, USA, 13–15 June 2000; pp. 201–207. [Google Scholar]
  20. Svoboda, T.; Pajdla, T. Epipolar geometry for central cata-dioptric cameras. Int. J. Comput. Vision. 2002, 49, 23–37. [Google Scholar] [CrossRef]
  21. Micušık, B. Two-View Geometry of Omnidirectional Cameras. Ph.D. Thesis, Czech Technical University, Prague, Czechia, June 2004. [Google Scholar]
  22. Barreto, J.P.; Araujo, H. Geometry properties of central cata-dioptric line images and application in calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2005, 27, 1327–1333. [Google Scholar] [CrossRef]
  23. Puig, L.; Bastanlar, Y.; Sturm, P.; Guerrero, J.J.; Barreto, J. Calibration of Central Catadioptric Cameras Using a DLT-Like Approach. Int. J. Comput. Vision. 2011, 93, 101–114. [Google Scholar] [CrossRef]
  24. Sturm, P. Multi-view geometry for general camera models. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), San Diego, CA, USA, 20–26 June 2005; pp. 206–212. [Google Scholar]
  25. Morel, O.; Fofi, D. Calibration of catadioptric sensors by polarization imaging. In Proceedings of the 2007 IEEE International Conference on Robotics and Automation, Roma, Italy, 10–14 April 2007; pp. 3939–3944. [Google Scholar]
  26. Kannala, J.; Brandt, S.S. A generic camera model and calibration method for conventional, wide-angle, and fish-eye lenses. IEEE Trans. Pattern Anal. Mach. Intell. 2006, 28, 1335–1340. [Google Scholar] [CrossRef]
  27. Wikipedia. Fisheye Lens. Available online: https://en.wikipedia.org/wiki/Fisheye_lens (accessed on 26 August 2010).
  28. Cłapa, J.P.; Blasinski, H.; Grabowski, K.; Sekalski, P. A fisheye distortion correction algorithm optimized for hardware implementations. In Proceedings of the 21st International Conference on Mixed Design of Integrated Circuits and Systems, Lublin, Poland, 24–26 June 2014; pp. 415–419. [Google Scholar]
  29. Courbon, J.; Mezouar, Y.; Eck, L.; Martinet, P. A generic fisheye camera model for robotic applications, in: Intelligent Robots and Systems. In Proceedings of the 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems, San Diego, CA, USA, 29 October–2 November 2007; pp. 1683–1688. [Google Scholar]
  30. Scaramuzza, D.; Martinelli, A.; Siegwart, R. A Flexible Technique for Accurate Omnidirectional Camera Calibration and Structure from Motion. In Proceedings of the 2006 IEEE International Conference of Vision Systems (ICVS’06), New York, NY, USA, 5–7 January 2006. [Google Scholar]
  31. Scaramuzza, D. Omnidirectional Vision: From Calibration to Robot Motion Estimation. Ph.D. Thesis, ETH Zurich, Zürich, Switzerland, 22 February 2008. [Google Scholar]
  32. Rufli, M.; Scaramuzza, D.; Siegwart, R. Automatic Detection of Checkerboards on Blurred and Distorted Images. In Proceedings of the 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2008), Nice, France, 22–26 September 2008. [Google Scholar]
  33. Scaramuzza, D.; Martinelli, A.; Siegwart, R. A Toolbox for Easy Calibrating Omnidirectional Cameras. In Proceedings of the 2006 IEEE International Conference on Intelligent Robots and Systems (IROS 2006), Beijing, China, 7–15 October 2006. [Google Scholar]
  34. Schneider, D.; Schwalbe, E.; Maas, H.G. Validation of geometric models for fisheye lenses. ISPRS J. Photogramm. Remote Sens. 2009, 64, 259–266. [Google Scholar] [CrossRef]
  35. Urban, S.; Leitloff, J.; Hinz, S. Improved wide-angle, fisheye and omnidirectional camera calibration. ISPRS J. Photogramm. Remote Sens. 2015, 108, 72–79. [Google Scholar] [CrossRef]
  36. Christopher, M.; Patrick, R. Single View Point Omnidirectional Camera Calibration from Planar Grids. In Proceedings of the 2007 International Conference on Robotics and Automation (ICRA), Rome, Italy, 10–14 April 2007; pp. 3945–3950. [Google Scholar]
  37. Frank, O.; Katz, R.; Tisse, C.-L.; Durrant-Whyte, H. Camera calibration for miniature, low-cost, wide-angle imaging systems. In Proceedings of the British Machine Vision Conference 2007, Warwick, UK, 10–13 September 2007; Volume 31. [Google Scholar] [CrossRef][Green Version]
  38. Yeong, D.; Velasco-Hernandez, G.; Barry, J.; Walsh, J. Sensor and Sensor Fusion Technology in Autonomous Vehicles: A Review. Sensors 2021, 21, 2140. [Google Scholar] [CrossRef] [PubMed]
  39. Zhu, Q.; Liu, C.; Cai, C. A Novel Robot Visual Homing Method Based on SIFT Features. Sensors 2015, 15, 26063–26084. [Google Scholar] [CrossRef] [PubMed]
  40. Abdel-Aziz, Y.I.; Karara, H.M. Direct linear transformation from comparator coordinates into object space in close-range photogrammetry. Am. Soc. Photogramm. 1971, 1, 1–18. [Google Scholar] [CrossRef]
  41. Aliaga, D.G. Accurate Catadioptric Calibration for Real- time Pose Estimation of Room-size Environments. In Proceedings of the 2001 International Conference on Computer Vision, Vancouver, BC, Canada, 7–14 July 2001; pp. 127–134. [Google Scholar]
  42. Wu, Y.; Hu, Z. Geometric invariants and applications under catadioptric camera model. In Proceedings of the 2005 International Conference on Computer Vision, Beijing, China, 17–21 October 2005; pp. 1547–1554. [Google Scholar]
  43. Luis, P.; Bermudez, J.; Peter, S.; Guerrero, J.J. Calibration of omnidirectional cameras in practice: A comparison of methods. Comput. Vis. Image Underst. 2012, 116, 120–137. [Google Scholar]
  44. Thirthala, S.R.; Plllefeys, M. Radial multi-focal tensors. Int. J. Comput. Vis. 2012, 96, 195–211. [Google Scholar] [CrossRef]
  45. Geyer, C.; Daniilidis, K. Catadioptric camera calibration. In Proceedings of the International Conference on Computer Vision, Corfu, Greece, 20–25 September 1999; pp. 398–404. [Google Scholar]
  46. Swaminathan, R.; Nayar, S.K. Nonmetric calibration of wide-angle lenses and poly cameras. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1172–1178. [Google Scholar] [CrossRef]
  47. Geyer, C.; Daniilidis, K. Para catadioptric Camera Calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24, 687–695. [Google Scholar] [CrossRef]
  48. Barreto, J.P.; Araujo. Paracatadioptric Camera Calibration Using Lines. IEEE Int. Conf. Comput. Vis. 2003, 2, 1359–1365. [Google Scholar]
  49. Ying, X.; Hu, Z. Catadioptric camera calibration using geometric invariants. IEEE Trans. Pattern Anal. Mach. Intell. 2004, 10, 1260–1271. [Google Scholar] [CrossRef] [PubMed]
  50. Vasseur, P.; Mouaddib, E.M. Central catadioptric line detection. In Proceedings of the 15th British Machine Vision Conference, London, UK, 7–9 September 2004. [Google Scholar]
  51. Vandeportaele, B.; Cattoen, M.; Marthon, P.; Gurdjos, P. A New Linear Calibration Method for Para catadioptric Cameras. In Proceedings of the 18th International Conference on Pattern Recognition (ICPR’06), Hong Kong, China, 20–24 August 2006. [Google Scholar]
  52. Caglioti, V.; Taddei, P.; Boracchi, G.; Gasparini, S.; Giusti, A. Single-image calibration of off-axis catadioptric cameras using lines. In Proceedings of the 11th IEEE International Conference on Computer Vision, Rio de Janeiro, Brazil, 14–20 October 2007; pp. 1–6. [Google Scholar]
  53. Wu, F.; Duan, F.; Hu, Z.; Wu, Y. A new linear algorithm for calibrating central catadioptric cameras. Pattern Recognit. 2008, 41, 3166–3172. [Google Scholar] [CrossRef]
  54. Bakstein, H.; Pajdla, T. Panoramic mosaicking with a 180 field of view lens. In Proceedings of the IEEE Workshop on Omnidirectional Vision, Copenhagen, Denmark, 2 June 2002; pp. 60–67. [Google Scholar]
  55. Luo, C.; Su, L.; Zhu, F.; Shi, Z. A versatile method for omnidirectional stereo camera calibration based on BP algorithm. Optoelectron. Inf. Technol. Res. Lab. 2006, 3972, 383–389. [Google Scholar]
  56. Deng, X.M.; Wu, F.C.; Wu, Y.H. An easy calibration method for central catadioptric cameras. Acta Autom. Sin. 2007, 33, 801–808. [Google Scholar] [CrossRef]
  57. Gasparini, S.; Sturm, P.; Barreto, J.P. Plane-based calibration of central catadioptric cameras. In Proceedings of the 12th International Conference on Computer Vision, Kyoto, Japan, 23–25 September 2009; pp. 1195–1202. [Google Scholar]
  58. Zhang, Z.; Matsushita, Y.; Ma, Y. Camera calibration with lens distortion from low-rank textures. CVPR 2011, 2011, 2321–2328. [Google Scholar]
  59. Zhou, Z.; Liang, X.; Ganesh, A.; TILT, Y.M. Transform Invariant Low-rank Textures. Int. J. Comput. Vis. 2012, 99, 1–24. [Google Scholar]
  60. Zhang, Z.; Zhang, Y.; Yan, Y.; Gao, Y. Research on panoramic camera calibration and application based on Halcon. Comput. Eng. Appl. 2016, 52, 241–246. [Google Scholar]
  61. Hu, S.H. Research on Camera Calibration Method in Vehicle Panoramic Vision Systems. Master’s Thesis, Wuhan University, Wuhan, China, 2017. [Google Scholar]
  62. Hartley, R.; Kang, S.B. Parameter-free radial distortion correction with Centre of distortion estimation. IEEE Trans. Pattern Anal. Mach. Intell. 2007, 29, 1309–1321. [Google Scholar] [CrossRef] [PubMed]
  63. Toepfer, C.; Ehlgen, T. A unifying omnidirectional camera model and its applications. In Proceedings of the 11th International Conference on Computer Vision, Rio de Janeiro, Brazil, 14–20 October 2007; pp. 1–5. [Google Scholar]
  64. Luong, Q.; Maybank, S.J. Camera self-calibration: Theory and experiments. In Proceedings of the 1992 European Conference on Computer Vision, London, UK, 19–22 May 1992; pp. 321–334. [Google Scholar]
  65. Hartley, R.I. Self-calibration from multiple views with a rotating camera. In Proceedings of the 1994 European Conference on Computer Vision, Stockholm, Sweden, 2–6 May 1994; pp. 471–478. [Google Scholar]
  66. Stein, G.P. Accurate internal camera calibration using rotation, with analysis of sources of error. In Proceedings of the 5th International Conference on Computer Vision, Cambridge, MA, USA, 20–23 June 1995; pp. 230–236. [Google Scholar]
  67. Ramalingam, S.; Sturm, P.; Lodha, S.K. Generic self-calibration of central cameras. Comput. Vis. Image Underst. 2010, 114, 210–219. [Google Scholar] [CrossRef]
  68. Espuny, F.; Gil, J.I.B. Generic self-calibration of central cameras from two rotational flows. Int. J. Comput. Vis. 2011, 91, 131–145. [Google Scholar] [CrossRef]
  69. Triggs, B. Auto calibration and absolute quadric. In Proceedings of the Computer Vision and Pattern Recognition, San Juan, PR, USA, 7–19 June 1997; pp. 604–614. [Google Scholar]
  70. Liebowitz, D. Metric rectification for perspective images of planes. In Proceedings of the 1998 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Santa Barbara, CA, USA, 25–25 June 1998; pp. 482–488. [Google Scholar]
  71. Caprile, B.; Torre, V. Using vanishing points for camera calibration. Int. J. Comput. Vision. 1990, 4, 127–139. [Google Scholar] [CrossRef]
  72. Viola, P.; Jones, M. Fast and Robust Classification using Asymmetric AdaBoost and a Detector Cascade. In Proceedings of the Advances in Neural Information Processing Systems, Denver, CO, USA, 3–8 December 2001; pp. 14–18. [Google Scholar]
  73. Micusik, B.; Pajdla, T. Estimmion of omnidirectional camera model from Epipolar geometry. CVPR 2003, 200, 485–490. [Google Scholar]
  74. Maybank, S.J.; Faugeras, O.D. A theory of self-calibration of a moving camera. Int J Comput. Vision. 1992, 8, 123–151. [Google Scholar] [CrossRef]
  75. Strecha, C.; von Hansen, W.; Gool, L.V.; Fua, P.; Thoennessen, U. On benchmarking camera calibration and multi-view stereo for high resolution imagery. In Proceedings of the 2008 IEEE Conference on Computer Vision and Pattern Recognition, Anchorage, AK, USA, 23–28 June 2008; pp. 1–8. [Google Scholar]
  76. Furukawa, Y.; Ponce, J. Accurate camera calibration from multi-view stereo and bundle adjustment. In Proceedings of the 2008 IEEE Conference on Computer Vision and Pattern Recognition, Anchorage, AK, USA, 23–28 June 2008; pp. 1–8. [Google Scholar]
  77. Wei, D.Z. Design of High Definition 360-Degree Camera-Design of Panoramic Vision Processing Software Based on DaVinci. Master’s Thesis, Nanjing University of Aeronautics and Astronautics, Nanjing, China, June 2012. [Google Scholar]
  78. Swaminathan, R.; Nayar, S.K. Non-metric calibration of wide-angle lenses and polycameras. In Proceedings of the 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Fort Collins, CO, USA, 23–25 June 1999; pp. 413–419. [Google Scholar]
  79. Choi, K.H.; Kim, Y.; Kim, C. Analysis of Fish-Eye Lens Camera Self-Calibration. Sensors 2019, 19, 1218. [Google Scholar] [CrossRef] [PubMed]
  80. Morel, O.; Stolz, C.; Meriaudeau, F.; Gorria, P. Active lighting applied to three-dimensional reconstruction of specular metallic surfaces by polarization imaging. Appl. Opt. 2006, 45, 4062–4068. [Google Scholar] [CrossRef] [PubMed]
  81. Ainouz, S.; Morel, O.; Fofi, D.; Mosaddegh, S.; Bensrhair, A. Adaptive processing of catadioptric images using polarization imaging: Towards a pola-catadioptric model. Opt. Eng. 2013, 52, 037001. [Google Scholar] [CrossRef]
  82. Luo, Y.; Huang, X.; Bai, J.; Liang, R. Compact polarization-based dual-view panoramic lens. Appl. Opt. 2017, 56, 6283–6287. [Google Scholar] [CrossRef] [PubMed]
  83. Wang, Z. Research on Ultra-Wide Angle Lens Design and Distortion Correction Algorithm. Master’s Thesis, Zhejiang University, Hangzhou, China, June 2018. [Google Scholar]
  84. Brown, D.C. Close-range camera calibration. Photogramm. Eng. 1971, 37, 855–866. [Google Scholar]
  85. Slama, C.C. Manual of Photogrammetry, 4th ed.; American Society of Photogrammetry: Falls Church, VA, USA, 1980. [Google Scholar]
  86. Wei, G.Q.; Ma, S.D. Implicit and explicit camera calibration: Theory and experiments. IEEE Trans. Pattern Anal. Mach. Intell. 1994, 16, 469–480. [Google Scholar]
  87. Heikkila, J.; Silvén, O. A four-step camera calibration procedure with implicit image correction. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Juan, PR, USA, 17–19 June 1997; pp. 1106–1112. [Google Scholar]
  88. Shah, S.; Aggarwal, J.K. A simple calibration procedure for fish-eye (high distortion) lens camera. In Proceedings of the 1994 IEEE International Conference on Robotics and Automation, San Diego, CA, USA, 8–13 May 1994; pp. 3422–3427. [Google Scholar]
  89. Devernay, F.; Faugeras, O.D. Automatic calibration and removal of distortion from scenes of structured environments. Proc. SPIE Investigative and Trial Image Processing 1995, 2567, 62–72. [Google Scholar]
  90. Fitzgibbon, A.W. Simultaneous linear estimation of multiple view geometry and lens distortion. In Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Kauai, HI, USA, 8–14 December 2001; p. I. [Google Scholar]
  91. Mallon, J.; Whelan, P.F. Precise radial un-distortion of images. In Proceedings of the 17th International Conference on Pattern Recognition, Cambridge, UK, 23–26 August 2004; pp. 18–21. [Google Scholar]
  92. Ahmed, M.; Farag, A. Nonmetric calibration of camera lens distortion: Differential methods and robust estimation. IEEE Trans. Image Process. 2005, 14, 1215–1230. [Google Scholar] [CrossRef] [PubMed]
  93. Hughes, C.; Denny, P.; Jones, E.; Glavin, M. Accuracy of fish-eye lens models. Appl. Opt. 2010, 49, 3338–3347. [Google Scholar] [CrossRef] [PubMed]
  94. Prescott, B.; McLean, G.F. Line-based correction of radial lens distortion. Graph. Models Image Process. 1997, 59, 39–47. [Google Scholar] [CrossRef]
  95. Gaspar, J.; Santos-Victor, J. Visual path following with a catadioptric panoramic camera. Int. Symp. Intell. Robot. Syst. 1999, 139–147. [Google Scholar]
  96. Ahmed, M.T.; Farag, A. Differential methods for non-metric calibration of camera lens distortion. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Kauai, HI, USA, 8–14 December 2001; pp. 477–482. [Google Scholar]
  97. Ishii, C.; Sudo, Y.; Hashimoto, H. An image conversion algorithm from fish eye image to perspective image for human eyes. In Proceedings of the 2003 IEEE/ASME International Conference on Advanced Intelligent Mechatronics, Kobe, Japan, 20–24 July 2003; pp. 1009–1014. [Google Scholar]
  98. Qiu, Z.Q.; Lu, H.W.; Yu, Q.F. Correction of fish-eye lens distortion by projective invariance. Appl. Opt. 2003, 24, 36–38. [Google Scholar]
  99. Zeng, J.Y.; Su, X.Y. Elimination of the lens distortion in catadioptric omnidirectional distortionless imaging systems for horizontal scene. Acta Opt. Sin. 2004, 24, 730–734. [Google Scholar]
  100. Liu, L.Q. Research on Omnidirectional Machine Vision Based on Fisheye Lens. Master’s Thesis, Department of Mechanical and Electronic Engineering, Tianjin University of Technology, Tianjin, China, June 2008. [Google Scholar]
  101. Feng, W.J. Research and Development of Embedded Omnidirectional Visual Tracker. Master’s Thesis, Department of Mechanical and Electronic Engineering, Tianjin University of Technology, Tianjin, China, June 2008. [Google Scholar]
  102. Xiao, X.; Yang, G.G.; Bai, J. Distortion correction of circular lens based on spherical perspective projection constraint. Acta Opt. Sin. 2008, 28, 675–680. [Google Scholar] [CrossRef]
  103. Liu, L.Q.; Cao, Z.L. Omnidirectional Image Restoration Using a Support Vector Machine. In Proceedings of the 2008 IEEE International Conference on Information and Automation, Changsha, China, 20–23 June 2008; pp. 606–611. [Google Scholar]
  104. Carroll, R.; Agrawala, M.; Agarwala, A. Optimizing content-preserving projections for wide-angle images. ACM Trans. Graph. 2009, 28, 43. [Google Scholar] [CrossRef]
  105. Xu, Y.; Zhou, Q.; Gong, L.; Zhu, M.; Ding, X.; Teng, R.K. FPGA implementation of reflective panoramic video real-time planar display technology. Appl. Electron. Technol. 2011, 37, 45–48. [Google Scholar]
  106. Maybank, S.J.; Ieng, S.; Benosman, R. A Fisher–Rao metric for paracatadioptric images of lines. Int. J. Comput. Vis. 2012, 99, 147–165. [Google Scholar] [CrossRef]
  107. Kanatani, K. Calibration of Ultrawide Fisheye Lens Cameras by Eigenvalue Minimization. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 813–822. [Google Scholar] [CrossRef] [PubMed]
  108. Ren, X.; Lin, Z.C. Linearized alternating direction method with adaptive penalty and warm starts for fast solving transform invariant low-rank textures. Int. J. Comput. Vis. 2013, 104, 1–14. [Google Scholar] [CrossRef]
  109. Huang, Y.Y.; Li, Q.; Zhang, B.Z. Fisheye distortion checkerboard image correction. Comput. Eng. Appl. 2014, 50, 111–114. [Google Scholar]
  110. Wu, Y.; Hu, Z.; Li, Y. Radial distortion invariants and lens evaluation under a single-optical-axis omnidirectional camera. Comput. Vis. Image Underst. 2014, 126, 11–27. [Google Scholar] [CrossRef]
  111. Tang, Y.Z. Parametric distortion-adaptive neighborhood for omnidirectional camera. Appl. Opt. 2015, 54, 6969–6978. [Google Scholar] [CrossRef]
  112. He, Y.; Xiong, W.; Chen, H.; Chen, Y.; Dai, Q.; Tu, P.; Hu, G. Fish eye image distortion correction method based on double longitude model. Acta Instrum. Sin. 2015, 36, 377–385. [Google Scholar]
  113. Hu, S.H.; Zhou, L.L.; Yu, H.S. Sparse Bayesian learning for image rectification with transform invariant low-rank textures. Signal Process. 2017, 137, 298–308. [Google Scholar] [CrossRef]
  114. Zhang, X.; Xu, S. Research on Image Processing Technology of Computer Vision Algorithm. In Proceedings of the 2020 International Conference on Computer Vision, Image and Deep Learning (CVIDL), Chongqing, China, 10–12 July 2020; pp. 122–124. [Google Scholar]
  115. Ling, Y.F.; Zhu, Q.D.; Wu, Z.X.; Zhang, Z. Implementation and improvement of cylindrical theory expansion algorithm for panoramic vision image. Appl. Sci. Technol. 2006, 33, 4–6. [Google Scholar]
  116. Lin, J.G.; Qian, H.L.; Mei, X.; Xu, J.F. Research on cylindrical solution of hyperboloid catadioptric panoramic image. Comput. Eng. 2010, 36, 204–206. [Google Scholar]
  117. Chen, Z.P.; He, B.W. A modified unwrapping method for omnidirectional images. In Proceedings of the 2011 International Conference on Electric Information and Control Engineering, Wuhan, China, 15–17 April 2011; pp. 52–55. [Google Scholar]
  118. Liu, H.J.; Chen, C.; Miao, L.G.; Liu, X.C. Study of Key Technology of Distortion Correction Software for Fisheye Image. Instrum. Tech. Sens. 2011, 40, 100–105. [Google Scholar]
  119. Xiao, S.; Wang, F. Generation of Panoramic View from 360 Degree Fisheye Images Based on Angular Fisheye Projection. In Proceedings of the 2011 International Symposium on Distributed Computing and Applications to Business, Engineering and Science, 2011; pp. 187–191. [Google Scholar]
  120. Wang, Y. Correction and Expansion of Super Large Wide Angle Distortion Image. Master’s Thesis, Tianjin University of Technology, Tianjin, China, January 2011. [Google Scholar]
  121. Ye, L.B. Design of Intelligent 3D Stereo Camera Equipment Based on 3D Panoramic Vision. Master’s Thesis, Zhejiang University of Technology, Hangzhou, China, June 2013. [Google Scholar]
  122. Cai, C.; Wu, K.; Liu, Q.; Cheng, H.; Ma, Q. Panoramic multi-target real-time detection based on improved Yolo algorithm. Comput. Eng. Des. 2018, 39, 3259–3264. [Google Scholar]
  123. Feng, Y.M. Research and Implementation of Omnidirectional Image Expansion Algorithm. Master’s Thesis, Zhejiang University of Technology, Hangzhou, China, June 2007. [Google Scholar]
  124. Lei, J.; Du, X.; Zhu, Y.F.; Liu, J.L. Omnidirectional image expansion based on Taylor model. Chin. J. Image Graph. 2010, 15, 1430–1435. [Google Scholar]
  125. Gaspar, J.; Deccó, C.; Okamoto, J.; Santos-Victor, J. Constant resolution omnidirectional cameras. In Proceedings of the IEEE Workshop on Omnidirectional Vision 2002. Held in Conjunction with ECCV’02, Copenhagen, Denmark, 2 June 2002; pp. 27–34. [Google Scholar]
  126. Pi, W.K. Motion Detection for Human Bodies Based on Adaptive Background Subtraction Using an Omnidirectional Camera. Acta Sci. Nat. Univ. Pekinensis 2004, 40, 458–464. [Google Scholar]
  127. Hou, H.J.; Bai, J.; Yang, G.G. Research on the expansion algorithm of two-dimensional planar imaging with panoramic annular lens. Acta Photonica Sin. 2006, 11, 1686–1688. [Google Scholar]
  128. Ma, Z.L.; Wang, J.Z. Expansion and correcting method of catadioptric panoramic reconnaissance image. J. Missile Guid. 2010, 30, 173–176. [Google Scholar]
  129. Zhu, X.M.; Zhang, X.K. A new method of panoramic image expansion. Sci. Technol. Inf. 2014, 2, 19–20. [Google Scholar]
  130. Du, E.Y.; Zhang, N.; Li, Y.D. Fast Lane detection method based on Gabor filter. Infrared Laser Eng. 2018, 47, 304–311. [Google Scholar]
  131. Koyasu, H.; Miura, J.; Shirai, Y. Recognizing Moving Obstacles for Robot Navigation using Real-time Omnidirectional Stereo Vision. J. Robot. Mechatron. 2002, 14, 147–156. [Google Scholar]
  132. Xu, W. Research on Modeling and Rendering Technology of Dynamic Virtual Environment Based on Catadioptric Panorama. Ph.D. Thesis, National University of Defense Technology, Changsha, China, June 2007. [Google Scholar]
  133. Zhang, X.X. A Study of Single CCD Panoramic Imaging Systems Based on Optical Mirror. Master’s Thesis, Changchun University of Technology, Changchun, China, June 2013. [Google Scholar]
  134. Benosman, R.; Kang, S.; Faugeras, O. Panoramic Vision: Sensors, Theory, and Applications; Springer: Berlin/Heidelberg, Germany, 2001. [Google Scholar]
  135. Xiong, Z.H.; Xu, W.; Wang, W.; Zhang, M.J.; Liu, S.H. Eight direction symmetric reuse strategy to reduce the look-up table space of panoramic image table expansion method. Minicomput. Syst. 2007, 28, 1832–1836. [Google Scholar]
  136. Yu, H.Y.; Lu, M.; Yong, S.W. Design and implementation of radar PPI raster scanning display systems. J. Natl. Def. Univ. Sci. Technol. 2007, 29, 65–68. [Google Scholar]
  137. Wang, B.; Xiong, Z.H.; Cheng, G.; Chen, L.D.; Zhang, M.J. Real time development of catadioptric panoramic image based on FPGA. Comput. Appl. 2008, 28, 3135–3137. [Google Scholar]
  138. Chen, X.; Yang, D.Y.; Shi, X.F. Parallel optimization of omnidirectional image expansion. Comput. Eng. Des. 2010, 31, 4862–4865. [Google Scholar]
  139. Liang, S.T. FPGA Implementation of High-Resolution Panoramic Image Processing Algorithm. Master’s Thesis, Harbin University of Technology, Harbin, China, 2015. [Google Scholar]
  140. Zhu, W.; Han, J.F.; Zheng, Y.Y.; Tang, Y. Panoramic video multi-target real-time detection based on DSP. Optoelectron. Eng. 2014, 5, 68–76. [Google Scholar]
  141. Jaramillo, C.; Valenti, R.G.; Guo, L.; Xiao, J. Design and Analysis of a Single-Camera Omnistereo Sensor for Quadrotor Micro Aerial Vehicles (MAVs). Sensors 2016, 16, 217. [Google Scholar]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
