Article

Research on Improved Differential Evolution Particle Swarm Hybrid Optimization Method and Its Application in Camera Calibration

1 School of Automation and Information Engineering, Xi’an University of Technology, Xi’an 710048, China
2 Autonomous Systems and Intelligent Control International Joint Research Center, Xi’an Technological University, Xi’an 710021, China
3 Testing Technology Institute, China Flight Test Academy, Xi’an 710021, China
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(6), 870; https://doi.org/10.3390/math12060870
Submission received: 23 February 2024 / Revised: 13 March 2024 / Accepted: 13 March 2024 / Published: 15 March 2024
(This article belongs to the Special Issue Advance in Control Theory and Optimization)

Abstract: The calibration of cameras plays a critical role in close-range photogrammetry because the precision of calibration has a direct effect on the quality of the results. When used with Zhang’s calibration method to process captured images, traditional swarm intelligence algorithms such as genetic algorithms and particle swarm optimization frequently become trapped in local optima and converge slowly. This study presents an enhanced hybrid optimization approach that combines the principles of differential evolution and particle swarm optimization and applies it to camera calibration. First, we establish a measurement model for the camera in close-range photogrammetry and determine its interior orientation parameters. Then, using these parameters as initial values, we perform global optimization and iteration with the improved hybrid optimization algorithm. The effectiveness of the proposed approach is validated through simulation and comparative experiments. Compared with alternative approaches, the proposed algorithm improves both calibration accuracy and convergence speed, and it effectively addresses the tendency of other algorithms to become trapped in local optima due to image distortion. These findings provide theoretical support for practical engineering applications in the field of control theory and optimization.

1. Introduction

Photogrammetric technology is an important measurement technique that can be applied to rotating machinery: cameras capture images of the targets inside the machinery, and by analyzing the captured images, the displacement and structure of those targets can be measured.
Firstly, in the design and manufacturing of rotating machinery, photogrammetric technology can be employed to measure the position and shape of internal components, as well as dynamic behavioral parameters during operation. Secondly, in maintenance and monitoring, it can be used to track the position, deformation, and vibration of rotating components; through real-time monitoring, faults and defects can be promptly detected for repair and adjustment, enhancing the machinery’s reliability and operational safety. Finally, camera calibration is indispensable to these photogrammetric applications: its precision directly determines the accuracy and reliability of the measurements, which in turn influence the effectiveness and benefits of the design, manufacturing, and maintenance of rotating machinery.
Camera calibration, serving as the foundation of photogrammetry, is a technical process carried out to enable a camera to accurately capture objects in the real world and map them onto images. The implementation of camera calibration relies on the determination of specific camera parameters, encompassing focal length, principal point position, distortion, as well as external parameters like the camera’s spatial position and orientation. This facilitates precise measurements of objects in the real world and their projection onto images through photogrammetry.
Camera calibration methods each have their own characteristics. Commonly used methods include the Direct Linear Transformation (DLT), Tsai’s two-step calibration, and Zhang’s calibration. The DLT method establishes a geometric imaging model for the camera and solves the model parameters directly using linear equations; however, it does not consider distortion and is of limited practical use [1]. Tsai’s two-step calibration combines the DLT method with nonlinear optimization, improving calibration accuracy over traditional methods, but it is complex and cannot meet industrial requirements [2]. Zhang’s calibration method is based on planar chessboard patterns and removes the need for the highly accurate calibration objects required by traditional methods, requiring only a printed chessboard pattern for experimentation [3]. However, Zhang’s method suffers from inaccurate calibration results and excessive run time when applied to practical engineering problems. Therefore, this paper proposes improvements to Zhang’s method, aiming at a faster and more accurate calibration approach.
To achieve better calibration results, previous studies have built on the traditional Zhang’s calibration method. In 2020, Rui, Z. integrated feedforward artificial neural networks into the traditional calibration method to correct model errors, resulting in a twofold improvement in accuracy; however, the paper did not account for issues such as image blurring and lack of clarity [4]. In 2009, Sarkka, S. proposed a simple and adaptable calibration method utilizing neural networks to tackle the challenging calibration problem that arises when the object plane is nearly parallel to the image plane; however, the paper did not consider the camera’s nonlinear distortion and imposed stringent data prerequisites [5]. In 2020, Xu, H. introduced a neural network structure for binocular stereo vision camera calibration but did not specify the dataset or application methods [6]. In 2001, Liang, Y. introduced a new method for monocular camera calibration by combining the Harris corner detection algorithm with artificial neural networks, introducing camera-to-template angles into the calibration process; however, the paper did not detail specific application scenarios and usage restrictions [7]. In 2016, Jiang, W. et al. optimized Zhang’s calibration method using a combination of genetic and particle swarm algorithms, ultimately finding that combining these algorithms can improve calibration accuracy; however, the experiment did not address the algorithm’s susceptibility to local optima [8]. In 2019, Liu, J.Y. utilized the Structure-from-Motion (SfM) principle to calibrate cameras internally and externally based on the paired relationships between cameras, but the impact of radial distortion on the algorithm remains to be verified in practice [9]. In 2010, Huang, D. applied Back-Propagation Neural Networks (BPNN) to calibrate Kinect depth cameras, using target corner information as training data and establishing an error compensation model based on error symmetry to reduce depth measurement errors; however, the selection of neural network parameters relied on empirical knowledge and could only be applied in specific circumstances [10]. In 2019, Wang, Z. implemented camera calibration using multiple directional images captured by a thermal infrared camera based on multi-view theory [11]. Such methods suffer from slow calibration convergence and susceptibility to local optima, primarily because many parameters must be solved simultaneously under high-dimensional, nonlinear, and sub-pixel-level precision requirements. Addressing these issues is crucial for the accuracy and efficiency of camera calibration: improving convergence speed and avoiding local optima reduce calibration time, enhance calibration accuracy, and establish a foundation for subsequent applications. In response to this challenge, this paper introduces the concept of differential evolution into the optimization, puts forward an enhanced differential evolution particle swarm hybrid optimization algorithm, and applies it to improve Zhang’s calibration method.
This paper is organized as follows: Section 2 establishes the camera imaging and distortion models. Section 3 investigates methods for camera calibration. Section 4 integrates the concept of differential evolution with particle swarm optimization, introducing an enhanced hybrid optimization algorithm derived from differential evolution. Section 5 conducts experiments, simulating various parameter optimization techniques and comparing the outcomes. Section 6 concludes the paper.
The primary novelty of this paper is the enhancement of the differential evolution algorithm, particularly in its parameter selection, enabling the algorithm to search for the optimal population over the entire range. When integrated with the particle swarm algorithm to search for the optimal individual within that population, it yields superior calibration parameters and improves calibration accuracy.

2. Camera Imaging Model

The camera’s imaging process operates on the principle of pinhole imaging, which delineates the projection correlation between the imaging plane and the target. Under ideal circumstances, the association between objects and images adheres to a linear model governed by the principles of triangulation. However, in practical measurement processes, the influence of various external factors can lead to nonlinear distortions in camera imaging. Therefore, to more accurately model the camera’s imaging process, it is necessary to consider the distortions introduced by the camera and adopt a nonlinear camera imaging model.

2.1. Coordinate System and Its Transformation Relation

(1)
Coordinate systems
The coordinate systems involved in camera imaging are depicted in Figure 1.
  • World coordinate system $O_W X_W Y_W Z_W$: describes the position of the measured object in the three-dimensional world. The origin can be chosen according to the specific requirements; the unit is meters (m).
  • Camera coordinate system $O_C X_C Y_C Z_C$: the origin is at the optical center, the $X_C$-axis and $Y_C$-axis are parallel to the two edges of the image plane, and the $Z_C$-axis coincides with the optical axis; the unit is meters (m).
  • Pixel coordinate system $O_{uv}uv$: the upper left corner of the image plane is designated as the origin; the $u$-axis extends horizontally from left to right and the $v$-axis vertically from top to bottom; the unit is pixels.
  • Image coordinate system $oxy$: its origin is the central point of the image plane, and the $x$-axis and $y$-axis are parallel to the $u$-axis and $v$-axis of the pixel coordinate system, respectively; the unit is millimeters.
(2)
Coordinate system transformation relationship
The primary focus of the camera imaging model is the mapping between the pixel coordinates of an image point $p$ and the corresponding coordinates in three-dimensional space. The camera imaging procedure operates on the principle of pinhole imaging, depicted in Figure 2.
The point $P$ in the figure is a point in the world coordinate system, with coordinates $(X_W, Y_W, Z_W)$; in the camera coordinate system, its coordinates are $(X_C, Y_C, Z_C)$. The corresponding image point $p$ has coordinates $(u, v)$ in the pixel coordinate system and $(x, y)$ in the image coordinate system. The focal length of the camera, $f = \|oO_C\|$, is the distance between $o$ and $O_C$ [12]. The projection from camera coordinates to image coordinates follows from the principles of triangle similarity.
$$\frac{BC}{Ao} = \frac{CO_C}{oO_C} = \frac{PB}{pA} = \frac{X_C}{x} = \frac{Y_C}{y} = \frac{Z_C}{f} \tag{1}$$
According to Formula (1):
$$x = f\frac{X_C}{Z_C}, \quad y = f\frac{Y_C}{Z_C} \tag{2}$$
According to Formula (2):
$$Z_C \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} X_C \\ Y_C \\ Z_C \\ 1 \end{bmatrix} \tag{3}$$
Figure 3 exhibits the process of converting camera coordinates into world coordinates.
The conversion from world coordinates to camera coordinates involves only rotation and translation operations and is considered a rigid transformation. The equation for coordinate transformation is as follows:
$$\begin{bmatrix} X_C \\ Y_C \\ Z_C \\ 1 \end{bmatrix} = \begin{bmatrix} R & T \\ 0 & 1 \end{bmatrix} \begin{bmatrix} X_W \\ Y_W \\ Z_W \\ 1 \end{bmatrix} \tag{4}$$
The rotation matrix $R$ is a $3 \times 3$ matrix, and the translation vector $T$ is a $3 \times 1$ vector. The homogeneous coordinates of the camera coordinate system are denoted $O_C X_C Y_C Z_C$, and those of the world coordinate system $O_W X_W Y_W Z_W$. During the coordinate transformation, rotation is performed about the $z$-axis by an angle $\theta$ and about the $x$- and $y$-axes by angles $\alpha$ and $\beta$, respectively, giving the composite rotation matrix $R = R_1 R_2 R_3$.
$$R = \begin{bmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} \cos\beta & 0 & \sin\beta \\ 0 & 1 & 0 \\ -\sin\beta & 0 & \cos\beta \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\alpha & -\sin\alpha \\ 0 & \sin\alpha & \cos\alpha \end{bmatrix} \tag{5}$$
$$\begin{bmatrix} X_C \\ Y_C \\ Z_C \end{bmatrix} = R \begin{bmatrix} X_W \\ Y_W \\ Z_W \end{bmatrix} + T \tag{6}$$
The camera captures two-dimensional images, and the output consists of pictures where pixels are arranged based on their pixel values. As a result, the pixel coordinate system is derived by discretizing the coordinate values and shifting the origin to the center [13].
Figure 4 illustrates the pixel coordinate system $O_{uv}uv$ and the image coordinate system $oxy$. Point $p$ is represented as $(u, v)$ in the pixel coordinate system, and the transformation relationship is expressed as follows:
$$u = \frac{x}{d_x} + u_0, \quad v = \frac{y}{d_y} + v_0 \tag{7}$$
According to the above formula:
$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} \frac{1}{d_x} & 0 & u_0 \\ 0 & \frac{1}{d_y} & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} \tag{8}$$
In the given equation, $d_x$ and $d_y$ denote the physical dimensions of a pixel along the $x$ and $y$ directions, and $o(u_0, v_0)$ is the offset of the image origin in the pixel plane.
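As a concrete illustration of the chain from Equations (4)–(8), the following minimal Python sketch (NumPy only; the numeric values are hypothetical, not taken from the experiments in this paper) projects a world point into pixel coordinates under the ideal pinhole model:

```python
import numpy as np

def project_point(Pw, R, T, fx, fy, u0, v0):
    """Project a 3-D world point to pixel coordinates with the ideal
    (distortion-free) pinhole model.

    Pw     : (3,) world coordinates (X_W, Y_W, Z_W), meters
    R, T   : 3x3 rotation matrix and (3,) translation vector (Eq. 6)
    fx, fy : focal lengths in pixel units, i.e., f/d_x and f/d_y (Eqs. 2, 7)
    u0, v0 : principal point, pixels
    """
    Pc = R @ Pw + T              # world -> camera coordinates (Eq. 6)
    x = Pc[0] / Pc[2]            # perspective division (Eq. 2, f folded into fx, fy)
    y = Pc[1] / Pc[2]
    u = fx * x + u0              # image plane -> pixel coordinates (Eq. 7)
    v = fy * y + v0
    return np.array([u, v])

# Hypothetical numbers, purely for illustration
R = np.eye(3)
T = np.array([0.0, 0.0, 2.0])    # camera 2 m in front of the target
print(project_point(np.array([0.1, 0.05, 0.0]), R, T, 1200.0, 1200.0, 640.0, 480.0))
```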

2.2. Camera Imaging Model

(1)
Linear camera model
In the ideal scenario where the distortion introduced by the imaging process is neglected, the camera follows a linear model whose intrinsic and extrinsic parameter matrices can be solved from the image alone [14]. This is shown in the following equation:
$$Z_C \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & u_0 & 0 \\ 0 & f_y & v_0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} R & T \\ 0 & 1 \end{bmatrix} \begin{bmatrix} X_W \\ Y_W \\ Z_W \\ 1 \end{bmatrix} \tag{9}$$
The internal parameter matrix is given by Equation (10), and the external parameter matrix by Equation (11):
$$\begin{bmatrix} f_x & 0 & u_0 & 0 \\ 0 & f_y & v_0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \tag{10}$$
$$\begin{bmatrix} R & T \\ 0 & 1 \end{bmatrix} \tag{11}$$
(2)
Nonlinear distortion model
Analysis of the linear camera model shows that the object–image relationship would be linear only in the ideal case; practical measurements never achieve this linearity, and errors are inevitable [15]. Due to manufacturing process variations, the camera lens introduces distortions during imaging, producing nonlinear distortion [16]. Nonlinear distortion can be further categorized into radial and tangential distortion, which cause deviations between the ideal and actual projected points [17]. Assume a point $P(X_W, Y_W, Z_W)$ in the world coordinate system $O_W X_W Y_W Z_W$, whose theoretical image coordinate is $p(x, y)$. Due to the nonlinear distortions introduced by real-world imaging, the actual projected point deviates from this position.
The deviation between the theoretical point $p(x, y)$ and the actual point $p'(x', y')$ is expressed as follows:
$$x' = x + \delta_x, \quad y' = y + \delta_y \tag{12}$$
Here, $\delta_x$ denotes the nonlinear distortion offset in the $x$ direction and $\delta_y$ the offset in the $y$ direction. For radial distortion, the coefficients $k_1, k_2, k_3$ are employed; the leading terms of the Taylor expansion are:
$$x' = x(1 + k_1 r^2 + k_2 r^4 + k_3 r^6), \quad y' = y(1 + k_1 r^2 + k_2 r^4 + k_3 r^6) \tag{13}$$
where $k_1, k_2, k_3$ are the radial distortion coefficients and $r^2 = x^2 + y^2$.
To ensure comprehensive distortion analysis, tangential distortion must also be considered. This type of distortion arises mainly from the offset between the principal point of the optical system and the geometric image center. The changes caused by tangential distortion are smaller in magnitude than those of radial distortion. The expression for tangential distortion is as follows:
$$x' = x + \left[ 2p_1 xy + p_2 (r^2 + 2x^2) \right], \quad y' = y + \left[ p_1 (r^2 + 2y^2) + 2p_2 xy \right] \tag{14}$$
where $p_1, p_2$ are the eccentric (tangential) distortion coefficients and $r^2 = x^2 + y^2$.
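The combined distortion model of Equations (13) and (14) can be condensed into a short function. The sketch below is a minimal Python rendering of those two formulas applied to normalized image coordinates; it is an illustration, not the authors’ implementation:

```python
import numpy as np

def distort(x, y, k1, k2, k3, p1, p2):
    """Apply the radial (Eq. 13) and tangential (Eq. 14) distortion model
    to ideal normalized image coordinates (x, y)."""
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2**2 + k3 * r2**3     # radial factor, Eq. (13)
    xd = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)   # Eq. (14)
    yd = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return xd, yd
```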

2.3. Distortion Correction

Lens distortion is actually a collective term for inherent perspective distortions in optical lenses [18]. Generally, there are three types of camera distortions:
(1)
Pincushion distortion: The magnification rate in the peripheral regions of the field of view is much larger than that near the optical axis center, commonly found in telephoto lenses [19].
(2)
Barrel distortion: In contrast to pincushion distortion, the magnification rate near the optical axis center is much larger than that in the peripheral regions [20].
(3)
Linear distortion: When the optical axis is not orthogonal to the plane of the object being photographed, lines on the far side that should be parallel to those on the near side converge at an angle, producing distortion. This distortion is essentially a perspective transformation: at certain angles, any lens will produce similar distortion [21].
The types of distortions in a camera are as follows (Figure 5, Figure 6 and Figure 7):
The aforementioned distortions were taken into account during the calibration process [22]. Following the transformation Formulas (13) and (14), undistorted calibration results can be obtained using the following equation.
$$\begin{bmatrix} x_u \\ y_u \end{bmatrix} = \left[ 1 + k_1 (x_d^2 + y_d^2) + k_2 (x_d^2 + y_d^2)^2 + k_3 (x_d^2 + y_d^2)^3 \right] \begin{bmatrix} x_d \\ y_d \end{bmatrix} + \begin{bmatrix} 2p_1 x_d y_d + p_2 (3x_d^2 + y_d^2) \\ p_1 (x_d^2 + 3y_d^2) + 2p_2 x_d y_d \end{bmatrix}$$
Here, a nonlinear model is introduced to account for distortion in camera calibration. In real-world scenarios, the calibration of distortion serves as an effective means to minimize the influence of distortion, thus safeguarding the precision of the calibration outcomes.
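Because Equations (13) and (14) map ideal to distorted coordinates, obtaining undistorted coordinates requires inverting the model. One standard technique, shown in the sketch below, is a fixed-point iteration that starts from the distorted point and repeatedly removes the distortion estimated at the current guess; this is a common approach (used, for instance, inside OpenCV’s point undistortion) offered here as an assumption-laden illustration rather than the specific method of this paper:

```python
def undistort(xd, yd, k1, k2, k3, p1, p2, iters=5):
    """Invert the distortion model of Eqs. (13)-(14) by fixed-point iteration.
    A handful of iterations usually suffices at moderate distortion levels."""
    x, y = xd, yd                       # initial guess: the distorted point itself
    for _ in range(iters):
        r2 = x * x + y * y
        radial = 1.0 + k1 * r2 + k2 * r2**2 + k3 * r2**3
        dx = 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)   # tangential offsets
        dy = p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
        x = (xd - dx) / radial          # peel off the distortion estimated so far
        y = (yd - dy) / radial
    return x, y
```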

3. Camera Calibration

3.1. Camera Calibration Algorithm

After analyzing the coordinate transformation relationship in photogrammetric measurement and the previously delineated camera distortion model, the calibration of the camera is conducted [23].
First, construct the homography matrix to establish the relationship between world coordinates and pixel coordinates; this matrix describes the homography between target points in the world coordinate system and their image points. Subsequently, solve for initial values of the camera’s intrinsic and extrinsic parameters using orthogonality constraints. Finally, apply an optimization algorithm to obtain reliable intrinsic and extrinsic parameter matrices [24]. According to the camera imaging model and the transformation between pixel and world coordinates, the imaging model is expressed as follows:
$$s\,\bar{m} = A[R\ \ T]\bar{M} \tag{15}$$
In Equation (15), $\bar{m}$ denotes the pixel coordinates of the image point, $\bar{M}$ the world coordinates, $A$ the intrinsic matrix, $[R\ T]$ the extrinsic matrix, and $s$ the scale factor. In homogeneous coordinates, Equation (15) becomes:
$$s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = A \left[ r_1\ r_2\ r_3\ t \right] \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix} \tag{16}$$
Then, the internal parameters of the camera can be expressed as:
$$A = \begin{bmatrix} f_u & \gamma & u_0 \\ 0 & f_v & v_0 \\ 0 & 0 & 1 \end{bmatrix} \tag{17}$$
Since the scale factor $s$ does not alter the homogeneous coordinate values, and in Zhang’s calibration method the calibration target is planar, the world coordinate system can be chosen so that the target lies in the plane $Z = 0$. Consequently, Equation (16) can be expressed as follows:
$$s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \lambda A \left[ r_1\ r_2\ r_3\ t \right] \begin{bmatrix} X \\ Y \\ 0 \\ 1 \end{bmatrix} = \lambda A \left[ r_1\ r_2\ t \right] \begin{bmatrix} X \\ Y \\ 1 \end{bmatrix} \tag{18}$$
Letting $\lambda = 1/s$ and denoting the homography matrix by $H$, we set $H = \lambda A[r_1\ r_2\ t]$. Consequently, the camera model can be represented by Equation (19):
$$s\,\bar{m} = H\bar{M} \tag{19}$$
Let $H = [h_1\ h_2\ h_3] = \lambda A[r_1\ r_2\ t]$; the rotation vectors are then expressed through the homography matrix $H$ as follows:
$$r_1 = \frac{1}{\lambda} A^{-1} h_1, \quad r_2 = \frac{1}{\lambda} A^{-1} h_2 \tag{20}$$
According to orthogonality, the constraints on the rotation vectors are expressed as:
$$\|r_1\|^2 = \|r_2\|^2 = \|r_3\|^2 = 1, \quad r_1^T r_2 = r_1^T r_3 = r_2^T r_3 = 0 \tag{21}$$
$$r_1^T r_2 = 0, \quad \|r_1\| = \|r_2\| = 1 \tag{22}$$
Substituting Formula (20) into Formula (22), we obtain:
$$h_1^T A^{-T} A^{-1} h_2 = 0, \quad h_1^T A^{-T} A^{-1} h_1 = h_2^T A^{-T} A^{-1} h_2 \tag{23}$$
Each feature point in the target image yields two equations. By selecting four points from the target image, no three of which are collinear, the matrix $H$ can be computed, with the following expression:
$$s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & h_{33} \end{bmatrix} \begin{bmatrix} X \\ Y \\ 1 \end{bmatrix} \tag{24}$$
The expression for Equation (24) can be transformed into a set of equations as follows:
$$\begin{aligned} uXh_{31} + uYh_{32} + uh_{33} &= h_{11}X + h_{12}Y + h_{13} \\ vXh_{31} + vYh_{32} + vh_{33} &= h_{21}X + h_{22}Y + h_{23} \end{aligned} \tag{25}$$
Expressing Equation (25) in matrix form, with $h = [h_{11}\ h_{12}\ h_{13}\ h_{21}\ h_{22}\ h_{23}\ h_{31}\ h_{32}\ h_{33}]^T$, yields the following matrix expression:
$$\begin{bmatrix} X & Y & 1 & 0 & 0 & 0 & -uX & -uY & -u \\ 0 & 0 & 0 & X & Y & 1 & -vX & -vY & -v \end{bmatrix} h = 0 \tag{26}$$
By applying the orthogonality conditions stated in Equation (23), the homography matrices can be used to determine the camera’s intrinsic and extrinsic parameters. Since $A^{-T}A^{-1}$ appears in Equation (23), we define the matrix $B = A^{-T}A^{-1}$, whose expression is as follows:
$$B = A^{-T} A^{-1} = \begin{bmatrix} B_{11} & B_{12} & B_{13} \\ B_{21} & B_{22} & B_{23} \\ B_{31} & B_{32} & B_{33} \end{bmatrix} \tag{27}$$
Substituting Equation (17) into Equation (27) yields the following expression:
$$B = A^{-T} A^{-1} = \begin{bmatrix} \frac{1}{f_u^2} & -\frac{\gamma}{f_u^2 f_v} & \frac{v_0 \gamma - u_0 f_v}{f_u^2 f_v} \\ -\frac{\gamma}{f_u^2 f_v} & \frac{\gamma^2}{f_u^2 f_v^2} + \frac{1}{f_v^2} & -\frac{\gamma (v_0 \gamma - u_0 f_v)}{f_u^2 f_v^2} - \frac{v_0}{f_v^2} \\ \frac{v_0 \gamma - u_0 f_v}{f_u^2 f_v} & -\frac{\gamma (v_0 \gamma - u_0 f_v)}{f_u^2 f_v^2} - \frac{v_0}{f_v^2} & \frac{(v_0 \gamma - u_0 f_v)^2}{f_u^2 f_v^2} + \frac{v_0^2}{f_v^2} + 1 \end{bmatrix}$$
The matrix $B$ is symmetric and can therefore be represented by the six-dimensional vector of Equation (28):
$$b = \begin{bmatrix} B_{11} & B_{12} & B_{22} & B_{13} & B_{23} & B_{33} \end{bmatrix}^T \tag{28}$$
According to the orthogonality constraint $h_i^T B h_j = v_{ij}^T b$, we obtain:
$$v_{ij} = \begin{bmatrix} h_{i1}h_{j1} & h_{i1}h_{j2} + h_{i2}h_{j1} & h_{i2}h_{j2} & h_{i3}h_{j1} + h_{i1}h_{j3} & h_{i3}h_{j2} + h_{i2}h_{j3} & h_{i3}h_{j3} \end{bmatrix}^T$$
The orthogonality constraint thus yields $v_{12}^T b = 0$. Similarly, the norm constraint gives $(v_{11} - v_{22})^T b = 0$. These two constraints can be expressed in matrix form as follows:
$$\begin{bmatrix} v_{12}^T \\ (v_{11} - v_{22})^T \end{bmatrix} b = 0 \tag{29}$$
During the camera calibration process, each image yields a homography matrix $H$, and each such matrix contributes the system of equations in (29). According to Equation (28), $b$ has six unknowns; thus, at least six equations are needed to solve for it. In practical experiments, redundant data are usually available, allowing the matrix $B$ to be computed. Solving for $B$ by the Cholesky (square-root) method and taking its inverse gives the camera’s intrinsic parameters, expressed in Equation (30).
$$\begin{aligned} v_0 &= \frac{B_{12} B_{13} - B_{11} B_{23}}{B_{11} B_{22} - B_{12}^2} \\ \lambda &= B_{33} - \frac{B_{13}^2 + v_0 (B_{12} B_{13} - B_{11} B_{23})}{B_{11}} \\ f_u &= \sqrt{\frac{\lambda}{B_{11}}} \\ f_v &= \sqrt{\frac{\lambda B_{11}}{B_{11} B_{22} - B_{12}^2}} \\ \gamma &= -\frac{B_{12} f_u^2 f_v}{\lambda} \\ u_0 &= \frac{\gamma v_0}{f_v} - \frac{B_{13} f_u^2}{\lambda} \end{aligned} \tag{30}$$
Then, according to $H = [h_1\ h_2\ h_3] = \lambda A[r_1\ r_2\ t]$, the external parameters are obtained, with the result expressed as Equation (31):
$$r_1 = \lambda A^{-1} h_1, \quad r_2 = \lambda A^{-1} h_2, \quad r_3 = r_1 \times r_2, \quad t = \lambda A^{-1} h_3, \quad \lambda = \frac{1}{\|A^{-1} h_1\|} = \frac{1}{\|A^{-1} h_2\|} \tag{31}$$
Solving Equations (30) and (31) allows for the determination of both the camera’s internal and external parameters.
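To make the constraint-stacking procedure of Equations (28)–(30) concrete, the following Python sketch assembles $v_{ij}$ from each homography, solves the stacked system $Vb = 0$ by SVD, and recovers the intrinsics with Zhang’s closed-form expressions. It is a minimal illustration (NumPy assumed; the homography estimation itself is omitted), not the authors’ code:

```python
import numpy as np

def v_ij(H, i, j):
    """Build the 6-vector of the v_ij expression from homography columns (1-indexed)."""
    hi, hj = H[:, i - 1], H[:, j - 1]
    return np.array([hi[0]*hj[0],
                     hi[0]*hj[1] + hi[1]*hj[0],
                     hi[1]*hj[1],
                     hi[2]*hj[0] + hi[0]*hj[2],
                     hi[2]*hj[1] + hi[1]*hj[2],
                     hi[2]*hj[2]])

def intrinsics_from_homographies(Hs):
    """Stack the two constraints of Eq. (29) per homography and take b as the
    right singular vector of the smallest singular value, then apply Eq. (30)."""
    V = []
    for H in Hs:
        V.append(v_ij(H, 1, 2))
        V.append(v_ij(H, 1, 1) - v_ij(H, 2, 2))
    b = np.linalg.svd(np.asarray(V))[2][-1]       # B11 B12 B22 B13 B23 B33
    B11, B12, B22, B13, B23, B33 = b
    v0 = (B12*B13 - B11*B23) / (B11*B22 - B12**2)
    lam = B33 - (B13**2 + v0*(B12*B13 - B11*B23)) / B11
    fu = np.sqrt(lam / B11)                       # ratios are invariant to the sign of b
    fv = np.sqrt(lam * B11 / (B11*B22 - B12**2))
    gamma = -B12 * fu**2 * fv / lam
    u0 = gamma * v0 / fv - B13 * fu**2 / lam
    return np.array([[fu, gamma, u0], [0.0, fv, v0], [0.0, 0.0, 1.0]])
```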

3.2. Binocular Stereo Calibration Method

Based on the camera imaging model, Equation (32) can be derived by denoting the rotation and translation of the left and right cameras of a binocular vision system as $R_1, T_1$ and $R_2, T_2$, respectively. A point $P$ has world coordinates $P_W$, and its coordinates in the left and right camera coordinate systems are $p_1$ and $p_2$, respectively.
$$p_1 = R_1 P_W + T_1, \quad p_2 = R_2 P_W + T_2 \tag{32}$$
Equation (33) is obtained by eliminating $P_W$ from the above equations:
$$p_2 = R_2 R_1^{-1} p_1 + T_2 - R_2 R_1^{-1} T_1 = R p_1 + T \tag{33}$$
Equation (33) defines R as the rotation matrix relating the two cameras and T as the translation vector representing the spatial relationship between the cameras. Consequently, the positional correlation between the left and right cameras is articulated through Equation (34).
$$R = R_2 R_1^{-1}, \quad T = T_2 - R_2 R_1^{-1} T_1 \tag{34}$$
Based on Equation (34), the individual calibration of the internal and external parameters for the left and right cameras enables the definition of their positional relationship. This process completes the stereo calibration for the binocular vision system. Subsequently, intelligent optimization algorithms are utilized to enhance the obtained internal and external parameters, as they may not be optimal.
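Equation (34) translates directly into code. The hypothetical Python helper below (rotation matrices and translation vectors as NumPy arrays) computes the relative pose of the binocular rig:

```python
import numpy as np

def stereo_extrinsics(R1, T1, R2, T2):
    """Relative pose between the two cameras from their individual extrinsics,
    per Eq. (34): p2 = R p1 + T."""
    R = R2 @ np.linalg.inv(R1)     # for rotation matrices, inv(R1) equals R1.T
    T = T2 - R @ T1
    return R, T
```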

4. Camera Parameter Optimization Algorithm

Zhang’s calibration method can yield unsatisfactory results when multiple images are used as input, performing poorly because of suboptimal initial values and noise-contaminated data. To tackle this issue, the linear model solution is used as the starting point for optimization, which improves the resilience of conventional optimization approaches [25]. However, such approaches require additional input constraints to obtain more accurate parameters, making the traditional calibration process more complex and impractical. Hence, this section uses intelligent optimization algorithms to optimize the method and address these limitations.
Assuming there exist n template images depicting planar surfaces, with each image containing m calibration points within the identical environment, the formulation of the objective function is presented as follows:
$$f_{obj} = \min \sum_{i=1}^{n} \sum_{j=1}^{m} \left\| \hat{p}_{ij} - p(M_A, k_1, k_2, k_3, p_1, p_2, R_i, T_i, P_j) \right\|^2 \tag{35}$$
In the objective function, $\hat{p}_{ij}$ denotes the pixel coordinates of the $j$-th calibration point in the $i$-th image, $k_1, k_2, k_3, p_1, p_2$ denote the distortion parameters, and $R_i, T_i$ denote the rotation and translation matrices corresponding to the $i$-th image. By using optimization algorithms to minimize the calibration objective of Equation (35), the optimal solution for the camera calibration parameters can be obtained.
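A minimal sketch of the objective in Equation (35) is given below; `project` stands in for the full imaging model of Section 2 and is a hypothetical callback, so this illustrates the cost structure rather than reproducing the authors’ implementation:

```python
import numpy as np

def reprojection_error(params, obj_points, img_points, project):
    """Objective of Eq. (35): total squared distance between detected corners
    and corners re-projected through the current parameter vector.

    params     : intrinsics, distortion, and per-image pose parameters
    obj_points : list of (m, 3) world corner arrays, one per image
    img_points : list of (m, 2) detected pixel arrays, one per image
    project    : function mapping (params, image index, world point) -> pixel
    """
    total = 0.0
    for i, (objs, imgs) in enumerate(zip(obj_points, img_points)):
        for Pj, p_hat in zip(objs, imgs):
            total += np.sum((p_hat - project(params, i, Pj)) ** 2)
    return total
```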

4.1. Honey Badger Algorithm

The Honey Badger Algorithm (HBA) presents a pioneering heuristic optimization approach inspired by the foraging behavior of honey badgers, which has devised an efficient search strategy for addressing mathematical optimization problems. The key principle revolves around the honey badger’s capacity to locate beehives through actions such as sniffing, digging, and following honeyguide birds, categorized as the excavation mode and the honey mode, respectively. During the excavation mode, the honey badger uses its keen sense of smell to estimate the beehive’s location and chooses a suitable spot for excavation upon nearing the hive. Conversely, in the honey mode, the honey badger directly relies on the guidance of the honeyguide bird to find the beehive, ultimately yielding the optimal outcome.
The optimization of camera parameters based on the honey badger algorithm involves establishing an objective function from the residuals between the actual image coordinates $(x, y)$ of the calibration points and the projected coordinates $(x', y')$ computed from the camera model.
$$f(X) = \sum_{i=1}^{N} \left[ (x_i - x_i')^2 + (y_i - y_i')^2 \right] \tag{36}$$
By utilizing the excavation mode and honey mode to minimize the value of f ( X ) , an optimal set of parameters is obtained, achieving the optimization of camera parameters.
Here, $X$ represents the parameter vector to be optimized, specifically $X = [f_x, f_y, C_x, C_y, k_1, k_2, k_3, p_1, p_2]$, with the parameter search range set as $[X_{up}, X_{down}]$, as shown below:
$$X_{up} = [f_x + 3,\ f_y + 3,\ C_x + 2,\ C_y + 2,\ k_1 + 0.1,\ k_2 + 0.02,\ k_3 + 0.002,\ p_1 + 2 \times 10^{-5},\ p_2 + 0.02]$$
$$X_{down} = [f_x - 3,\ f_y - 3,\ C_x - 2,\ C_y - 2,\ k_1 - 0.1,\ k_2 - 0.02,\ k_3 - 0.002,\ p_1 - 2 \times 10^{-5},\ p_2 - 0.02]$$
The population of honey badgers, of size $N$, optimizing $D$ parameters, can be expressed as:
$$\begin{bmatrix} x_{11} & x_{12} & x_{13} & \cdots & x_{1D} \\ x_{21} & x_{22} & x_{23} & \cdots & x_{2D} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ x_{N1} & x_{N2} & x_{N3} & \cdots & x_{ND} \end{bmatrix}$$
The position of the $i$-th honey badger is:
$$x_i = [x_{i1}, x_{i2}, x_{i3}, \ldots, x_{iD}]$$
The flowchart of the honey badger algorithm is as follows (Figure 8):
The algorithm steps are as follows:
Step 1: The initialization phase entails setting the population size N and determining the corresponding positions of the honey badgers.
$$x_i = lb_i + r_1 \times (ub_i - lb_i)$$
where $r_1$ is a random number in $[0, 1]$, and $lb_i$ and $ub_i$ are the lower and upper boundaries of the search domain.
Determine the maximum number of iterations $T$, as well as the parameters $C$ and $\beta$.
Step 2: Specify the intensity $I$. The intensity is linked to the prey’s concentration and the distance between the prey and the honey badger. $I_i$ denotes the olfactory strength of the prey: high olfactory strength produces fast movement, whereas low olfactory strength leads to slow movement.
$$I_i = r_2 \times \frac{S}{4 \pi d_i^2}$$
$$S = (x_i - x_{i+1})^2$$
$$d_i = x_{prey} - x_i$$
In this section, the symbol r 2 represents a random number between 0 and 1, S denotes the source strength or concentration strength, and d i represents the distance between the i-th honey badger and its prey.
Step 3: Update the density factor. The density factor $\alpha$ controls the dynamic randomization process, facilitating a smooth transition from exploration to exploitation.
$$\alpha = C \times \exp\left( -\frac{t}{t_{\max}} \right) \tag{43}$$
Step 4: Escape from local optima. This step is aimed at escaping from local optima regions. The search algorithm employs a flag F to alter the search direction, facilitating an extensive exploration of the search space.
Step 5: Adjusting individual positions. The process of updating positions comprises two components: the “excavation phase” and the “honey collection phase”.
Excavation phase:
$$x_{new} = x_{prey} + F \times \beta \times I \times x_{prey} + F \times r_3 \times \alpha \times d_i \times \cos(2\pi r_4) \times \left[ 1 - \cos(2\pi r_5) \right]$$
In this context, $x_{new}$ signifies the updated position of the honey badger individual, while $x_{prey}$ denotes the prey’s position. Meanwhile, $\beta$ signifies the honey badger’s ability to obtain food, with $\beta \ge 1$ and a default of 6; $d_i$ denotes the distance between the $i$-th honey badger and the prey; and $r_3, r_4, r_5, r_6, r_7$ are random numbers between 0 and 1.
$$F = \begin{cases} 1, & \text{if } r_6 \le 0.5 \\ -1, & \text{otherwise} \end{cases} \tag{44}$$
Honey gathering stage:
$$x_{new} = x_{prey} + F \times r_7 \times \alpha \times d_i$$
$x_{new}$ represents the updated position of the honey badger individual, while $x_{prey}$ indicates the position of the prey, i.e., the globally optimal position. The variable $d_i$ represents the distance between the $i$-th honey badger and the prey. The value of $F$ is computed from Equation (44), and $\alpha$ is derived from Equation (43).
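The following Python sketch condenses Steps 1–5 of the HBA into a single loop. It follows the update rules above, but the phase-selection rule, the greedy replacement, and the small constant guarding division by zero are illustrative assumptions rather than details fixed by the paper:

```python
import numpy as np

def honey_badger(fobj, lb, ub, N=30, T=500, beta=6.0, C=2.0):
    """Minimal Honey Badger Algorithm sketch. fobj maps a parameter
    vector to a scalar fitness; lb/ub are per-dimension bound arrays."""
    D = len(lb)
    X = lb + np.random.rand(N, D) * (ub - lb)          # Step 1: initialization
    fit = np.array([fobj(x) for x in X])
    best = X[fit.argmin()].copy()                      # prey = global best
    for t in range(1, T + 1):
        alpha = C * np.exp(-t / T)                     # density factor, Eq. (43)
        for i in range(N):
            r = np.random.rand(7)
            F = 1.0 if r[5] <= 0.5 else -1.0           # direction flag, Eq. (44)
            di = best - X[i]                           # distance to prey
            S = np.sum((X[i] - X[(i + 1) % N]) ** 2)   # source strength
            I = r[1] * S / (4 * np.pi * np.sum(di**2) + 1e-12)  # smell intensity
            if np.random.rand() < 0.5:                 # digging phase (assumed 50/50 split)
                Xnew = best + F*beta*I*best + F*r[2]*alpha*di \
                       * np.cos(2*np.pi*r[3]) * (1 - np.cos(2*np.pi*r[4]))
            else:                                      # honey phase
                Xnew = best + F * r[6] * alpha * di
            Xnew = np.clip(Xnew, lb, ub)
            fnew = fobj(Xnew)
            if fnew < fit[i]:                          # keep the better position
                X[i], fit[i] = Xnew, fnew
        best = X[fit.argmin()].copy()
    return best, fit.min()
```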

4.2. Improved Differential Evolution Particle Swarm Algorithm

Considering the limitations of the honey badger algorithm, including premature convergence, low convergence accuracy, a tendency to diverge, and decreasing population diversity as iterations proceed, the particle swarm optimization algorithm is combined with the differential evolution algorithm to overcome these limitations. In the enhanced algorithm, differential evolution mutation and crossover operations are integrated into each iteration of the particle swarm algorithm. This preserves the diversity of the particle population and improves the selection of the optimal particle in each iteration, ultimately enhancing performance.

4.2.1. Principle of Differential Evolution Algorithm

The main components of the differential evolution (DE) algorithm include mutation, crossover, and selection operations. Within this algorithm, a diverse subset of individuals is randomly chosen to form the variance vector. Subsequently, an additional individual is selected and incorporated into the variance vector to produce an experimental individual. The crossover operation is then executed between the original individual and the respective experimental individual, perturbing the existing population and extending the exploration range. Ultimately, a selection process takes place between the original and offspring individuals, retaining the individuals that satisfy the criteria for the subsequent generation population.
(1)
The initialization of the population
A population of size $(NP, D)$ is randomly generated in the solution space, and each individual is assigned a random value within the predefined range:
$$X_i(G) = \{x_{i1}(G), x_{i2}(G), \ldots, x_{iD}(G)\}, \quad i = 1, 2, \ldots, NP \tag{45}$$
Here, $NP$ is the population size and $D$ the number of decision variables. Each individual of the initial population is generated in the range $[x_{\min}, x_{\max}]$ according to Equation (46).
$$x_{iD}(0) = x_{\min} + \mathrm{rand}(0, 1)\,(x_{\max} - x_{\min}) \tag{46}$$
In the above equation, $G$ indexes the generation, and $[x_{\min}, x_{\max}]$ represents the search space domain of the decision variables.
(2)
Mutation operation
In generation $G$, the mutation operation generates a mutation vector $V_i(G)$ for each individual $X_i(G)$ of the current population. The computation of the mutation vector depends on the chosen mutation strategy; five frequently employed strategies are listed below.
$$\begin{aligned} \text{DE/rand/1}: \quad & V_i(G) = X_{r1}(G) + F(X_{r2}(G) - X_{r3}(G)) \\ \text{DE/best/1}: \quad & V_i(G) = X_{best}(G) + F(X_{r1}(G) - X_{r2}(G)) \\ \text{DE/rand-to-best/1}: \quad & V_i(G) = X_i(G) + F(X_{best}(G) - X_i(G)) + F(X_{r1}(G) - X_{r2}(G)) \\ \text{DE/best/2}: \quad & V_i(G) = X_{best}(G) + F(X_{r1}(G) - X_{r2}(G)) + F(X_{r3}(G) - X_{r4}(G)) \\ \text{DE/rand/2}: \quad & V_i(G) = X_{r1}(G) + F(X_{r2}(G) - X_{r3}(G)) + F(X_{r4}(G) - X_{r5}(G)) \end{aligned}$$
In the above equations, $r_1, r_2, r_3, r_4, r_5$ are mutually distinct random integers in the range $[1, NP]$, all different from the index $i$. The scaling factor $F$ is a positive control parameter for the differential vector, and $X_{best}(G)$ is the individual with the best fitness value in generation $G$.
(3)
Crossover Operation
For each pair of target vector $X_i(G)$ and mutation vector $V_i(G)$, a trial vector $U_i(G) = (u_{i1}(G), u_{i2}(G), \ldots, u_{iD}(G))$ is generated through crossover. Differential evolution employs the binomial crossover defined in Equation (49):
$$u_{ij}(G) = \begin{cases} v_{ij}(G), & \text{if } \mathrm{rand}_j(0,1) \le CR \text{ or } j = j_{rand} \\ x_{ij}(G), & \text{otherwise} \end{cases} \quad j = 1, 2, \ldots, D \tag{49}$$
Among them, $CR$ is a specified constant crossover probability in $(0, 1)$, and $j_{rand}$ is a randomly selected integer in the range $[1, D]$.
(4)
Selection Operation
In the differential evolution method, a greedy selection rule is utilized to determine the offspring. This rule involves comparing the fitness values of the trial individuals with the target individuals and selecting the superior individuals to be inherited in the subsequent generation.
$$X_i(G+1) = \begin{cases} U_i(G), & \text{if } f(U_i(G)) \le f(X_i(G)) \\ X_i(G), & \text{otherwise} \end{cases} \tag{50}$$
Here, $f(\cdot)$ denotes the fitness value of the objective function for the target and trial individuals in generation $G$. These operations are repeated in each generation until the specified termination conditions are met.
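A compact DE/rand/1/bin sketch covering initialization, mutation, binomial crossover, and greedy selection is given below (plain Python/NumPy; the parameter defaults are illustrative, not the paper’s settings):

```python
import numpy as np

def differential_evolution(fobj, lb, ub, NP=40, Gmax=300, F=0.5, CR=0.9):
    """Minimal DE/rand/1/bin sketch following Eqs. (45)-(50)."""
    D = len(lb)
    X = lb + np.random.rand(NP, D) * (ub - lb)       # initialization, Eq. (46)
    fit = np.array([fobj(x) for x in X])
    for G in range(Gmax):
        for i in range(NP):
            r1, r2, r3 = np.random.choice([j for j in range(NP) if j != i],
                                          3, replace=False)
            V = X[r1] + F * (X[r2] - X[r3])          # DE/rand/1 mutation
            jrand = np.random.randint(D)
            mask = np.random.rand(D) <= CR
            mask[jrand] = True                       # binomial crossover, Eq. (49)
            U = np.clip(np.where(mask, V, X[i]), lb, ub)
            fU = fobj(U)
            if fU <= fit[i]:                         # greedy selection, Eq. (50)
                X[i], fit[i] = U, fU
    k = fit.argmin()
    return X[k], fit[k]
```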

4.2.2. Improved Differential Evolution Particle Swarm Hybrid Optimization Algorithm Design

In the differential evolution component, the population size is denoted by $N$, and each individual is a multidimensional vector, expressed as the target vector $X_i^t = (x_{i1}^t, x_{i2}^t, \ldots, x_{iD}^t)$ and the trial vector $V_i^t = (v_{i1}^t, v_{i2}^t, \ldots, v_{iD}^t)$. The population is initialized as $S = \{X_1, X_2, \ldots, X_N\}$, where the superscript $t$ denotes the generation. The mutation operation can therefore be improved as follows.
$$V_i^t = x_{best}^t + f(x_{r2}^t - x_{r3}^t) \tag{51}$$
$$V_i^t = x_{r1}^t + f(x_{r2}^t - x_{r3}^t) \tag{52}$$
$$V_i^t = x_i^t + f_1(x_{best}^t - x_i^t) + f_2(x_{r1}^t - x_{r2}^t) \tag{53}$$
$$V_i^t = x_{best}^t + f(x_{r1}^t + x_{r2}^t - x_{r3}^t - x_{r4}^t) \tag{54}$$
$$V_i^t = x_{r1}^t + f(x_{r2}^t + x_{r3}^t - x_{r4}^t - x_{r5}^t) \tag{55}$$
The equations above introduce $f$ as the mutation parameter, which controls the differential step, while $r_1, r_2, r_3, r_4, r_5$ are distinct random integers in the range $[1, N]$. The population iteration incorporates the mutation method to enhance population diversity, ensuring that mutation takes place after each iteration, and integrates crossover and selection operations to facilitate the selection of the optimal individual in each iteration.
Figure 9 depicts the flowchart of the enhanced differential evolution particle swarm optimization algorithm:
The algorithm implementation steps are as follows:
Step 1: Using the calibration algorithm, obtain the coordinates $(x, y)$ of the calibration points and the camera parameters $f_x, f_y, C_x, C_y, k_1, k_2, k_3, p_1, p_2$.
Step 2: A group of particles is initialized and distributed throughout the search space, each with flight velocity $V_i$ and position $X_i$. Set the population size $N$, the parameter search range, the maximum flight velocity $V_{up}$, the inertia weight coefficient $\omega$, the individual learning factor $c_1$, the social learning factor $c_2$, and other parameters. Define the parameter search range as $[X_{up}, X_{down}]$.
Step 3: An enhanced dynamic adjustment strategy modifies the algorithm’s weight $w$, mutation control parameter $f$, and crossover control parameter $CR$. With the maximum iteration count $\lambda_{\max}$, the current iteration count $\lambda$, and the upper and lower parameter limits specified in Step 2:
$$\delta(w, f, CR) = \delta_{\max} - \frac{\delta_{\max} - \delta_{\min}}{\lambda_{\max}} \lambda \tag{56}$$
Step 4: The fitness value of each particle is computed, and the fitness evaluation can be represented by Equation (57).
$$fitness = \min \sum_{i=1}^{m} \left[ (u_i - x_i)^2 + (v_i - y_i)^2 \right] \tag{57}$$
Step 5: The current individual best $P_i^t$ of each particle is updated, and the best individual extreme value is recorded as the current global optimum $P_g^t$.
Step 6: Each particle updates its flight velocity and position information. The update equations for flight velocity and position information in the d-th dimension are as follows:
$$v_{id}^{t+1} = \omega v_{id}^t + c_1 r_1 (p_{id}^t - x_{id}^t) + c_2 r_2 (p_{gd}^t - x_{id}^t) \tag{58}$$
$$x_{id}^{t+1} = x_{id}^t + v_{id}^{t+1} \tag{59}$$
where $i = 1, 2, \ldots, N$ and $d = 1, 2, \ldots, D$; $v_{id}^{t+1}$ and $x_{id}^{t+1}$ respectively denote the flight velocity and position of the $i$-th particle in generation $t+1$.
Step 7: Executing the crossover operation enhances population diversity and promotes the adaptation of exceptional individuals.
Step 8: The mutation operation is performed to generate excellent individuals, with a higher probability of mutation for individuals with lower fitness.
Step 9: Updating the entire population is based on the new fitness values, which involves the update of individual best and global best.
Step 10: Check whether the termination condition is met; if so, output the results. If not, return to Step 2 for additional iterations. A code sketch of this hybrid loop is given below.
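The sketch below strings Steps 1–10 together in Python. The linear decay ranges chosen for $w$, $f$, and $CR$ and the use of the best/1 mutation variant of Equation (51) are assumptions made for illustration; the paper’s MATLAB implementation may differ in these details:

```python
import numpy as np

def idepso(fobj, lb, ub, N=40, iters=500, c1=1.5, c2=1.5):
    """Hybrid loop: PSO update (Eqs. 58-59) followed by DE mutation/crossover
    each generation, with w, f, CR shrunk linearly per Eq. (56)."""
    D = len(lb)
    X = lb + np.random.rand(N, D) * (ub - lb)      # Step 2: initialize swarm
    V = np.zeros((N, D))
    fit = np.array([fobj(x) for x in X])
    P, pfit = X.copy(), fit.copy()                 # personal bests
    g = P[pfit.argmin()].copy()                    # global best
    for t in range(iters):
        s = 1.0 - t / iters                        # Step 3: linear decay factor
        w, f, CR = 0.4 + 0.5*s, 0.3 + 0.5*s, 0.3 + 0.6*s   # assumed ranges
        r1, r2 = np.random.rand(N, D), np.random.rand(N, D)
        V = w*V + c1*r1*(P - X) + c2*r2*(g - X)    # Step 6: velocity, Eq. (58)
        X = np.clip(X + V, lb, ub)                 # position, Eq. (59)
        for i in range(N):                         # Steps 7-8: DE crossover/mutation
            a, b, _ = np.random.choice([j for j in range(N) if j != i],
                                       3, replace=False)
            mutant = g + f * (X[a] - X[b])         # best/1 variant, Eq. (51)
            jrand = np.random.randint(D)
            mask = np.random.rand(D) <= CR
            mask[jrand] = True
            trial = np.clip(np.where(mask, mutant, X[i]), lb, ub)
            if fobj(trial) <= fobj(X[i]):          # greedy selection
                X[i] = trial
        fit = np.array([fobj(x) for x in X])       # Step 9: update bests
        better = fit < pfit
        P[better], pfit[better] = X[better], fit[better]
        g = P[pfit.argmin()].copy()
    return g, pfit.min()
```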

5. Experimental Comparison and Result Analysis

Using Zhang’s calibration method, the camera was calibrated to obtain the parameters necessary for rectifying the captured images. This process ensured the precise acquisition of pixel coordinates for the chessboard pattern captured by both the left and right cameras. A binocular stereo vision model was then employed to determine the world coordinates of the chessboard pattern. The obtained world coordinates were used as input, while the theoretical world coordinates were used as output. For data computation, a comparative analysis was conducted using different optimization algorithms, including genetic algorithm, particle swarm optimization algorithm, honey badger optimization algorithm, and an enhanced hybrid optimization algorithm that combines differential evolution and particle swarm optimization algorithm.

5.1. Procedure of Test

Step 1: Multiple calibration board images were captured using the experimental equipment. Subsequently, functions such as “findChessboardCorners”, “calibrateCamera”, and “stereoRectify” from the OpenCV library were invoked in C++ to solve for the camera’s intrinsic and extrinsic parameters.
Step 2: Multiple sets of captured images were selected. Using OpenCV, pixel coordinates were first determined, followed by the calculation of world coordinates. Figure 10 depicts the calibration board employed in the experiment, which contains a total of 40 corners. It was manufactured using photolithography techniques, with a fabrication error within 1 millimeter. Each chessboard square measures 80 mm × 80 mm, resulting in a total chessboard size of 800 mm × 600 mm.
Step 3: The collected calibration images were input into the camera calibration data processing module, resulting in initial data after calibration, including initial values for various types of distortion coefficients.
Step 4: Different optimization techniques were applied to optimize the initial calibration parameters and obtain the global optimum solution.
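For reference, a minimal OpenCV (Python) rendering of Steps 1–3 is sketched below. The 8 × 5 corner grid is an assumption consistent with the stated 40 corners, and the image file names are placeholders:

```python
import cv2
import numpy as np

pattern = (8, 5)          # assumed 8 x 5 inner-corner grid (8 * 5 = 40 corners)
square = 80.0             # square size in mm, per Section 5.1
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_points, img_points = [], []
for fname in ["left01.png", "left02.png"]:        # placeholder file names
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    ok, corners = cv2.findChessboardCorners(gray, pattern)
    if ok:
        # refine corner locations to sub-pixel accuracy
        corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1),
                                   (cv2.TERM_CRITERIA_EPS +
                                    cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

# intrinsics M, distortion D, and per-image extrinsics (rvecs, tvecs)
rms, M, D, rvecs, tvecs = cv2.calibrateCamera(obj_points, img_points,
                                              gray.shape[::-1], None, None)
print("RMS reprojection error:", rms)
```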

5.2. Results and Analysis

A binocular camera is used to take multiple photos, which are shown in Figure 10:
In the camera calibration experiments, the calibration board underwent movement within the field of view of the camera, assuming different positions and angles. This allowed for the capture of multiple sets of detection images for the purpose of calibration. Due to the continuous movement of the calibration board, image blurring may occur during image acquisition, which in turn can affect the accuracy of subsequent corner detection. Therefore, high-quality, non-blurred images were selected from the collected calibration data for the calibration experiments. The process of binocular calibration and corner detection is illustrated in Figure 11.
The camera calibration method detailed earlier was applied to derive the parameter matrices $M_1$ and $M_2$ for the left and right cameras, along with the corresponding distortion parameter matrices $D_1$ and $D_2$. A summary of the findings is given in Table 1, with all data rounded to five decimal places.
During this experiment, 15 sets of calibration board images were utilized. The intrinsic and extrinsic parameters of the camera, along with the distortion parameters obtained from the calibration process, were employed to project the corners of the calibration board onto the imaging plane. The calibration error was subsequently evaluated by comparing the pixel coordinates of these points with their actual coordinates on the calibration board. The tabulated results of the calibration errors are documented in Table 2.
As shown in Table 2, the overall average calibration error is within 0.25 pixels.
Figure 12 illustrates the projection of the calibration board’s corner coordinates onto the camera coordinate plane using the camera’s intrinsic and extrinsic parameters derived from reverse projection. Discrepancies between these projected coordinates and the actual corner coordinates in the original image were computed in both the x and y directions. The deviations, quantified in pixels, are denoted by red “o” for the calibration disparities of the left camera and blue “+” for those of the right camera.
After the calibration of both the left and right cameras, the stereo vision calibration can be conducted. The calibration outcomes for the binocular vision system are displayed in Table 3.
The rotation and translation matrices for stereo calibration, presented in Table 3, show that the rotation matrix is very close to the identity matrix. This can be attributed to the careful alignment of the left and right cameras during setup.
Following the procedure outlined in the preceding section, the MATLAB implementation of the optimization algorithm used 10 sets of calibration images and experimented with maximum iteration numbers of 200, 400, 600, 800, and 1000. To evaluate the calibration performance of the algorithm proposed in this chapter, assessments were carried out using the genetic algorithm, the particle swarm algorithm, the honey badger algorithm, and the improved differential evolution particle swarm algorithm. The evaluation criteria involved nine parameters, $f_x, f_y, c_x, c_y, k_1, k_2, k_3, p_1, p_2$, and their performance was compared, as presented in Table 4.
Based on the findings displayed in Table 4, the camera parameter calibration performed in this investigation meticulously accounted for both radial and tangential distortions. As a result, the influence of distortion and lens aberrations on the measurement outcomes during the camera calibration process was significantly mitigated. To conduct a more comprehensive analysis of the optimization test results, the positional deviations were assessed using the error range and root-mean-square error (RMSE) as indicators of accuracy. The calculation formulas for these two evaluation criteria are provided below:
$$E_r(k) = \left\| \hat{X}(k) - X(k) \right\|_2, \quad k = 1, 2, \ldots, N \tag{60}$$
$$RMSE = \sqrt{\frac{1}{N} \sum_{k=1}^{N} \left( \hat{X}(k) - X(k) \right)^2} \tag{61}$$
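A small helper corresponding to Equations (60) and (61), computing the per-sample error and a per-axis RMSE as reported in Table 5, might look like this (Python/NumPy sketch):

```python
import numpy as np

def error_metrics(X_hat, X):
    """Per-sample Euclidean error (Eq. 60) and per-axis RMSE (Eq. 61)
    between estimated positions X_hat and references X, both (N, 3)."""
    Er = np.linalg.norm(X_hat - X, axis=1)              # one error per sample
    rmse = np.sqrt(np.mean((X_hat - X) ** 2, axis=0))   # one RMSE per axis
    return Er, rmse
```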
The root-mean-square errors for position in each direction calibrated by the four optimization algorithms are presented in Table 5.
Result 1: The fitness curve for the optimization using genetic algorithm is shown in Figure 13, with a maximum iteration number of 455. The fitness value decreases in a stepwise manner as the iteration number increases and finally reaches zero after 455 iterations. The solution stabilizes thereafter and remains consistent.
Based on Figure 13 and Figure 14, it can be observed that the optimization of calibration parameters using genetic algorithm resulted in significant improvement in the corresponding coordinate errors. The error in the x-coordinate was reduced from approximately 0.3 to within 0.05 after optimization, while the error in the y-coordinate improved from around −0.5 to within ±0.07. The error in the z-coordinate was within 0.1 before parameter optimization.
Result 2: The fitness curve for the optimization using particle swarm algorithm is depicted in Figure 15. The algorithm terminated after 225 iterations, and the fitness value exhibited a stepwise decrease with the increase in iteration count. After 225 iterations, the solution approached zero and stabilized.
The error in the coordinates of the corresponding points before and after optimization can be obtained from Figure 16. Prior to optimizing the calibration parameters, the error in the x-direction of the corresponding points was approximately within 0.27, the y-direction error was within −0.65, and the z-direction error was within −0.12. After optimization using the particle swarm algorithm, the x-direction error was within 0.05, the y-direction error fluctuated within ±0.07, and the z-direction error was within 0.07. It can be noted that the particle swarm optimization method demonstrated a relatively better improvement in the y-direction error.
Result 3: The fitness curve after optimizing using the honey badger algorithm is depicted in Figure 17. The algorithm terminated after 122 iterations, and the fitness value progressively decreased in a stepped manner with increasing iteration count. After 122 iterations, the solution approached zero and reached stability. Based on the convergence of the curve, it is evident that this algorithm exhibits noticeably faster convergence speed and requires fewer iterations compared to the previous two optimization algorithms.
Figure 18 shows the coordinate error curve before and after optimizing the camera calibration parameters using the honey badger algorithm. Based on the error curve, it can be concluded that the honey badger optimization algorithm reduced the error in the x-direction from 0.56 before optimization to within 0.05 after optimization. Furthermore, the algorithm decreased the error in the y-direction from −0.66 to within 0.07 and in the z-direction from −0.14 to within ±0.06. These results indicate that the algorithm greatly improves the error.
Result 4: The fitness curve after optimization using the improved differential evolution particle swarm algorithm is shown in Figures 19 and 20. The curve fluctuates throughout the first 150 iterations before stabilizing around iteration 400, at which point the calibration optimization results become stable.
As shown in the above figure, the error curve of the experimental data point coordinates before and after optimizing the camera calibration parameters using the mixed optimization algorithm based on improved differential evolution and particle swarm can be observed. It is apparent that prior to using the mixed optimization algorithm, the error in the x-direction was approximately 0.7, which was reduced to within 0.005 after optimization. Additionally, the error in the y-direction decreased from −0.66 to around −0.07, and the error in the z-direction decreased from −0.17 to near 0. These experimental results indicate that although the convergence speed and iteration count of this algorithm are not the fastest compared to several other optimization algorithms, the optimized parameter results are favorable. Therefore, the mixed optimization algorithm based on improved differential evolution and particle swarm demonstrates the best optimization effect with the lowest error.
Result 5: The fitness curve following DE optimization, as shown in Figure 21, concluded at iteration 335, demonstrating a stepwise decrease in fitness as the number of iterations increased.
As evidenced by Figures 21 and 22, notable changes in the corresponding coordinate errors are observed before and after optimization using the differential evolution algorithm. Specifically, errors in the x-coordinate were approximately 1.5 before optimization and reduced to within 0.1 afterward; errors in the y-coordinate were approximately 0.6 before optimization and reduced to within ±0.1; and errors in the z-coordinate were within 0.2 before optimization and reduced to within 0.05 afterward. Error fluctuations were significant prior to optimization, whereas a substantial reduction was observed following optimization.
Upon comparison with the DE algorithm, it is noted that there is not a significant disparity in the fitness curves of the two algorithms. In light of this observation, additional experiments were conducted wherein the fitness curves of both algorithms were juxtaposed. The experiments were conducted over 200 iterations, yielding the following results (Figure 23):
Specific indicators are shown in Table 6:
The examination of the referenced charts reveals that the IDEPSO converges more rapidly than DE: the proposed algorithm approaches stability after 84 iterations, with a stable fitness value of 1.04068, whereas DE stabilizes after 128 iterations with a fitness value of 1.04488. Regarding the root-mean-square errors across the axes, the proposed algorithm after optimization shows significantly lower values on all axes than DE: the improved differential evolution particle swarm optimization algorithm yields a root-mean-square error of only 0.1690 on the x-axis, 0.3780 on the y-axis, and 0.0638 on the z-axis. These calibration errors are also lower than those of the other three optimization calibration methods, substantiating the strong performance and high accuracy of the proposed algorithm.

6. Conclusions

This paper first introduces the structure and coordinate system conversions of the photogrammetry system. It then provides a detailed description of commonly used camera calibration methods and investigates distortion models for camera calibration. Based on this research, an optimized method for calibrating cameras is proposed, utilizing a new algorithm capable of accurate camera calibration. In this algorithm, iterations incorporate the mutation and crossover stages of the differential evolution process, and the dynamic adjustment strategy is refined. Introducing the concept of differential evolution preserves particle population diversity, enabling the selection of the globally optimal particle at each iteration and yielding more precise results. Experimental comparisons between the calibration algorithm proposed in this paper and particle swarm optimization, genetic algorithm optimization, and the honey badger algorithm demonstrate its smaller calibration errors and superior overall performance.

Author Contributions

Conceptualization, F.Q.; validation, X.S.; writing—original draft preparation, X.S.; writing—review and editing, F.Q.; supervision, experiment guidance, and programming, H.H. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China under Grant 62073259; Key R&D projects in Shaanxi Province under Grant No. 2023-YBGY-380.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Conflicts of Interest

The authors declare no conflicts of interest.

Figure 1. The coordinates of the camera imaging model.
Figure 2. Relationship between camera coordinates and image coordinates.
Figure 3. Schematic diagram of the conversion between the camera coordinate system and the world coordinate system.
Figure 4. Schematic diagram of the transformation between the image coordinate system and the pixel coordinate system.
Figure 5. Pincushion distortion.
Figure 6. Barrel distortion.
Figure 7. Linear distortion.
Figure 8. Honey badger optimization algorithm flowchart.
Figure 9. Flowchart of the improved differential evolution particle swarm hybrid optimization algorithm.
Figure 10. Checkerboard for calibration.
Figure 11. Camera calibration and corner detection results.
Figure 12. Calibration error analysis diagram.
Figure 13. Fitness curve after genetic algorithm optimization.
Figure 14. Coordinate error comparison curves after genetic algorithm optimization.
Figure 15. Fitness curve after particle swarm optimization.
Figure 16. Coordinate error comparison curves before and after particle swarm optimization.
Figure 17. Fitness curve of the honey badger optimization algorithm.
Figure 18. Coordinate error comparison curves after honey badger algorithm optimization.
Figure 19. Convergence curve of the improved differential evolution particle swarm hybrid optimization algorithm.
Figure 20. Coordinate error comparison curves after optimization by the improved differential evolution particle swarm hybrid algorithm.
Figure 21. Fitness curve after DE algorithm optimization.
Figure 22. Coordinate error comparison curves before and after differential evolution optimization.
Figure 23. Comparison of the algorithms' convergence curves.
Table 1. The calibration parameter matrices for the left and right cameras.

Camera Parameter Matrix | Distortion Parameter Matrix
$M_1 = \begin{bmatrix} 5035.73040 & 0 & 533.22766 \\ 0 & 5036.77009 & 978.66847 \\ 0 & 0 & 1 \end{bmatrix}$ | $D_1 = [0.08209,\ 1.58242,\ 0.00166,\ 0.00032,\ 10.23747]$
$M_2 = \begin{bmatrix} 5018.27022 & 0 & 516.49581 \\ 0 & 5017.48057 & 905.64914 \\ 0 & 0 & 1 \end{bmatrix}$ | $D_2 = [0.05939,\ 0.48486,\ 0.00168,\ 0.00026,\ 38.70554]$
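As a usage note, intrinsic and distortion parameters of this form can be applied directly with OpenCV to correct captured images. The snippet below is a hypothetical sketch: the image file name is an assumption, and the distortion vector is taken to follow OpenCV's (k1, k2, p1, p2, k3) ordering with the values copied as printed in Table 1.

```python
# Hypothetical undistortion with the left-camera parameters of Table 1;
# the file name and coefficient ordering are assumptions.
import cv2
import numpy as np

M1 = np.array([[5035.73040, 0.0, 533.22766],
               [0.0, 5036.77009, 978.66847],
               [0.0, 0.0, 1.0]])
D1 = np.array([0.08209, 1.58242, 0.00166, 0.00032, 10.23747])

img = cv2.imread("left_view.png")              # assumed test image
if img is not None:
    corrected = cv2.undistort(img, M1, D1)     # remove lens distortion
```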
Table 2. Average calibration error for each calibration image.

Label | Average Error (Pixel) | Label | Average Error (Pixel) | Label | Average Error (Pixel)
1 | 0.172649 | 6 | 0.138521 | 11 | 0.189421
2 | 0.157981 | 7 | 0.137715 | 12 | 0.212029
3 | 0.081947 | 8 | 0.141941 | 13 | 0.118467
4 | 0.105237 | 9 | 0.166069 | 14 | 0.193648
5 | 0.160528 | 10 | 0.233748 | 15 | 0.189421
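The per-image averages in Table 2 are mean reprojection errors; a minimal sketch of that computation is given below, assuming OpenCV's projectPoints and illustrative variable names.

```python
# Hypothetical per-image mean reprojection error, as tabulated above;
# obj_pts, img_pts, rvec, tvec, M, D are illustrative names.
import cv2
import numpy as np

def mean_reprojection_error(obj_pts, img_pts, rvec, tvec, M, D):
    """Average pixel distance between detected and reprojected corners."""
    proj, _ = cv2.projectPoints(obj_pts, rvec, tvec, M, D)
    diffs = img_pts.reshape(-1, 2) - proj.reshape(-1, 2)
    return float(np.mean(np.linalg.norm(diffs, axis=1)))
```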
Table 3. Binocular camera calibration results.

Rotation Matrix | Translation Vector
$R = \begin{bmatrix} 0.99993098 & 0.0004564127 & 0.011711541 \\ 0.00057125383 & 0.99995023 & 0.0098654972 \\ 0.011706779 & 0.0098716523 & 0.99988127 \end{bmatrix}$ | $T = \begin{bmatrix} 15.728618 \\ 0.081634976 \\ 0.082594357 \end{bmatrix}$
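For illustration, the extrinsics in Table 3 relate the two camera frames under the usual stereo convention X_r = R·X_l + T; the sketch below applies them to a single point. The matrix entries are copied as printed (any minus signs lost in the source layout are not reconstructed), and the test point is an assumption.

```python
# Mapping a hypothetical left-camera point into the right-camera frame
# with the Table 3 extrinsics (entries copied as printed).
import numpy as np

R = np.array([[0.99993098, 0.0004564127, 0.011711541],
              [0.00057125383, 0.99995023, 0.0098654972],
              [0.011706779, 0.0098716523, 0.99988127]])
T = np.array([15.728618, 0.081634976, 0.082594357])

X_left = np.array([10.0, 5.0, 1000.0])    # assumed point in the left frame
X_right = R @ X_left + T                  # standard stereo transformation
```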
Table 4. Optimization results of the five algorithms for the nine calibration parameters.

Argument | GA | PSO | HBA | IDEPSO | DE
f_x | 1156.7861 | 1154.4027 | 1156.4027 | 1157.8783 | 1157.4027
f_y | 1154.2004 | 1155.0684 | 1155.0685 | 1153.6961 | 1152.0685
c_x | 662.7546 | 660.9385 | 660.9385 | 663.02028 | 662.9385
c_y | 387.9115 | 389.3181 | 389.1970 | 387.9177 | 387.8497
k_1 | −0.2451 | −0.26817 | −0.26614 | −0.245762 | −0.245485
k_2 | −0.044766 | −0.06452 | −0.06452 | −0.044699 | −0.044525
k_3 | −0.00049006 | −0.000713 | −0.000313 | −0.0004727 | −0.0005139
p_1 | 5.6293 × 10−5 | 4.1408 × 10−5 | 8.1408 × 10−5 | 4.5905 × 10−5 | 6.1408 × 10−5
p_2 | 0.045762 | 0.041302 | 0.045302 | 0.045686 | 0.0456797
Table 5. Per-axis RMSE comparison of the five algorithms.

Algorithm | RMSE (X) | RMSE (Y) | RMSE (Z)
GA | 0.6189 | 0.5162 | 0.1124
PSO | 0.4895 | 0.5137 | 0.0947
HBA | 0.3201 | 0.5150 | 0.0657
IDEPSO | 0.1690 | 0.3780 | 0.0638
DE | 0.8758 | 0.5138 | 0.1151
Table 6. Performance comparison of the algorithm convergence curves.

Metric | DE | IDEPSO
Fitness value at stabilization | 1.04488 | 1.04068
Iterations to stabilization | 128 | 84
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
