Article

A Focal Length Calibration Method for Vision Measurement Systems Based on Multi-Feature Composite Variable Weighting

Enshun Lu, Xiaofeng Li, Fangjing Yang, Daode Zhang and Xing Sun *

1 Institute of Agricultural Machinery Engineering Design and Research, Hubei University of Technology, Wuhan 430068, China
2 The Library of Wuhan University of Technology, Wuhan 430070, China
3 National Key Laboratory of Agricultural Equipment Technology, Chinese Academy of Agricultural Mechanization Sciences Group Co., Ltd., Beijing 100083, China
* Author to whom correspondence should be addressed.
Sensors 2025, 25(22), 6873; https://doi.org/10.3390/s25226873
Submission received: 2 October 2025 / Revised: 29 October 2025 / Accepted: 5 November 2025 / Published: 11 November 2025
(This article belongs to the Section Intelligent Sensors)

Abstract

Existing focal length calibration methods rely on predefined calibration fields or control point networks, which are unsuitable for real-time applications with variable zoom in industrial and agricultural environments. This paper proposes a method based on global scanning principles and geometric constraints, eliminating control points and using symmetric features. A spatial weighting strategy optimizes redundant measurements by integrating optical distortion and the spatial distribution of measured points, enhancing accuracy. Experimental results show that the method achieves micron-level calibration precision, significantly improving visual measurement system accuracy under complex zoom conditions.

1. Introduction

In frontline engineering applications, target objects are often characterized by large variations in distance and significant spatial scale differences. Such scenarios typically demand imaging systems with a large depth of field, necessitating frequent focusing and zooming operations [1]. These operations inevitably cause substantial changes in the camera’s focal length, rendering pre-calibrated parameters ineffective. Moreover, vision systems are commonly mounted on high-dynamic platforms such as industrial robots, agricultural machinery, or mobile inspection devices. During operation, these platforms are prone to intense vibrations, which may lead to subtle yet critical shifts in the relative positions of camera lens elements—directly affecting the stability of the focal length [2]. Over time or with prolonged equipment movement, vibration-induced focal length drift can accumulate, amplifying errors and severely compromising measurement accuracy [3,4]. Therefore, there is an urgent need for systems capable of dynamically responding to on-site disturbances and updating calibration parameters in real time [5,6].
Moreover, conventional focal length calibration methods generally depend on known three-dimensional control points or dedicated calibration setups [7,8,9]. These methods are not only complex to deploy and constrained by specific environments, but also poorly suited for scenarios where the target is inaccessible or where control points cannot be physically arranged [10,11]. In applications such as outdoor agricultural surveying or high-temperature, high-risk industrial inspections, this reliance on control points becomes a critical bottleneck that limits the applicability of such methods.
To address the aforementioned issues, this paper proposes a control-point-free focal length calibration method based on geometric symmetry constraints and the principle of global scanning [12]; the original global-scanning procedure, which relies on known point coordinates, is hereafter referred to as the traditional algorithm.

2. Global Scanning Principle

Based on the collinearity Equation (1), a global scan is performed for focal length calibration. The schematic diagram of the global scanning principle is shown in Figure 1.
Figure 1. Schematic diagram of the global scanning principle.
$$
\begin{bmatrix} v_x \\ v_y \end{bmatrix}
= A
\begin{bmatrix} \Delta X_s & \Delta Y_s & \Delta Z_s & \Delta\omega & \Delta\varphi & \Delta\kappa \end{bmatrix}^{\mathrm{T}}
+ B
\begin{bmatrix} \Delta X & \Delta Y & \Delta Z \end{bmatrix}^{\mathrm{T}}
- \begin{bmatrix} l_x \\ l_y \end{bmatrix},
\tag{1}
$$
where:
$$
A = \begin{bmatrix} O_{11} & O_{12} & O_{13} & O_{14} & O_{15} & O_{16} \\ O_{21} & O_{22} & O_{23} & O_{24} & O_{25} & O_{26} \end{bmatrix};
\qquad
B = \begin{bmatrix} O_{11} & O_{12} & O_{13} \\ O_{21} & O_{22} & O_{23} \end{bmatrix};
$$
with:
$$
\begin{aligned}
O_{11} &= \frac{a_1 f + a_3 (x - x_0)}{\bar{Z}}; \qquad
O_{12} = \frac{b_1 f + b_3 (x - x_0)}{\bar{Z}}; \qquad
O_{13} = \frac{c_1 f + c_3 (x - x_0)}{\bar{Z}}; \\
O_{14} &= \frac{a_2 (x - x_0)^2}{f} - \frac{a_1 (x - x_0)(y - y_0)}{f} + a_3 (y - y_0) + a_2 f; \\
O_{15} &= \frac{\cos\kappa \, (x - x_0)^2}{f} - \frac{\sin\kappa \, (x - x_0)(y - y_0)}{f} + f \cos\kappa; \qquad
O_{16} = y - y_0; \\
O_{21} &= \frac{a_2 f + a_3 (y - y_0)}{\bar{Z}}; \qquad
O_{22} = \frac{b_2 f + b_3 (y - y_0)}{\bar{Z}}; \qquad
O_{23} = \frac{c_2 f + c_3 (y - y_0)}{\bar{Z}}; \\
O_{24} &= \frac{a_1 (y - y_0)^2}{f} + \frac{a_2 (x - x_0)(y - y_0)}{f} - a_3 (x - x_0) - a_1 f; \\
O_{25} &= \frac{\sin\kappa \, (y - y_0)^2}{f} + \frac{\cos\kappa \, (x - x_0)(y - y_0)}{f} + f \sin\kappa; \qquad
O_{26} = -(x - x_0);
\end{aligned}
$$
$$
\begin{aligned}
a_1 &= \cos\varphi\cos\kappa - \sin\varphi\sin\omega\sin\kappa; & a_2 &= \cos\varphi\sin\kappa - \sin\varphi\sin\omega\cos\kappa; & a_3 &= \sin\varphi\cos\omega; \\
b_1 &= \cos\omega\sin\kappa; & b_2 &= \cos\omega\cos\kappa; & b_3 &= \sin\omega; \\
c_1 &= \sin\varphi\cos\kappa + \cos\varphi\sin\omega\sin\kappa; & c_2 &= \sin\varphi\sin\kappa + \cos\varphi\sin\omega\cos\kappa; & c_3 &= \cos\varphi\cos\omega;
\end{aligned}
$$
$$
l_x = x - (x); \qquad l_y = y - (y).
$$
Table 1 provides the physical interpretations of the symbols in Equation (1).

2.1. Single-Position Focal Length Scanning Calibration Model

Let the parameters for a given pointer position be defined as follows:
Nominal focal length: $f_{bc}$; zoom range of the lens: $[-f_m, f_m]$; number of calibration iterations: $n$; calibration step size: $step = \frac{2 f_m}{n}$.
Then, the single-position focal length scanning calibration model can be expressed as:
$$q_f = \{\Delta f_1, \Delta f_2, \ldots, \Delta f_{n-1}, \Delta f_n\}, \tag{2}$$
where $i = 1, 2, 3, \ldots, n$, and $\Delta f_i$ denotes the error of the $i$-th iteration.
If the minimum value corresponding to $q_f$ is:
$$q_{f\min} = \Delta f_i, \quad 1 \le i \le n, \tag{3}$$
then the calibrated focal length value $f_i$ corresponding to the single-position focal length scanning calibration model is:
$$f_i = f_{bc} - f_m + \frac{2 f_m}{n} \cdot i. \tag{4}$$
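To make the scan concrete, the following MATLAB sketch implements the single-position model (a minimal illustration, not the authors' released program; symmetryError is a hypothetical function handle that evaluates the verification error for a trial focal length):

% Single-position focal length scan (illustrative sketch).
% f_bc: nominal focal length [mm]; f_m: half scan range [mm];
% n: number of iterations; symmetryError: handle returning the
% verification error df_i for a trial focal length.
function [f_best, q_f] = scanFocalLength(f_bc, f_m, n, symmetryError)
    step = 2 * f_m / n;                    % calibration step size
    q_f = zeros(1, n);                     % error set {df_1, ..., df_n}
    for i = 1:n
        f_trial = f_bc - f_m + step * i;   % trial focal length
        q_f(i) = symmetryError(f_trial);   % error df_i of the i-th trial
    end
    [~, i_min] = min(q_f);                 % index of the minimum error
    f_best = f_bc - f_m + step * i_min;    % calibrated focal length
end

For instance, scanFocalLength(50, 3, 12000, e) would sweep 47–53 mm in 0.0005 mm steps (hypothetical parameter values, chosen only for illustration).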

2.2. Multi-Position Focal Length Scanning Calibration Model

Introducing redundant data minimizes, as far as possible, the focal length calibration error caused by random errors in the measurements used as references.
Let there be k pointer positions configured for the same camera–lens combination.
Then, based on the same number of iterations and step size, k single-position focal length scanning calibration models can be obtained through computation:
$$
\begin{aligned}
q_{f1} &= \{\Delta f_{11}, \Delta f_{12}, \ldots, \Delta f_{1(n-1)}, \Delta f_{1n}\} \\
q_{f2} &= \{\Delta f_{21}, \Delta f_{22}, \ldots, \Delta f_{2(n-1)}, \Delta f_{2n}\} \\
&\;\;\vdots \\
q_{fk} &= \{\Delta f_{k1}, \Delta f_{k2}, \ldots, \Delta f_{k(n-1)}, \Delta f_{kn}\}.
\end{aligned}
\tag{5}
$$
In this case, to select the focal length with the minimum error, it is necessary to combine the errors $\Delta f_{ki}$ from each iteration at all pointer positions into a comprehensive error set $QF$. This set is called the multi-position focal length scanning calibration model:
$$QF = \left\{ \sum_{h=1}^{k} \Delta f_{h1},\; \sum_{h=1}^{k} \Delta f_{h2},\; \ldots,\; \sum_{h=1}^{k} \Delta f_{h(n-1)},\; \sum_{h=1}^{k} \Delta f_{hn} \right\}. \tag{6}$$
Let the minimum value corresponding to $QF$ be:
$$QF_{\min} = \sum_{h=1}^{k} \Delta f_{hi}, \quad 1 \le i \le n. \tag{7}$$
Then, the calibrated focal length value $f_j$ corresponding to the multi-position focal length scanning calibration model is:
$$f_j = f_{bc} - f_m + \frac{2 f_m}{n} \cdot i. \tag{8}$$
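Under the same assumptions as the earlier sketch, the multi-position model only adds a column-wise summation of the $k$ error sets. In the sketch below, Qmat is a hypothetical k-by-n matrix whose h-th row is the error set of pointer position h:

% Multi-position focal length scan: combine k single-position error
% sets and pick the focal length with the smallest total error.
function [f_best, QF] = multiPositionScan(f_bc, f_m, n, Qmat)
    QF = sum(Qmat, 1);                     % QF(i) = sum_h df_{h,i}
    [~, i_min] = min(QF);                  % iteration with minimum total error
    f_best = f_bc - f_m + (2 * f_m / n) * i_min;
end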

3. Methodology

3.1. Focal Length Verification Algorithm Based on Geometric Symmetry Features

In the aforementioned traditional verification algorithms, focal length verification involves comparing the computed coordinates of the pointer points with their actual coordinates [12]. This requires obtaining the actual object-space coordinates of the verification points in advance, typically measured with precision instruments such as total stations. However, this approach is costly and inefficient. Research indicates that almost all mechanical structures possess symmetry [13], which can be exploited to transform the traditional verification based on pointer-point object-space coordinates into a symmetry-based verification, thereby enabling correction of the focal length [14,15].
Figure 2 presents schematic examples of such geometric symmetry constraints. The points A, B, and C in Figure 2 are referred to as pointer points, and the symmetric constraint formed by these three points is referred to as a pointer point configuration. In the illustrated coordinate system, the coordinates of points $A(x_a, y_a, z_a)$ and $C(x_c, y_c, z_c)$ are symmetric with respect to point $B(x_b, y_b, z_b)$:
$$\frac{x_a + x_c}{2} = x_b, \qquad \frac{y_a + y_c}{2} = y_b, \qquad \frac{z_a + z_c}{2} = z_b. \tag{9}$$
According to Equation (1):
$$\Delta f_i = \sqrt{\left(\frac{X_{1i} + X_{3i}}{2} - X_{2i}\right)^2 + \left(\frac{Y_{1i} + Y_{3i}}{2} - Y_{2i}\right)^2 + \left(\frac{Z_{1i} + Z_{3i}}{2} - Z_{2i}\right)^2}, \tag{10}$$
where $X_{1i}, X_{2i}, X_{3i}, Y_{1i}, Y_{2i}, Y_{3i}, Z_{1i}, Z_{2i}, Z_{3i}$ are the object-space coordinates of the pointer points calculated from Equation (6) with the current (to-be-evaluated) focal length; they serve as the reference data for evaluation.
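The symmetry residual itself reduces to a midpoint deviation. A minimal MATLAB sketch (A, B, C are assumed to be 3-by-1 vectors holding the reconstructed pointer-point coordinates for one triplet under the trial focal length):

% Midpoint deviation of one pointer-point triplet (A and C symmetric
% about B); used as the verification error df_i for a trial focal length.
function df = midpointError(A, B, C)
    df = norm((A + C) / 2 - B);   % Euclidean deviation from perfect symmetry
end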

3.2. Weighted Algorithm Based on the Lens Distortion Model

3.2.1. Lens Distortion Model

To further improve the accuracy of the calibration results, a weighted algorithm based on the lens optical distortion model is proposed to optimize the calibration errors caused by lens optical distortion and manual point selection deviations.
Due to inevitable manufacturing and assembly errors in the production of optical lenses for measurement cameras, optical distortion is theoretically always present in the captured images. Such optical distortion significantly degrades the accuracy of photogrammetric systems [16]. Therefore, to improve the accuracy of photogrammetric results, it is essential to correct the optical distortion of measurement cameras [17].
The distortion errors of optical lenses mainly include radial distortion, decentering distortion, and thin prism distortion. According to the study by Ricolfe-Viala et al. [18], the distortion model of an optical lens can be decomposed into components along the x-axis and y-axis directions:
$$
\begin{aligned}
\delta_x &= \delta_{jx} + \delta_{lx} + \delta_{bx} = k_1 x (x^2 + y^2) + q_1 (3x^2 + y^2) + 2 q_2 x y + s_1 (x^2 + y^2), \\
\delta_y &= \delta_{jy} + \delta_{ly} + \delta_{by} = k_2 y (x^2 + y^2) + q_2 (x^2 + 3y^2) + 2 q_1 x y + s_2 (x^2 + y^2).
\end{aligned}
\tag{11}
$$
Here, $\delta_x$ and $\delta_y$ represent the combined nonlinear distortion in the x-axis and y-axis directions of the image plane coordinate system, respectively; $\delta_{jx}$ and $\delta_{jy}$ denote the radial distortion components along the two axes; $\delta_{lx}$ and $\delta_{ly}$ denote the decentering distortion components; $\delta_{bx}$ and $\delta_{by}$ denote the thin prism distortion components; $k_1$ and $k_2$ are the radial distortion coefficients; $q_1$ and $q_2$ are the decentering distortion coefficients; and $s_1$ and $s_2$ are the thin prism distortion coefficients. The spatial positional relationship of $\delta_x$ and $\delta_y$ within the image plane coordinate system is shown in Figure 3.
In the figure, point $p$ represents the theoretical position of the image point, point $p'$ represents its actual position, and $\delta_{pp}$ denotes the distance between $p$ and $p'$.
In practical engineering applications, professional industrial cameras are rarely used; instead, mid-to-high-end DSLRs such as the Canon 5Ds, with compatible original lenses, are generally employed. Such equipment typically possesses relatively high manufacturing and assembly precision. Therefore, based on the theory proposed by Ricolfe-Viala et al. [18], the optical distortion calculation model for this type of equipment can be moderately simplified, retaining the first-order radial distortion and second-order decentering distortion of the lens:
$$\delta_{pp} = \sqrt{\left[k_1 u_d r_d^2 + 2 q_1 u_d v_d + q_2 \left(r_d^2 + 2 u_d^2\right)\right]^2 + \left[k_1 v_d r_d^2 + 2 q_2 u_d v_d + q_1 \left(r_d^2 + 2 v_d^2\right)\right]^2}, \tag{12}$$
where $u_d = x - u_0$; $v_d = y - v_0$; $r_d = \sqrt{u_d^2 + v_d^2}$; $(u_0, v_0)$ denotes the coordinates of the camera principal point; $k_1$ and $k_2$ are the radial distortion parameters; and $q_1$ and $q_2$ are the decentering distortion parameters.
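The simplified magnitude can be evaluated directly from the calibrated coefficients. A MATLAB sketch (parameter names mirror the text; the function itself is illustrative, not the authors' code):

% Simplified optical distortion magnitude delta_pp at image point (x, y),
% using first-order radial (k1) and decentering (q1, q2) terms relative
% to the principal point (u0, v0).
function d = distortionMagnitude(x, y, u0, v0, k1, q1, q2)
    ud = x - u0;
    vd = y - v0;
    rd2 = ud^2 + vd^2;                                          % r_d^2
    dx = k1 * ud * rd2 + 2 * q1 * ud * vd + q2 * (rd2 + 2 * ud^2);
    dy = k1 * vd * rd2 + 2 * q2 * ud * vd + q1 * (rd2 + 2 * vd^2);
    d = hypot(dx, dy);                                          % delta_pp
end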

3.2.2. Weighting Method Based on Optical Distortion Parameters

Based on the above model, the pointer points at each position have different image coordinates and therefore distinct distortion values $\delta_{pp}$. The confidence level $CI_1$ of an image point is inversely proportional to its distortion value:
$$CI_1 \propto \delta_{pp}^{-1}.$$
Each pointer position comprises three pointer points, whose distortion values $\delta_{pp1}$, $\delta_{pp2}$, and $\delta_{pp3}$ are calculated separately. The total distortion value of a pointer position in a single image is then given by:
$$\delta_{pp} = \delta_{pp1} + \delta_{pp2} + \delta_{pp3}. \tag{13}$$
Suppose there are $k$ pointer positions and a total of $s$ images used in the calculation. According to Equation (5), the optical distortion sets of all positions in the multi-position focal length scanning calibration model can be obtained within each image plane coordinate system:
$$
\begin{aligned}
\delta_1 &= \{\delta_{pp11}, \delta_{pp12}, \ldots, \delta_{pp1(s-1)}, \delta_{pp1s}\} \\
\delta_2 &= \{\delta_{pp21}, \delta_{pp22}, \ldots, \delta_{pp2(s-1)}, \delta_{pp2s}\} \\
&\;\;\vdots \\
\delta_k &= \{\delta_{ppk1}, \delta_{ppk2}, \ldots, \delta_{ppk(s-1)}, \delta_{ppks}\}.
\end{aligned}
\tag{14}
$$
According to Equation (14), the weight coefficients for each position can be calculated as follows:
$$q_1 = \sum_{t=1}^{s} \delta_{pp1t}^{-1},\quad q_2 = \sum_{t=1}^{s} \delta_{pp2t}^{-1},\quad \ldots,\quad q_k = \sum_{t=1}^{s} \delta_{ppkt}^{-1}. \tag{15}$$
To improve the numerical stability of the model, the weight coefficients calculated by Equation (15) are normalized as follows:
$$Q_1 = \frac{\sum_{t=1}^{s} \delta_{pp1t}^{-1}}{\sum_{u=1}^{k} \sum_{t=1}^{s} \delta_{pput}^{-1}},\quad Q_2 = \frac{\sum_{t=1}^{s} \delta_{pp2t}^{-1}}{\sum_{u=1}^{k} \sum_{t=1}^{s} \delta_{pput}^{-1}},\quad \ldots,\quad Q_k = \frac{\sum_{t=1}^{s} \delta_{ppkt}^{-1}}{\sum_{u=1}^{k} \sum_{t=1}^{s} \delta_{pput}^{-1}}. \tag{16}$$
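In code, Equations (15) and (16) reduce to two lines. A sketch (D is a hypothetical k-by-s matrix with D(u, t) the distortion value of position u in image t):

% Distortion-based weights: positions with smaller accumulated
% distortion receive larger normalized weights.
function Q = distortionWeights(D)
    q = sum(1 ./ D, 2);        % q_u = sum_t delta_pp_{ut}^(-1)
    Q = q / sum(q);            % normalization: sum(Q) == 1
end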
By substituting the weight coefficients $Q_1, Q_2, \ldots, Q_k$ into Equation (6) of the multi-position focal length scanning calibration model, the weighted multi-position focal length scanning calibration model is obtained:
$$QF_Q = \left\{ \sum_{h=1}^{k} Q_h \cdot \Delta f_{h1},\; \sum_{h=1}^{k} Q_h \cdot \Delta f_{h2},\; \ldots,\; \sum_{h=1}^{k} Q_h \cdot \Delta f_{hn} \right\}. \tag{17}$$
Let the minimum value corresponding to $QF_Q$ be:
$$QF_{Q\min} = \sum_{h=1}^{k} Q_h \cdot \Delta f_{h q_j}, \quad 1 \le q_j \le n. \tag{18}$$
Then, the calibrated focal length value $f_{qj}$ corresponding to the weighted multi-position focal length scanning calibration model is:
$$f_{qj} = f_{bc} - f_m + \frac{2 f_m}{n} \cdot q_j. \tag{19}$$

3.3. The Weighting Algorithm Based on the Spatial Distance to the Target Point

After the interior and exterior orientation parameters of the camera have been obtained, the object-space coordinates of the target point can be calculated by forward intersection. To further refine the focal length, once the object-space coordinates of the target point have been derived with the focal length estimated by the traditional algorithm, different weights $Q'$ are assigned to the pointer positions based on the distance from the target point to the pointer points ($D_{dd}$) and the distance from the target point to the line formed by the pointer points ($D_{dl}$). These weights are then averaged with the corresponding optical distortion weights $Q$ to form a composite weight $Q_f$.

3.3.1. Weighting Based on the Distance Between the Target Point and the Pointer Points

As shown in Figure 4, let $p_1$ be the target point to be measured. Based on the aforementioned algorithm, its coordinates are computed as $p_1(x_1, y_1, z_1)$. Let $l_h$ be the line passing through the pointer points $a_h$, $b_h$, $c_h$, whose coordinates are calculated as $a_h(x_{ah}, y_{ah}, z_{ah})$, $b_h(x_{bh}, y_{bh}, z_{bh})$, and $c_h(x_{ch}, y_{ch}, z_{ch})$, respectively. The distances from $p_1$ to $a_h$, $b_h$, $c_h$ are denoted by $d_{ah}$, $d_{bh}$, $d_{ch}$, where $h = 1, 2, \ldots, k$. Then:
$$D_{ddh} = d_{ah} + d_{bh} + d_{ch}. \tag{20}$$
The confidence level $CI_2$ of $l_h$ is inversely proportional to this distance:
$$CI_2 \propto D_{ddh}^{-1}. \tag{21}$$
Then, the weights $Q_{ddh}$ of the pointer positions satisfy:
$$Q_{dd1} = D_{dd1}^{-1},\quad Q_{dd2} = D_{dd2}^{-1},\quad \ldots,\quad Q_{dd(k-1)} = D_{dd(k-1)}^{-1},\quad Q_{ddk} = D_{ddk}^{-1}. \tag{22}$$
To improve the numerical stability of the model, the weights are normalized as follows:
$$Q_{dd1} = \frac{D_{dd1}^{-1}}{\sum_{h=1}^{k} D_{ddh}^{-1}},\quad Q_{dd2} = \frac{D_{dd2}^{-1}}{\sum_{h=1}^{k} D_{ddh}^{-1}},\quad \ldots,\quad Q_{ddk} = \frac{D_{ddk}^{-1}}{\sum_{h=1}^{k} D_{ddh}^{-1}}. \tag{23}$$
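A sketch of this weighting (P is the 3-by-1 target-point coordinate vector; A, B, C are hypothetical 3-by-k matrices whose h-th columns hold the pointer points of position h):

% Normalized inverse-distance weights from the target point to the
% three pointer points of each position.
function Qdd = pointDistanceWeights(P, A, B, C)
    k = size(A, 2);
    Ddd = zeros(k, 1);
    for h = 1:k
        Ddd(h) = norm(P - A(:, h)) + norm(P - B(:, h)) + norm(P - C(:, h));
    end
    w = 1 ./ Ddd;              % inverse distances D_ddh^(-1)
    Qdd = w / sum(w);          % normalized weights
end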

3.3.2. Weighting Based on the Distance from the Target Point to the Pointer Points’ Line

As shown in Figure 5, given the coordinates of $a_u$ and $c_u$ as $(x_{au}, y_{au}, z_{au})$ and $(x_{cu}, y_{cu}, z_{cu})$, respectively, the line $l_u$ is fitted through $a_u$ and $c_u$. The distance from the target point $p_1(x_1, y_1, z_1)$ to the pointer-point line $l_u$ is denoted by $d_u$, where $u = 1, 2, \ldots, k$.
Due to the presence of lens optical distortion, the confidence level $CI_3$ of each pointer position is inversely proportional to the distance $d_u$:
$$CI_3 \propto d_u^{-1}. \tag{24}$$
Then, the weights $Q_{dl}$ of the pointer positions satisfy:
$$Q_{dl1} = d_1^{-1},\quad Q_{dl2} = d_2^{-1},\quad \ldots,\quad Q_{dl(k-1)} = d_{k-1}^{-1},\quad Q_{dlk} = d_k^{-1}. \tag{25}$$
To improve the numerical stability of the model, the weight coefficients calculated by Equation (25) are normalized as follows:
$$Q_{dl1} = \frac{d_1^{-1}}{\sum_{h=1}^{k} d_h^{-1}},\quad Q_{dl2} = \frac{d_2^{-1}}{\sum_{h=1}^{k} d_h^{-1}},\quad \ldots,\quad Q_{dlk} = \frac{d_k^{-1}}{\sum_{h=1}^{k} d_h^{-1}}. \tag{26}$$
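The point-to-line variant only changes the distance computation. A sketch using the standard cross-product formula for the distance from a point to a 3D line (same hypothetical argument layout as the previous sketch):

% Normalized inverse point-to-line weights: line l_u passes through
% pointer points a_u and c_u; d_u is the perpendicular distance from P.
function Qdl = lineDistanceWeights(P, A, C)
    k = size(A, 2);
    d = zeros(k, 1);
    for u = 1:k
        v = C(:, u) - A(:, u);                         % direction of l_u
        d(u) = norm(cross(P - A(:, u), v)) / norm(v);  % point-to-line distance
    end
    w = 1 ./ d;
    Qdl = w / sum(w);          % normalized weights
end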

3.4. Composite Weighting

By taking the average of the weights $Q_{ddh}$ and $Q_{dlh}$, the composite spatial distance weight $Q'_h$ of each pointer position is obtained:
$$Q'_h = \frac{Q_{ddh} + Q_{dlh}}{2}. \tag{27}$$
The final composite weight $Q_{hh}$ is obtained by averaging the composite spatial distance weight $Q'_h$ with the optical distortion weight $Q_h$ of each pointer position and normalizing the result. Substituting $Q_{hh}$ into Equation (17) of the weighted multi-position focal length scanning calibration model yields the spatially composite-weighted multi-position focal length scanning calibration model:
$$QF_Q = \left\{ \sum_{h=1}^{k} Q_{hh} \cdot \Delta f_{h1},\; \sum_{h=1}^{k} Q_{hh} \cdot \Delta f_{h2},\; \ldots,\; \sum_{h=1}^{k} Q_{hh} \cdot \Delta f_{hn} \right\}. \tag{28}$$
Let the minimum value corresponding to $QF_Q$ be:
$$QF_{Q\min} = \sum_{h=1}^{k} Q_{hh} \cdot \Delta f_{h q_j}, \quad 1 \le q_j \le n. \tag{29}$$
Then, the calibrated focal length value $f_{qj}$ corresponding to the spatially weighted multi-position focal length scanning calibration model is:
$$f_{qj} = f_{bc} - f_m + \frac{2 f_m}{n} \cdot q_j. \tag{30}$$
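Putting the pieces together, the following sketch shows one plausible end-to-end reading of the composite weighting (averaging the two spatial weights, then averaging with the distortion weights and renormalizing, as described above; Qmat, Q, Qdd, Qdl are assumed to come from the earlier sketches):

% Composite-weighted multi-position focal length scan.
% Qmat: k-by-n error sets; Q, Qdd, Qdl: k-by-1 weight vectors.
function f_qj = compositeWeightedScan(f_bc, f_m, n, Qmat, Q, Qdd, Qdl)
    Qs = (Qdd + Qdl) / 2;      % composite spatial distance weight Q'_h
    Qhh = (Qs + Q) / 2;        % average with optical distortion weight Q_h
    Qhh = Qhh / sum(Qhh);      % normalized final composite weight Q_hh
    QFQ = Qhh' * Qmat;         % QFQ(i) = sum_h Q_hh(h) * df_{h,i}
    [~, qj] = min(QFQ);        % iteration with minimum weighted error
    f_qj = f_bc - f_m + (2 * f_m / n) * qj;
end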

4. Experimental Validation

4.1. Experimental Procedure

To validate the effectiveness of incorporating geometric constraints as pointer points and compensating for the focal length, the traditional algorithm was set as the control group. To further verify the effectiveness of distortion weighting and spatial distance weighting, the focal length was corrected according to the different weighting values. The evaluation was based on the 3D coordinates of the test points calculated with the corresponding focal lengths, with the 3D coordinates obtained from total station calibration serving as the reference standard for measuring the errors. The experimental procedure is shown in Figure 6. A total of five groups of experiments were designed, with test images captured at different locations. Each group included the same six test points, yielding five sets of data for each algorithm. The conclusions were drawn from a comprehensive analysis of the five groups of results.

4.2. Matlab Programming

Based on the mathematical model described in the Methodology and the experimental procedure shown in Figure 6, a MATLAB program was developed for the experiment. The graphical user interface (GUI) and its operating workflow are shown in Figure 7.
The computer used is equipped with a 13th Gen Intel(R) Core(TM) i5-13490F processor (2.5 GHz, 10 cores, 16 logical processors) and 32.0 GB of physical memory (RAM).

4.3. Experimental Design

4.3.1. Calibrate the Test Camera

The camera used in this experiment is a high-resolution DSLR, the Canon EOS 5Ds (Canon Inc., Japan), equipped with a Canon EF 50mm f/1.8 STM lens. The technical specifications of the camera are shown in Table 2.
The camera calibration tool used in this experiment is the MATLAB Camera Calibrator Toolbox 6.6. The calibration results of the camera are shown in Table 3.
In the table, $f$ represents the calibrated focal length, which is used as the initial value for the traditional method; $\Delta x$ and $\Delta y$ are the offsets of the principal point in the image plane coordinate system; $k_1$ and $k_2$ are the radial distortion parameters of the camera; $q_1$ and $q_2$ are the tangential (decentering) distortion parameters; and $s_1$ and $s_2$ represent the thin prism distortion parameters.

4.3.2. Set Up the Indoor Control Field

To facilitate the verification of the algorithms proposed in this paper under laboratory conditions, an indoor control field was designed, as shown in Figure 8. The following conditions are satisfied:
  • A total of 120 three-dimensional control point targets were established;
  • The control points are evenly distributed and extend along all three coordinate axes;
  • Sufficient shooting space was reserved for the camera.
The 3D coordinates of the control points were obtained in advance using a reflectorless total station.
As shown in Figure 9, in this experiment: Points 1-L12, 2-L20, 3-R34, 4-L34, 5-R24, and 6-R19 serve as control points; points a-L4, b-L17, c-L9, d-R17, and e-R8 are used as indicator points and are only involved in the computation of the traditional method; c1, c2, and c3 are steel rulers measuring 50 mm, 15 mm, and 1000 mm, respectively, serving as geometrically symmetric constraint points; A-L16, B-R11, C-L30, D-R28, E-R2, and F-R32 are designated as test points.
The precise object-space coordinates of all points were measured using a total station and are listed in Table 4.

4.3.3. Capture of Experimental Images

According to classical photogrammetric methods, at least one pair of left and right images is required to calculate the interior and exterior orientation elements. As shown in the experimental procedure in Figure 6, the first image pair is taken after focusing, followed by a second image pair captured after repositioning and refocusing. Due to this process, the first and second image pairs have different focal lengths. The 3D coordinates of the test points in the second image pair are calculated using the focal length of the first image pair, simulating focal length variations caused by refocusing or vibration in real engineering scenarios. The images from the first group of experiments are shown in Figure 10 and Figure 11.

4.4. Calibration and Solution

During the calibration process, the traditional method sets the focal length search range to 20–80 mm.
In the subsequent algorithm, the search range was adjusted to 6 mm with 12,000 iterations. Since Newton’s iteration is sensitive to the initial value, the calibrated focal lengths f 0 , f 1 , f 2 , and f 3 were sequentially used as the initial values for focal length calibration in the following steps.
The lens used in this experiment is Canon EF 50mm f/1.8 STM. According to the manufacturer’s specifications and our previous experimental experience, the principal distance variation of this lens during the focusing process does not exceed ±3 mm. Therefore, the iterative parameters were set accordingly.
The number of iterations is a key factor affecting the accuracy of the algorithm described in this paper. We generally set it to 12,000 iterations for the following reasons (a step-size check follows the list):
  • Given the typical configuration of laptops used by on-site engineers, 12,000 iterations can be completed within an acceptable time—usually within 200 s;
  • Once the single-step precision reaches 0.001 mm, further refining the step contributes little to the overall solution accuracy.
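Reading the 6 mm search range above as the half-range $f_m = 6$ mm of Section 2.1 (an interpretation on our part, not stated explicitly in the text), the single-step size follows directly:
$$step = \frac{2 f_m}{n} = \frac{2 \times 6\ \text{mm}}{12{,}000} = 0.001\ \text{mm},$$
which matches the single-step precision quoted in the second point.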
The image plane coordinates of each point in the first experimental image group are listed in Table 5 and Table 6. In Table 5, points a–e are the indicator points used by the traditional method; in Table 6, points a, b, c, d, e, f, g, h, and i are the left endpoints, midpoints, and right endpoints of the three symmetry-constrained rulers, respectively.
As shown in Table 7, f 0 , f 1 , f 2 , f 3 , and f 4 represent, respectively: the manually set initial focal length; the focal length calculated from the first set of images using the traditional algorithm; the focal length calculated from the second set of images using the geometric constraint algorithm; the focal length calculated from the second set of images using the weighted algorithm incorporating optical distortion; and the focal length calculated from the second set of images using the composite weighting algorithm incorporating spatial distance.
Table 8, Table 9, Table 10 and Table 11 show the exterior orientation elements of each image in the first set of experiments.
Based on the image plane coordinates and the interior and exterior orientation parameters obtained from calibration, the coordinates of the test points were computed. The results are shown in Table 12, Table 13, Table 14, Table 15 and Table 16.

5. Results and Discussion

As shown in Figure 12, the average point error of each algorithm across the five test groups is presented. The traditional algorithm yields an average error of 8.78 mm. When an outdated focal length is used after the actual focal length has changed, the average error increases drastically to 233.17 mm, indicating a severe degradation in measurement accuracy. After recalculating the focal length with the algorithm proposed in this paper, the average error in each test group is significantly reduced. The measurement accuracy is further improved by introducing optical distortion weighting and spatial distance weighting.
As shown in Figure 13, the error comparison for each algorithm is presented. With successive algorithmic refinements, the error decreases from 8.78 mm to 7.41 mm, an overall accuracy improvement of approximately 15.6%. The relative improvement rates of the individual stages are 2.62%, 5.73%, and 8.07%, respectively.
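The overall figure is consistent with the endpoint errors, $(8.78 - 7.41)/8.78 \approx 15.6\%$; equivalently, compounding the three stage improvements, $(1 - 0.0262)(1 - 0.0573)(1 - 0.0807) \approx 0.844$, gives the same 15.6% reduction.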
In summary, the control-point-free focal length calibration method proposed in this paper not only overcomes the traditional algorithms’ reliance on control points and calibration fields, but also adapts well to complex, dynamic, and uncontrollable conditions commonly encountered in engineering applications. While maintaining system flexibility, it achieves micron-level calibration accuracy of the focal length, providing a reliable technical foundation for high-precision vision measurement in fields such as industry and agriculture.

Author Contributions

Methodology, E.L.; Validation, X.L.; Investigation, F.Y.; Resources, D.Z.; Supervision, X.S. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Natural Science Foundation of Hubei Province, China (Youth Program), under Grant No. 2024AFB501, titled “Research on Calibration Methods for Distributed Visual Perception Systems of Intelligent Agricultural Machinery Clusters in Hilly and Mountainous Areas”, and by the Wuhan Key R&D Program under Grant No. 2023010402010589, titled “Digital Twin System for Factory-Based Aquaculture Systems Project”.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

Acknowledgments

We extend our gratitude to the National Key Laboratory of Agricultural Equipment Technology at the Chinese Academy of Agricultural Mechanization Sciences Group Co., Ltd. for its support.

Conflicts of Interest

Author Xing Sun was employed by the company Chinese Academy of Agricultural Mechanization Sciences Group Co., Ltd. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  1. Chen, L.; Yang, Y.; Zhang, S. Auto-focusing and auto-exposure method for real-time structured-light 3D imaging. Opt. Eng. 2025, 64, 044105.
  2. Liu, X.; Hu, B.; Yin, Y.; Zhang, Y.; Chen, W.; Yu, Q.; Ding, X.; Han, L. Parallel camera network: Motion-compensation vision measurement method and system for structural displacement. Autom. Constr. 2024, 165, 105559.
  3. Weng, Y.; Lu, Z.; Lu, X.; Spencer, B.F., Jr. Visual-inertial structural acceleration measurement. Comput.-Aided Civ. Infrastruct. Eng. 2022, 37, 1146–1159.
  4. Zhou, X.; Liu, H.; Li, Y.; Ma, M.; Liu, Q.; Lin, J. Analysis of the influence of vibrations on the imaging quality of an integrated TDICCD aerial camera. Opt. Express 2021, 29, 18108–18121.
  5. Ou, J.; Xu, T.; Gan, X.; He, X.; Li, Y.; Qu, J.; Zhang, W. Research on a dynamic calibration method for photogrammetry based on rotary motion. Appl. Sci. 2023, 13, 3317.
  6. Yuan, F.; Xia, Z.; Tang, B.; Yin, Z.; Shao, X.; He, X. Calibration accuracy evaluation method for multi-camera measurement systems. Measurement 2025, 242, 116311.
  7. Gong, X.; Liu, X.; Cao, Z.; Ma, L.; Yang, Y.; Yu, J. Industrial binocular vision system calibration with unknown imaging model. IEEE Trans. Ind. Inform. 2024, 20, 7370–7379.
  8. Tian, X.; Gao, Q.; Luo, Q.; Feng, J. Trinocular camera self-calibration based on spatio-temporal multi-layer optimization. Measurement 2023, 217, 113003.
  9. Meng, Z.; Zhang, H.; Guo, D.; Chen, S.; Huo, J. Defocused calibration for large field-of-view binocular cameras. Autom. Constr. 2023, 147, 104737.
  10. Chen, H.-Y.; Fu, C.-K.; Chuang, J.-H. Variation of camera parameters due to common physical changes in focal length and camera pose. arXiv 2024, arXiv:2409.01171.
  11. Huang, W.; Peng, X.; Li, L.; Li, X. Review on camera calibration methods and recent advances. Prog. Laser Optoelectron. 2023, 60, 9–19. (In Chinese)
  12. Lu, E. Research on Multi-Line Intersection Photogrammetry Method and Its Application in Port Machinery. Ph.D. Thesis, Wuhan University of Technology, Wuhan, China, 2019. (In Chinese)
  13. Chen, X.; Qiu, Q.; Yang, C.; Feng, P. Concept system and application of point group symmetry in mechanical structure design. Symmetry 2020, 12, 1507.
  14. Zhang, X.; Lv, T.; Dan, W.; Zhang, M. High-precision binocular camera calibration method based on a 3D calibration object. Appl. Opt. 2024, 63, 2667–2682.
  15. Su, Z.; Yu, Z.; Guan, B.; Yu, Q.; Zhang, D. High-accurate camera calibration in three-dimensional visual displacement measurements based on ordinary planar pattern and analytical distortion model. Opt. Eng. 2021, 60, 054104.
  16. Zhang, G.; Xu, B.; Lau, D.L.; Zhu, C.; Liu, K. Correcting projector lens distortion in real time with a scale-offset model for structured light illumination. Opt. Express 2022, 30, 24507–24522.
  17. Bu, L.; Wang, R.; Wang, X.; Hou, Z.; Zhou, Y.; Wang, Y.; Bu, F. Calibration method for fringe projection profilometry based on rational function lens distortion model. Measurement 2023, 217, 112996.
  18. Neale, W.T.; Hessel, D.; Terpstra, T. Photogrammetric measurement error associated with lens distortion. In Proceedings of the SAE 2011 World Congress & Exhibition, Detroit, MI, USA, 12–14 April 2011. SAE Technical Paper.
Figure 2. Symmetry cases: (a) Agricultural machinery symmetrical structure. (b) Symmetrical structure at bridge construction site.
Figure 3. Spatial positional relationship within the image plane coordinate system.
Figure 4. Schematic diagram of the distance between the target point and the pointer points.
Figure 5. Schematic diagram of the distance from the target point to the pointer points’ line.
Figure 6. Schematic diagram of the experimental procedure.
Figure 7. GUI of the MATLAB 2015b program.
Figure 8. Indoor control field.
Figure 9. Guidelines for selecting test points.
Figure 10. Image pairs from Experiment Group 1, pair 1.
Figure 11. Image pairs from Experiment Group 1, pair 2.
Figure 12. Bar chart of mean errors.
Figure 13. Box plot of mean errors.
Table 1. Table of symbols.
Name | Explanation | Unit
$v_x, v_y$ | image point coordinate corrections | mm
$(\omega, \varphi, \kappa)$ | exterior orientation elements (orientation angles) | °
$(x, y)$ | image-plane coordinates of the image point | mm
$(x_0, y_0)$ | image-plane coordinates of the principal point | mm
$(X_s, Y_s, Z_s)$ | object-space coordinates of the camera center | mm
$(\bar{X}, \bar{Y}, \bar{Z})$ | image-space coordinates of the object point | mm
$(x), (y)$ | approximate value from the previous iteration | mm
$f$ | focal length | mm
$(X, Y, Z)$ | object-space coordinates of the object point | mm
$\Delta$ | correction operator | —
Table 2. Technical specifications of the test camera.
Parameter | Value | Unit
CCD resolution | 8688 × 5792 | pixels
Pixel size | 4.14 | μm
Table 3. Calibration results of the test camera.
Parameter | Value | Parameter | Value
$f$ (mm) | 54.39 | $q_1$ | −0.0007295
$\Delta x$ (mm) | 17.97 | $q_2$ | −0.0007047
$\Delta y$ (mm) | 11.30 | $s_1$ | 0
$k_1$ | −0.1495 | $s_2$ | 0
$k_2$ | −0.1458 | |
Table 4. Object-space coordinates of control points, indicator points, and test points.
Number | x (mm) | y (mm) | z (mm) | Number | x (mm) | y (mm) | z (mm)
1-L12 | −1157 | 8496 | 1690 | d-R17 | −1758 | 8319 | 1489
2-L20 | −897 | 8166 | 890 | e-R8 | −1602 | 8449 | 887
3-R34 | −2215 | 7935 | 1290 | A-L16 | −1287 | 8645 | 1287
4-L34 | −635 | 7861 | 1291 | B-R11 | −1604 | 8448 | 1488
5-R24 | −1909 | 8192 | 1689 | C-L30 | −763 | 8014 | 1690
6-R19 | −1907 | 8191 | 688 | D-R28 | −2062 | 8063 | 1289
a-L4 | −1287 | 8645 | 1287 | E-R2 | −1450 | 8577 | 886
b-L17 | −1027 | 8339 | 1492 | F-R32 | −2213 | 7934 | 889
c-L9 | −1602 | 8449 | 887 | | |
Table 5. First set of images, first group of image plane coordinates for the first experiment.
No. | $x_l$ (pixel) | $y_l$ (pixel) | $x_r$ (pixel) | $y_r$ (pixel)
1 | 2915.304446 | 4268.459152 | 3685.612012 | 4167.712359
2 | 2465.555500 | 2057.042761 | 3021.638436 | 2260.264324
3 | 6177.994276 | 3223.974262 | 6409.409706 | 3311.038932
4 | 1895.839224 | 3310.269331 | 2239.116946 | 3295.281711
5 | 5092.168316 | 4271.782926 | 5531.238414 | 4264.914888
6 | 5111.900364 | 1575.940862 | 5549.529875 | 1778.117378
a | 3126.264742 | 3158.992650 | 4003.131816 | 3211.693944
b | 2694.859231 | 3784.187451 | 3358.225485 | 3732.738076
c | 2923.355872 | 2642.612996 | 3695.018932 | 2757.767444
d | 4583.215765 | 3717.948734 | 5139.344651 | 3740.670620
e | 4077.974894 | 2117.578605 | 4761.096607 | 2288.814240
Table 6. First set of images, second group of image plane coordinates for the first experiment.
No. | $x_l$ (pixel) | $y_l$ (pixel) | $x_r$ (pixel) | $y_r$ (pixel)
1 | 3014.894649 | 4174.054300 | 3534.511674 | 4105.041802
2 | 2568.849625 | 2151.149045 | 2920.416528 | 2246.963792
3 | 6006.319048 | 3206.436374 | 6220.335682 | 3250.511088
4 | 2028.906022 | 3285.729541 | 2204.155340 | 3261.390220
5 | 5019.210222 | 4177.894540 | 5355.045035 | 4175.430140
6 | 5024.626975 | 1690.513664 | 5363.001270 | 1778.841921
a | 2663.243038 | 3271.599343 | 3052.664024 | 3263.793174
b | 2854.251687 | 2811.443880 | 3316.293994 | 2845.845069
c | 3035.032060 | 2371.887933 | 3565.704843 | 2448.340542
d | 4295.433220 | 3336.269728 | 4748.092594 | 3354.009166
e | 5462.684269 | 3364.295773 | 5743.590179 | 3396.980181
f | 6698.026736 | 3392.742845 | 6847.065216 | 3445.642024
g | 5089.444733 | 2319.666372 | 5420.034036 | 2384.448572
h | 5260.355646 | 2321.877909 | 5566.425056 | 2384.534010
i | 5431.061551 | 2323.431701 | 5715.034374 | 2385.162602
Table 7. Focal length.
No. | $f_0$ | $f_1$ | $f_2$ | $f_3$ | $f_{4,1}$ | $f_{4,2}$ | $f_{4,3}$ | $f_{4,4}$ | $f_{4,5}$ | $f_{4,6}$
1 | 50 | 61.45 | 58.46 | 55.47 | 53.65 | 53.39 | 53.54 | 53.39 | 53.25 | 53.05
2 | 50 | 61.23 | 52.45 | 53.09 | 53.42 | 53.09 | 53.33 | 53.09 | 53.26 | 53.09
3 | 50 | 52.98 | 52.74 | 52.67 | 53.42 | 52.49 | 53.31 | 52.49 | 52.84 | 52.68
4 | 50 | 59.76 | 51.09 | 50.96 | 51.16 | 51.08 | 51.09 | 51.00 | 50.58 | 50.40
5 | 50 | 60.52 | 53.70 | 53.70 | 53.27 | 53.83 | 53.70 | 53.86 | 53.49 | 53.50
Table 8. External orientation elements for the 1st group using the traditional algorithm.
No. | X (mm) | Y (mm) | Z (mm) | ω (°) | φ (°) | κ (°)
Left | 408.18 | 3217.88 | 1020.24 | −1.19 | −4.79 | −6.36
Right | −1385.05 | 2261.01 | 993.19 | −1.55 | 0.28 | −1.30
Table 9. External orientation elements for the 1st group using the geometric constraint algorithm.
No. | X (mm) | Y (mm) | Z (mm) | ω (°) | φ (°) | κ (°)
Left | 325.63 | 2979.92 | 1031.79 | −1.21 | 1.50 | −62.90
Right | −1058.38 | 2369.11 | 1013.67 | −1.65 | 877.75 | 945.29
Table 10. External orientation elements for the 1st group using the optical distortion algorithm.
No. | X (mm) | Y (mm) | Z (mm) | ω (°) | φ (°) | κ (°)
Left | 267.29 | 3266.47 | 1046.40 | −1.21 | 1.51 | −69.18
Right | −1054.23 | 2667.91 | 1036.63 | −1.49 | 1.30 | −63.11
Table 11. External orientation elements of left and right images for test points A to F in the 1st group using composite weighting.
Test Point | No. | X (mm) | Y (mm) | Z (mm) | ω (°) | φ (°) | κ (°)
A | Left | 233.06 | 3438.14 | 1055.77 | −1.94 | 23.50 | −34.62
A | Right | −1051.07 | 2846.57 | 1049.27 | −1.49 | 1.33 | −69.36
B | Left | 228.12 | 3462.93 | 1057.11 | −1.20 | 1.51 | −44.04
B | Right | −1050.58 | 2872.35 | 1051.08 | −1.65 | 61.03 | −3.38
C | Left | 231.08 | 3448.06 | 1056.31 | −1.20 | 1.51 | −69.18
C | Right | −1050.87 | 2856.88 | 1050.00 | −1.49 | 1.33 | −69.36
D | Left | 228.10 | 3463.03 | 1057.12 | −1.94 | 4.65 | −34.62
D | Right | −1050.57 | 2872.45 | 1051.08 | −1.65 | 73.59 | 9.19
E | Left | 225.48 | 3476.20 | 1057.83 | −1.20 | 7.79 | −56.61
E | Right | −1050.31 | 2886.16 | 1052.04 | −1.65 | −14.37 | −85.06
F | Left | 221.73 | 3495.05 | 1058.85 | −1.20 | 3777.71 | 3725.87
F | Right | −1049.93 | 2905.76 | 1053.41 | 4.80 | −42.64 | −245.28
Table 12. Calculation errors of test points using the traditional algorithm in the first experiment.
No. | X (mm) | Y (mm) | Z (mm) | Distance Error (mm) | X Error (mm) | Y Error (mm) | Z Error (mm)
A-L16 | −1026.45 | 8327.16 | 1291.06 | 6.30 | 3.45 | 5.16 | 1.06
B-R11 | −1604.33 | 8460.12 | 1488.49 | 12.13 | 0.33 | 12.12 | 0.49
C-L30 | −764.41 | 8012.07 | 1696.20 | 6.64 | 1.41 | 1.93 | 6.20
D-R28 | −2064.24 | 8067.13 | 1289.43 | 4.72 | 2.24 | 4.13 | 0.43
E-R2 | −1451.61 | 8594.38 | 891.37 | 18.26 | 1.61 | 17.38 | 5.37
F-R32 | −2216.75 | 7937.13 | 889.17 | 4.89 | 3.75 | 3.13 | 0.17
Average error (mm) | | | | 8.82 | 2.13 | 7.31 | 2.29
Table 13. Calculation errors of test points using the traditional algorithm in the first experiment (simulated zoom).
No. | X (mm) | Y (mm) | Z (mm) | Distance Error (mm) | X Error (mm) | Y Error (mm) | Z Error (mm)
A-L16 | −991.56 | 8077.33 | 1270.39 | 247.46 | 31.44 | 244.67 | 19.61
B-R11 | −1522.72 | 8233.91 | 1449.41 | 232.23 | 81.28 | 214.09 | 38.59
C-L30 | −760.57 | 7832.44 | 1638.94 | 188.62 | 2.43 | 181.56 | 51.06
D-R28 | −1986.06 | 8013.03 | 1273.37 | 92.24 | 75.94 | 49.97 | 15.63
E-R2 | −1371.11 | 8303.76 | 910.50 | 285.46 | 78.89 | 273.24 | 24.50
F-R32 | −2142.74 | 7934.28 | 899.75 | 71.08 | 70.26 | 0.28 | 10.75
Average error (mm) | | | | 186.18 | 56.71 | 160.64 | 26.69
Table 14. Calculation errors of test points using the geometry-constrained algorithm in the first experiment.
No. | X (mm) | Y (mm) | Z (mm) | Distance Error (mm) | X Error (mm) | Y Error (mm) | Z Error (mm)
A-L16 | −1025.03 | 8327.97 | 1290.68 | 6.35 | 2.03 | 5.97 | 0.68
B-R11 | −1604.55 | 8460.66 | 1488.69 | 12.69 | 0.55 | 12.66 | 0.69
C-L30 | −763.66 | 8013.00 | 1693.72 | 3.91 | 0.66 | 1.00 | 3.72
D-R28 | −2063.52 | 8066.67 | 1289.00 | 3.97 | 1.52 | 3.67 | 0.00
E-R2 | −1451.55 | 8595.64 | 889.82 | 19.09 | 1.55 | 18.64 | 3.82
F-R32 | −2214.84 | 7935.82 | 888.86 | 2.59 | 1.84 | 1.82 | 0.14
Average error (mm) | | | | 8.10 | 1.36 | 7.29 | 1.51
Table 15. Calculation errors of test points using the optical-distortion-incorporated geometry-constrained algorithm in the first experiment.
No. | X (mm) | Y (mm) | Z (mm) | Distance Error (mm) | X Error (mm) | Y Error (mm) | Z Error (mm)
A-L16 | −1024.44 | 8327.76 | 1290.30 | 5.95 | 1.44 | 5.76 | 0.30
B-R11 | −1606.11 | 8462.07 | 1489.08 | 14.27 | 2.11 | 14.07 | 1.08
C-L30 | −763.65 | 8014.35 | 1692.00 | 2.13 | 0.65 | 0.35 | 2.00
D-R28 | −2063.33 | 8065.94 | 1289.00 | 3.23 | 1.33 | 2.94 | 0.00
E-R2 | −1452.85 | 8595.86 | 887.93 | 19.17 | 2.85 | 18.86 | 1.93
F-R32 | −2213.00 | 7932.98 | 889.18 | 1.04 | 0.00 | 1.02 | 0.18
Average error (mm) | | | | 7.63 | 1.40 | 7.17 | 0.92
Table 16. Calculation errors of test points using the composite weighted algorithm incorporating geometric constraints, optical distortion, and spatial distance metrics in the first experiment.
No. | X (mm) | Y (mm) | Z (mm) | Distance Error (mm) | X Error (mm) | Y Error (mm) | Z Error (mm)
A-L16 | −1024.04 | 8327.46 | 1289.88 | 5.56 | 1.04 | 5.46 | 0.12
B-R11 | −1607.24 | 8463.10 | 1489.25 | 15.49 | 3.24 | 15.10 | 1.25
C-L30 | −763.58 | 8014.70 | 1690.49 | 1.03 | 0.58 | 0.70 | 0.49
D-R28 | −2062.95 | 8065.08 | 1289.06 | 2.29 | 0.95 | 2.08 | 0.06
E-R2 | −1453.32 | 8595.11 | 886.32 | 18.41 | 3.32 | 18.11 | 0.32
F-R32 | −2212.54 | 7935.71 | 889.20 | 1.78 | 0.46 | 1.71 | 0.20
Average error (mm) | | | | 7.43 | 1.60 | 7.19 | 0.40
