Article

Locations of Non-Cooperative Targets Based on Binocular Vision Intersection and Its Error Analysis

by Kui Shi 1,2,*, Hongtao Yang 1, Jia Feng 1,2, Guangsen Liu 1 and Weining Chen 1

1 Xi’an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi’an 710119, China
2 University of Chinese Academy of Sciences, Beijing 100049, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(18), 9867; https://doi.org/10.3390/app15189867
Submission received: 30 July 2025 / Revised: 4 September 2025 / Accepted: 5 September 2025 / Published: 9 September 2025

Abstract

The precise location of unknown non-cooperative targets is a long-standing technical problem that urgently needs to be solved in disaster relief and emergency rescue. An imaging model for photographing a non-cooperative target was established based on binocular vision forward intersection. The collinear equations representing the spatial position relationship between the target and its two images were obtained through coordinate system transformation, and the system of equations for calculating the geographic coordinates of the target was derived, which realized the geo-location of an unknown non-cooperative target with no control points and no active source. The composition and sources of the error of this target location method were analyzed, and the equation for calculating the total error of the target location was obtained according to error synthesis theory. The accuracy of the target location was predicted: when the elevation difference between the camera and the target is 3 km, the location accuracy is 15.5 m. The same ground target was imaged by a certain type of aerial camera at different positions 3097 m above the ground, and a target location verification experiment was completed. The longitude and latitude of the target were compared with the true geographic longitude and latitude, and the location error of the verification experiment was calculated to be 16.3 m. This work provides a theoretical basis and methods for the precise location of unknown non-cooperative targets and proposes specific measures to improve the accuracy of target location.

1. Introduction

Precise localization of unknown non-cooperative targets is widely used in many fields, such as reconnaissance, monitoring, emergency rescue, topographic mapping, and investigations [1,2]. The position information of a non-cooperative target can only be obtained by direct measurement with sensors; it cannot be obtained by cooperative means (such as the target actively reporting its own position, radio navigation aids, or equipping the target with a GPS receiver). In recent years, non-cooperative target location technology has been applied more and more widely, with several clear trends: a location system can simultaneously locate and track multiple targets in real time; high-precision, rapid, passive location is performed without control points; multiple cameras form stereo images by forward intersection, enabling three-dimensional location of targets; cameras are combined with laser rangefinders, synthetic aperture radars, and other means to achieve high-precision location; and targets are first located roughly, with key targets then located at high precision in a later stage [3,4,5,6].
With target location based on stereo images, multiple non-cooperative passive targets in the stereo images can be precisely located at the same time without control points; the location data can be processed quickly, and real-time location can be achieved. The technology of locating targets from stereo images captured by multi-camera aerial systems is developing rapidly, and many models of multi-camera stereo vision aerial cameras have appeared in China and abroad. Among them, the A3 digital aerial photogrammetry system designed and manufactured by the Israeli company VisionMap uses a rigidly fixed combination of dual measurement digital cameras. Through many overlapping sub-images, and by utilizing multi-view, multi-baseline matching technology, it can obtain a large amount of matching data for the same ground target. The resulting triangular network is very dense, and many redundant observations are obtained at the same time; through a least-squares solution, the results converge close to the true value with high precision [7,8].
Due to factors such as platform stability errors, platform pointing errors, Position and Orientation System (POS) measurement errors, and calibration errors, the obtained exterior orientation elements of an image will contain errors. There will also be errors in the calibration of the interior orientation elements and in the measurement of image points. These errors in the exterior and interior orientation elements degrade the target location accuracy to different degrees. To improve the accuracy of target location, it is therefore necessary to analyze the composition of the location error in detail and identify its main sources, which is of great significance for improving location accuracy [9,10].
To achieve high-precision target location in long-range oblique photography, a new method of high-precision target location based on multi-perspective observation was introduced by Liu [11]. First, feature matching between the images was used to acquire landmarks. Then, to obtain the geo-locations of these landmarks, a process of map-building is performed. Third, the spatial location of the landmarks and the camera parameters are optimized using bundle adjustment. Last, with the optimization result of camera parameters, the target location is calculated using a location method based on the line-of-sight.
To achieve real-time, high-precision, and long-range target location, a new location method and a set of work patterns were proposed by Zhang [12]. First, using the traditional location method, the rough location of a target is calculated. Then, the target with its rough location is re-imaged. Third, by processing the re-projection error, a weighted filtering algorithm is proposed, and the optimized target location is obtained. Last, the estimation of the target location can converge to the true value after repeating the process mentioned above several times.
In this paper, based on binocular vision forward intersection imaging of a non-cooperative target, a target location model was established without control points, two groups of photographic imaging collinear equations were obtained, and the calculation method for the geographical coordinates of the target was derived [13]. The composition and sources of the target location error were analyzed, and the accuracy of the target location was predicted using error synthesis theory. A verification experiment of the target location method was completed based on images obtained by a certain aerial camera in a flight experiment and the POS measurements at the moments of shooting. The longitude and latitude of geographical target A, located in the overlapping area of two images, were calculated, and the location accuracy of the verification experiment was obtained by comparison with the true longitude and latitude of target A.

2. Establishment of Target Location Model

2.1. Establishment of Coordinate System and Definition of Internal and External Orientation Elements

2.1.1. Camera Coordinate System

For camera 1, the camera coordinate system C1-x1y1z1 is a three-dimensional rectangular coordinate system with the photographic center C1 as the origin and the main optical axis of the camera as the z1-axis. The x1- and y1-axes are parallel to the transverse and longitudinal directions of the imaging detector of camera 1, respectively. The x1-, y1-, and z1-axes conform to the right-hand rule, and the z1-axis points from the target area to camera 1 and passes through the main point of the imaging detector. Taking the center O1 of the imaging detector as the origin and the transverse and longitudinal directions of the detector as the x- and y-axes, the two-dimensional coordinate system of image 1 is established. The x- and y-axes of the image 1 coordinate system are therefore parallel to, and in the same direction as, the x1- and y1-axes of the camera 1 coordinate system. In image 1, the image point a1 of geographic target A has the coordinates (x1, y1) in the image 1 coordinate system. The coordinates of the main point of camera 1 in the image 1 coordinate system are (x1z, y1z); then, the image point a1 has the coordinates (x1 − x1z, y1 − y1z, −f1) in the camera 1 coordinate system, where f1 is the focal length of camera 1 [14].
Similarly, for camera 2, the camera 2 coordinate system C2-x2y2z2 and the two-dimensional coordinate system of image 2 (x2, y2) can be established, and the coordinates of the image point a2 of geographical target A in the camera 2 coordinate system are (x2 − x2z, y2 − y2z, −f2), where f2 is the focal length of camera 2.

2.1.2. POS Measurement Coordinate System

To achieve target location, the exterior orientation elements at the time of imaging must be obtained, and a POS is usually rigidly mounted with the camera to measure them. The POS mainly consists of two parts: a Global Positioning System (GPS) receiver and an Inertial Measurement Unit (IMU).
For camera 1, the POS 1 measurement coordinate system is P1-x1py1pz1p, and the directions of the coordinate axes are defined on the top surface of the IMU 1, as shown in the structure dimension diagram in the IMU manual. The origin of the coordinates is point P1, which is the measurement center of the POS 1. It can be seen from the actual installation position and direction of the POS that, without considering the IMU installation errors, the x1-, y1-, and z1-axes of the camera 1 coordinate system C1-x1y1z1 are parallel and in the same direction as the x1p-, y1p-, and z1p axes of the POS 1 measurement coordinate system. The coordinates of the image point a 1 in the POS 1 measurement coordinate system (P1-x1py1pz1p) are (X1ap, Y1ap, Z1ap).
For camera 2, the POS 2 measurement coordinate system P2-x2py2pz2p can be similarly established, where the coordinate origin is the measurement center point P2 of POS 2, and the coordinates of the image point a 2 in the POS 2 measurement coordinate system P2-x2py2pz2p are (X2ap, Y2ap, Z2ap).

2.1.3. POS Navigation Coordinate System

The POS 1 navigation coordinate system W1-x1wy1wz1w is the geographical coordinate system of camera 1, where the x1w-axis points to the east, the y1w-axis points to the north, and the z1w-axis points to the sky. The origin of the POS 1 navigation coordinate system is P1, the measurement center of POS 1, which is the same as the origin of the POS 1 measurement coordinate system P1-x1py1pz1p. Geographical target A has the coordinates (X1, Y1, Z1) in the POS 1 navigation coordinate system W1-x1wy1wz1w, and the image point a 1 has the coordinates (X1aw, Y1aw, Z1aw) in the POS 1 navigation coordinate system W1-x1wy1wz1w.
For camera 2, the POS 2 navigation coordinate system W2-x2wy2wz2w can be established in the same way. Coordinates of geographical target A are (X2, Y2, Z2) in the POS 2 navigation coordinate system W2-x2wy2wz2w, and coordinates of image point a 2 are (X2aw, Y2aw, Z2aw) in the POS 2 navigation coordinate system W2-x2wy2wz2w.
From the imaging principle of binocular vision forward intersection, it can be found that there are two sets of photographic imaging coordinate system transformation models. In the target location model established in this paper, the coordinate systems involved are the camera 1 coordinate system (C1-x1y1z1), camera 2 coordinate system (C2-x2y2z2), image 1 coordinate system (x1, y1), image 2 coordinate system (x2, y2), POS 1 measurement coordinate system (P1-x1py1pz1p), POS 2 measurement coordinate system (P2-x2py2pz2p), POS 1 navigation coordinate system (W1-x1wy1wz1w), and POS 2 navigation coordinate system (W2-x2wy2wz2w).

2.2. Establishment of Coordinate Systems Transformation Relations

2.2.1. Transformation Between POS Measurement Coordinate System and Camera Coordinate System

The IMU in the POS is usually rigidly connected to the camera, so the distance between the origin of the POS measurement coordinate system and the origin of the camera coordinate system is less than 0.1 m, much less than the GPS errors of the POS; the geographical location of the origin of the POS measurement coordinate system can therefore be considered approximately the same as that of the origin of the camera coordinate system.
The three coordinate axes of the POS measurement coordinate system are parallel to the three coordinate axes of the camera coordinate system in the absence of installation errors, but due to the existence of actual assembly errors, there is a small assembly error angle between each of the three coordinate axes of the POS measurement coordinate system and the corresponding coordinate axis of the camera coordinate system. By calibrating the IMU assembly errors, the assembly error angles between the POS measurement coordinate system and the camera coordinate system can be measured [15].
For camera 1, the assembly error angles of IMU 1 are (Ψ1M, θ1M, γ1M). After the camera 1 coordinate system C1-x1y1z1 is rotated by the Ψ1M, θ1M, and γ1M angles in counterclockwise directions around the z1-, x1- and y1-axes in turn, the POS 1 measurement coordinate system P1-x1py1pz1p can be obtained.
Then, in the POS 1 measurement coordinate system P1-x1py1pz1p, the coordinates (X1ap, Y1ap, Z1ap) of point a 1 can be concluded as the following:
$$
\begin{bmatrix} X_{1ap} \\ Y_{1ap} \\ Z_{1ap} \end{bmatrix}
=
\begin{bmatrix} \cos\psi_{1M} & -\sin\psi_{1M} & 0 \\ \sin\psi_{1M} & \cos\psi_{1M} & 0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\theta_{1M} & -\sin\theta_{1M} \\ 0 & \sin\theta_{1M} & \cos\theta_{1M} \end{bmatrix}
\begin{bmatrix} \cos\gamma_{1M} & 0 & \sin\gamma_{1M} \\ 0 & 1 & 0 \\ -\sin\gamma_{1M} & 0 & \cos\gamma_{1M} \end{bmatrix}
\begin{bmatrix} x_1 - x_{1z} \\ y_1 - y_{1z} \\ -f_1 \end{bmatrix}
=
\begin{bmatrix}
\cos\psi_{1M}\cos\gamma_{1M}-\sin\psi_{1M}\sin\theta_{1M}\sin\gamma_{1M} & -\sin\psi_{1M}\cos\theta_{1M} & \cos\psi_{1M}\sin\gamma_{1M}+\sin\psi_{1M}\sin\theta_{1M}\cos\gamma_{1M} \\
\sin\psi_{1M}\cos\gamma_{1M}+\cos\psi_{1M}\sin\theta_{1M}\sin\gamma_{1M} & \cos\psi_{1M}\cos\theta_{1M} & \sin\psi_{1M}\sin\gamma_{1M}-\cos\psi_{1M}\sin\theta_{1M}\cos\gamma_{1M} \\
-\cos\theta_{1M}\sin\gamma_{1M} & \sin\theta_{1M} & \cos\theta_{1M}\cos\gamma_{1M}
\end{bmatrix}
\begin{bmatrix} x_1 - x_{1z} \\ y_1 - y_{1z} \\ -f_1 \end{bmatrix}
\tag{1}
$$
Similarly, for camera 2, the coordinates of the image point a 2 in the POS 2 measurement coordinate system P2-x2py2pz2p can be concluded as the following:
$$
\begin{bmatrix} X_{2ap} \\ Y_{2ap} \\ Z_{2ap} \end{bmatrix}
=
\begin{bmatrix}
\cos\psi_{2M}\cos\gamma_{2M}-\sin\psi_{2M}\sin\theta_{2M}\sin\gamma_{2M} & -\sin\psi_{2M}\cos\theta_{2M} & \cos\psi_{2M}\sin\gamma_{2M}+\sin\psi_{2M}\sin\theta_{2M}\cos\gamma_{2M} \\
\sin\psi_{2M}\cos\gamma_{2M}+\cos\psi_{2M}\sin\theta_{2M}\sin\gamma_{2M} & \cos\psi_{2M}\cos\theta_{2M} & \sin\psi_{2M}\sin\gamma_{2M}-\cos\psi_{2M}\sin\theta_{2M}\cos\gamma_{2M} \\
-\cos\theta_{2M}\sin\gamma_{2M} & \sin\theta_{2M} & \cos\theta_{2M}\cos\gamma_{2M}
\end{bmatrix}
\begin{bmatrix} x_2 - x_{2z} \\ y_2 - y_{2z} \\ -f_2 \end{bmatrix}
\tag{2}
$$
In Equation (2), (Ψ2M, θ2M, γ2M) are the assembly error angles of IMU 2.
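To make the rotation sequence concrete, the following is a minimal NumPy sketch of Equation (1). All names (rz, rx, ry, camera_to_pos_measurement) are ours, angles are assumed to be in radians, and lengths in meters.

```python
import numpy as np

def rz(ang):
    """Counterclockwise rotation about the z-axis by ang (radians)."""
    c, s = np.cos(ang), np.sin(ang)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def rx(ang):
    """Counterclockwise rotation about the x-axis by ang (radians)."""
    c, s = np.cos(ang), np.sin(ang)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def ry(ang):
    """Counterclockwise rotation about the y-axis by ang (radians)."""
    c, s = np.cos(ang), np.sin(ang)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def camera_to_pos_measurement(x, y, xz, yz, f, psi_m, theta_m, gamma_m):
    """Equation (1): the image point (x, y), referred to the principal point
    (xz, yz) and focal length f, expressed in the POS measurement frame
    after the z-x-y rotation by the IMU assembly error angles."""
    v_cam = np.array([x - xz, y - yz, -f])
    return rz(psi_m) @ rx(theta_m) @ ry(gamma_m) @ v_cam
```

The same three helpers, composed as rz(ψ1) @ rx(θ1) @ ry(γ1), give the direction cosine matrix of Equations (3) and (4) below.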

2.2.2. Transformation Between POS Navigation Coordinate System and POS Measurement Coordinate System

The POS posture angles and their effective ranges are defined as follows:
The heading angle Ψ is the angle between the projection of the y-axis of the POS measurement coordinate system on the horizontal plane (x-y plane) of the POS navigation coordinate system and the y-axis of the POS navigation coordinate system, starting from the y-axis of the POS navigation coordinate system. Ψ is positive when it is counted in the “counterclockwise direction”, and the effective range is [0°, 360°].
The pitch angle θ is the angle between the y-axis of the POS measurement coordinate system and the horizontal plane (x-y plane) of the POS navigation coordinate system. When the y-axis of the POS measurement coordinate system points above the horizontal plane, θ is positive, otherwise it is negative, and the effective range is [−90°, 90°].
The roll angle γ is defined as positive when the IMU tilts right, with the y-axis of the POS measurement coordinate system pointing forward and the x-axis pointing right; γ is negative when the IMU tilts left. The effective range is [−180°, 180°].
The inner orientation elements of camera 1 can be defined as (x1z, y1z, f1); the angular outer orientation elements of camera 1 are (Ψ1, θ1, γ1), where Ψ1 is heading angle 1, θ1 is pitch angle 1, and γ1 is roll angle 1. The inner orientation elements of camera 2 are (x2z, y2z, f2); the angular outer orientation elements of camera 2 are (Ψ2, θ2, γ2), where Ψ2 is heading angle 2, θ2 is pitch angle 2, and γ2 is roll angle 2 [16].
For camera 1, after the POS 1 navigation coordinate system (W1-x1wy1wz1w) is successively rotated counterclockwise by Ψ1, θ1, and γ1 around the z1w-, x1w-, and y1w-axes, the POS 1 measurement coordinate system P1-x1py1pz1p is obtained. Thus, the transformation between the coordinates (X1aw, Y1aw, Z1aw) of image point a1 in the POS 1 navigation coordinate system and its coordinates (X1ap, Y1ap, Z1ap) in the POS 1 measurement coordinate system is as follows [17,18]:
$$
\begin{bmatrix} X_{1aw} \\ Y_{1aw} \\ Z_{1aw} \end{bmatrix}
=
\begin{bmatrix} \cos\psi_1 & -\sin\psi_1 & 0 \\ \sin\psi_1 & \cos\psi_1 & 0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\theta_1 & -\sin\theta_1 \\ 0 & \sin\theta_1 & \cos\theta_1 \end{bmatrix}
\begin{bmatrix} \cos\gamma_1 & 0 & \sin\gamma_1 \\ 0 & 1 & 0 \\ -\sin\gamma_1 & 0 & \cos\gamma_1 \end{bmatrix}
\begin{bmatrix} X_{1ap} \\ Y_{1ap} \\ Z_{1ap} \end{bmatrix}
=
\begin{bmatrix} a_{11}^1 & a_{12}^1 & a_{13}^1 \\ a_{21}^1 & a_{22}^1 & a_{23}^1 \\ a_{31}^1 & a_{32}^1 & a_{33}^1 \end{bmatrix}
\begin{bmatrix} X_{1ap} \\ Y_{1ap} \\ Z_{1ap} \end{bmatrix}
\tag{3}
$$
where
$$
\begin{aligned}
a_{11}^1 &= \cos\psi_1\cos\gamma_1 - \sin\psi_1\sin\theta_1\sin\gamma_1, &
a_{12}^1 &= -\sin\psi_1\cos\theta_1, &
a_{13}^1 &= \cos\psi_1\sin\gamma_1 + \sin\psi_1\sin\theta_1\cos\gamma_1, \\
a_{21}^1 &= \sin\psi_1\cos\gamma_1 + \cos\psi_1\sin\theta_1\sin\gamma_1, &
a_{22}^1 &= \cos\psi_1\cos\theta_1, &
a_{23}^1 &= \sin\psi_1\sin\gamma_1 - \cos\psi_1\sin\theta_1\cos\gamma_1, \\
a_{31}^1 &= -\cos\theta_1\sin\gamma_1, &
a_{32}^1 &= \sin\theta_1, &
a_{33}^1 &= \cos\theta_1\cos\gamma_1
\end{aligned}
\tag{4}
$$
Similarly, for camera 2, the coordinates (X2aw, Y2aw, Z2aw) of the image point a 2 in the POS 2 navigation coordinate system W2-x2wy2wz2w can be concluded as follows:
$$
\begin{bmatrix} X_{2aw} \\ Y_{2aw} \\ Z_{2aw} \end{bmatrix}
=
\begin{bmatrix}
\cos\psi_2\cos\gamma_2-\sin\psi_2\sin\theta_2\sin\gamma_2 & -\sin\psi_2\cos\theta_2 & \cos\psi_2\sin\gamma_2+\sin\psi_2\sin\theta_2\cos\gamma_2 \\
\sin\psi_2\cos\gamma_2+\cos\psi_2\sin\theta_2\sin\gamma_2 & \cos\psi_2\cos\theta_2 & \sin\psi_2\sin\gamma_2-\cos\psi_2\sin\theta_2\cos\gamma_2 \\
-\cos\theta_2\sin\gamma_2 & \sin\theta_2 & \cos\theta_2\cos\gamma_2
\end{bmatrix}
\begin{bmatrix} X_{2ap} \\ Y_{2ap} \\ Z_{2ap} \end{bmatrix}
=
\begin{bmatrix} a_{11}^2 & a_{12}^2 & a_{13}^2 \\ a_{21}^2 & a_{22}^2 & a_{23}^2 \\ a_{31}^2 & a_{32}^2 & a_{33}^2 \end{bmatrix}
\begin{bmatrix} X_{2ap} \\ Y_{2ap} \\ Z_{2ap} \end{bmatrix}
\tag{5}
$$
where
$$
\begin{aligned}
a_{11}^2 &= \cos\psi_2\cos\gamma_2 - \sin\psi_2\sin\theta_2\sin\gamma_2, &
a_{12}^2 &= -\sin\psi_2\cos\theta_2, &
a_{13}^2 &= \cos\psi_2\sin\gamma_2 + \sin\psi_2\sin\theta_2\cos\gamma_2, \\
a_{21}^2 &= \sin\psi_2\cos\gamma_2 + \cos\psi_2\sin\theta_2\sin\gamma_2, &
a_{22}^2 &= \cos\psi_2\cos\theta_2, &
a_{23}^2 &= \sin\psi_2\sin\gamma_2 - \cos\psi_2\sin\theta_2\cos\gamma_2, \\
a_{31}^2 &= -\cos\theta_2\sin\gamma_2, &
a_{32}^2 &= \sin\theta_2, &
a_{33}^2 &= \cos\theta_2\cos\gamma_2
\end{aligned}
\tag{6}
$$

3. Principle of Forward Intersection of Binocular Vision

The schematic diagram of the forward intersection of binocular vision is shown in Figure 1: target A appears in the images of both camera 1 and camera 2, and the two images form a pair of stereo images. For camera 1, in the POS 1 navigation coordinate system W1-x1wy1wz1w, the coordinates (X1, Y1, Z1) of geographical target A can be derived from the similar-triangle geometry of photographic imaging in the spatial rectangular coordinate system, and the following proportional relationship is satisfied:
$$
\frac{X_1}{X_{1aw}} = \frac{Y_1}{Y_{1aw}} = \frac{Z_1}{Z_{1aw}}
\tag{7}
$$
The classical collinear equation for photogrammetry obtained by combining Equations (3) and (7) is as follows [19]:
$$
\frac{X_1}{Z_1} = \frac{a_{11}^1 X_{1ap} + a_{12}^1 Y_{1ap} + a_{13}^1 Z_{1ap}}{a_{31}^1 X_{1ap} + a_{32}^1 Y_{1ap} + a_{33}^1 Z_{1ap}}, \qquad
\frac{Y_1}{Z_1} = \frac{a_{21}^1 X_{1ap} + a_{22}^1 Y_{1ap} + a_{23}^1 Z_{1ap}}{a_{31}^1 X_{1ap} + a_{32}^1 Y_{1ap} + a_{33}^1 Z_{1ap}}
\tag{8}
$$
From the above collinear equation, image 1 and image 2 together yield the four equations shown in Equation (9), from which the three unknown position coordinates of target A are to be solved.
$$
\begin{aligned}
\frac{X_1}{Z_1} &= \frac{a_{11}^1 X_{1ap} + a_{12}^1 Y_{1ap} + a_{13}^1 Z_{1ap}}{a_{31}^1 X_{1ap} + a_{32}^1 Y_{1ap} + a_{33}^1 Z_{1ap}}, &
\frac{Y_1}{Z_1} &= \frac{a_{21}^1 X_{1ap} + a_{22}^1 Y_{1ap} + a_{23}^1 Z_{1ap}}{a_{31}^1 X_{1ap} + a_{32}^1 Y_{1ap} + a_{33}^1 Z_{1ap}} \\
\frac{X_2}{Z_2} &= \frac{a_{11}^2 X_{2ap} + a_{12}^2 Y_{2ap} + a_{13}^2 Z_{2ap}}{a_{31}^2 X_{2ap} + a_{32}^2 Y_{2ap} + a_{33}^2 Z_{2ap}}, &
\frac{Y_2}{Z_2} &= \frac{a_{21}^2 X_{2ap} + a_{22}^2 Y_{2ap} + a_{23}^2 Z_{2ap}}{a_{31}^2 X_{2ap} + a_{32}^2 Y_{2ap} + a_{33}^2 Z_{2ap}}
\end{aligned}
\tag{9}
$$
In the POS 1 navigation coordinate system W1-x1wy1wz1w and the POS 2 navigation coordinate system W2-x2wy2wz2w, the x1w- and x2w-axes point east, the y1w- and y2w-axes point north, and the z1w- and z2w-axes point to the sky, so the three coordinate axes of the POS 1 navigation coordinate system are parallel to those of the POS 2 navigation coordinate system. P2 is the coordinate origin of the POS 2 navigation coordinate system, and the coordinates of P2 in the POS 1 navigation coordinate system are (XP12, YP12, ZP12); thus,
$$
X_2 = X_1 - X_{P12}, \qquad Y_2 = Y_1 - Y_{P12}, \qquad Z_2 = Z_1 - Z_{P12}
\tag{10}
$$
The following equations can be concluded by combining Equations (9) and (10):
$$
\begin{aligned}
\frac{X_1}{Z_1} &= \frac{a_{11}^1 X_{1ap} + a_{12}^1 Y_{1ap} + a_{13}^1 Z_{1ap}}{a_{31}^1 X_{1ap} + a_{32}^1 Y_{1ap} + a_{33}^1 Z_{1ap}} \\
\frac{Y_1}{Z_1} &= \frac{a_{21}^1 X_{1ap} + a_{22}^1 Y_{1ap} + a_{23}^1 Z_{1ap}}{a_{31}^1 X_{1ap} + a_{32}^1 Y_{1ap} + a_{33}^1 Z_{1ap}} \\
\frac{X_1 - X_{P12}}{Z_1 - Z_{P12}} &= \frac{a_{11}^2 X_{2ap} + a_{12}^2 Y_{2ap} + a_{13}^2 Z_{2ap}}{a_{31}^2 X_{2ap} + a_{32}^2 Y_{2ap} + a_{33}^2 Z_{2ap}} \\
\frac{Y_1 - Y_{P12}}{Z_1 - Z_{P12}} &= \frac{a_{21}^2 X_{2ap} + a_{22}^2 Y_{2ap} + a_{23}^2 Z_{2ap}}{a_{31}^2 X_{2ap} + a_{32}^2 Y_{2ap} + a_{33}^2 Z_{2ap}}
\end{aligned}
\tag{11}
$$
Equation (11) is an overdetermined system: four equations in the three unknowns X1, Y1, and Z1. First, the first and third equations are combined to obtain one solution of Z1, and then the second and fourth equations are combined to obtain a second solution of Z1. The average of the two solutions is used as the final value of the geographical location coordinate Z1 of target A, and this final Z1 is then substituted into the first and second equations to obtain the final solutions of X1 and Y1. Using all four equations in this way improves the accuracy of the location of target A; a minimal code sketch of the procedure is given below.
Thereby, the coordinate values (X1, Y1, Z1) of geographic target A in the POS 1 navigation coordinate system W1-x1wy1wz1w are obtained, and the geographic location of target A is obtained.
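The averaging procedure just described reduces to a few lines of code. Below is a minimal sketch (names ours), where k1 to k4 denote the four right-hand-side ratios of Equation (11); the demonstration values are the ones obtained later in the verification experiment (Equations (30) and (33)).

```python
def forward_intersect(k1, k2, k3, k4, xp12, yp12, zp12):
    """Solve Equation (11): two independent solutions of Z1 are computed
    and averaged, then X1 and Y1 follow from the first two equations."""
    z1_a = (xp12 - k3 * zp12) / (k1 - k3)  # from the 1st and 3rd equations
    z1_b = (yp12 - k4 * zp12) / (k2 - k4)  # from the 2nd and 4th equations
    z1 = 0.5 * (z1_a + z1_b)
    return k1 * z1, k2 * z1, z1            # (X1, Y1, Z1)

# Ratios and baseline offsets from Section 5 (Equations (30) and (33)):
print(forward_intersect(0.0103, -0.3419, 0.0088, -0.3410, -6.03, 1.88, 0.0))
# ≈ (-31.5, 1044.5, -3055), matching Equation (35)
```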

4. Analysis and Estimation of Location Error

4.1. Analysis of Location Error

Since the IMU assembly angle errors are systematic errors, their precise values can be obtained by ground calibration. In the process of calculating the target position, the location error caused by the IMU assembly angle errors can be eliminated by modifying the angular outer orientation elements, so these errors do not introduce a large target location error. Since they are extremely small angles, in the error analysis of target location the three IMU assembly angle errors (Ψ1M, θ1M, γ1M) can be approximated as zero to simplify the calculation. Equation (1) then simplifies as follows [20]:
$$
\begin{bmatrix} X_{1ap} \\ Y_{1ap} \\ Z_{1ap} \end{bmatrix}
=
\begin{bmatrix} x_1 - x_{1z} \\ y_1 - y_{1z} \\ -f_1 \end{bmatrix}
\tag{12}
$$
The following equation can be concluded by combining Equations (8) and (12):
$$
X_1 = \frac{a_{11}^1(x_1 - x_{1z}) + a_{12}^1(y_1 - y_{1z}) - a_{13}^1 f_1}{a_{31}^1(x_1 - x_{1z}) + a_{32}^1(y_1 - y_{1z}) - a_{33}^1 f_1}\,Z_1, \qquad
Y_1 = \frac{a_{21}^1(x_1 - x_{1z}) + a_{22}^1(y_1 - y_{1z}) - a_{23}^1 f_1}{a_{31}^1(x_1 - x_{1z}) + a_{32}^1(y_1 - y_{1z}) - a_{33}^1 f_1}\,Z_1
\tag{13}
$$
Z1 = HA − H1, where HA is the elevation of geographical target A, and H1 is the elevation of P1, the measurement center of POS 1. Target location errors arise in both the x1w- and y1w-axis directions. The error in the x1w-axis direction is determined first; from Equation (13), it follows that:
$$
X_1 = \frac{a_{11}^1(x_1 - x_{1z}) + a_{12}^1(y_1 - y_{1z}) - a_{13}^1 f_1}{a_{31}^1(x_1 - x_{1z}) + a_{32}^1(y_1 - y_{1z}) - a_{33}^1 f_1}\,(H_A - H_1)
\tag{14}
$$
Based on Equation (14), the error of the target location in the x1w-axis direction of the POS 1 navigation coordinate system can be obtained by taking the partial derivatives of X1:
$$
M_{X1} = \left[ \delta X_{P1}^2
+ \left(\frac{\partial X_1}{\partial H_1}\delta H_1\right)^2
+ \left(\frac{\partial X_1}{\partial \psi_1}\delta \psi_1\right)^2
+ \left(\frac{\partial X_1}{\partial \theta_1}\delta \theta_1\right)^2
+ \left(\frac{\partial X_1}{\partial \gamma_1}\delta \gamma_1\right)^2
+ \left(\frac{\partial X_1}{\partial f_1}\delta f_1\right)^2
+ \left(\frac{\partial X_1}{\partial x_1}\delta x_1\right)^2
+ \left(\frac{\partial X_1}{\partial y_1}\delta y_1\right)^2
+ \left(\frac{\partial X_1}{\partial x_{1z}}\delta x_{1z}\right)^2
+ \left(\frac{\partial X_1}{\partial y_{1z}}\delta y_{1z}\right)^2 \right]^{1/2}
\tag{15}
$$
where δX_P1 is the location error of P1, the measurement center of POS 1, in the x1w-axis direction, which is determined by the location accuracy of the GPS.
Setting $M_{XP1} = \delta X_{P1}$, $M_{YP1} = \delta Y_{P1}$, $M_{H1} = \frac{\partial X_1}{\partial H_1}\delta H_1$, $M_{\psi 1} = \frac{\partial X_1}{\partial \psi_1}\delta \psi_1$, $M_{\theta 1} = \frac{\partial X_1}{\partial \theta_1}\delta \theta_1$, $M_{\gamma 1} = \frac{\partial X_1}{\partial \gamma_1}\delta \gamma_1$, $M_{f1} = \frac{\partial X_1}{\partial f_1}\delta f_1$, $M_{x1} = \frac{\partial X_1}{\partial x_1}\delta x_1$, $M_{y1} = \frac{\partial X_1}{\partial y_1}\delta y_1$, $M_{x1z} = \frac{\partial X_1}{\partial x_{1z}}\delta x_{1z}$, and $M_{y1z} = \frac{\partial X_1}{\partial y_{1z}}\delta y_{1z}$, Equation (15) can be simplified as follows:
$$
M_{X1} = \sqrt{M_{XP1}^2 + M_{H1}^2 + M_{\psi 1}^2 + M_{\theta 1}^2 + M_{\gamma 1}^2 + M_{f1}^2 + M_{x1}^2 + M_{y1}^2 + M_{x1z}^2 + M_{y1z}^2}
\tag{16}
$$
Similarly, the target location error in the y1w-axis direction of the POS 1 navigation coordinate system is MY1.
The total error of this target location method is calculated as follows:
$$
M = \sqrt{M_{X1}^2 + M_{Y1}^2}
\tag{17}
$$
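As a sketch of how Equations (14)–(17) can be evaluated, the partial derivatives in Equation (15) may be approximated by finite differences instead of being expanded analytically. The following Python sketch (all names ours) propagates each error source through Equation (14); it only illustrates the procedure and is not the program used to produce the results below.

```python
import numpy as np

def rot(axis, ang):
    """3x3 counterclockwise rotation about axis 'x', 'y', or 'z' by ang (radians)."""
    c, s = np.cos(ang), np.sin(ang)
    m = {"z": [[c, -s, 0], [s, c, 0], [0, 0, 1]],
         "x": [[1, 0, 0], [0, c, -s], [0, s, c]],
         "y": [[c, 0, s], [0, 1, 0], [-s, 0, c]]}
    return np.array(m[axis])

def x1_of(psi, theta, gamma, f, x, y, xz, yz, dh):
    """X1 from Equation (14), with dh = HA - H1 and the IMU assembly
    error angles taken as zero (Equation (12))."""
    v = rot("z", psi) @ rot("x", theta) @ rot("y", gamma) @ np.array([x - xz, y - yz, -f])
    return v[0] / v[2] * dh

def m_x1(params, deltas, dh, d_xp1, d_h1, eps=1e-7):
    """Equation (15): root sum of squares of the error contributions to X1,
    with the partial derivatives approximated numerically.
    params = (psi, theta, gamma, f, x, y, xz, yz); deltas = matching errors."""
    base = x1_of(*params, dh)
    terms = [d_xp1 ** 2, (base / dh * d_h1) ** 2]  # dX1/dH1 = -X1/dh; sign drops when squared
    for i, d in enumerate(deltas):
        p = list(params)
        p[i] += eps
        terms.append(((x1_of(*p, dh) - base) / eps * d) ** 2)
    return float(np.sqrt(sum(terms)))
```

The same procedure applied to the Y1 expression of Equation (13) gives MY1, and Equation (17) combines the two.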

4.2. Estimation of Location Error

The input parameters for calculating the target location accuracy are obtained by analyzing the error sources in Equation (15) and combining them with the actual parameters of the camera; they are as follows [21]:
  • δX_P1, δY_P1, and δH_1: the location errors of P1 (the measurement center of POS 1), caused by the errors of the position measurement system (the GPS of POS 1). δX_P1 and δY_P1 are taken to be 3 m, and δH_1 is taken to be 5 m.
  • H_A − H_1: the elevation difference between camera 1 and target A. When calculating the location error, H_A − H_1 is taken to be 2000 m, 3000 m, and 5000 m, respectively.
  • δψ_1, δθ_1, and δγ_1: the IMU drift error is a random error that cannot be corrected and introduces a large target location error; it is the main source of the attitude angle measurement errors δψ_1, δθ_1, and δγ_1. These measurement errors for the heading, pitch, and roll angles can be estimated from the specification of the IMU of the POS. Considering both the real-time measurement accuracy of the IMU and the calibration accuracy of the IMU installation, the heading angle measurement error δψ_1 is taken to be 0.08°, and the pitch and roll angle measurement errors δθ_1 and δγ_1 are both taken to be 0.04°.
  • ψ_1, θ_1, and γ_1: for camera photographic imaging, the average value of the angles between the principal ray and the three coordinate axes of the POS 1 navigation coordinate system is 45°, so the three posture angles (heading angle ψ_1, pitch angle θ_1, and roll angle γ_1) are all set to 45° when calculating the location accuracy.
  • f_1 and δf_1: the focal length of camera 1 is f_1 = 129.4 mm. The calibration error of the focal length is δf_1 = 9 µm.
  • x_1 − x_1z and y_1 − y_1z: in the image 1 coordinate system, the coordinates of image point a1 are (x_1, y_1). Based on the size of the photosensitive surface of the imaging detector, the maximum value of x_1 − x_1z is 8.3 mm, and the maximum value of y_1 − y_1z is 7 mm.
  • δx_1 and δy_1: the measurement errors of image point a1, which stem from factors such as image distortion and target identification errors. The distortion of the camera's optical lens is less than 0.4%, and after the image is processed by distortion correction software, the distortion of the final image is less than 0.1%. The main sources of image point measurement error are misidentification and image registration errors. δx_1 and δy_1 are both taken to be 26 µm (4 pixels) after comprehensive consideration.
  • δx_1z and δy_1z: the calibration errors of the principal point of the internal orientation elements, both taken to be 3 µm. An illustrative substitution of these parameters is sketched in the code below.
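Substituting the parameter values listed above into the m_x1() sketch from Section 4.1 looks as follows. This only shows the mechanics of the substitution; it is not claimed to reproduce Table 1 digit for digit, since the exact evaluation setup of the original MATLAB computation is not fully specified here.

```python
import numpy as np

deg = np.pi / 180.0
# (psi, theta, gamma, f, x, y, xz, yz): posture angles 45 deg, f = 129.4 mm,
# image point at the edge of the photosensitive surface, principal point at 0.
params = (45 * deg, 45 * deg, 45 * deg, 0.1294, 8.3e-3, 7.0e-3, 0.0, 0.0)
# Matching errors: 0.08 deg heading, 0.04 deg pitch/roll, 9 um focal length,
# 26 um image-point measurement, 3 um principal-point calibration.
deltas = (0.08 * deg, 0.04 * deg, 0.04 * deg, 9e-6, 26e-6, 26e-6, 3e-6, 3e-6)
mx1 = m_x1(params, deltas, dh=3000.0, d_xp1=3.0, d_h1=5.0)
```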
After substituting the above parameters into Equations (15)–(17), the target location accuracy was calculated using MATLAB 2020a. Table 1 summarizes the location errors.
It can be seen from Table 1 that the location errors (M_x1z, M_y1z, M_f1, M_x1, M_y1) caused by the calibration errors of the camera's internal orientation elements, the calibration error of the principal distance (focal length), and the measurement errors of the image points each account for less than 22% of the total location error and have little effect on it. At all shooting heights, the main component of the total error is M_H1, the location error caused by δH_1 (the height location error of P1, the measurement center of the POS).
When the elevation difference between camera 1 and target A is 2000 m, the pitch angle measurement error δθ_1 and the roll angle measurement error δγ_1 have little effect on the location accuracy; when the elevation difference is 5000 m, they have a much greater effect. The location errors caused by the heading angle measurement error δψ_1 are all less than 2 m, so its impact on the location accuracy is small.
According to the data in Table 1, when the elevation difference between the camera and target A is 3000 m, the accuracy of the target location method is 15.5 m.
When the elevation difference between the camera and the target is small (less than 3000 m), the main method to improve the target location accuracy is to improve the elevation accuracy of the camera itself. When the elevation difference between the camera and the target is large (greater than 5000 m), the main method to improve the target location accuracy is to improve the pitch and roll angle measurement accuracy of the POS.

5. Verification Experiment of Target Location Based on Binocular Vision

A certain aerial camera obtained a large number of ground images in a flight experiment, some of which have 20% overlapping areas. Two images with overlapping areas were selected, and the longitude and latitude of geographical target A in the overlapping area were solved using the target location method, based on the POS measurements at the moments the images were shot. The calculated longitude and latitude of target A were then compared with its true longitude and latitude to obtain the location accuracy, thereby verifying the feasibility of the target location method studied in this paper.

5.1. Coordinate Extraction of the Image Point

Ground image 1 captured by the camera during the flight experiment is shown in Figure 2. Ground image 2 captured by the camera during the flight experiment is shown in Figure 3.
The images were taken by a black-and-white camera on an aircraft flying at an altitude of about 3000 m. The detailed parameters of the camera are the same as the input parameters used for calculating the target location accuracy in Section 4.2. The target area in Figure 2 and Figure 3 is the People's Square in Shahe City, Xingtai, Hebei Province, China. The distinctive diamond-shaped building of the People's Square and the sculpture in front of it are clearly visible in the images. The left side of Figure 2 and the right side of Figure 3 have an overlapping area of about 20%.

5.1.1. Coordinate Extraction of the Image Point in Image 1

In Figure 2, at the moment of shooting, the heading, pitch, and roll angles output by the POS were (272.2932781°, −0.0020300°, 15.5612191°), and the longitude, latitude, and altitude of the measurement center P1 were (114.5147927°, 36.8630194°, 3097 m). The focal length f1 of the camera at the shooting moment was 129.4 mm.
As shown in Figure 2, geographic target A is set as the center of a square sculpture on the left side of image 1; its image point is marked as a1 in Figure 2. According to the actual geographic orientation of the target area, geographic north is toward the left of image 1; since the heading angle output by the POS at the shooting moment is 272.2932781°, the x-axis of image 1 points right and the y-axis points up.
The parameters of the image detector are shown in Table 2. According to the pixel position of point a1 in image 1, the coordinates of a1 in the image 1 coordinate system are calculated as (−7.4816 × 10⁻³ m, 0.2137 × 10⁻³ m); thus [22],
$$
x_1 = -7.4816\times10^{-3}\ \text{m}, \qquad y_1 = 0.2137\times10^{-3}\ \text{m}
\tag{18}
$$
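The conversion from a pixel position to the metric image coordinates used in Equation (18) is a scaling by the pixel size about the detector center. A minimal sketch follows; the paper does not state its pixel-indexing convention, so the top-left origin and the (N − 1)/2 centering below are our assumptions.

```python
PIXEL = 6.5e-6        # pixel pitch in meters (Table 2)
W, H = 2560, 2160     # detector format in pixels (Table 2)

def pixel_to_image(col, row):
    """Pixel position (column, row), origin at the top-left corner, to metric
    image coordinates with origin at the detector center, x right, y up."""
    x = (col - (W - 1) / 2.0) * PIXEL
    y = ((H - 1) / 2.0 - row) * PIXEL
    return x, y
```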

5.1.2. Coordinate Extraction of the Image Point in Image 2

In Figure 3, at the moment of shooting, the heading, pitch, and roll angles output by the POS were (272.2898266°, −0.0123494°, 21.8834031°), and the longitude, latitude, and altitude of the measurement center P2 were (114.5147249°, 36.8630363°, 3097 m). The focal length f2 of the camera at the shooting moment was 129.4 mm.
As shown in Figure 3, geographic target A is set as the center of the square sculpture on the right side of image 2; its image point is marked as a2 in Figure 3. According to the actual geographic orientation of the target area, geographic north is toward the left of image 2; since the heading angle output by the POS at the shooting moment is 272.2898266°, the x-axis of image 2 points right and the y-axis points up.
According to the pixel position of point a2 in image 2, the coordinates of a2 in the image 2 coordinate system are calculated as (6.9178 × 10⁻³ m, 0.4293 × 10⁻³ m); thus,
$$
x_2 = 6.9178\times10^{-3}\ \text{m}, \qquad y_2 = 0.4293\times10^{-3}\ \text{m}
\tag{19}
$$

5.2. Calculation of Target Coordinate

The calibration results of the internal orientation elements of this camera show that the coordinates of its principal point in the image coordinate system are (−2.8 × 10⁻⁵ m, 2.34 × 10⁻⁵ m). Since image 1 and image 2 were shot by the same camera, the focal length at both shooting moments was 129.4 mm, and the coordinates of the principal point of camera 1 in the image 1 coordinate system (x1z, y1z) are the same as those of camera 2 in the image 2 coordinate system (x2z, y2z). The relevant parameters are as follows:
$$
x_{1z} = x_{2z} = -2.8\times10^{-5}\ \text{m}, \qquad y_{1z} = y_{2z} = 2.34\times10^{-5}\ \text{m}, \qquad f_1 = f_2 = 0.1294\ \text{m}
\tag{20}
$$
According to the calibration results of the installation errors between the IMU and the camera, the IMU installation error angles are (−0.0494°, 0.0549°, 0.0346°); thus, the following can be obtained:
$$
\psi_{1M} = \psi_{2M} = -0.0494^\circ, \qquad \theta_{1M} = \theta_{2M} = 0.0549^\circ, \qquad \gamma_{1M} = \gamma_{2M} = 0.0346^\circ
\tag{21}
$$
Substituting the data in Equations (18), (20), and (21) into Equation (1), the following can be obtained:
$$
\begin{aligned}
X_{1ap} &= (-7.4816\times10^{-3} + 2.80\times10^{-5}) + 0.0009\,(0.2137\times10^{-3} - 2.34\times10^{-5}) - 0.0776\times10^{-3} \\
Y_{1ap} &= -0.0009\,(-7.4816\times10^{-3} + 2.80\times10^{-5}) + (0.2137\times10^{-3} - 2.34\times10^{-5}) + 0.1165\times10^{-3} \\
Z_{1ap} &= -0.0006\,(-7.4816\times10^{-3} + 2.80\times10^{-5}) + 0.0009\,(0.2137\times10^{-3} - 2.34\times10^{-5}) - 0.1294
\end{aligned}
\tag{22}
$$
The solution of Equation (22) can be obtained as follows:
$$
X_{1ap} = -0.007531\ \text{m}, \qquad Y_{1ap} = 0.000321\ \text{m}, \qquad Z_{1ap} = -0.129395\ \text{m}
\tag{23}
$$
Substituting the data in Equations (19)–(21) into Equation (2), the following can be obtained:
$$
\begin{aligned}
X_{2ap} &= (6.9178\times10^{-3} + 2.80\times10^{-5}) + 0.0009\,(0.4293\times10^{-3} - 2.34\times10^{-5}) - 0.0776\times10^{-3} \\
Y_{2ap} &= -0.0009\,(6.9178\times10^{-3} + 2.80\times10^{-5}) + (0.4293\times10^{-3} - 2.34\times10^{-5}) + 0.1165\times10^{-3} \\
Z_{2ap} &= -0.0006\,(6.9178\times10^{-3} + 2.80\times10^{-5}) + 0.0009\,(0.4293\times10^{-3} - 2.34\times10^{-5}) - 0.1294
\end{aligned}
\tag{24}
$$
The solution of Equation (24) can be obtained as follows:
$$
X_{2ap} = 0.006868\ \text{m}, \qquad Y_{2ap} = 0.000524\ \text{m}, \qquad Z_{2ap} = -0.129404\ \text{m}
\tag{25}
$$
The posture angles of image 1 can be obtained from the information output by the POS at the shooting moment of image 1 as follows:
$$
\psi_1 = 272.2932781^\circ, \qquad \theta_1 = -0.0020300^\circ, \qquad \gamma_1 = 15.5612191^\circ
\tag{26}
$$
The posture angles of image 2 can be obtained from the information output by the POS at the shooting moment of image 2 as follows:
$$
\psi_2 = 272.2898266^\circ, \qquad \theta_2 = -0.0123494^\circ, \qquad \gamma_2 = 21.8834031^\circ
\tag{27}
$$
Substituting the values of the parameters in Equation (26) into Equation (4), the following can be obtained:
$$
\begin{aligned}
a_{11}^1 &= 0.0362, & a_{12}^1 &= 0.9993, & a_{13}^1 &= 0.0101 \\
a_{21}^1 &= -0.9627, & a_{22}^1 &= 0.0376, & a_{23}^1 &= -0.2679 \\
a_{31}^1 &= -0.2681, & a_{32}^1 &= -3.5412\times10^{-5}, & a_{33}^1 &= 0.9634
\end{aligned}
\tag{28}
$$
Substituting the values of the parameters in Equation (27) into Equation (6), the following can be obtained:
$$
\begin{aligned}
a_{11}^2 &= 0.0348, & a_{12}^2 &= 0.9993, & a_{13}^2 &= 0.0142 \\
a_{21}^2 &= -0.9274, & a_{22}^2 &= 0.0375, & a_{23}^2 &= -0.3723 \\
a_{31}^2 &= -0.3725, & a_{32}^2 &= -2.1543\times10^{-4}, & a_{33}^2 &= 0.9280
\end{aligned}
\tag{29}
$$
Substituting the data in Equations (23), (25), (28), and (29) into Equation (11), the following can be obtained:
$$
\frac{X_1}{Z_1} = 0.0103, \qquad \frac{Y_1}{Z_1} = -0.3419, \qquad \frac{X_1 - X_{P12}}{Z_1 - Z_{P12}} = 0.0088, \qquad \frac{Y_1 - Y_{P12}}{Z_1 - Z_{P12}} = -0.3410
\tag{30}
$$
Since the longitude J1 (positive in the eastern hemisphere), latitude W1 (positive in the northern hemisphere), and elevation H1 of point P1 are (114.5147927°, 36.8630194°, 3097 m), and the longitude J2, latitude W2, and elevation H2 of point P2 are (114.5147249°, 36.8630363°, 3097 m), the increments in longitude and latitude (ΔJP12, ΔWP12) from point P1 to point P2 are (−0.0000678°, 0.0000169°), and the increment in elevation is 0 m.
The Earth is approximately a rotational ellipsoid with small flattening; the maximum height difference between the geoid and the surface of the Earth ellipsoid is about 110 m, so the ellipsoid surface can be used in place of the geoid. The rotational ellipsoid is formed by rotating the meridian ellipse around its minor axis. The length of the semi-major axis is a = 6,378,137 m, and the length of the semi-minor axis is b = 6,356,752 m [23]. Points on the Earth's surface approximately satisfy the ellipsoid surface equation shown in Equation (31), where (X_E, Y_E, Z_E) are the coordinates of a surface point in the Earth-Centered Earth-Fixed (ECEF) coordinate system [24].
$$
\frac{X_E^2 + Y_E^2}{a^2} + \frac{Z_E^2}{b^2} = 1
\tag{31}
$$
Since the x 1 w , y 1 w , and z 1 w axes of the POS 1 navigation coordinate system are defined as the east, north, and sky directions of point P 1 , based on the longitude and latitude of the origin P 1 of the POS 1 navigation coordinate system and the conversion relationship between the ECEF coordinate system and the POS 1 navigation coordinate system, the position difference on the surface of the Earth (in the POS 1 navigation coordinate system) corresponding to the longitude and latitude difference from point P 1 to point P 2 can be obtained as follows [25]:
$$
\begin{aligned}
X_{P12} &= \frac{\pi a}{180^\circ}\sqrt{\frac{b^2}{a^2\tan^2 W_1 + b^2}}\;\Delta J_{P12} \\
Y_{P12} &= \frac{\pi}{180^\circ}\sqrt{b^2 + \frac{(a^2 - b^2)\,b^2}{a^2\tan^2 W_1 + b^2}}\;\Delta W_{P12} \\
Z_{P12} &= H_2 - H_1
\end{aligned}
\tag{32}
$$
After substituting the values of ΔJP12, ΔWP12, W1, a, b, H1, and H2 into Equation (32), the following is obtained:
$$
X_{P12} = -6.03\ \text{m}, \qquad Y_{P12} = 1.88\ \text{m}, \qquad Z_{P12} = 0\ \text{m}
\tag{33}
$$
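Equation (32) is straightforward to evaluate directly. The sketch below (names ours) converts small longitude and latitude increments at latitude W1 into east and north meters; because the printed intermediate values are rounded, its output agrees with Equation (33) only to within a few percent.

```python
import numpy as np

A, B = 6378137.0, 6356752.0  # semi-major and semi-minor axes of the Earth ellipsoid (m)

def deg_to_metres(d_lon_deg, d_lat_deg, lat_deg):
    """Equation (32): longitude/latitude increments (degrees) at latitude
    lat_deg converted to east/north displacements in meters."""
    t2 = np.tan(np.radians(lat_deg)) ** 2
    r_east = A * np.sqrt(B ** 2 / (A ** 2 * t2 + B ** 2))  # parallel-circle radius
    r_north = np.sqrt(B ** 2 + (A ** 2 - B ** 2) * B ** 2 / (A ** 2 * t2 + B ** 2))
    k = np.pi / 180.0
    return k * r_east * d_lon_deg, k * r_north * d_lat_deg

# P1 -> P2 increments from the flight data:
print(deg_to_metres(-0.0000678, 0.0000169, 36.8630194))
# ≈ (-6.09, 1.88) m; Equation (33) reports (-6.03, 1.88) m
```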
The following equation can be calculated by combining Equations (30) and (33):
$$
\frac{X_1}{Z_1} = 0.0103, \qquad \frac{Y_1}{Z_1} = -0.3419, \qquad \frac{X_1 + 6.03}{Z_1} = 0.0088, \qquad \frac{Y_1 - 1.88}{Z_1} = -0.3410
\tag{34}
$$
Equation (34) is an overdetermined system of four equations in the three unknowns X1, Y1, and Z1. First, the first and third equations are combined to obtain the first solution: X11 = −41.41 m and Z11 = −4020 m. Then, the second and fourth equations are combined to obtain the second solution: Y12 = 714.2 m and Z12 = −2089 m. The average of the two solutions of Z1 is used as the final solution of Z1, the geographical location coordinate of target point A, so Z1 = −3055 m.
The final solution of Z 1 is substituted into the first and second equations of Equation (34) to obtain the final solutions of X 1 and Y 1 .
$$
X_1 = -31.5\ \text{m}, \qquad Y_1 = 1044.5\ \text{m}, \qquad Z_1 = -3055\ \text{m}
\tag{35}
$$

5.3. Calculation of the Latitude and Longitude of Target

Similar to Equation (32), the longitude, latitude, and altitude of geographic target A can be calculated as follows [26]:
$$
\begin{aligned}
J_A &= J_1 + \frac{180^\circ\,X_1}{\pi a\sqrt{\dfrac{b^2}{a^2\tan^2 W_1 + b^2}}} \\
W_A &= W_1 + \frac{180^\circ\,Y_1}{\pi\sqrt{b^2 + \dfrac{(a^2 - b^2)\,b^2}{a^2\tan^2 W_1 + b^2}}} \\
H_A &= Z_1 + H_1
\end{aligned}
\tag{36}
$$
After substituting the values of X1, Y1, Z1, J1, W1, a, b, and H1 into Equation (36), the longitude, latitude, and altitude of geographical target A are obtained as follows:
$$
J_A = 114.5143698^\circ, \qquad W_A = 36.8724197^\circ, \qquad H_A = 42\ \text{m}
\tag{37}
$$
Thus, the geographic location of target A is obtained by the target location method of this paper: the longitude, latitude, and elevation of A, the center of the square sculpture in the image, are (114.5143698°, 36.8724197°, 42 m).
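The inverse conversion of Equation (36) can reuse the same meters-per-degree factors. A short sketch, assuming deg_to_metres() from the listing in Section 5.2 is in scope:

```python
def target_lon_lat(j1, w1, h1, x1, y1, z1):
    """Equation (36): offsets (X1, Y1, Z1) in the POS 1 navigation frame to
    the longitude, latitude, and altitude of the target."""
    ex, ny = deg_to_metres(1.0, 1.0, w1)  # meters per degree of longitude / latitude at W1
    return j1 + x1 / ex, w1 + y1 / ny, z1 + h1

print(target_lon_lat(114.5147927, 36.8630194, 3097.0, -31.5, 1044.5, -3055.0))
```

Called with the experimental values, this reproduces the altitude and latitude of Equation (37) closely; the longitude differs by roughly 7 × 10⁻⁵ degrees, presumably because the printed intermediate ratios are rounded.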

5.4. Calculation of the Target Location Error in Experiment

The errors of the target coordinates obtained from the GPS map are less than 3 m, far less than the target location error obtained in this study, so the coordinates from the GPS map can be considered approximately the true coordinates of the target. The GPS map is available at https://www.whatsmygps.com (accessed on 8 July 2025). As shown in Figure 4, the true longitude and latitude (J_Ar, W_Ar) of ground target A, obtained by searching the GPS map, are (114°30′51.78″, 36°52′20.18″), i.e., (114.5143843°, 36.8722732°). Compared with the true longitude and latitude of target A, the longitude and latitude obtained by the location method based on binocular vision intersection have the following errors:
$$
\delta J_A = -0.0000145^\circ, \qquad \delta W_A = 0.0001465^\circ
\tag{38}
$$
Similar to Equation (32), the horizontal location errors on the ground surface (in the POS 1 navigation coordinate system) corresponding to the longitude error and latitude error of target A are calculated as follows:
$$
\begin{aligned}
\delta X_A &= \frac{\pi a}{180^\circ}\sqrt{\frac{b^2}{a^2\tan^2 W_{Ar} + b^2}}\;\delta J_A \\
\delta Y_A &= \frac{\pi}{180^\circ}\sqrt{b^2 + \frac{(a^2 - b^2)\,b^2}{a^2\tan^2 W_{Ar} + b^2}}\;\delta W_A
\end{aligned}
\tag{39}
$$
The following result can be calculated by combining Equations (38) and (39):
$$
\delta X_A = -1.1\ \text{m}, \qquad \delta Y_A = 16.3\ \text{m}
\tag{40}
$$
The location error of geographic target A by using the location method based on binocular vision intersection is as follows:
$$
\delta L_A = \sqrt{\delta X_A^2 + \delta Y_A^2} \approx 16.3\ \text{m}
\tag{41}
$$

5.5. Analysis of the Target Location Experiment Result

In this target location verification experiment, the location error is 16.3 m. This high target location accuracy verifies the feasibility and correctness of the target location method studied in this paper and can meet actual application requirements.
However, in this verification experiment, the two positions of the camera at the two shooting moments are too close, so the flight baseline of the forward intersection imaging is too short, which leads to a large error in Z1, the height of the target location. This overly short baseline is why the location error of the verification experiment is larger than predicted, and it is the main deficiency of this verification experiment.

6. Conclusions

Based on binocular vision forward intersection imaging of non-cooperative targets, this paper established a photographic imaging model and a target location model. The collinear equations representing the spatial position relationship between the target and its two image points were obtained through coordinate system transformation, and the system of equations for calculating the geographical coordinates of the target was derived, realizing the geo-location of unknown non-cooperative targets with no control points and no active source.
The error composition and sources of the target location method were analyzed, and the equation for the total error of target location was obtained based on error synthesis theory. The target location accuracy under three elevation-difference conditions between the camera and the target was predicted by substituting specific parameters, such as the errors of the internal and external orientation elements and the calibration errors of the camera, into the total error equation. When the elevation difference between the camera and the target is 3 km, the location accuracy is 15.5 m. Through the analysis of the composition, sources, and weights of the location error, specific measures to improve the target location accuracy were proposed.
Based on the images acquired by a certain aerial camera and the POS measurement information at the shooting moments during a flight experiment, a location verification experiment was completed, and the feasibility and correctness of the location method studied in this paper were verified. In the flight experiment, the aerial camera was 3097 m above ground, the longitude and latitude of geographical target A located in the overlapping area of the two images were calculated, and the location error of the verification experiment was obtained as 16.3 m.
In the verification experiment, due to the overly short baseline between the two positions of the camera, a large elevation error was produced in the target location experiment. The target location accuracy will be significantly different under the conditions of different baseline lengths and different camera tilt angles. In the following research work, the flight routes and camera tilt angles will be further planned, and the target location accuracy under different baseline lengths and camera tilt angles will be verified. Finally, the baseline length and camera tilt angle corresponding to the highest target location accuracy will be found.
The target location method studied in this paper has high location accuracy and can meet the actual demand for locating unknown targets in emergency rescue. This paper gives a method for the accurate location of unknown non-cooperative targets without control points, covering the establishment of the calculation model, the calculation results, the error analysis and error rules, the verification experiment, and the conclusions. Through the error analysis of the target location method, the composition of the error was determined, the weight of each error source was obtained, and methods to improve the target location accuracy were given. In the verification experiment, the actual method and process of calculating the location accuracy were presented. This work has reference significance for many practical applications related to target location.

Author Contributions

Investigation, resources, conceptualization, and writing—original draft preparation, K.S.; methodology and investigation, H.Y.; supervision, J.F.; project administration, G.L.; methodology, W.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (grant number: 52206113).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Some data, models, and codes that support the findings of this study are available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Abbreviations

The following abbreviations are used in this manuscript:
POS  Position and Orientation System
GPS  Global Positioning System
IMU  Inertial Measurement Unit
ECEF Earth-Centered Earth-Fixed

References

  1. Santana, B.; Cherif, E.; Bernardino, A.; Ribeiro, R. Real-Time Georeferencing of Fire Front Aerial Images Using Iterative Ray-Tracing and the Bearings-Range Extended Kalman Filter. Sensors 2022, 22, 1150.
  2. Li, S.; Yoon, H. Vehicle Localization in 3D World Coordinates Using Single Camera at Traffic Intersection. Sensors 2023, 23, 3661.
  3. Liu, C.; Cui, X.; Guo, L.; Wu, L.; Tang, X.; Liu, S.; Yuan, D.; Wang, X. Satellite Laser Altimetry Data-Supported High-Accuracy Mapping of GF-7 Stereo Images. Remote Sens. 2022, 14, 5868.
  4. Yang, B.; Ali, F.; Zhou, B.; Li, S.; Yu, Y.; Yang, T.; Liu, X.; Liang, Z.; Zhang, K. A Novel Approach of Efficient 3D Reconstruction for Real Scene Using Unmanned Aerial Vehicle Oblique Photogrammetry with Five Cameras. Comput. Electr. Eng. 2022, 99, 107804.
  5. Wang, J.; Choi, W.; Diaz, J.; Trott, C. The 3D Position Estimation and Tracking of a Surface Vehicle Using a Mono-Camera and Machine Learning. Electronics 2022, 11, 2141.
  6. Shao, X.; Tao, J. Location Method of Static Object based on Monocular Vision. Acta Photonica Sin. 2016, 45, 1012003.
  7. Wang, T.; Zhang, Y.; Zhang, Y.; Yu, Y.; Li, L.; Liu, S.; Zhao, X.; Zhang, Z.; Wang, L. A Quadrifocal Tensor SFM Photogrammetry Positioning and Calibration Technique for HOFS Aerial Sensors. Remote Sens. 2022, 14, 3521.
  8. Liu, Z.; Zhang, X.; Kuang, H.; Li, Q.; Qiao, C. Target Location Based on Stereo Imaging of Airborne Electro-Optical Camera. Acta Opt. Sin. 2019, 39, 1112003.
  9. Sun, H.; Jia, H.; Wang, L.; Xu, F.; Liu, J. Systematic Error Correction for Geo-Location of Airborne Optoelectronic Platforms. Appl. Sci. 2021, 11, 11067.
  10. Taghavi, E.; Song, D.; Tharmarasa, R.; Kirubarajan, T. Geo-registration and Geo-location Using Two Airborne Video Sensors. IEEE Trans. Aerosp. Electron. Syst. 2020, 56, 2910–2921.
  11. Liu, C.; Ding, Y.; Zhang, H.; Xiu, J.; Kuang, H. Improving Target Geolocation Accuracy with Multi-View Aerial Images in Long-Range Oblique Photography. Drones 2024, 8, 177.
  12. Zhang, X.; Yuan, G.; Zhang, H.; Qiao, C.; Liu, Z.; Ding, Y.; Liu, C. Precise Target Geo-Location of Long-Range Oblique Reconnaissance System for UAVs. Sensors 2022, 22, 1903.
  13. Yang, B.; Ali, F.; Yin, P.; Yang, T.; Yu, Y.; Li, S.; Liu, X. Approaches for Exploration of Improving Multi-slice Mapping via Forwarding Intersection Based on Images of UAV Oblique Photogrammetry. Comput. Electr. Eng. 2021, 92, 107135.
  14. Wang, X.; Liu, J.; Zhou, Q. Real-Time Multi-Target Localization from Unmanned Aerial Vehicles. Sensors 2017, 17, 33.
  15. Yuan, D.; Ding, Y.; Yuan, G.; Li, F.; Zhang, J.; Wang, Y.; Zhang, L. Two-step Calibration Method for Extrinsic Parameters of an Airborne Camera. Appl. Opt. 2021, 60, 1387–1398.
  16. Cai, Y.; Zhou, Y.; Zhang, H.; Xia, Y.; Qiao, P.; Zhao, J. Review of Target Geo-Location Algorithms for Aerial Remote Sensing Cameras without Control Points. Appl. Sci. 2022, 12, 12689.
  17. Xu, C.; Huang, D.; Liu, J. Target location of unmanned aerial vehicles based on the electro-optical stabilization and tracking platform. Measurement 2019, 147, 106848.
  18. Mu, S.; Qiao, C. Ground-Target Geo-Location Method Based on Extended Kalman Filtering for Small-Scale Airborne Electro-Optical Platform. Acta Opt. Sin. 2019, 39, 0528001.
  19. Zhang, G.; Xu, K.; Jia, P.; Hao, X.; Li, D. Integrating Stereo Images and Laser Altimeter Data of the ZY3-02 Satellite for Improved Earth Topographic Modeling. Remote Sens. 2019, 11, 2453.
  20. Li, B.; Ding, Y.; Xiu, J.; Li, J.; Qiao, C. System error corrected ground target geo-location method for long-distance aviation imaging with large inclination angle. Opt. Precis. Eng. 2020, 28, 1265–1274.
  21. Skibicki, J.; Jedrzejczyk, A.; Dzwonkowski, A. The Influence of Camera and Optical System Parameters on the Uncertainty of Object Location Measurement in Vision Systems. Sensors 2020, 20, 5433.
  22. Chen, Y.; Luo, L.; Jin, W.; Guo, H.; Zhao, S.; Yang, J. Target positioning method for Tian-shaped four-aperture infrared biomimetic compound eyes. Opt. Precis. Eng. 2024, 32, 1836–1848.
  23. Qiao, C.; Ding, Y.; Xu, Y.; Xiu, J.; Du, Y. Ground target geo-location using imaging aerial camera with large inclined angles. Opt. Precis. Eng. 2017, 25, 1714–1726.
  24. Li, Z.; Kuang, H.; Zhang, H.; Zhuang, C. A target location method for aerial images through fast iteration of elevation based on DEM. Chin. Opt. 2023, 16, 777–787.
  25. Bai, G.; Liu, J.; Song, Y.; Zuo, Y. Two-UAV Intersection Localization System Based on the Airborne Optoelectronic Platform. Sensors 2017, 17, 98.
  26. Yao, Y.; Song, C.; Shao, J. Real-time Detection and Localization Algorithm for Military Vehicles in Drone Aerial Photography. Acta Armamentarii 2024, 45, 354–360.
Figure 1. Principle of forward intersection of binocular vision.
Figure 2. Image 1 and image point a1 of geographical target A.
Figure 3. Image 2 and image point a2 of geographical target A.
Figure 4. The true longitude and latitude of geographic target A.
Table 1. Summary of location errors (unit: m).

| Error term    | HA − H1 = 2000, x1w | 2000, y1w | 3000, x1w | 3000, y1w | 5000, x1w | 5000, y1w |
|---------------|------|-------|------|-------|-------|-------|
| M_XP1         | 3    | 0     | 3    | 0     | 3     | 0     |
| M_YP1         | 0    | 3     | 0    | 3     | 0     | 3     |
| M_H1          | 6.13 | 10.12 | 6.13 | 10.12 | 6.13  | 10.12 |
| M_ψ1          | 0.73 | 0.54  | 1.10 | 0.81  | 1.83  | 1.36  |
| M_θ1          | 2.85 | 2.41  | 4.28 | 3.61  | 7.13  | 6.01  |
| M_γ1          | 0.04 | 4.17  | 0.06 | 6.25  | 0.09  | 10.42 |
| M_f1          | 0    | 0.04  | 0.01 | 0.06  | 0.01  | 0.10  |
| M_x1          | 0.90 | 1.71  | 1.35 | 2.57  | 2.25  | 4.28  |
| M_y1          | 0.88 | 0.22  | 1.33 | 0.34  | 2.21  | 0.56  |
| M_x1z         | 0.10 | 0.20  | 0.16 | 0.30  | 0.26  | 0.49  |
| M_y1z         | 0.10 | 0.03  | 0.15 | 0.04  | 0.25  | 0.06  |
| M_X1 or M_Y1  | 7.54 | 11.75 | 8.35 | 13.08 | 10.52 | 16.64 |
| M             | 14.0 |       | 15.5 |       | 19.7  |       |
Table 2. Parameters of the image detector.

| Parameters                         | Values                  |
|------------------------------------|-------------------------|
| Detector pixels                    | 2560 (H ¹) × 2160 (V ²) |
| Pixel size                         | 6.5 μm × 6.5 μm         |
| Size of the photosensitive surface | 16.64 mm × 14.04 mm     |

¹ H represents the horizontal. ² V represents the vertical.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
