An Adaptive Radon-Transform-Based Marker Detection and Localization Method for Displacement Measurements Using Unmanned Aerial Vehicles

UAVs have been widely used in deformation monitoring because of their high availability and flexibility. However, the quality of UAV images is affected by changing attitude and surveying environments, resulting in a low monitoring accuracy. Cross-shaped markers are used to improve the accuracy of UAV monitoring due to their distinct center contrast and absence of eccentricity. However, existing methods cannot rapidly and precisely detect these markers in UAV images. To address these problems, this paper proposes an adaptive Radon-transform-based marker detection and localization method for UAV displacement measurements, focusing on two critical detection parameters, namely, the radius of marker information acquisition and the edge width of the cross-shaped scoring template. The experimental results show that the marker detection rate is 97.2% under different combinations of flight altitudes, radius ratios of marker information acquisition, and marker sizes. Furthermore, the root mean square error of detection and localization is 0.57 pixels, significantly surpassing the performance and accuracy of other methods. We also derive the critical detection radius and appropriate parameter combinations for different heights to further improve the practicality of the method.


Introduction
UAVs (unmanned aerial vehicles) are aircraft that can operate without a human pilot. They can carry out tasks through pre-programmed routes, remote control, or autonomous navigation, reducing the need for direct human intervention. UAVs are now beginning to be used to monitor and analyze deformations in civil engineering projects and infrastructure [1][2][3][4]. Their high maneuverability, easy availability and unique aerial perspective make them highly valuable. However, current UAV displacement monitoring methods have a low accuracy that is insufficient for millimeter-scale deformation monitoring. Some experiments showed that using measurement markers can significantly enhance image matching and camera attitude estimation accuracy [5,6]. However, a constantly changing imaging attitude and complex imaging environments (light, wind speed, etc.) result in blurred and noisy UAV images, which reduce the accuracy of measurement results [7]. In addition, the camera sensor's resolution and the shadows cast by tall buildings or tree cover can also affect image interpretation and analysis. In image processing, the target coordinates cannot be accurately recognized in the image, reducing the monitoring accuracy and limiting the application of UAVs in high-precision surveys. To extend the application of UAV monitoring, it is crucial to realize high-precision and high-accuracy marker detection and localization.
Sensors 2024, 24, 1930

UAV displacement measurement markers can be classified into three categories, inflection-point-type ("L" shaped), intersection-point-type (cross-shaped) and circular markers, which are placed within the survey area through spray painting or marker board fixing. These markers have an inflection point, intersection point and center point as the measurement points, respectively [8]. These measurement points facilitate manual point selection in the image. Circular markers are easy to locate due to their simple graphics, but they can be affected by lighting changes and may not be accurately positioned in environments with uneven or strongly changing lighting [9]. Compared with L-shaped markers, cross-shaped markers provide rich texture information and have central symmetry, making them ideal for automatic point selection and thus improving detection efficiency. In addition, cross-shaped markers have a clear center contrast and no eccentricity. Therefore, we chose cross-shaped markers as the measuring markers. Researchers have also designed special markers for specific applications [10,11]. However, these markers cannot be widely applied due to their complicated usage conditions.
Traditionally, the coordinates of a cross-shaped marker's measuring point are extracted manually, which is labor-intensive and inefficient. Furthermore, the accuracy of the extracted coordinates depends on the viewing conditions and the operator's experience. Currently, various target detection methods are used successfully. Commonly used cross-shaped marker detection methods include the Harris algorithm [12], the template matching algorithm [13,14], and deep learning methods [15][16][17]. Cheng et al. [18] used the Harris algorithm for UAV image corner detection, enhancing the feature point extraction accuracy with the speeded-up robust features (SURF) algorithm. This improved the quality and efficiency of UAV image matching. Azimbeik et al. [19] designed virtual markers with special shapes and used template matching and camera calibration methods to improve image-based full-field measurements. Their method has been successfully employed to measure the displacements of a railroad bridge. With the development of deep learning algorithms in recent years, Girshick et al. [20] proposed the R-CNN algorithm, which performs a similarity analysis on the whole detected region based on the known features of the target. This allows a region with a high similarity to the sample to be selected, input into the convolutional neural network, and the target to be detected. However, deep learning methods require many labeled samples for training, which is labor-intensive and leads to a low efficiency. For non-deep-learning methods, the efficiency and accuracy are affected by factors such as obscured marker imaging, overexposure, sensor displacement, and noise. Xing et al. [21] introduced a method based on the Radon transform principle to accurately detect and locate cross-shaped markers using saliency maps. This approach improved the robustness and accuracy of marker detection. However, when this method is used for UAV marker detection, its detection parameters are affected by the UAV flight altitude and the marker parameters, as well as the focal length and pixel size of the vision sensor. This dependency prevents automated detection and may affect the detection accuracy in complex UAV route planning.
In this study, we improve the cross-shaped marker detection method based on the Radon transform by proposing an adaptive method for selecting detection parameters that considers factors such as the UAV vision sensor parameters, flight altitude and marker parameters during data processing. The marker detection and localization method based on the Radon transform (referred to as the original Radon transform method) is introduced in Section 2.1. Section 2.2 presents the proposed adaptive parameter-selecting Radon transform marker detection and localization method (referred to as the adaptive Radon transform method). In Section 3, the proposed adaptive Radon transform method is compared with the traditional Harris algorithm, the template matching method and the original Radon transform method, and the appropriate detection parameters for different flying heights are investigated. Conclusions are drawn in Section 4.

Cross-Shaped Marker Detection and Localization Method Based on the Radon Transform
Cross-shaped survey markers are characterized by central symmetry. Based on this property, we define a centerline (width: 3 pixels [21]) passing through the marker center and rotate it around the center point. As Figure 1 shows, when the centerline reaches the white sector, the sum of the grayscale values inside its area is large, but when it reaches the black sector, the sum of the grayscale values inside it is small. By calculating the difference between the maximum and minimum values, we can derive the significance level of the location being the marker center. After obtaining the saliency map of the whole marker image, the sub-pixel coordinate positioning method is used to obtain the peak point of the saliency map to precisely locate the marker center.

The Radon transform f_R(x, y, θ) is expressed as

f_R(x, y, θ) = Σ_{r=−R}^{R} I(x + r·cosθ, y + r·sinθ), (1)

where (x, y) denotes the target image coordinates, θ denotes the centerline rotation angle, I denotes the image gray value, and R is the centerline radius. From Equation (1), we can calculate the sum of gray values in the centerline for any rotation angle θ.
Rotate any centerline through one full rotation and calculate the square of the difference between the maximum and minimum gray-value sums, i.e., the significance level m_R(x, y) at point (x, y), by Equation (2):

m_R(x, y) = [max_θ f_R(x, y, θ) − min_θ f_R(x, y, θ)]². (2)
Considering both the calculation accuracy and efficiency, we here divide the centerline rotation angle into 18 groups with equal intervals, i.e., θ_i = iπ/18, i = 1, 2, ..., 18. After obtaining the saliency map, the surface fitting method is used to precisely locate the peak point to obtain the precise coordinates of the marker center.
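The procedure above can be sketched as follows. This is a minimal illustration, assuming a one-pixel-wide centerline rather than the 3-pixel centerline of [21], and restricting centers so the centerline stays inside the image:

```python
import numpy as np

def centerline_sum(img, cx, cy, theta, radius):
    """Sum of gray values along a centerline of the given radius and
    rotation angle theta, centred at (cx, cy) (cf. Equation (1))."""
    total = 0.0
    for r in range(-radius, radius + 1):
        x = int(round(cx + r * np.cos(theta)))
        y = int(round(cy + r * np.sin(theta)))
        total += img[y, x]
    return total

def saliency(img, radius):
    """Significance level m_R(x, y): squared difference between the maximum
    and minimum centerline sums over the 18 angles theta_i = i*pi/18
    (cf. Equation (2))."""
    angles = [i * np.pi / 18 for i in range(1, 19)]
    out = np.zeros(img.shape, dtype=float)
    for cy in range(radius, img.shape[0] - radius):
        for cx in range(radius, img.shape[1] - radius):
            sums = [centerline_sum(img, cx, cy, t, angles_radius := radius) for t in angles]
            out[cy, cx] = (max(sums) - min(sums)) ** 2
    return out
```

On a synthetic quadrant-pattern marker, the saliency of the true center far exceeds that of points inside a uniform sector, which is what allows the subsequent peak extraction.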

Adaptive Cross-Shaped Marker Detection and Localization Method Based on the Radon Transform
The accuracy of the original Radon transform method depends on two parameters: the marker information acquisition radius R and the edge width L of the cross-shaped scoring template. The acquisition radius determines the richness of the marker image information acquired during detection. An overly large radius R may include unwanted non-marker information and lead to a heavy computational load. An overly small radius R may provide insufficient marker information, affecting the detection accuracy. The edge width of the scoring template determines its graphic style, and using an unreasonable edge width can result in a low score for the true center point, causing its elimination.

At present, the original Radon transform method selects parameters using visual interpretation, which involves enlarging markers for human observation. This approach has a low efficiency and is not adaptable to changing measurement conditions, leading to false detections, a low detection accuracy and undetectable markers. To solve these problems, we optimized the original Radon transform method and developed an adaptive method for selecting detection parameters (Figure 2).


Adaptive Method for Determining the Marker Information Acquisition Radius
During flight, the focal length f and the pixel size u of a UAV's vision sensor remain the same. In this case, the marker information acquisition radius is related to the flight altitude H, the marker size W * W, and the marker information acquisition radius ratio s (Figure 3). s denotes the ratio of the marker centerline length to the marker side length (W), and its value range is 0 < s ≤ 1.

The ground sampling distance, GSD, is calculated by Equation (3):

GSD = uH/f, (3)

where u is the pixel size, f is the focal length, and H is the flight altitude.
After obtaining the ground sampling distance, the marker information acquisition radius R can be calculated by Equation (4).
One value of R may correspond to different combinations of the {W, H, s} parameters. This paper studies the appropriate parameter combinations for different measurement conditions.
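These relations can be sketched numerically. The GSD expression uH/f is the standard photogrammetric relation; the exact form of Equation (4) is our assumed reading (R spans half the marker centerline length sW), so treat the second function as an illustration rather than the paper's printed formula:

```python
def ground_sampling_distance(u_mm, f_mm, H_m):
    """Ground sampling distance in metres per pixel (cf. Equation (3)):
    GSD = u * H / f, with pixel size u and focal length f in mm and
    flight altitude H in m (the mm units cancel)."""
    return u_mm * H_m / f_mm

def acquisition_radius(W_m, s, gsd):
    """Marker information acquisition radius R in pixels (our reading of
    Equation (4)): half the marker centerline length s*W, in metres,
    divided by the GSD in metres per pixel."""
    return s * W_m / (2.0 * gsd)
```

For example, with an assumed 2.4 µm pixel size, an 8.8 mm focal length and a 30 m altitude, the GSD is about 8.2 mm/pixel, and a 30 cm marker at s = 1 then yields R of roughly 18 pixels.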

Adaptive Method for Determining the Edge Width of a Cross-Shaped Scoring Template
The original Radon transform method analyzes the conformity of each target point as a candidate center using a scoring system. A higher score indicates a higher conformity. The width of the cross-shaped pattern in the scoring template, determined by the edge width L, plays a crucial role in the scoring calculation, as depicted in Figure 4. An overly large or small width of the cross-shaped pattern leads to a reduced accuracy in marker detection and localization. Therefore, using a well-designed scoring template can improve the accuracy of marker detection.

When normalizing the gradient intensity map, we define pixel values greater than 0 as bright pixels, while the rest are dark pixels. In a gradient intensity map, if bright pixels are predominantly found at the junction between the black and white regions, the target point has a high possibility of being the marker center. The number of bright pixels k is counted and is taken as the theoretical sum of bright pixels in the template map. If the bright pixels are evenly distributed at the junction of the black and white areas, we can calculate the edge width L of the template bright pixels as in Figure 5.
Firstly, the diagonal length D of the template is calculated via Equation (5).
According to the principle that the number of theoretical bright pixels is approximately equal to the number of statistical bright pixels (k), Equation (6) is obtained.
Then, the edge width of the template is obtained using Equation (7).
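Under the assumption that the scoring template is a square of side 2R + 1 (an assumption on our part, since the printed Equations (5)-(7) are not reproduced above), the edge-width calculation can be sketched as:

```python
import math

def template_edge_width(k, R):
    """Edge width L of the cross-shaped scoring template (our reading of
    Equations (5)-(7)). k is the counted number of bright pixels in the
    normalized gradient intensity map; R is the acquisition radius, so the
    assumed square template has side 2R + 1."""
    D = math.sqrt(2.0) * (2 * R + 1)  # diagonal of the assumed square template
    # The theoretical bright pixels spread evenly along the two diagonals,
    # 2 * D * L, are equated with the counted bright pixels k, giving:
    return k / (2.0 * D)
```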

Experiments and Analysis
This section verifies the adaptive methods for parameter selection introduced in Sections 2.2.1 and 2.2.2. It also investigates the appropriate parameter combinations for different measurement conditions. The adaptive method is compared with other marker detection methods. The flowchart of the proposed adaptive method is shown in Figure 6.



Critical Detection Radius
The critical detection radius R_t refers to the minimum radius at which the marker center point can be detected. Only when R_t < R < R_m is satisfied (R_m being the pixel length of the marker radius) can optimal marker detection be achieved. Obtaining the value of R_t excludes a large number of {W, H, s} parameter combinations, reducing the workload involved. R_t is calculated as follows: two parameters in the {W, H, s} combination are fixed, and the remaining parameter is adjusted downwards from its optimal value. If the marker cannot be detected, the R value of the previous detected set of markers is taken as a candidate for R_t. After all the data have been processed, the largest R_t candidate is the final R_t.
The UAV utilized in the experiment was the DJI Phantom 4 RTK. The weather during the experiment was sunny (23 °C) with a breeze. The UAV camera parameters are shown in Table 1. As shown in Figure 7, data collection and processing were designed as follows: (1) H: images were acquired at different heights from 15 m to 50 m with a step size of 1 m; (2) W: five targets of {20, 25, 30, 35, 40} cm were laid out at each height; (3) s: each target was processed with six preset ratios {1/4, 1/3, 1/2, 2/3, 3/4, 1}. The R value of each parameter combination was calculated via Equation (4), and the results are shown in Table 2.
The maximum R value among the candidates was determined via Equation (8) and taken as the critical radius R_t:

R_t = max(R_i), (8)

where R_i is the R of each candidate parameter combination. The final critical detection radius is 6 pixels.
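The search procedure above can be sketched as follows. Here `radius_of` and `detected` are hypothetical stand-ins for evaluating Equation (4) and for running the detector on the images of one parameter sweep:

```python
def critical_radius(sweeps, radius_of, detected):
    """Critical detection radius R_t (cf. Equation (8)). Each sweep fixes two
    of the {W, H, s} parameters and adjusts the third downwards; the R of the
    last combination whose marker was still detected is a candidate, and R_t
    is the largest candidate over all sweeps."""
    candidates = []
    for sweep in sweeps:
        last_ok = None
        for combo in sweep:        # ordered from easiest to hardest
            if detected(combo):
                last_ok = combo
            else:
                break              # first failure ends this sweep
        if last_ok is not None:
            candidates.append(radius_of(last_ok))
    return max(candidates) if candidates else None
```

In the experiment described above, this procedure yields the reported R_t of 6 pixels.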

Appropriate Combinations of Detection Parameters
The R value calculated by Equation (4) is affected by the parameters W, H, and s. When determining an R value, we should consider factors such as the marker installation difficulty, image quality, and marker detection calculation efficiency. Therefore, W and s should be minimized while maintaining the detection accuracy. Since UAV measurements have specific flight altitude requirements, the value of H fluctuates around a constant value. In this study, the detection accuracy is measured by the distance between the detected target point and the manually selected target point. The manually selected target point locations were obtained by averaging at least three selected points, and any R values smaller than R_t were discarded during processing. The same experimental data as in Section 3.1 were used for this experiment. We started from a height of 15 m with a step size of 5 m. The results are shown in Figure 8.
As shown in Figure 8, when the flight height is low, all five target sizes are able to detect the marker center at all six acquisition radius ratios. As the height increases, the combination of a small target size and a small radius ratio becomes unable to detect the marker (R < R_t). Additionally, as the value of R gets closer to R_t (R > R_t), the accuracy of marker detection decreases. At the flight altitude of 50 m, all targets are only detected at s = 1. We analyzed the results as follows: (1) the minimum detection error e_h was determined for each height h ∈ {15, 20, ..., 45, 50} m; (2) the accuracy threshold τ was set, and all detection errors e that satisfy e_h ≤ e ≤ e_h + τ at height h were found; (3) as the value of s increases, the difficulty of marker installation becomes more significant than the improvement in computational efficiency. In this paper, we give priority to W rather than s. Therefore, among all the detection errors that satisfy the conditions in (2), we prioritized the parameter combination with a small marker size as the appropriate detection parameters.
The analysis results are shown in Table 3.
where Avg_RMSE denotes the RMSE of all parameter combinations, i denotes the number of parameter combinations that can detect the marker, Min_Error denotes the minimum detection error at this height, and Error denotes the detection error of the best parameter combination in this paper. We have identified appropriate parameter combinations at various heights and will use them in the following sections.
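The three analysis steps can be sketched as follows; the `errors` dictionary is a hypothetical stand-in for the measured detection errors of all detectable {W, s} combinations at one height:

```python
def best_combo(errors, tau):
    """Select the appropriate {W, s} combination at one height:
    (1) find the minimum detection error e_h;
    (2) keep combinations whose error e satisfies e_h <= e <= e_h + tau;
    (3) among those, prefer the smallest marker size W (then the smallest s),
        since W takes priority over s in this paper."""
    e_h = min(errors.values())
    eligible = [combo for combo, e in errors.items() if e <= e_h + tau]
    return min(eligible)  # tuples sort by W first, then s
```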

Performance of the Adaptive Radon Transform Method
In this section, the performance of the proposed adaptive Radon transform method is assessed by comparing it with the original Radon transform method, the template matching method, and the Harris corner detection method. For the experiment, we used the data from Section 3.1 after applying Gaussian blurring and adding Gaussian noise to simulate the low image quality that occurs during real UAV operations (Figure 9).
For the proposed method, the parameter combinations for different flying heights were selected according to the results of Section 3.2 using the nearest-neighbor rule. For example, when H = {18, 19, 20, 21, 22} m, all parameter combinations were selected based on H = 20 m. The original Radon transform method selects parameters based on visual experience (Figure 10). The marker information acquisition radius was determined by human judgment, and the pixel length of the marker image at that radius was acquired through statistical analysis. The edge width of the cross-shaped scoring template was determined by the distance between the edge and the centerline of the black and white areas. In this experiment, the selected heights, R and L were 15-50 m, 11 pixels, and 1.6 pixels, respectively. The same parameters were used in the Harris algorithm as those in paper [12]. The template matching method uses a square template that is uniformly divided into four rectangles: two are defined as black sections and two as white sections, interlaced with each other. The marker center is located by matching the square template with the image.
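The nearest-neighbor rule can be sketched as follows, assuming the calibrated heights used in Section 3.2:

```python
def nearest_calibrated_height(H, calibrated=(15, 20, 25, 30, 35, 40, 45, 50)):
    """Return the calibrated height closest to the flight altitude H; the
    parameter combination found for that height is then reused."""
    return min(calibrated, key=lambda h: abs(h - H))
```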

Performance of the Adaptive Radon Transform Method
We used the four methods to detect the markers in low-quality images at all heights. If the distance between the detected marker and the manually selected marker is greater than 3 pixels, detection is considered to have failed. The results are shown in Table 4. As shown in Table 4, the accuracy of the proposed method is much higher than that of the template matching method and the Harris corner detection method, and is also higher than that of the original Radon-transform-based detection method. The proposed method failed to detect markers with a size of 20 cm at flight altitudes of 47, 48, 49, and 50 m (Figure 11). These images have some similarities: the marker's radius is under 6 pixels, the black sector area is severely lacking, and the center area is blurred. These features limit marker detection. At flight altitudes of 47 m or above, the marker size should be larger than 20 cm for detection.
After detecting the markers, we performed marker localization using a surface-fitting method [22]. The Radon transform methods locate markers in the saliency map, whereas the other methods locate markers in the original image. To evaluate the accuracy, we calculated the RMSE of the five markers at each height for all four methods. The results are shown in Figure 12.
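A generic sub-pixel peak extraction by quadratic surface fitting can be sketched as follows. This is an illustrative stand-in fitted to the 3x3 neighbourhood of the saliency maximum, not necessarily the exact surface-fitting method of [22], and it assumes the peak does not lie on the map border:

```python
import numpy as np

def subpixel_peak(sal):
    """Fit z = a + b*dx + c*dy + d*dx^2 + e*dx*dy + f*dy^2 to the 3x3
    neighbourhood of the integer maximum of the saliency map and return
    the stationary point of the fitted surface as (x, y)."""
    py, px = np.unravel_index(np.argmax(sal), sal.shape)
    A, z = [], []
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            A.append([1.0, dx, dy, dx * dx, dx * dy, dy * dy])
            z.append(sal[py + dy, px + dx])
    a, b, c, d, e, f = np.linalg.lstsq(np.array(A), np.array(z), rcond=None)[0]
    # The gradient of the fitted quadratic vanishes at the peak:
    # solve the 2x2 linear system for the sub-pixel offset.
    hess = np.array([[2.0 * d, e], [e, 2.0 * f]])
    dx0, dy0 = np.linalg.solve(hess, np.array([-b, -c]))
    return px + dx0, py + dy0
```

On an exactly quadratic surface the fit is exact, so the recovered peak matches the true continuous maximum.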

The results in Figure 12 show that the proposed method outperforms the other three methods in terms of marker localization accuracy, which is within 1 pixel. The overall accuracy of the original Radon transform method is high, but the positioning can be inaccurate at certain heights. This is because parameter selection is influenced by operator experience and the complex and changing image situation. The selected parameters may not be adaptable to the image data, resulting in a low accuracy. The proposed method has a higher accuracy than the template matching method and the Harris detection method, primarily because the latter two methods rely heavily on the original image quality, which is often compromised in UAV photography. The proposed adaptive Radon transform method locates markers in the saliency map. The original image quality only affects the circular spot size in the saliency map and has no effect on the location of the peak point. Therefore, using the generated marker saliency map for marker localization leads to a high accuracy.
As shown in Table 5, the proposed method outperforms the template matching method and the Harris algorithm with regard to the detection success rate and localization accuracy. Additionally, compared with the original Radon transform method, the proposed method avoids many manual operations, greatly improving efficiency while also improving the detection success rate and localization accuracy. In complex imaging environments, the proposed method achieves a balance between efficiency and accuracy. It ultimately contributes to high precision, high efficiency, and automation in UAV displacement measurements.

Displacement Measurement Experiment
In order to evaluate the effectiveness of the proposed method for displacement measurements, we carried out three-dimensional displacement measurement experiments in an area of about 10,000 square meters located directly south of Central South University's stadium. The specific experimental flow is as follows: (1) Lay out measurement markers and acquire UAV images in four missions. The measurement markers were laid out as shown in Figure 13: four marker control points, eleven marker displacement measurement points, and two 3D slide table displacement simulation points. The markers were set to a size of 20 × 20 cm based on the results in Section 3.2. The main body of each 3D simulation point is a three-axis slide unit (with an accuracy of 1 mm) with a measurement marker fixed on top; the slide scale can be adjusted to set the true displacement value (Figure 14). Control points were measured using a Leica TS09 total station (Leica, Wetzlar, Germany) with an accuracy of 2.2 mm.
(2) Select the data of the first mission and locate all measurement markers using the proposed method, the original Radon transform method, the Harris algorithm, and the template matching method separately. Reconstruct the 3D model.
(3) Export the 3D coordinates of the displacement simulation points. Select the data of the second mission and repeat steps (1) and (2) to obtain the 3D coordinates of the displacement simulation points.
(4) Analyze the difference in the 3D coordinates of the displacement simulation points obtained from the two missions. Then, we can obtain the displacement measurements of these two UAV missions via Equation (11):
D_n^ij = [(x_ni − x_nj)^2 + (y_ni − y_nj)^2 + (z_ni − z_nj)^2]^(1/2)   (11)

where (x_ni, y_ni, z_ni) and (x_nj, y_nj, z_nj) denote the three-dimensional coordinates of the nth measurement point computed during the ith and jth UAV missions, respectively, and D_n^ij denotes the displacement measurement result of the nth measurement point in two adjacent UAV missions.

The weather during the experiment was sunny with a breeze. The other conditions during the experiment are shown in Table 6. The final displacement results of the displacement simulation points are shown in Table 7.
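Equation (11) is the Euclidean distance between the coordinates of the same point in two missions. A minimal sketch, with hypothetical coordinate tuples for illustration:

```python
import math

def displacement(p_i, p_j):
    """Equation (11): displacement of a measurement point between UAV
    missions i and j, given its reconstructed 3D coordinates in each."""
    (xi, yi, zi), (xj, yj, zj) = p_i, p_j
    return math.sqrt((xi - xj) ** 2 + (yi - yj) ** 2 + (zi - zj) ** 2)

# Hypothetical coordinates (meters) of one simulation point in two missions.
d = displacement((102.31, 57.08, 12.40), (102.31, 57.08, 12.41))
```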

Figure 1. Principle of the cross-shaped marker detection and localization method based on the Radon transform.

Figure 2. The parameter selection principles of the two methods.

Figure 3. Determining the marker information acquisition radius. The ground sampling distance, GSD, is calculated by Equation (3).
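Equation (3) itself is not reproduced in this excerpt; the sketch below uses the standard GSD formula (flight height × sensor pixel pitch ÷ focal length), which is assumed rather than taken from the paper:

```python
def ground_sampling_distance(height_m, pixel_pitch_mm, focal_length_mm):
    """Standard GSD formula (assumed; Equation (3) is not reproduced here):
    ground distance covered by one image pixel, in meters per pixel."""
    return height_m * pixel_pitch_mm / focal_length_mm

# Hypothetical sensor: 2.4 um pixel pitch, 8.8 mm focal length, 50 m altitude.
gsd = ground_sampling_distance(50.0, 0.0024, 8.8)  # ~0.0136 m/pixel
```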

Figure 4. Schematic diagram of the target point scoring system. G stands for the gradient intensity map and T stands for the template diagram. G11 and T11 are the gray values of point (1, 1) on the gradient intensity map and template diagram, respectively.
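The scoring idea in Figure 4 — combining the gradient intensity map G with the cross-shaped template T element by element — can be sketched as a normalized correlation; the paper's exact normalization is assumed here, not reproduced:

```python
import numpy as np

def template_score(grad_patch, template):
    """Score a candidate point by correlating a local gradient-intensity
    patch G with the cross-shaped template T: sum of products G_ij * T_ij,
    scaled by the patch norms (assumed normalization)."""
    g = np.asarray(grad_patch, dtype=float).ravel()
    t = np.asarray(template, dtype=float).ravel()
    denom = np.linalg.norm(g) * np.linalg.norm(t)
    if denom == 0.0:
        return 0.0  # flat patch or empty template: no evidence of a marker
    return float(np.dot(g, t) / denom)
```

A patch whose gradient pattern matches the template scores close to 1; unrelated patterns score near 0.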

Figure 5. Selecting the edge width of a cross-shaped scoring template.

Figure 6. Flowchart of the proposed adaptive method based on the Radon transform.

Figure 8. Detection accuracy for different {W, H, s} parameter combinations. Flight heights are recorded in the upper right corner of each figure.

Figure 10. Manually selected parameters for the original Radon transform method.

Figure 12. RMSE of the four methods for five markers at different heights.

Figure 13. Layout of the measurement markers at the experimental site.

Table 2. R value of different parameter combinations.

Table 3. Appropriate detection and localization parameters and accuracy at different flying heights.

Table 4. Marker detection results of different methods.

Table 5. Performance of different methods.

Figure 14. Three-dimensional slide table displacement simulation point.