To evaluate the performance of our multi-view stereo-matching algorithm for multiple UAV images more deeply and comprehensively, we use three typical sets of UAV data with different texture characteristics and assess the results from three perspectives: visual inspection, quantitative description, and complexity.
3.2.2. The First Dataset: Northwest University Campus, China
This group of experiments uses UAV imagery taken over the Northwest University campus in Shaanxi, China, with a Canon EOS 400D. The flying height was 700 m, and the ground resolution of the imagery is approximately 0.166 m. The flight lasted 40 min and produced a total of 67 images. The specific parameters of the photography are given in
Table 1.
Table 1.
The parameters of the UAV photography in Northwest University.
Camera Name | CCD Size (mm × mm) | Image Resolution (pixels × pixels) | Pixel Size (μm) | Focal Length (mm) | Flying Height (m) | Ground Resolution (m) | Number of Images |
---|---|---|---|---|---|---|---|
Canon EOS 400D | 22.16 × 14.77 | 3888 × 2592 | 5.7 | 24 | 700 | 0.166 | 67 |
This set of data was provided by Xi’an Dadi Surveying and Mapping Corporation. We used GodWork, a commercial UAV photogrammetric processing package developed by Wuhan University, to perform automatic aerotriangulation and obtain the precise orientation elements of the images (the structure-from-motion software VisualSFM by Changchang Wu [62,63] could also be used to estimate the precise camera poses). The accuracy of the aerotriangulation was as follows: the unit-weight mean square error (Sigma0) was 0.49 pixels, and the average residual of the image points was 0.23 pixels. Because the dataset contains no ground control points, the bundle adjustment was performed as a free-network (freenet) adjustment, so the orientation elements are expressed in the freenet coordinate frame.
Figure 9 is the tracking map of the 67 images under freenet. We used GodWork software to remove the lens distortion of the 67 images.
Table 2 shows part of the corrected images’ external orientation elements.
Figure 9.
The tracking map of the 67 images under freenet.
Table 2.
The external orientation elements of a portion of the corrected images.
Image Name | X (m) | Y (m) | Z (m) | φ (Degree) | ω (Degree) | κ (Degree) |
---|---|---|---|---|---|---|
IMG_0555 | −201.736 | −31.7532 | −1.35375 | −1.0234 | −0.44255 | 166.7002 |
IMG_0554 | −194.482 | 17.90618 | −1.22801 | −0.71193 | −0.1569 | 166.2114 |
IMG_0553 | −187.641 | 67.63342 | −0.87626 | 0.320481 | −0.03621 | 166.3385 |
IMG_0552 | −180.965 | 116.052 | −0.61774 | 0.518773 | −0.87317 | 166.7509 |
IMG_0551 | −174.434 | 166.2001 | −0.59227 | 0.454139 | −0.86679 | 167.2784 |
IMG_0550 | −168.264 | 214.3878 | −0.79246 | −0.57551 | −0.67894 | 167.5912 |
IMG_0549 | −162.096 | 265.1268 | −0.70428 | −1.08565 | −0.7456 | 167.166 |
IMG_0548 | −156.002 | 314.2426 | −0.53393 | −1.40444 | −0.84598 | 166.5402 |
IMG_0547 | −148.987 | 367.3003 | −0.30983 | −0.77136 | −0.86104 | 166.5097 |
IMG_0546 | −142.152 | 417.2658 | −0.04708 | −0.38776 | −0.89084 | 166.5219 |
IMG_0545 | −135.102 | 466.7365 | 0.507349 | −0.08542 | −0.09929 | 166.4838 |
IMG_0544 | −128.322 | 520.032 | 1.246216 | −0.2938 | −0.41415 | 166.7157 |
IMG_0543 | −121.833 | 569.8722 | 1.862483 | −0.20825 | −0.21687 | 167.1119 |
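As an illustration of how the exterior orientation elements in Table 2 can be used, the following sketch builds a rotation matrix from the (φ, ω, κ) angles and pairs it with the camera centre. The φ-ω-κ rotation order is an assumption (a common photogrammetric convention; the text does not state which convention GodWork uses), as are the function and variable names:

```python
import numpy as np

def rotation_phi_omega_kappa(phi_deg, omega_deg, kappa_deg):
    """Rotation matrix for the assumed phi-omega-kappa convention:
    R = R_y(phi) @ R_x(omega) @ R_z(kappa)."""
    p, o, k = np.radians([phi_deg, omega_deg, kappa_deg])
    r_phi = np.array([[ np.cos(p), 0.0, np.sin(p)],
                      [ 0.0,       1.0, 0.0      ],
                      [-np.sin(p), 0.0, np.cos(p)]])
    r_omega = np.array([[1.0, 0.0,        0.0      ],
                        [0.0, np.cos(o), -np.sin(o)],
                        [0.0, np.sin(o),  np.cos(o)]])
    r_kappa = np.array([[np.cos(k), -np.sin(k), 0.0],
                        [np.sin(k),  np.cos(k), 0.0],
                        [0.0,        0.0,       1.0]])
    return r_phi @ r_omega @ r_kappa

# Camera centre and rotation for IMG_0555, taken from the first row of Table 2.
C = np.array([-201.736, -31.7532, -1.35375])
R = rotation_phi_omega_kappa(-1.0234, -0.44255, 166.7002)
# Under this convention, an object point X maps into camera coordinates
# as R.T @ (X - C) before being projected with the interior orientation.
```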
First, we used the proposed UAV multiple image-grouping strategy to divide this set of 67 images into 12 groups. The serial numbers and names of the images in each group are shown in
Table 3.
Table 3.
Image-grouping result of the Northwest University data.
Group | Image Number | Corresponding Image Name | Number of Images |
---|---|---|---|
0 | 0 1 2 3 4 5 | IMG_1093~IMG_1088 | 6 |
1 | 6 7 8 9 10 11 | IMG_1087~IMG_1082 | 6 |
2 | 12 13 14 15 | IMG_1081~IMG_1078 | 4 |
3 | 16 17 18 19 20 21 | IMG_0102~IMG_0107 | 6 |
4 | 22 23 24 25 26 27 | IMG_0108~IMG_0113 | 6 |
5 | 28 29 30 31 32 | IMG_0114~IMG_0118 | 5 |
6 | 33 34 35 36 37 38 | IMG_0641~IMG_0646 | 6 |
7 | 39 40 41 42 43 44 | IMG_0647~IMG_0652 | 6 |
8 | 45 46 47 48 49 50 | IMG_0653~IMG_0658 | 6 |
9 | 51 52 53 54 55 56 | IMG_0555~IMG_0550 | 6 |
10 | 57 58 59 60 61 62 | IMG_0549~IMG_0544 | 6 |
11 | 63 64 65 66 | IMG_0543~IMG_0540 | 4 |
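The grouping strategy itself is presented earlier in the paper; purely as an illustrative sketch (not the actual method), a naive sequential partition into groups of at most six consecutive images could look as follows. The real strategy additionally respects flight strips and image overlap, which is why Table 3 contains groups of four and five images:

```python
def group_images(image_names, max_per_group=6):
    """Illustrative only: chunk an ordered image sequence into groups of
    at most max_per_group consecutive images (strip boundaries ignored)."""
    return [image_names[i:i + max_per_group]
            for i in range(0, len(image_names), max_per_group)]

# Toy usage with names shaped like those in Table 3.
groups = group_images([f"IMG_{n:04d}" for n in range(540, 556)])
```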
After image grouping, we used the proposed Self-Adaptive Patch-based Multi-View Stereo-matching algorithm (SAPMVS) to process each image group and obtained the 3D dense point cloud of each group. Then, we merged the 3D dense point clouds of all groups; the merged 3D dense point cloud is shown in
Figure 10 (the small black areas in the figures are water). We found that the merged 3D dense point cloud has 8,526,192 points, and the point density is approximately three points per square meter; thus, the ground resolution is approximately 0.3 m.
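A minimal sketch of the merging step, assuming each group's result is an (N, 3) NumPy array of X, Y, Z coordinates. The text does not specify how duplicated points in group-overlap areas are handled; the optional voxel thinning below is only one plausible choice:

```python
import numpy as np

def merge_point_clouds(group_clouds, voxel=None):
    """Stack per-group (N_i, 3) point arrays into a single cloud. If voxel
    is given (in metres), keep one point per voxel to thin duplicates in
    group-overlap areas."""
    merged = np.vstack(group_clouds)
    if voxel is not None:
        keys = np.floor(merged / voxel).astype(np.int64)   # voxel index per point
        _, first = np.unique(keys, axis=0, return_index=True)
        merged = merged[np.sort(first)]                    # first point in each voxel
    return merged
```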
Because this dataset lacks control points and high-precision reference data such as a laser point cloud, we use visual inspection to evaluate the results of the proposed algorithm, i.e., we check whether the shape of the 3D point cloud is consistent with the actual terrain. We compared the 3D dense point cloud with the corresponding corrected (lens-distortion-free) images, as shown in
Figure 11. By comparing the point clouds and images in
Figure 11, it can be seen that the 3D dense point clouds of the proposed algorithm accurately described the terrain features of the Northwest University campus as well as the shape and distribution of physical objects (such as roads and buildings).
Figure 10.
The merged 3D dense point cloud of Northwest University.
For further analysis of the accuracy and efficiency of the proposed algorithm, we used the proposed IG-SAPMVS algorithm and PMVS algorithm [
18,
52], respectively, to process this set of data, and recorded the processing time and the 3D point cloud results.
Table 4 shows the statistics for these two algorithms with respect to the processing time and the point number of 3D dense point clouds.
Figure 12 shows the final 3D dense point cloud results.
From Table 4, it can be seen that the processing time of the proposed IG-SAPMVS algorithm is approximately half that of the PMVS algorithm; the computational efficiency of IG-SAPMVS is therefore significantly higher. Because the terrain relief of the test area is small and the area contains many flat, open squares, the proposed SAPMVS algorithm spreads more quickly than PMVS during matching propagation. On the other hand, Table 4 shows that the number of points in the 3D dense point cloud produced by IG-SAPMVS is 1.15 times that of PMVS, and Figure 12 shows that, under visual inspection, the two point clouds are almost identical. In general, the proposed IG-SAPMVS algorithm outperforms PMVS in both computational efficiency and the number of points in the 3D dense point cloud.
Figure 11.
Comparison of the same areas in the 3D dense point clouds and the corresponding images. (a) The building area; (b) the flat area.
Figure 12.
Final 3D point cloud results of the Northwest University campus using IG-SAPMVS (a) and PMVS (b).
Table 4.
Statistics for the proposed IG-SAPMVS algorithm and PMVS algorithm.
Algorithm | RunTime (h:min:s) | Point Cloud Amount | Number of Images |
---|---|---|---|
IG-SAPMVS | 2:41:38 | 8,526,192 | 67 |
PMVS | 4:08:30 | 7,428,720 | 67 |
3.2.3. The Second Dataset: Remote Mountains
This group of experiments uses UAV imagery of remote mountains in China, characterized by large relief, heavy vegetation and few man-made objects such as roads and buildings; the images were also taken with a Canon EOS 400D at a 24 mm focal length. The flying height is approximately 1900 m, and the ground resolution of the imagery is approximately 0.451 m. There are a total of 125 images.
Figure 13 is the GPS tracking map under the geodetic control network. We also used GodWork software to remove the lens distortion of the 125 images.
Figure 13.
The GPS tracking map of the UAV images taken in remote mountains under geodetic control network.
This set of data was also provided by Xi'an Dadi Surveying and Mapping Corporation. Because the airborne GPS/IMU data have low accuracy, they cannot meet the requirements of the proposed multi-view stereo-matching algorithm. We therefore again used the GodWork software to perform automatic aerotriangulation and obtain the images’ precise exterior orientation elements. The accuracy of the aerotriangulation was as follows: the unit-weight mean square error (Sigma0) was 0.77 pixels, and the average residual of the image points was 0.36 pixels.
Table 5 shows part of the corrected images’ external orientation elements.
First, we also used the UAV multiple image-grouping strategy to divide this set of 125 images into 21 groups. After image grouping, we used the proposed Self-Adaptive Patch-based Multi-View Stereo-matching algorithm (SAPMVS) to process each image group and obtained the 3D dense point cloud of each group. Then, we merged the 3D dense point clouds of all groups; the merged 3D dense point cloud is shown in
Figure 14 (the small black areas in the figures are water). We found that the merged 3D dense point cloud has 12,509,202 points, and the point density is approximately one point per square meter; thus, the ground resolution is approximately 1 m.
Table 5.
The external orientation elements of part of the corrected images (the remote mountains).
Image Name | X (m) | Y (m) | Z (m) | φ (Degree) | ω (Degree) | κ (Degree) |
---|---|---|---|---|---|---|
IMG_0250 | 453208.9 | 4312689 | 1926.36 | 4.251745 | −2.30691 | 10.23632 |
IMG_0251 | 453205.6 | 4312796 | 1926.462 | 2.852207 | −6.04494 | 10.94959 |
IMG_0252 | 453200.6 | 4312900 | 1921.71 | 0.681977 | −7.93856 | 11.60872 |
IMG_0253 | 453198.6 | 4313007 | 1920.499 | −0.01863 | −3.6135 | 8.714555 |
IMG_0254 | 453198.7 | 4313117 | 1915.972 | 3.277204 | −6.38877 | 8.599633 |
IMG_0255 | 453199.3 | 4313230 | 1910.448 | 6.64432 | −6.29777 | 8.965715 |
IMG_0256 | 453196.7 | 4313341 | 1904.105 | 3.33602 | −6.5811 | 10.72329 |
IMG_0257 | 453194.1 | 4313450 | 1903.386 | 1.594531 | −4.486 | 10.71933 |
IMG_0258 | 453192.8 | 4313559 | 1902.593 | −4.04339 | −5.57166 | 8.112464 |
IMG_0259 | 453194.9 | 4313667 | 1899.551 | 3.77633 | −2.96418 | 7.066141 |
IMG_0260 | 453198.2 | 4313776 | 1899.075 | 7.240338 | −5.29839 | 8.254281 |
IMG_0261 | 453196.6 | 4313882 | 1898.725 | 3.41796 | −1.02482 | 10.90049 |
IMG_0262 | 453193.1 | 4313986 | 1895.928 | 3.000324 | −5.9483 | 12.35882 |
IMG_0263 | 453185.4 | 4314095 | 1896.413 | 1.532879 | −4.00544 | 11.27756 |
IMG_0264 | 453184 | 4314196 | 1895.204 | 4.379199 | −2.4512 | 11.95034 |
Figure 14.
The merged 3D dense point cloud of the remote mountains. (a) The plan view of the merged 3D dense point cloud; (b) The side views of the merged 3D point-cloud.
For further analysis of the accuracy and efficiency of the proposed algorithm, we used the proposed IG-SAPMVS algorithm and PMVS algorithm [
18,
52], respectively, to process this set of remote mountain data and recorded the processing time and the 3D point cloud results.
Table 6 shows the statistics of these two algorithms with respect to the processing time and the point number of the 3D dense point clouds.
Figure 15 shows the final results of the 3D dense point cloud.
Figure 15.
Final 3D point cloud results of the remote mountains using IG-SAPMVS (a) and PMVS (b).
Table 6.
The statistics of the proposed IG-SAPMVS algorithm and PMVS algorithm.
Algorithm | RunTime (h:min:s) | Point Cloud Amount | Number of Images |
---|---|---|---|
IG-SAPMVS | 4:36:05 | 12,509,202 | 125 |
PMVS | 15:48:11 | 8,953,228 | 125 |
From Table 6, it can be seen that the processing time of the proposed IG-SAPMVS algorithm is about one-third that of the PMVS algorithm; the efficiency of IG-SAPMVS is thus significantly higher. Even in remote mountainous terrain with complex relief, the proposed SAPMVS algorithm spreads more quickly than PMVS during matching propagation. On the other hand, Table 6 shows that the number of points in the 3D dense point cloud produced by IG-SAPMVS is 1.40 times that of PMVS, and Figure 15 shows that, under visual inspection, the two point clouds are nearly identical. In general, the proposed IG-SAPMVS algorithm significantly outperforms PMVS in both computational efficiency and the number of points in the 3D dense point cloud.
3.2.4. The Third Dataset: Vaihingen, Germany
The third dataset was captured over Vaihingen, Germany, by the German Society for Photogrammetry, Remote Sensing and Geoinformation (DGPF) [
64]. It consists of three test areas of various object classes (three yellow areas in
Figure 16).
- Area 1 “Inner City”: This test area is situated in the center of the city of Vaihingen. It is characterized by dense development consisting of historic buildings with rather complex shapes, but there are also some trees (Figure 17a).
- Area 2 “High Riser”: This area is characterized by a few high-rise residential buildings that are surrounded by trees (Figure 17b).
- Area 3 “Residential Area”: This is a purely residential area with small detached houses (Figure 17c).
Figure 16.
The Vaihingen test areas.
Figure 17.
The three test sites in Vaihingen. (a) a1-a8: the eight cut images of the “Inner City” from the original images: 10030061.jpg, 10030062.jpg, 10040083.jpg, 10040084.jpg, 10050105.jpg, 10050106.jpg, 10250131.jpg, 10250132.jpg, respectively; (b) b1-b4: the four cut images of the “High Riser” from the original images: 10040082.jpg, 10040083.jpg, 10050104.jpg, 10050105.jpg, respectively; (c) c1-c6: the six cut images of the “Residential Area” from the original images: 10250134.jpg, 10250133.jpg, 10040083.jpg, 10040084.jpg, 10050105.jpg, 10050106.jpg, respectively.
The data include high-resolution digital aerial images with their orientation parameters, as well as airborne laser scanner data (available in [65]).
Digital Aerial Images and Orientation Parameters: The images are a part of the Intergraph/ZI DMC block with 8 cm ground resolution [
64]. Each area is visible in multiple images from several strips. The orientation parameters are distributed together with the images. The accuracy of aerotriangulation is as follows: the value of unit-weight mean square error (Sigma0) is about 0.25 pixels.
Table 7 shows the external orientation elements of the images in the test region.
Airborne Laser Scanner Data: The test area was covered by 10 strips captured with a Leica ALS50 system. Within an individual strip, the average point density is approximately 4 points per square meter [66]. The airborne laser scanner data of the test region are shown in
Figure 18.
Table 7.
The external orientation elements of the experimental images (Vaihingen data).
Image Name | X (m) | Y (m) | Z (m) | ω (Degree) | ϕ (Degree) | κ (Degree) |
---|---|---|---|---|---|---|
10030060.tif | 496803.043 | 5420298.566 | 1163.983 | 2.50674 | 0.73802 | 199.32970 |
10030061.tif | 497049.238 | 5420301.525 | 1163.806 | 2.05968 | 0.67409 | 199.23470 |
10030062.tif | 497294.288 | 5420301.839 | 1163.759 | 1.97825 | 0.51201 | 198.84290 |
10030063.tif | 497539.821 | 5420299.469 | 1164.423 | 1.40457 | 0.38326 | 198.88310 |
10040081.tif | 496558.488 | 5419884.008 | 1181.985 | −0.87093 | 0.36520 | −199.20110 |
10040082.tif | 496804.479 | 5419882.183 | 1183.373 | −0.26935 | −0.63812 | −198.97290 |
10040083.tif | 497048.699 | 5419882.847 | 1184.616 | 0.34834 | −0.40178 | −199.44720 |
10040084.tif | 497296.587 | 5419884.550 | 1185.010 | 0.81501 | −0.53024 | −199.35600 |
10040085.tif | 497540.779 | 5419886.806 | 1184.876 | 1.38534 | −0.46333 | −199.85010 |
10050103.tif | 496573.389 | 5419477.807 | 1161.431 | −0.48280 | −0.03105 | −0.23869 |
10050104.tif | 496817.972 | 5419476.832 | 1161.406 | −0.65210 | −0.06311 | −0.17326 |
10050105.tif | 497064.985 | 5419476.630 | 1159.940 | −0.74655 | 0.11683 | −0.09710 |
10050106.tif | 497312.996 | 5419477.065 | 1158.888 | −0.53451 | −0.19025 | −0.13489 |
10050107.tif | 497555.389 | 5419477.724 | 1158.655 | −0.55312 | −0.12844 | −0.13636 |
10250130.tif | 497622.784 | 5420189.950 | 1180.494 | 0.09448 | 3.41227 | −101.14170 |
10250131.tif | 497630.734 | 5419944.364 | 1181.015 | 0.61065 | 2.54420 | −97.84478 |
10250132.tif | 497633.024 | 5419698.973 | 1179.964 | 1.27053 | 1.62793 | −97.23292 |
10250133.tif | 497628.317 | 5419452.807 | 1179.237 | 0.90688 | 0.83308 | −98.72504 |
10250134.tif | 497620.954 | 5419207.621 | 1178.201 | 0.17675 | 1.27920 | −101.86160 |
10250135.tif | 497617.307 | 5418960.618 | 1176.629 | 0.22019 | 1.47729 | −101.55860 |
Figure 18.
The airborne laser scanner data of the experimental region (Vaihingen).
Because the pixel resolution of the images in this dataset is very large, processing the original images often exhausted the computer's memory in our experiments. In addition, each of the three test areas occupies only a small part of each image. Therefore, we cut the three test areas out of the original images separately (Figure 17). We used the proposed Self-Adaptive Patch-based Multi-View Stereo-matching algorithm (SAPMVS) to process the three sets of cut images separately, and obtained the 3D dense point cloud data of each dataset. The 3D dense point cloud data of each dataset are shown in
Figure 19. The statistics of the results by the proposed SAPMVS algorithm are shown in
Table 8.
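A minimal sketch of this cropping step using Pillow; the crop windows below are placeholders, not the actual extents of the test areas. Note that cropping shifts the principal point, so the interior orientation of each cut image has to be adjusted by the crop offset:

```python
from PIL import Image

# Placeholder crop windows (left, upper, right, lower) in pixels; the actual
# extents of the three test areas are not given in this excerpt.
crops = {
    "inner_city_a1": ("10030061.jpg", (1000, 1000, 2200, 2200)),
    "high_riser_b1": ("10040082.jpg", (1000, 1000, 2200, 2600)),
}

for out_name, (src, box) in crops.items():
    Image.open(src).crop(box).save(out_name + ".jpg")
```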
Figure 19.
Final 3D point cloud results for the three sets of cut images. (a) Area 1: “Inner City”; (b) Area 2: “High Riser”; (c) Area 3: “Residential Area”.
Table 8.
The statistics for the results of the Vaihingen data by the proposed SAPMVS algorithm.
Experiment Area | Number of Images (Image Pixel Resolution) | Point Amount | Average Distance between Points | RunTime (min:s) |
---|---|---|---|---|
Area 1: “Inner City” | 8 (1200 × 1200) | 253,125 | 16 cm | 11:33 |
Area 2: “High Riser” | 4 (1200 × 1600) | 220,073 | 16 cm | 6:30 |
Area 3: “Residential Area” | 6 (1400 × 1300) | 259,637 | 16 cm | 9:46 |
From
Table 8 and
Figure 19, it can be seen that the computational efficiency of the proposed SAPMVS algorithm is high and the ground resolution of the obtained 3D dense point cloud is approximately 0.16 m.
To quantitatively describe the accuracy of the proposed algorithm, we compare the 3D point cloud results obtained by PMVS and by the proposed algorithm with the high-precision airborne laser scanner data. The evaluation method is as follows: for each 3D point $P$ of the obtained point cloud result, we determine all the laser points near $P$ in the laser point cloud data (those whose XY-plane distance to $P$ is smaller than a threshold $D$, which is chosen according to the average point density of the airborne laser scanner data); we then calculate the average elevation $\bar{Z}$ of these nearby laser points and use it as the reference elevation of $P$ [36]. Finally, we compare the elevation $Z_P$ of each point $P$ with its reference elevation $\bar{Z}$ and calculate the root mean square error (RMSE) and the maximum error (Max) of the obtained 3D point cloud [67]. The calculation formulas are as follows:

$$\mathrm{RMSE}=\sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(Z_{P_i}-\bar{Z}_i\right)^{2}},\qquad \mathrm{Max}=\max_{1\le i\le n}\left|Z_{P_i}-\bar{Z}_i\right|,$$

where $n$ represents the number of points in the obtained 3D dense point cloud result. It should be noted that this evaluation method has one drawback: at an edge or break line, the elevation difference $\left|Z_{P}-\bar{Z}\right|$ may become large even when the matched point is correct, resulting in a pseudo-error. That is to say, if a 3D point $P$ lies on an edge (or on one side of an edge) while its nearby laser points lie beyond the edge (or on the other side of it), the value of $\left|Z_{P}-\bar{Z}\right|$ may become very large even though no real error is present (in that case, the maximum error Max is not meaningful). The quantitative evaluation results obtained without removing the pseudo-errors (statistically, the frequency of pseudo-errors is relatively small) are shown in
Table 9.
Table 9.
The quantitative evaluation accuracy of the obtained 3D dense point cloud without removing pseudo-errors. (a) PMVS; (b) The proposed algorithm.
(a)

Experiment Area | Checkpoint Amount | RMSE (m) | Max (m) | Percentage of Errors within 1 m |
---|---|---|---|---|
Area 1: “Inner City” | 245,752 | 2.180527 | 20.410379 | 66.7% |
Area 2: “High Riser” | 213,679 | 4.032463 | 30.742815 | 46.1% |
Area 3: “Residential Area” | 252,568 | 2.349705 | 18.903685 | 74.1% |

(b)

Experiment Area | Checkpoint Amount | RMSE (m) | Max (m) | Percentage of Errors within 1 m |
---|---|---|---|---|
Area 1: “Inner City” | 253,125 | 2.164632 | 20.307628 | 66.8% |
Area 2: “High Riser” | 220,073 | 3.950138 | 29.812880 | 46.4% |
Area 3: “Residential Area” | 259,637 | 2.328481 | 18.713413 | 74.2% |
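A sketch of the checkpoint evaluation described above, using a KD-tree over the XY coordinates of the laser points. The neighbourhood radius stands in for the threshold $D$; its value here is an assumption, as are the toy input arrays:

```python
import numpy as np
from scipy.spatial import cKDTree

def elevation_errors(mvs_xyz, laser_xyz, radius):
    """For each MVS point, average the elevations of all laser points within
    `radius` of it in the XY plane and return the elevation differences
    Z_P - Z_ref. Points with no laser neighbour get NaN (skipped as checkpoints)."""
    tree = cKDTree(laser_xyz[:, :2])                       # index laser points by XY
    neighbours = tree.query_ball_point(mvs_xyz[:, :2], r=radius)
    errors = np.full(len(mvs_xyz), np.nan)
    for i, idx in enumerate(neighbours):
        if idx:                                            # at least one laser point nearby
            errors[i] = mvs_xyz[i, 2] - laser_xyz[idx, 2].mean()
    return errors

# Toy inputs for illustration; in practice these are the MVS result and the
# ALS reference cloud, each an (N, 3) array of X, Y, Z.
mvs_points = np.array([[0.0, 0.0, 10.2], [1.0, 1.0, 11.0]])
laser_points = np.array([[0.1, 0.0, 10.0], [0.9, 1.1, 10.8], [5.0, 5.0, 12.0]])

# radius corresponds to D above; 0.5 m is an assumed value, not the paper's setting.
errs = elevation_errors(mvs_points, laser_points, radius=0.5)
valid = errs[~np.isnan(errs)]                              # usable checkpoints
rmse = float(np.sqrt(np.mean(valid ** 2)))
max_err = float(np.abs(valid).max())
pct_within_1m = 100.0 * np.mean(np.abs(valid) <= 1.0)
```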
Because a certain number of pseudo-errors remain and they may be very large, when evaluating the accuracy of the obtained 3D dense point cloud we focus mainly on the percentage of errors within 1 m (the vast majority of errors within 1 m are true errors, so this measure has real reference value) and secondarily on the RMSE value. The Max value, by contrast, is a maximum pseudo-error and is not meaningful; without such pseudo-errors, the actual RMSE value would also be much smaller.
Table 9a shows that the percentages of errors within 1 m for the PMVS algorithm in the three experiment areas (“Area 1”, “Area 2”, and “Area 3”) are 66.7%, 46.1% and 74.1%, respectively, and the corresponding RMSE values are 2.180527 m, 4.032463 m and 2.349705 m. Table 9b shows that the percentages of errors within 1 m for the proposed algorithm are 66.8%, 46.4% and 74.2%, respectively, and the corresponding RMSE values are 2.164632 m, 3.950138 m and 2.328481 m. The accuracy of the proposed algorithm is therefore slightly higher than that of the PMVS algorithm in all three experiment areas. It can also be seen that the matching accuracy, from high to low, is “Area 3”, “Area 1”, “Area 2”. This result is reasonable: “Area 3” contains mainly low residential buildings, the buildings in “Area 1” are more complex, and the buildings in “Area 2” are very tall, so the matching difficulty increases in that order.
To evaluate the actual accuracy of PMVS and the proposed algorithm more precisely, we need to delete the large pseudo-errors when calculating the RMSE value. We take a simple approach inspired by [68,69,70,71]: for “Area 3” and “Area 1”, if the elevation error $\left|Z_{P}-\bar{Z}\right|$ of a 3D point is greater than Elevation Error Threshold 1 ($T_1$), we regard it as a pseudo-error and delete the 3D point (i.e., do not use it as a checkpoint); for “Area 2”, if the elevation error of a 3D point is greater than Elevation Error Threshold 2 ($T_2$), we likewise regard it as a pseudo-error and delete the point. The resulting, more precise quantitative evaluation results of the two algorithms are shown in
Table 10 and
Table 11.
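The pseudo-error removal used for Tables 10 and 11 then amounts to discarding checkpoints whose absolute elevation error exceeds the area-specific threshold before recomputing the RMSE. A sketch with hypothetical threshold values (the paper's actual $T_1$ and $T_2$ settings are not reproduced in this excerpt):

```python
import numpy as np

# Hypothetical thresholds for illustration only.
T1 = 5.0   # assumed Elevation Error Threshold 1 (Areas 1 and 3), in meters
T2 = 10.0  # assumed Elevation Error Threshold 2 (Area 2), in meters

def rmse_without_pseudo_errors(errors, threshold):
    """Drop checkpoints whose |elevation error| exceeds the area-specific
    threshold, then recompute the RMSE over the remaining checkpoints."""
    kept = errors[np.abs(errors) <= threshold]
    return float(np.sqrt(np.mean(kept ** 2))), len(kept)

# Toy per-checkpoint error array, as produced by the earlier sketch.
errors_area1 = np.array([0.2, -0.5, 7.3, 0.9])
rmse_area1, n_checkpoints = rmse_without_pseudo_errors(errors_area1, T1)
```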
Table 10.
The actual quantitative evaluation accuracy of the obtained 3D dense point cloud after removing the large pseudo-errors (Elevation Error Thresholds $T_1$ and $T_2$). (a) PMVS; (b) The proposed method.
(a)

Experiment Area | Point Amount | Checkpoint Amount | RMSE (m) | Percentage of Errors within 1 m |
---|---|---|---|---|
Area 1: “Inner City” | 245,752 | 243,179 | 1.312648 | 67.5% |
Area 2: “High Riser” | 213,679 | 212,151 | 3.402339 | 46.5% |
Area 3: “Residential Area” | 252,568 | 250,463 | 1.587426 | 74.7% |

(b)

Experiment Area | Point Amount | Checkpoint Amount | RMSE (m) | Percentage of Errors within 1 m |
---|---|---|---|---|
Area 1: “Inner City” | 253,125 | 250,488 | 1.301095 | 67.5% |
Area 2: “High Riser” | 220,073 | 218,508 | 3.352631 | 46.7% |
Area 3: “Residential Area” | 259,637 | 257,470 | 1.571167 | 74.8% |
Table 10a shows that after we deleted the large pseudo-errors (using thresholds $T_1$ and $T_2$), the RMSE values of the PMVS algorithm for the three experiment areas (“Area 1”, “Area 2”, and “Area 3”) are 1.312648 m, 3.402339 m and 1.587426 m, respectively. Table 10b shows that the corresponding RMSE values of the proposed algorithm are 1.301095 m, 3.352631 m and 1.571167 m. The accuracy of the proposed algorithm is thus almost equal to that of the PMVS algorithm in all three experiment areas. In fact, the pseudo-errors that still remain act as a constraint on the measured precision of both algorithms.
Table 11.
The actual quantitative evaluation accuracy of the obtained 3D dense point cloud after removing almost all pseudo-errors (stricter thresholds than in Table 10). (a) PMVS; (b) The proposed method.
(a)

Experiment Area | Point Amount | Checkpoint Amount | RMSE (m) | Percentage of Errors within 1 m |
---|---|---|---|---|
Area 1: “Inner City” | 245,752 | 240,826 | 0.880695 | 68.2% |
Area 2: “High Riser” | 213,679 | 173,487 | 1.351428 | 57.1% |
Area 3: “Residential Area” | 252,568 | 248,728 | 0.898527 | 75.3% |

(b)

Experiment Area | Point Amount | Checkpoint Amount | RMSE (m) | Percentage of Errors within 1 m |
---|---|---|---|---|
Area 1: “Inner City” | 253,125 | 248,113 | 0.870425 | 68.1% |
Area 2: “High Riser” | 220,073 | 178,565 | 1.316283 | 57.2% |
Area 3: “Residential Area” | 259,637 | 255,674 | 0.886161 | 75.3% |
Table 11a shows that after we deleted almost all the pseudo-errors (using the stricter thresholds), the RMSE values of the PMVS algorithm for the three experiment areas (“Area 1”, “Area 2”, and “Area 3”) are 0.880695 m, 1.351428 m and 0.898527 m, respectively. Table 11b shows that the corresponding RMSE values of the proposed algorithm are 0.870425 m, 1.316283 m and 0.886161 m. The accuracy of the proposed algorithm is thus almost equal to that of the PMVS algorithm in all three experiment areas. In fact, because the building coverage of the three experimental regions is high, the overall accuracy of any dense matching algorithm is bound to decrease here. In addition, the precision of the image orientation elements may impose a small additional constraint on the precision of the proposed algorithm.