Article

A Quantitative Evaluation of UAV Flight Parameters for SfM-Based 3D Reconstruction of Buildings

1 Department of Architectural Engineering, Hanyang University, Seoul 04763, Republic of Korea
2 Department of Digital Architecture and Urban Engineering, Hanyang Cyber University, Seoul 04763, Republic of Korea
* Authors to whom correspondence should be addressed.
Appl. Sci. 2025, 15(13), 7196; https://doi.org/10.3390/app15137196
Submission received: 23 May 2025 / Revised: 23 June 2025 / Accepted: 25 June 2025 / Published: 26 June 2025
(This article belongs to the Special Issue Applications in Computer Vision and Image Processing)

Abstract

This study aims to address the critical lack of standardized guidelines for unmanned aerial vehicle (UAV) image acquisition strategies utilizing structure-from-motion (SfM) by focusing on 3D building exterior modeling. A comprehensive experimental analysis was conducted to systematically investigate and quantitatively evaluate the effects of various shooting patterns and parameters on SfM reconstruction quality and processing efficiency. This study implemented a systematic experimental framework to test various UAV flight patterns, including circular, surface, and aerial configurations. Under controlled environmental conditions on representative building structures, key variables were manipulated, and all collected data were processed through a consistent SfM pipeline based on the SIFT algorithm. Quantitative evaluation using various analytical methodologies (multiple regression analysis, Kruskal–Wallis test, random forest feature importance, principal component analysis with K-means clustering, response surface methodology (RSM), the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS), and Pareto optimization) revealed that the basic shooting pattern ‘type’ has a statistically significant influence on all major SfM performance metrics (reprojection error, final point count, computation time, reconstruction completeness; Kruskal–Wallis p < 0.001). Additionally, within the patterns, clear parameter sensitivities and complex nonlinear relationships were identified (e.g., the overlap variables play a decisive role in determining the point count and completeness of surface patterns, with an adjusted R2 ≈ 0.70; the results of circular patterns are strongly influenced by the interaction between radius and tilt angle on reprojection error and point count, with an adjusted R2 ≈ 0.80).
Furthermore, composite pattern analysis using TOPSIS identified excellent combinations that balanced multiple criteria, and Pareto optimization explicitly quantified the inherent trade-offs between conflicting objectives (e.g., time vs. accuracy, number of points vs. completeness). In conclusion, this study clearly demonstrates that hierarchical strategic approaches are essential for optimizing UAV-SfM data collection. Additionally, it provides important empirical data, a validated methodological framework, and specific quantitative guidelines for standardizing UAV data collection workflows, thereby improving existing empirical or case-specific approaches.

1. Introduction

Inspecting and maintaining the exterior walls of buildings on a regular basis is essential for ensuring structural safety, increasing energy efficiency, and maintaining aesthetics [1]. There are a growing number of aging buildings worldwide, and with increasing social demands for building safety and stricter regulations, the need for reliable exterior inspections is becoming even more critical [2]. In addition, these diagnostic and related maintenance activities form a significant economic scale, with related costs showing a continuous upward trend [3]. These strong economic and social demands highlight the need for alternative methods that can overcome the potential risks and inefficiencies of traditional manual methods. As a result, the sustained growth of the building exterior wall inspection market is serving as a key driver for the adoption of new technologies in this field [4].
The introduction of automated technologies for efficiently and objectively collecting information on the condition of building exteriors is actively underway, with particular attention being paid to 3D data acquisition methods [5]. Traditional exterior wall inspection methods require workers to physically access the exterior walls, resulting in significant time and cost expenditures, as well as safety hazards during high-rise operations and subjective results [6]. As an alternative to address these issues, among various automated technologies being explored, those focusing on non-contact 3D data acquisition methods such as laser scanning and photogrammetry are emerging [7]. Notably, photogrammetry using unmanned aerial vehicles (UAVs) enables rapid imaging of large areas and facilitates data acquisition in hard-to-reach high-rise or complex structures, leading to a rapid increase in their application [8].
In this technological transition, the Structure from Motion (SfM) method provides the core component of the process for generating 3D models from UAV images [9]. This computer vision algorithm automatically extracts and matches feature points in image sequences captured from multiple angles to simultaneously estimate the camera’s extrinsic parameters (pose) and the target’s 3D structure [10]. Generally, the camera information and sparse point cloud estimated with high precision through SfM are used as input for subsequent Multi-View Stereo (MVS) algorithms, directly enabling more accurate and complete dense point cloud generation and thus contributing to the final generation of a dense 3D model [11]. Therefore, SfM can be defined as an essential methodological element responsible for initial data processing and geometric information recovery in UAV image-based 3D modeling pipelines.
However, there is currently a lack of quantified standards or specific guidelines for image capture procedures and related parameter settings for SfM-based 3D building modeling using UAVs. The absence of standardized procedures or quantitative guidelines can exacerbate various practical problems during UAV photogrammetry work on large-scale buildings or structures with complex shapes. First, it is difficult to select the most appropriate shooting strategy based on objective evidence, considering the project objectives and the characteristics of the target building, thereby hindering rational decision-making. Second, inconsistently collected data increase the likelihood of errors in subsequent processing and reduce the reliability of key quality indicators such as the accuracy and completeness of the final 3D model. Third, it impairs the reproducibility of the data collection and processing processes, which manifests as issues such as failed alignment between upper and lower viewpoints of high-rise buildings [7], the necessity of a control system when operating multiple drones [12], and difficulties in establishing flight paths suitable for complex shapes [13]. These practical challenges act as constraints in ensuring the practical applicability and scientific rigor of UAV photogrammetry. Therefore, the establishment of systematic and standardized data collection methodologies is urgently required to enhance the reliability and utility of UAV photogrammetry.
This study aims to address the lack of standardized guidelines in the process of collecting image data using UAVs and applying SfM for diagnosing building exteriors. It systematically experiments with and quantitatively verifies the effects of various UAV shooting patterns (Circular, Close-up, Aerial, etc.), key variables (Distance, Overlap Rate, Camera Angle, etc.), and their combinations on the quality and processing efficiency of SfM results. Specifically, image data of actual buildings were collected for various shooting scenarios according to the designed experimental framework, and all collected data were processed using a specified and consistent SfM pipeline (e.g., utilizing the SIFT algorithm). Quantitative indicators such as reprojection error, final point count, computation time, and reconstruction completeness were calculated from the processing results, and the obtained data were analyzed using multiple regression analysis, nonparametric statistical tests, variable importance analysis, dimension reduction, clustering, response surface analysis, multi-criteria decision-making, and Pareto optimization. While this study focuses on common facade types under typical daylight conditions using standard commercial UAVs, its findings require careful consideration for highly reflective surfaces or extreme weather. The optimal data acquisition strategy derived from this study will provide concrete and reliable quantitative evidence that supports rational decision-making by balancing the quality of results required in practice with practical constraints (time, budget, etc.), surpassing the limitations of existing empirical methods.

2. Literature Review

2.1. Acquisition of Building Shape Information

Building geometry information is used as core data throughout the entire building lifecycle management process, including structural stability checks, material deterioration assessments, identification of exterior damage and defects, and maintenance plan development [14]. In particular, damage frequently occurs due to long-term use, environmental changes, and material deterioration, and accurate shape information is essential for effective management [11]. Traditional surveying methods include high-altitude work, photography, and aerial photography. These methods have issues such as difficulty in accessing high-rise exterior walls, safety concerns in operating equipment, and reduced economic efficiency due to long working hours [15].
Various 3D surveying and restoration technologies can be applied to collect building shape information. These include 3D scanning, photogrammetry, and 360-degree camera photography, each with its own advantages and limitations. First, 3D scanning is a technology that uses sensors such as laser scanners (LiDAR) or structured light to acquire high-precision, high-density point cloud data [16]. It is widely used in cultural heritage restoration, industrial plant measurement, and precise architectural structure measurement [17]. This method ensures high precision but requires expensive equipment, installation, and operation personnel, and faces measurement constraints in accessing high-rise exteriors [18]. Photogrammetry is a technology that analyzes multiple overlapping photographs to restore 3D information [19]. It uses general cameras, resulting in low equipment costs and flexibility for various scales and shapes of objects. Photogrammetry has already been applied in pilot studies for small-scale buildings or civil engineering structures, and it is able to easily obtain images even in locations difficult to access using UAVs [20,21]. However, when applied to large-scale structures, issues such as securing shooting points and blind spots arise, and image quality management is crucial for obtaining high-precision and high-density point clouds. Additionally, a well-planned shooting path is necessary to cover complex shapes. In this context, 360-degree cameras can record the entire surrounding environment in a single shot, making them useful for virtual tours and real estate presentations [22]. However, due to resolution limitations, they often lack sufficient detail and struggle to reproduce high-precision 3D data, making them unsuitable for precise structural analysis or defect detection [23].
An approach that simultaneously achieves measurement accuracy, efficiency, and minimization of blind spots by comparing and integrating these individual methodologies is necessary for rapid and precise modeling of building exteriors. Considering the significant cost and access limitations of 3D scanning for frequent, large-scale assessments and the precision drawbacks of 360-degree cameras for detailed analysis, photogrammetry emerges as a promising candidate. Therefore, when photogrammetry, which offers cost-effectiveness and ease of use, is combined with UAV-based image acquisition, significantly improved accessibility and data collection capabilities compared to existing methods can be expected.

2.2. Structure from Motion (SfM)

Three-dimensional reconstruction is essential in a wide range of fields, including computer vision, robotics, and cultural heritage preservation [24]. Structure from Motion (SfM) and Multi-View Stereo (MVS) are core image-based approaches that complement each other. SfM estimates the camera pose and sparse 3D structure of a scene from multiple 2D images, based on the principles of motion parallax and projection geometry [25]. This process typically involves stable feature detection and matching (e.g., SIFT, ORB), camera relative/absolute pose estimation using epipolar geometry and PnP algorithms, 3D point location calculation via triangulation, and global optimization through bundle adjustment [26]. SfM ultimately generates a sparse point cloud but inherently suffers from scale ambiguity and assumes static scenes, relying on robust geometric verification procedures such as RANSAC for outlier removal [27]. As a result, SfM serves the role of providing camera parameters and an initial geometric skeleton essential for subsequent dense reconstruction stages [28]. The quality and completeness of the results produced in the SfM stage directly influence the performance of the MVS stage, which aims to achieve detailed surface reconstruction of the scene [11].
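The triangulation step in the pipeline described above can be illustrated with a minimal linear (direct linear transform, DLT) solver. The sketch below is a simplified, self-contained example with synthetic camera matrices and a synthetic 3D point; it is not the study’s processing code, and a production pipeline would add noise handling, RANSAC-based outlier rejection, and bundle adjustment.

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2: 3x4 camera projection matrices.
    x1, x2: 2D image observations of the same point in each view.
    Returns the 3D point in non-homogeneous coordinates.
    """
    # Each observation contributes two rows of the homogeneous system A X = 0.
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector for the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

def project(P, X):
    """Project a 3D point with camera matrix P into pixel coordinates."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Synthetic example: identity intrinsics, second camera translated along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])

x1, x2 = project(P1, X_true), project(P2, X_true)
X_est = triangulate_dlt(P1, P2, x1, x2)
print(np.allclose(X_est, X_true, atol=1e-6))  # True: exact recovery without noise
```

With noiseless observations the linear solution recovers the point exactly; with real image noise, the DLT result serves only as an initialization that bundle adjustment then refines.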
SfM implementation strategies are broadly categorized into incremental and global approaches, each with distinct advantages and disadvantages in terms of accuracy, robustness, and scalability; these trade-offs are summarized in Table 1. Incremental SfM starts with a small number of images, sequentially adds views, and pursues high accuracy through iterative local optimization (bundle adjustment), but it suffers from error drift and increased computational cost when processing large datasets [29]. Global SfM, on the other hand, estimates all camera poses simultaneously, offering superior computational efficiency and scalability, but it is sensitive to errors and outliers in the initial relative pose estimates, and ensuring stability in the translation averaging stage remains a technical challenge [30]. MVS then utilizes the precisely estimated camera parameters from SfM to recover dense geometric information by evaluating pixel similarity across multiple views under specific depth or surface hypotheses [31]. It estimates 3D coordinates primarily by searching for corresponding points based on photometric consistency between views, a principle that Stathopoulou and Remondino [32] emphasized as the foundation of MVS methodologies. MVS therefore extends the sparse, feature-based SfM result to detailed 3D surface reconstruction. In short, the choice of SfM strategy determines the quality of the input to MVS, and MVS itself fundamentally relies on the assumption that corresponding surface points appear similar across different views [33].
However, this core assumption of MVS is often violated in real-world environments, particularly with common building materials featuring reflective surfaces or large areas of uniform, textureless appearance, necessitating advanced algorithms to handle such complexities during the dense reconstruction process [34].
One of the important limitations inherent in many MVS approaches is the difficulty of accurately restoring so-called weakly supported surfaces, which lack texture or have high reflectance. These types of surfaces do not provide sufficient photogrammetric information for reliable correspondence matching at multiple points, causing difficulties in the restoration process. This issue significantly impacts the reconstruction of planar structures commonly found in artificial environments, potentially resulting in incomplete or inaccurate models. Yan et al. [35] proposed a method to improve the completeness of reconstruction in such cases by interpolating points within the detected 3D plane boundaries. However, this solution is limited to planar geometric structures and often requires manual intervention for object plane identification, thereby limiting its generalizability and automation potential. Therefore, developing methods to robustly and automatically restore various types of weakly supported surfaces remains an important research challenge.
Previous research on SfM-MVS methodology has driven significant algorithmic advancements aimed at improving the quality and robustness of 3D reconstruction. These advancements are evident across various domains, ranging from feature detection and matching strategies to dense reconstruction algorithms that incorporate both traditional and learning-based paradigms. Algorithm improvements and comparative evaluations have been continuously conducted to optimize the various stages of the reconstruction pipeline [32,36]. Nevertheless, handling surfaces with insufficient texture or high reflectivity, ensuring scalability, and preserving fine detail remain ongoing challenges [32,35]. While algorithmic advancements have mitigated some past limitations, consistently obtaining precise and complete 3D geometric information across diverse real-world environments remains difficult, highlighting the need for novel approaches, potentially integrating learning-based priors or advanced data fusion techniques, especially in the context of building modeling.

2.3. Previous Research on SfM-Based 3D Reconstruction

Previous studies have extensively demonstrated the usefulness of combining unmanned aerial vehicle (UAV) platforms and the SfM-MVS pipeline for 3D reconstruction in various fields, particularly in the areas of cultural heritage preservation and building modeling. These studies have explored various platform types, onboard sensors, and data processing workflows, recognizing that careful planning is crucial for successful reconstruction [37]. Additionally, efforts have been made to improve specific algorithm components, such as enhancing the robustness of the MVS stage through learning-based approaches or developing multiple drone systems to increase the efficiency of large-scale data acquisition [12,20]. However, many studies have primarily focused on the application of the technology itself or algorithm improvements after data acquisition and often use flight paths that are manually designed or limited to specific situations without undergoing rigorous optimization processes. As a result, there is a significant gap in systematic verification and optimization studies on the impact of data acquisition strategies, particularly the influence of various flight patterns and parameters on the reconstruction quality of complex structures. To address these issues, it is necessary to shift the focus of research toward quantitatively evaluating the impact of flight design on the entire SfM-MVS workflow.
The absence of a systematic flight planning and quantitative evaluation framework is hindering the establishment of reliable and optimized UAV data acquisition protocols for SfM-MVS-based 3D reconstruction. Some studies have attempted model-based path planning methods that consider surface coverage or “reconstructability” metrics, but these methods do not necessarily guarantee globally optimal solutions, and thorough quantitative evaluation of the resulting SfM-MVS outputs is often lacking [13]. Other studies point out issues arising from inappropriate viewpoint distributions, such as registration failures due to large viewpoint differences or performance degradation of new viewpoint image synthesis algorithms, indirectly emphasizing the need for well-designed data acquisition patterns [38]. Crucially, the existing literature generally lacks comprehensive and comparative analyses of quantitative metrics such as feature point density distribution, reprojection error statistics, model completeness, processing time, and memory usage across various flight scenarios (e.g., grid-based, spiral, oblique angle capture patterns, overlap ratio, and flight altitude changes). The absence of such a systematic and metric-driven validation process makes it challenging to generalize research results or establish standardized best practices for acquiring high-quality data tailored to the complexity and reconstruction objectives associated with specific objects.
Therefore, this study aims to establish standardized guidelines for image data collection using UAV and the application of SfM for building exterior diagnosis. To achieve this, we will systematically experiment and quantitatively verify the effects of various UAV shooting patterns, key variables (distance, overlap rate, camera angle, etc.), and their combinations on the quality and processing efficiency of SfM results.

3. Methodology

This section describes in detail the research methodology designed to address the lack of standardized guidelines for the collection of building exterior image data using UAVs and the application of SfM. To systematically conduct the research, an analysis framework was established, UAV shooting patterns and variables were defined based on this framework, and a specific implementation environment was set up to ensure consistent experimentation. This methodology covers the entire process from data collection to result analysis, with a focus on ensuring the reliability and reproducibility of the research results. This study was conducted based on a systematic analysis framework to quantitatively clarify the relationship between UAV shooting variables and SfM processing results.
This framework consists of sequential steps: data collection planning, actual data acquisition, SfM processing, and result indicator evaluation (Figure 1). At each stage, core variables were defined, a consistent data processing pipeline was applied, and objective performance evaluation metrics were selected to achieve the research objectives. For example, in the data collection stage, scenarios were designed by combining various shooting paths, separation distances, overlap rates, and camera tilts, and all collected images were processed using the same SfM algorithm (SIFT-based) and parameter settings. This framework provides a systematic basis for comparing and analyzing the impact of various shooting conditions on the accuracy, completeness, and efficiency of SfM results. In this context, ‘Reconstruction Completeness’ is quantitatively defined as the percentage of the facade’s surface area that is successfully represented in the final 3D point cloud, measured against a high-density ground truth model.
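The completeness definition above can be operationalized as coverage of a ground-truth point set. The sketch below is a minimal point-based proxy for the facade-area definition; the toy grid, the missing-corner example, and the 0.5 m threshold are illustrative assumptions, not values or code from this study.

```python
import numpy as np

def reconstruction_completeness(reconstructed, ground_truth, threshold):
    """Percentage of ground-truth points lying within `threshold` of at
    least one reconstructed point (a point-based coverage proxy for the
    surface-area definition of completeness)."""
    # Full pairwise distances; fine for small clouds, use a KD-tree at scale.
    d = np.linalg.norm(ground_truth[:, None, :] - reconstructed[None, :, :],
                       axis=2)
    covered = d.min(axis=1) <= threshold
    return covered.mean() * 100.0

# Toy example: a 10x10 facade grid whose reconstruction misses one corner.
gx, gy = np.meshgrid(np.arange(10), np.arange(10))
ground_truth = np.column_stack([gx.ravel(), gy.ravel(), np.zeros(100)])
mask = ~((gx.ravel() >= 5) & (gy.ravel() >= 5))   # drop a 5x5 corner block
reconstructed = ground_truth[mask]
print(reconstruction_completeness(reconstructed, ground_truth, 0.5))  # 75.0
```

In practice the ground truth would be a high-density reference model (e.g., from terrestrial laser scanning) and the threshold would be chosen relative to the required survey accuracy.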
Next, to effectively collect 3D shape information of building envelopes, a combination of various drone shooting patterns and key variables was systematically designed and tested. The main variables considered included shooting path type, distance from the target (A-offset, B-Radius), image overlap rate (forward and lateral overlap), and camera tilt angle. The imaging patterns applied in the experiments were defined into three main types: Circular, Surface, and Aerial (Figure 2, Table 2). Specifically, the circular shooting pattern was flown by adjusting the tilt angle toward the center of the building within a height range of 20 to 80 m above the building roof and a radius range of 10 to 50 m from the center point. The surface (frontal) pattern was flown with the distance to the target set to 10–30 m, forward and lateral overlap spacings of 1, 2, and 4 m, and tilt angles of 0°, 15°, 30°, and 45°; the aerial pattern was flown at altitudes of 20 and 40 m above the rooftop with an overlap rate of 80% and a tilt angle of 60–75°. Additionally, to ensure precise location information for each shooting point, GNSS-RTK was utilized to manage horizontal and vertical position errors within 2–3 cm. Prior to actual flight, simulations were conducted to verify the safety and appropriateness of the planned flight path. This comprehensively defined combination of capture patterns and variables enables data collection under various conditions, serving as a foundation for enhancing the reliability of subsequent analyses. In addition to analyzing these single patterns individually, this study also investigated composite shooting strategies, where data from two different patterns were combined. This was done to evaluate whether hybrid approaches could leverage the strengths of each pattern to produce a more comprehensive 3D model.
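Waypoint generation for a circular pattern of the kind described here can be sketched as follows. This is a simplified illustration, not the flight-planning software used in the study; the local-coordinate convention and the yaw-toward-center heading are assumptions for the example.

```python
import math

def circular_waypoints(center_xy, radius, altitude, n_points):
    """Evenly spaced waypoints on a circle of `radius` (m) around
    `center_xy` at a fixed `altitude` (m), each with a yaw heading
    (degrees) that faces the circle center."""
    waypoints = []
    for k in range(n_points):
        theta = 2.0 * math.pi * k / n_points
        x = center_xy[0] + radius * math.cos(theta)
        y = center_xy[1] + radius * math.sin(theta)
        # Heading pointing back at the center (mathematical convention).
        yaw = math.degrees(math.atan2(center_xy[1] - y, center_xy[0] - x))
        waypoints.append((x, y, altitude, yaw))
    return waypoints

# Example: 30 m radius, 50 m altitude, one waypoint every 10 degrees.
wps = circular_waypoints((0.0, 0.0), 30.0, 50.0, 36)
print(len(wps))                                     # 36
print(round(math.hypot(wps[0][0], wps[0][1]), 6))   # 30.0 (on the circle)
```

The camera tilt angle would be set separately per waypoint so that the optical axis intersects the facade at the desired height, which is how the radius–tilt interaction discussed later arises.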
To ensure consistency and data quality in collecting building envelopes using UAVs, the experimental implementation environment was specifically defined and controlled (Table 3). In this study, a commercial UAV model, DJI Mavic 3 Enterprise (DJI, Shenzhen, China), was used, and the camera mounted on it had the following specifications: effective pixel count of 5280 × 3956 (~20.9 MP), 35 mm equivalent focal length of 24 mm (original focal length of 12.3 mm), and a fixed aperture value of F/2.8. To minimize image blur during shooting, the shutter speed was set to 1/2000 s, the ISO sensitivity was fixed at 400 to manage noise levels, and the color space was set to sRGB to ensure consistent color reproduction. White balance was set to manual, exposure mode was set to manual, and exposure compensation was maintained at 0. In addition, to minimize data bias caused by changes in light conditions, all shots were taken on a cloudy day with no direct sunlight (measured illuminance of 20,000 lux or less). Buildings with architectural features similar to those of apartment complexes were selected as the experimental subjects, and a total of 13,416 images were collected through 126 different flight path designs with an average flight time of 10 min (number of images per path: circular 30–41, front 18–411, aerial 40–125) (Figure 3). All collected images were saved in JPEG format with an average size of 7.8 MB, and each image was accompanied by detailed metadata, including the shooting time, GPS coordinates, and camera and lens information.

4. Validation

Following the research flowchart (Figure 1), multifaceted analysis of single and multiple UAV shooting pattern data was conducted to quantitatively verify the influence of shooting variables on Structure from Motion (SfM) result indicators (Table A1). Initially, the Kruskal–Wallis test was applied to establish the overall statistical significance of the basic shooting pattern ‘type’. Subsequently, the influence of individual parameters within each pattern was quantified; Multiple Regression was used to model linear trends, while Random Forest Feature Importance assessed non-linear contributions. The interactive effects between key variable pairs were then visualized using Response Surface Methodology (RSM). To further characterize the performance space, Principal Component Analysis (PCA) and K-Means Clustering were employed to identify distinct outcome profiles. Finally, to identify the most efficient shooting strategies amidst conflicting objectives, TOPSIS was used to rank composite patterns, and Pareto Optimization was applied to delineate the optimal trade-off frontiers. The key findings and implications derived from the analysis are presented for each analytical methodology, and the study concluded by synthesizing these findings to discuss the contributions and limitations of the research.
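Among the methods listed above, TOPSIS is the least standardized in implementation, so a minimal sketch is given below. The decision matrix, criteria, and weights are illustrative placeholders, not the study’s actual data or weighting scheme.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives by relative closeness to the ideal solution.

    matrix : (alternatives x criteria) raw scores.
    weights: criterion weights summing to 1.
    benefit: True for criteria to maximize, False for those to minimize.
    Returns closeness scores in [0, 1]; higher is better.
    """
    # Vector-normalize each criterion column, then apply weights.
    v = matrix / np.linalg.norm(matrix, axis=0) * weights
    # Ideal and anti-ideal points depend on each criterion's direction.
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    return d_neg / (d_pos + d_neg)

# Toy decision matrix: rows = candidate shooting strategies, columns =
# [final points (max), completeness % (max), processing time min (min)].
m = np.array([[120000, 92.0, 45.0],
              [ 80000, 85.0, 20.0],
              [150000, 95.0, 90.0]], dtype=float)
scores = topsis(m, np.array([0.4, 0.4, 0.2]), np.array([True, True, False]))
print(scores.argmax())  # 0: the balanced strategy ranks best here
```

The same closeness scores can then feed a Pareto filter: strategies not dominated on any criterion form the trade-off frontier discussed later.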

4.1. Single Shooting Pattern

4.1.1. Multiple Regression

Multiple Regression analysis assumes a linear relationship between each shooting variable and SfM result indicators (final number of points, reconstruction completeness, reprojection error, calculation time, etc.), and aims to determine the magnitude of the influence of individual variables on changes in result indicators (regression coefficients), statistical significance (p-value), and the explanatory power of the model as a whole (adjusted R-squared).
The results of the surface capture pattern analysis showed that the variables governing overlap between images, the forward overlap and side overlap spacings, exhibited a close linear relationship with the quality indicators of the SfM results (Table 4). Specifically, these two variables showed statistically highly significant (p < 0.001) negative regression coefficients for both the number of points generated and the model reconstruction completeness. This quantitatively demonstrates that as these spacing values decrease, i.e., as the physical overlap between images increases, the number of points and the reconstruction completeness increase linearly. These results align precisely with the fundamental principle of SfM that sufficient feature matching between multi-view images is essential for successful 3D reconstruction, and they statistically reaffirm that ensuring overlap is a critical factor for quality improvement. The forward and side overlap variables accounted for approximately 70% (adjusted R2 ≈ 0.70) of the variability in the final point count and approximately 68% (adjusted R2 ≈ 0.68) of the variability in reconstruction completeness. These variables also had significant negative coefficients for reprojection error, i.e., greater overlap reduced the error, although the model’s explanatory power here was relatively low, at approximately 21% (adjusted R2 ≈ 0.21). In contrast, the camera tilt angle variable did not show statistically significant linear effects on most of the result indicators in this analysis model.
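The adjusted R² values quoted above come from ordinary least squares fits. A minimal sketch of the computation follows; the synthetic data (point count decreasing with overlap spacing) only mimic the qualitative trend reported here and are not the study’s measurements.

```python
import numpy as np

def ols_adjusted_r2(X, y):
    """Fit y = b0 + X b by least squares and return adjusted R^2,
    which penalizes R^2 for the number of predictors p."""
    n, p = X.shape
    Xd = np.column_stack([np.ones(n), X])          # add intercept column
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    resid = y - Xd @ beta
    ss_res = resid @ resid
    ss_tot = ((y - y.mean()) ** 2).sum()
    r2 = 1.0 - ss_res / ss_tot
    return 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)

# Synthetic analogue: final point count falls as overlap spacing grows.
rng = np.random.default_rng(0)
spacing = rng.uniform(1.0, 4.0, size=(60, 2))      # forward/side spacing (m)
points = (2e5 - 3e4 * spacing[:, 0] - 2e4 * spacing[:, 1]
          + rng.normal(0, 8e3, 60))                # noisy linear response
adj_r2 = ols_adjusted_r2(spacing, points)
print(0.5 < adj_r2 < 1.0)  # True: a strong but imperfect linear fit
```

Negative fitted coefficients on the spacing predictors correspond to the interpretation in the text: smaller spacing (more overlap) yields more points.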
In the case of the Circular shooting pattern, variables different from those in the Surface pattern were influential. An increase in the radius of the circular path around the subject showed a marginally significant (p ≈ 0.052) positive relationship with the reprojection error. This implies that as the shooting distance increases, the geometric accuracy of the final model may decrease due to changes in image scale and an increased likelihood of feature-matching errors. The explanatory power of this model was high, at approximately 80% (adjusted R2 ≈ 0.80). In contrast, an increase in the camera tilt angle showed a statistically significant (p ≈ 0.003) positive relationship with the final number of points (adjusted R2 ≈ 0.69). This suggests that capturing objects from more diverse angles can reduce occluded areas and enhance the model’s detail. However, high condition numbers were observed in some regression models used in the Circular pattern analysis, raising the possibility of multicollinearity. Multicollinearity implies strong correlations among the independent variables, meaning that changes in one variable (e.g., radius) may be associated with changes in another (e.g., tilt angle). A cautious approach is therefore required when interpreting the independent effect of each variable on an outcome metric. Because specific tilt angle settings may impose constraints on the minimum shooting radius in real-world scenarios, these variables can be difficult to control independently.
In conclusion, the multiple regression analysis clearly demonstrated that the key variables influencing SfM results vary with the shooting pattern. In the Surface pattern, ensuring image overlap plays a decisive role in improving result quality, whereas in the Circular pattern, the shooting radius and camera tilt angle chiefly govern the reprojection error and the point count, respectively, and the potential interaction between these variables must be considered. Furthermore, the relatively low adjusted R2 values observed for some result metrics (e.g., Reprojection Error in the Surface pattern model) indicate that these metrics cannot be fully explained by simple linear relationships with the shooting variables considered in this study.

4.1.2. Kruskal–Wallis Test

Next, considering the possibility that the distribution characteristics of SfM result data generated under various shooting conditions, particularly the assumptions of normality and homoscedasticity, may not be satisfied, we applied the Kruskal–Wallis test, a nonparametric statistical test. The purpose of this analysis was to rigorously verify whether there were statistically significant differences in the median distributions of the main SfM result indicators according to the three main shooting pattern types (‘Circular’, ‘Surface’, ‘Aerial’) as categorical variables.
The Kruskal–Wallis test confirmed that statistically significant differences existed in the median distributions of all major SfM result metrics investigated in this study (reprojection error, final number of generated points, total computation time, and reconstruction completeness) across shooting pattern types. Because the Kruskal–Wallis test makes less stringent assumptions about the data distribution, it can be applied flexibly in real-world data analysis; here it clearly shows that the 'type' of shooting pattern itself is an important factor explaining the variability in the result metrics of this dataset. The specific Kruskal–Wallis test results for each performance metric are summarized in Table 5. For the reprojection error, the H statistic was 20.23 (p < 0.001); for the final number of points, H = 7.50 (p = 0.024); for computation time, H = 25.84 (p < 0.001); and for reconstruction completeness, H = 15.71 (p < 0.001). All p-values were well below the conventional significance level of 0.05, strongly supporting that the observed differences in the distributions of outcome metrics between shooting pattern types were not due to chance.
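A minimal sketch of this nonparametric comparison using scipy.stats.kruskal is shown below; the per-pattern reprojection-error samples are synthetic placeholders, not the study's measurements.

```python
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(1)
# Hypothetical reprojection-error samples (px) for the three pattern types.
circular = rng.normal(1.10, 0.05, 12)
surface = rng.normal(0.95, 0.05, 12)
aerial = rng.normal(1.25, 0.05, 12)

# H statistic and p-value for the null hypothesis of equal distributions.
h_stat, p_value = kruskal(circular, surface, aerial)
print(f"H = {h_stat:.2f}, p = {p_value:.4g}")
if p_value < 0.05:
    print("Shooting pattern type significantly affects reprojection error.")
```

The same call, repeated for each of the four result metrics, produces the H statistics and p-values of the kind summarized in Table 5.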
These results provide important insights beyond the parameter influences identified in the previous multiple regression analysis. Specifically, they provide strong experimental evidence that the choice of shooting ‘type’ itself has a decisive influence on the overall result quality (model accuracy and completeness) and processing efficiency (computation time) of SfM-based 3D modeling projects.
Therefore, when conducting actual 3D modeling projects, it is essential to comprehensively consider various factors such as the geometric shape of the target building or structure (e.g., vertically tall buildings, horizontally wide structures, presence of complex curved surfaces or corners, etc.), accessibility constraints during on-site photography, and the intended use of the final modeling results (e.g., focus on overall shape restoration or detailed diagnosis and precise analysis of specific parts) to select the most appropriate basic shooting pattern type (e.g., Surface pattern for wide area coverage, Circular pattern for specific objects or complex shapes) as a priority.

4.1.3. Random Forest Feature Importance

To address the inherent limitations of multiple regression analysis, which assumes linear relationships, and to more comprehensively understand the complex interactions and nonlinear relationships between shooting variables and their impact on SfM result metrics, we performed feature importance analysis using Random Forest, a machine learning technique. This analysis aims to evaluate the relative importance of each shooting variable in predicting specific SfM result metric values by quantifying their contribution as importance values (Table 6).
The Random Forest analysis for the surface capture pattern showed that the two overlap-related variables, Forward overlap and Side overlap, had a combined importance of 0.9 or higher in predicting the Final Points and Reconstruction Completeness metrics, indicating a dominant influence. This trend aligns with the regression results, reaffirming that ensuring overlap is the most critical factor for the point count and completeness of the model. In the Computation Time prediction model, on the other hand, the three variables Forward overlap, Side overlap, and Tilt angle all showed relatively similar and high importance levels between 0.3 and 0.4, suggesting that computation time is influenced by a combination of multiple variables. Interestingly, in the Reprojection Error prediction, Tilt angle was found to be substantially more important than Forward overlap and Side overlap.
In the case of the Circular shooting pattern, variable importance differed distinctly depending on the target result indicator. Camera tilt angle was the most important variable, with an importance of approximately 0.71 for predicting the final number of points and approximately 0.59 for predicting computation time. Shooting radius, on the other hand, showed the highest importance, approximately 0.82 for reprojection error prediction and approximately 0.60 for reconstruction completeness prediction. The importance of the camera height offset was relatively low to moderate, ranging from 0.06 to 0.30, for most of the analyzed result indicators. It is particularly noteworthy that the Tilt angle variable of the Surface pattern, which showed no statistically significant linear relationship with Reprojection Error and Computation Time in the multiple regression analysis, or only a small influence there, was evaluated as an important variable in the random forest analysis. This strongly suggests that Tilt angle may influence the result indicators nonlinearly rather than through a simple linear relationship, or that it may contribute to prediction performance through interactions with other variables (e.g., overlap). This emphasizes that complex relationships between variables may exist that are difficult to capture with linear models alone.
In conclusion, Random Forest variable importance analysis partially confirms the results of multiple regression analysis (e.g., the importance of the overlap variable in the Surface pattern) while providing a clearer understanding of the relative importance of variables from a nonlinear model perspective. Additionally, it offers specific insights into how the influence of each shooting variable varies depending on the shooting pattern and the target outcome metric. For example, when applying the Circular pattern, if improving geometric accuracy or completeness of reconstruction is the top priority, controlling the shooting radius is essential, but if maximizing the model’s detail (number of points) or processing efficiency management is important, it is necessary to focus more on optimizing the camera tilt angle.
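The importance evaluation above can be illustrated with scikit-learn's RandomForestRegressor on synthetic data; the predictor names and the response function are hypothetical stand-ins in which the two overlap variables dominate, mirroring the reported Surface-pattern result rather than reproducing the study's numbers.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)
n = 200
forward = rng.uniform(1, 5, n)   # hypothetical overlap spacings
side = rng.uniform(1, 5, n)
tilt = rng.uniform(30, 80, n)    # hypothetical tilt angle (deg)

# Synthetic Final-Points response dominated by the two overlap variables.
y = -4.0 * forward - 3.5 * side + 0.1 * tilt + rng.normal(0, 1.0, n)

X = np.column_stack([forward, side, tilt])
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)

# Impurity-based importances sum to 1 across features.
for name, imp in zip(["forward", "side", "tilt"], rf.feature_importances_):
    print(f"{name}: {imp:.3f}")
```

Because the forest captures nonlinearities and interactions, a variable that looks unimportant in a linear model (such as Tilt angle for some metrics) can still receive a high importance score here.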

4.1.4. Principal Component Analysis and K-Means Clustering

To mitigate the complexity of the multidimensional performance space composed of multiple disparate SfM result metrics and effectively identify the intrinsic structure of the data, we sequentially applied principal component analysis (PCA) and k-means clustering techniques. First, PCA was used to reduce the multivariate performance data into a smaller set of core principal components (PCs), making visualization and interpretation easier. Subsequently, the k-means clustering algorithm was applied to group shooting settings (data points) with similar performance characteristics, aiming to classify overall performance patterns and identify the unique characteristics of each cluster. To ensure the robustness of the analysis, rows containing outliers defined based on missing values and the interquartile range (IQR) were preemptively removed from the original data before performing PCA and clustering (Table 7).
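The preprocessing step mentioned above (removing rows with missing values or IQR-defined outliers before PCA and clustering) can be sketched as follows; the column name and sample values are illustrative only.

```python
import pandas as pd

def drop_iqr_outliers(df: pd.DataFrame, cols, k: float = 1.5) -> pd.DataFrame:
    """Drop rows with NaNs in cols or values outside [Q1 - k*IQR, Q3 + k*IQR]."""
    clean = df.dropna(subset=cols)
    for c in cols:
        q1, q3 = clean[c].quantile([0.25, 0.75])
        iqr = q3 - q1
        clean = clean[clean[c].between(q1 - k * iqr, q3 + k * iqr)]
    return clean

# Illustrative column: one extreme value and one missing value get removed.
df = pd.DataFrame({"reproj_error": [1.0, 1.1, 0.9, 1.05, 9.0, None]})
print(drop_iqr_outliers(df, ["reproj_error"]))
```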
For the analysis, we standardized six key performance metrics (Reprojection Error, Final Points, Mean Track Length, Point Density, Computation Time, Reconstruction Completeness) and then performed PCA. The results showed that the top three principal components (PC1, PC2, PC3) explained approximately 82.8% of the total data variance (individual explanatory power: PC1 43.5%, PC2 22.6%, PC3 16.8%). This indicates that a relatively small number of principal components effectively summarize a significant portion of the information in the original data without loss, successfully achieving dimension reduction.
PCA is useful for interpreting complex multivariate data: by analyzing the correlations (loadings) between each principal component axis and the original variables, one can understand the meaning of each axis. For example, if the loading analysis shows that PC1 correlates highly and positively with indicators of 'overall reconstruction quality and detail' and negatively with Reprojection Error, then the PC1 axis can be interpreted as a comprehensive indicator of model quality. Similarly, PC2 may correlate highly with variables related to 'computational efficiency'. Next, we applied the Elbow method to determine the optimal number of clusters (k) for the cluster analysis. Observing the within-cluster sum of squares, the 'elbow' point, where its rate of decrease slows most sharply, was clearly identified at two clusters, so the optimal number of clusters was determined to be k = 2.
Using the determined optimal number of clusters (k = 2), k-means clustering was performed, and the results were visualized in a two-dimensional space with PC1 and PC2 as axes (Figure 4). The visualization results showed that the two clusters (Cluster 0 and Cluster 1) tended to be relatively clearly distinguished based on the PC1 axis. Specifically, Cluster 0 was mainly distributed in the negative region of PC1, while Cluster 1 was primarily located in the positive region of PC1. However, some data points were observed to be mixed in the boundary region between the two clusters.
In conclusion, the unsupervised learning analysis combining PCA and k-means clustering strongly suggests that the extensive UAV-SfM shooting settings tested in this study do not simply form a continuous performance spectrum but naturally divide into two distinct performance profile groups when considering multiple performance metrics comprehensively.
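The PCA-plus-k-means pipeline above can be sketched with scikit-learn; the six-metric performance table below is synthetic, generated as two well-separated hypothetical quality groups so that a two-cluster structure of the kind reported in the study emerges.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
# Six hypothetical metrics per row: error, points, track length,
# density, time, completeness; two well-separated quality groups.
good = rng.normal([0.9, 50000, 4.0, 120, 20, 0.99],
                  [0.05, 3000, 0.3, 10, 3, 0.004], (30, 6))
poor = rng.normal([1.3, 20000, 2.5, 60, 8, 0.93],
                  [0.05, 3000, 0.3, 10, 3, 0.004], (30, 6))
X = StandardScaler().fit_transform(np.vstack([good, poor]))

# Reduce to three principal components, as in the study.
pca = PCA(n_components=3).fit(X)
scores = pca.transform(X)
print("explained variance:", pca.explained_variance_ratio_.sum())

# Cluster in the PC1-PC2 plane with k = 2 (Elbow-method choice).
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scores[:, :2])
print("cluster sizes:", np.bincount(labels))
```

Standardizing before PCA matters here because the six metrics have very different scales (points in the tens of thousands versus completeness near 1.0).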

4.1.5. Response Surface Methodology (RSM)

RSM approximates the relationship between selected independent variable pairs and the dependent variable using a quadratic polynomial regression model and presents the results as a visual response surface or contour plot. This technique is useful for intuitively identifying patterns of relationships between variables and local regions of optimal variable combinations that optimize specific outcome metrics. Therefore, to explore the combined effects of two shooting variables changing simultaneously on specific SfM outcome metrics and potential interactions beyond the influence of individual variables, this study employed RSM (Table 8).
For the analysis, a quadratic polynomial regression model was constructed for five predefined combinations of (independent variable pair × dependent variable), and the explanatory power (adjusted R2) of each model varied from approximately 0.42 to 0.80 depending on the combination, confirming that model suitability varies by case. For example, in the Circular pattern, the model predicting Reprojection Error from Radius and Tilt angle showed an adjusted R2 of approximately 0.80, indicating relatively high explanatory power and suggesting that the quadratic relationship between this variable combination and the result indicator is well captured.
The predictions of each RSM model were visualized as contour plots showing how the predicted value of the dependent variable changes with the two independent variables. The analysis of the Circular pattern showed that Final Points increased as both offset and Tilt angle increased, with the contour lines forming a gentle ridge in which the influence of Tilt angle appeared greater (Figure 5). The optimal region maximizing the number of points in this model lies in the upper right of the plot, i.e., where both variables are high. Reprojection Error, on the other hand, exhibited a distinct elliptical, valley-shaped response surface, reaching a minimum when Radius and Tilt angle were both within specific ranges (Radius ≈ 10–15 m, Tilt ≈ 55–65°). Outside this optimal region the reprojection error increases, with the error rising more prominently as the shooting radius grows.
The results of the surface pattern analysis showed a different pattern (Figure 6). Computation Time increased linearly as Forward overlap and Side overlap decreased (i.e., as the overlapping area between images increased), and the contour lines appeared as straight lines that were almost parallel. Final Points depended mainly on Forward overlap, increasing sharply as this value decreased (i.e., as the overlap increased), while the influence of Tilt angle was negligible, with contour lines appearing almost vertical. Similarly, Reconstruction Completeness also primarily depends on Side overlap, showing an increasing trend as the overlap increases, while the influence of Tilt angle is negligible, resulting in contour lines that are nearly vertical.
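The RSM fitting procedure can be approximated by a full quadratic polynomial regression; the sketch below uses synthetic data with a valley placed near Radius ≈ 12 m and Tilt ≈ 60° (an assumption echoing the reported optimum ranges), then locates the fitted minimum on a grid, which corresponds to reading the valley off a contour plot.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)
n = 80
radius = rng.uniform(5, 30, n)   # hypothetical shooting radius (m)
tilt = rng.uniform(30, 80, n)    # hypothetical camera tilt (deg)
# Synthetic reprojection error with a valley near radius 12 m, tilt 60 deg.
error = (0.9 + 0.002 * (radius - 12) ** 2
         + 0.0004 * (tilt - 60) ** 2 + rng.normal(0, 0.02, n))

# Full quadratic model: x1, x2, x1^2, x1*x2, x2^2 (plus intercept).
poly = PolynomialFeatures(degree=2, include_bias=False)
X = poly.fit_transform(np.column_stack([radius, tilt]))
rsm = LinearRegression().fit(X, error)
print("R^2:", rsm.score(X, error))

# Locate the fitted minimum on a grid (the valley of the response surface).
rg, tg = np.meshgrid(np.linspace(5, 30, 60), np.linspace(30, 80, 60))
grid = poly.transform(np.column_stack([rg.ravel(), tg.ravel()]))
pred = rsm.predict(grid)
i = pred.argmin()
print("fitted optimum:", rg.ravel()[i], tg.ravel()[i])
```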

4.1.6. Pareto Optimization of Single Shooting Patterns

Based on previous analyses, it is common for various performance goals pursued in UAV-SfM-based 3D modeling tasks, such as reducing computation time, minimizing reprojection error, maximizing the number of points, and maximizing reconstruction completeness, to be in conflict with each other (trade-offs). In this section, a Pareto optimization analysis was performed by pairing the main performance objectives (e.g., computational time–reprojection error, computational time–number of points, reprojection error–number of points, computational time–reconstruction completeness). Pareto optimization is a multi-objective optimization methodology that aims to identify a set of “non-dominated” solutions, where improving one specific objective necessarily results in a loss in one or more other objectives, thereby forming the Pareto Frontier. Among the entire set of shooting configurations (data points) analyzed for each goal combination, different configurations were identified as Pareto optimal solutions, and Table 9 summarizes the analysis results for each goal combination.
Visualizing the Pareto frontier for each goal combination clearly revealed the trade-off characteristics between goals (Figure 7). The frontier for the 'Error_vs._Points' combination exhibited a concave shape toward the upper left, indicating low error and high point count simultaneously. This suggests that there is an efficient range where the number of points can be significantly increased by slightly allowing (increasing) the error, while there is a range where the number of points must be significantly sacrificed to reduce the error below a certain level. On the other hand, the frontier for the 'Time_vs._Completeness' combination is in the upper left corner, indicating short computation time and high reconstruction completeness, and exhibits a nearly horizontal shape. This indicates a 'diminishing returns' phenomenon: even with significantly more time invested beyond a certain point, the improvement in completeness is minimal, while a fairly high level of reconstruction completeness (0.985 or higher) can be achieved within a very short computation time (e.g., 3–8 min). The frontier of the 'Time_vs._Error' combination is located in the lower left (short time, low error) and exhibits a slightly convex shape, indicating that investing a little more time initially (e.g., 3–6 min) can significantly reduce errors, but the error reduction effect gradually diminishes with further time investment. Finally, the frontier of the 'Time_vs._Points' combination shows a convex upward trend from the lower left to the upper right, indicating that initial time investment (e.g., 3–14 min) is highly effective in increasing the number of points, but additional time investment beyond that point contributes significantly less to further increases in the number of points.
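The extraction of non-dominated configurations described above can be sketched with a simple pairwise dominance check (both objectives treated as minimized); the time-error pairs below are hypothetical examples, not measured values.

```python
import numpy as np

def pareto_front(points: np.ndarray) -> np.ndarray:
    """Indices of non-dominated rows, assuming every column is minimized."""
    front = []
    for i, p in enumerate(points):
        # p is dominated if some row is <= p everywhere and < p somewhere.
        dominated = np.any(np.all(points <= p, axis=1) &
                           np.any(points < p, axis=1))
        if not dominated:
            front.append(i)
    return np.array(front)

# Hypothetical (computation time [min], reprojection error [px]) pairs.
configs = np.array([[3.0, 1.30], [6.0, 1.12], [14.0, 1.05],
                    [30.0, 1.04], [8.0, 1.25], [20.0, 1.20]])
front = pareto_front(configs)
print(configs[front])   # the efficient time-error trade-off curve
```

An objective to be maximized, such as the point count or completeness, can be handled by negating that column before the dominance check.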
Considering specific requirements or constraints unique to each project (e.g., limited total working time, required minimum accuracy level, required model detail), this study provides quantitative grounds for reasonably selecting the most efficient shooting strategy (parameter combination) from among multiple alternatives on the Pareto frontier that best fits the given conditions. For example, in a project with a very tight deadline, the solution located farthest to the left on the time-related axis among the analyzed frontiers (the fastest time) can be prioritized. Conversely, if the final model’s detail is the top priority, the optimal balance point can be sought around the solutions with the highest number of points on the ‘Time_vs._Points’ or ‘Error_vs._Points’ frontier.

4.2. Analysis of Multiple Shooting Patterns

The effects of individual shooting variables on SfM result metrics were analyzed from various angles within different single shooting patterns (Surface, Circular, Aerial), and the importance of multifaceted performance evaluation beyond single target optimization was confirmed. However, single shooting patterns alone may not be sufficient for complete 3D modeling of complex structures, and a composite shooting strategy combining shooting patterns with different characteristics may be required. Therefore, in this section, we comprehensively evaluate the performance of such composite shooting patterns and analyze their characteristics.

4.2.1. Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS)

Composite shooting pattern strategies offer the potential to leverage the advantages of various individual patterns, but at the same time, the performance criteria that need to be evaluated become more complex. In this study, we applied the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS), one of the multi-criteria decision-making (MCDM) techniques, to comprehensively consider multiple performance criteria and evaluate the overall effectiveness or preference of various composite shooting pattern combinations, thereby determining their relative rankings. The analysis considered a total of seven performance criteria, including input-related indicators such as computation time, number of images used, and reprojection error, as well as output-related indicators such as final point count, reconstruction completeness, point density, and average track length.
In this analysis, equal weights were assigned to all performance criteria to calculate the comprehensive performance score for each composite shooting pattern combination (Table 10). The naming convention for each analysis is “[Shooting Type]_O[Distance Value (m)]” (e.g., C_O20: Circular pattern offset distance 20 m). According to the TOPSIS analysis results, the composite shooting pattern named C_O40_S_O30 (Circular pattern offset distance 40 m and Surface pattern offset distance 30 m) achieved the highest overall efficiency score (TOPSIS Score: 0.691) and was selected as the top-ranked combination with the best performance under the evaluated criteria.
Following closely behind were C_O20_S_O30 (Score: 0.687), S_O30_A_O40 (Score: 0.686), and C_O40_A_O20 (Score: 0.683), which formed the top ranks with minimal score differences, demonstrating high overall performance. On the other hand, the S_O10_S_O20 combination recorded the lowest score (Score: 0.330) and was evaluated as the least efficient composite shooting pattern under the equal-weighting conditions applied to the criteria in this analysis. In summary, this result indicates that a composite flight plan that captures architectural features from complementary viewpoints is more beneficial for 3D reconstruction than a strategy of collecting as many detailed images as possible from close-range shooting points.
In conclusion, by applying the TOPSIS method, we were able to integrate multiple conflicting performance criteria into a single quantitative score, enabling an objective comparison of the ‘overall performance’ of various composite shooting patterns and assigning relative priorities. The composite shooting pattern combinations ranked at the top in this analysis (C_O40_S_O30, C_O20_S_O30, etc.) can be considered as promising candidate strategies for achieving overall effective 3D modeling results under resource-constrained environments (e.g., time, cost).
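The TOPSIS calculation described above (normalization, ideal and anti-ideal solutions, closeness scores) can be sketched as follows; the alternatives, their metric values, and the four-criterion subset are hypothetical, and equal weights are used as in the study.

```python
import numpy as np

def topsis(matrix: np.ndarray, weights: np.ndarray, benefit: np.ndarray) -> np.ndarray:
    """Closeness-to-ideal scores in [0, 1]; higher is better."""
    v = matrix / np.linalg.norm(matrix, axis=0) * weights   # weighted, normalized
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)   # distance to ideal solution
    d_neg = np.linalg.norm(v - anti, axis=1)    # distance to anti-ideal solution
    return d_neg / (d_pos + d_neg)

# Hypothetical alternatives; columns: computation time (cost),
# reprojection error (cost), final points (benefit), completeness (benefit).
alts = np.array([[35.0, 1.09, 52000, 0.990],
                 [60.0, 1.15, 58000, 0.992],
                 [20.0, 1.30, 30000, 0.940]])
weights = np.full(4, 0.25)                  # equal weights, as in the study
benefit = np.array([False, False, True, True])
scores = topsis(alts, weights, benefit)
print(scores, "best:", scores.argmax())
```

Project-specific priorities can be expressed by replacing the equal weights with a weight vector that emphasizes, for instance, computation time or completeness.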

4.2.2. Pareto Optimization of Multiple Shooting Patterns

The TOPSIS analysis comprehensively considered multiple performance criteria with equal weights to rank the overall efficiency of composite shooting patterns. This confirmed that combinations such as C_O40_S_O30 are excellent strategies that satisfy multiple criteria in a balanced manner. However, certain objectives (e.g., fast processing time or highest accuracy) may be more important than others. Therefore, in addition to the overall ranking from TOPSIS, it is important to identify which composite shooting strategies are the most efficient alternatives for performance goal pairs with significant trade-offs (e.g., time vs. error). To this end, we performed a Pareto optimization analysis on the composite shooting pattern data, similarly to the single pattern analysis, to identify the Pareto frontier.
The analysis was conducted using composite pattern data targeting the same key goal combinations considered in the single pattern analysis, and the sets of Pareto optimal solutions and frontier forms for each combination are shown below (Figure 8). The analysis results show that different composite shooting pattern combinations form a Pareto optimal frontier for each goal combination, indicating an efficient trade-off between the goals. As a specific example, only three composite pattern combinations were identified as Pareto optimal solutions for the ‘Time_vs._Error’ objective combination. These combinations are located in the efficient region in the lower left corner of the visualization results, achieving relatively short computation times (estimated to be in the range of 0–35 min) and low reprojection errors (approximately 1.08–1.09) simultaneously. Additionally, for the ‘Time_vs._Points’ combination, a total of 13 composite pattern combinations formed the Pareto frontier. This frontier exhibited a convex upward trend from the lower left (short time, few points) to the upper right (long time, many points), indicating that the initial time investment (estimated to be in the range of 0–70 min) is highly effective for increasing the number of points, but the efficiency of additional time investment decreases sharply thereafter, demonstrating the ‘diminishing returns’ trend in composite pattern analysis. Similar Pareto optimal sets were identified for other key goal combinations such as ‘Error_vs._Points’ and ‘Time_vs._Completeness’, with each frontier presenting the most efficient trade-offs between the respective goals.
In conclusion, even when using composite shooting patterns, non-dominated strategies exist that demonstrate superior or at least equivalent efficiency compared to other combination strategies under specific performance goal combinations. Therefore, when selecting the final composite shooting strategy, it is reasonable to prioritize the comprehensive performance rankings derived from the TOPSIS analysis (Table 10) while selecting the solution that provides the most appropriate balance on the Pareto frontier (Figure 8) for the relevant goal combinations based on the specific requirements of the project (e.g., time constraints, quality goals).

4.3. Discussion

The individual variable effects on analyzed single and composite shooting patterns, performance comparisons between pattern types, and multi-criteria and multi-objective optimization evaluations clearly demonstrated that the UAV-based SfM 3D reconstruction process is a multifaceted optimization problem that goes beyond a simple input–output relationship. From the fundamental selection of capture patterns to the fine-tuning of individual parameters and the inevitable trade-offs between different performance metrics (e.g., accuracy, completeness, efficiency), the quantitative results emphasize the necessity of a systematic and data-driven strategic approach to maximize the efficiency and quality of SfM-based 3D modeling.
Most importantly, the fundamental and significant influence of the basic shooting pattern “type” (e.g., Circular, Surface, Aerial) on the quality and efficiency of subsequent results was statistically clarified through Kruskal–Wallis tests. The results of this non-parametric test proved that the median distributions of key SfM result metrics, such as reprojection error, final point count, computation time, and reconstruction completeness, differ statistically significantly (p < 0.05) depending on the shooting pattern type (Table 5). This suggests that determining the most appropriate macro-level shooting approach that aligns with the project’s objectives (e.g., geometric characteristics of the subject, required model precision, available resources) is a prerequisite before optimizing individual variables. In other words, the pattern type itself acts as a critical design factor that primarily constrains or defines the result space, serving as the foundation for determining the direction and effectiveness of subsequent parameter optimization.
Within a specific selected shooting pattern type, detailed parameters such as overlap, shooting distance/radius, and camera tilt function as key variables that control the result metrics. The results of the multiple regression analysis, Random Forest variable importance evaluation, and response surface analysis (RSM) in this study consistently demonstrated that the influence of these variables varies depending on the target performance metric and the selected pattern type, and includes not only simple linear relationships but also complex nonlinear relationships and interactions between variables. In the Surface pattern, the overlap variables exerted a dominant influence on the number of points and the reconstruction completeness (regression analysis R2 ≈ 0.70, RF importance sum ≈ 0.9). In the Circular pattern, the shooting radius and tilt angle each played distinct yet important roles for the reprojection error and the number of points (regression analysis R2 ≈ 0.80 and 0.69, RF importance ≈ 0.82 and 0.71, respectively). Notably, some variables (e.g., Tilt angle in the Surface pattern) were not significant in the linear model but showed high importance in the nonlinear model, warning of the risk of bias that may arise when relying on a single analysis method for evaluating variable effects. The RSM results visually confirm these interaction effects and suggest the possibility of a 'sweet spot' where optimal performance is achieved under specific variable combinations (Figure 5 and Figure 6). This suggests that SfM optimization cannot be achieved through independent adjustments of individual variables alone, and that a multivariate optimization approach is required to understand and consider the complex functional relationships between variables depending on the target metrics and pattern types.
Finally, the potential of composite shooting patterns was confirmed as an alternative to overcome the inherent limitations of single shooting patterns and improve performance. According to the TOPSIS analysis, a multi-criteria decision-making technique, a specific composite pattern combination (e.g., C_O40_S_O30) demonstrated the highest overall efficiency (TOPSIS Score: 0.691) when considering multiple performance metrics comprehensively (Table 10). This suggests that the complementary nature of information provided by different shooting paths can contribute to improving overall model quality. Additionally, Pareto optimization analysis of composite pattern data identified efficient alternative sets for key conflicting goal pairs, such as time-error and time-point count, providing useful information for finding optimal trade-offs under specific constraints (Figure 7 and Figure 8).
It is also important to contextualize our findings. The quantitative guidelines presented herein were derived from a specific, controlled experimental setup: a mid-rise, concrete-facade building and a single commercial UAV under consistent overcast lighting. This controlled environment, while necessary for isolating variable effects, means that the direct applicability of our numerical results—such as specific optimal overlap ratios or ‘sweet spot’ coordinates from RSM—may be limited in more challenging, real-world scenarios involving, for instance, high-rise glass facades or variable lighting. Therefore, our findings should be regarded as a foundational baseline that requires calibration and validation when applied to projects with significantly different characteristics.

5. Conclusions

This study was initiated to address the lack of standardized guidelines for UAV-based image data collection and the application of Structure from Motion (SfM) techniques for diagnosing building exteriors. Previous studies have primarily focused on specific cases or algorithm improvements, resulting in a lack of systematic and quantitative evidence for establishing a shooting strategy applicable to various field conditions. Therefore, this study aimed to derive an optimal methodology for collecting shape information by comprehensively analyzing the influence of UAV shooting patterns, shooting variables, and their complex combinations on SfM result metrics. To this end, an experimental framework was designed, and actual data were collected according to various single and composite shooting pattern scenarios. The collected data were processed using an SfM pipeline (based on the SIFT algorithm), and quantitative evaluations of the results were conducted using multiple regression analysis, nonparametric tests (Kruskal–Wallis), random forest variable importance analysis, principal component analysis (PCA) and K-means clustering, response surface analysis (RSM), multi-criteria decision-making (TOPSIS), and Pareto optimization. The analysis yielded the following key conclusions.
The analysis results provided strong experimental evidence that the selection of the basic shooting pattern “type” has a primary and decisive influence on the quality and efficiency of SfM results. Second, within the selected patterns, the influence of detailed shooting variables such as overlap ratio, shooting radius, and camera tilt varied depending on the target performance metrics, and this relationship exhibited a complex pattern that included nonlinearity and interactions between variables rather than a simple linear relationship. Third, we confirmed that composite shooting patterns, which complement the limitations of single patterns, have the potential to achieve multiple performance goals in a balanced manner under specific conditions. Finally, through Pareto optimization analysis, we quantitatively identified the inevitable trade-offs between key performance objectives and demonstrated that it is possible to support rational decision-making by identifying the set of most efficient shooting strategy alternatives (Pareto frontier) under given constraints.
The contributions of this study are as follows. It provides a systematic, quantitative comparison and verification of shooting patterns and variables, which had been lacking in UAV-SfM-based 3D data collection for building exteriors, together with the supporting empirical data. It also demonstrates that the integrated application of diverse analytical methodologies enables a multifaceted interpretation of complex data. Furthermore, it offers specific quantitative grounds for selecting the optimal shooting pattern type and detailed parameters tailored to practical conditions (e.g., overlap management in the Surface pattern, and the radius/tilt combination in the Circular pattern). The Pareto optimization results clearly present the most efficient set of shooting strategy alternatives among the various conflicting objectives, providing a practical tool to support rational decision-making. These results can serve as reference data for input data quality management and optimization module design in the future development of UAV-based automatic building exterior wall diagnosis systems.
However, this study was conducted on a specific building typology (a mid-rise apartment-like structure) with a particular hardware and software configuration (a DJI Mavic 3 Enterprise and a SIFT-based SfM pipeline), so caution is required when generalizing the results. In addition, statistical verification of the interaction effects of composite patterns remains insufficient, and the analysis focused primarily on result indicators at the SfM stage. Future studies should therefore extend verification to a wider variety of conditions, including different building types (e.g., high-rise, industrial plants), facade materials (e.g., reflective glass), and sensor technologies; adopt experimental designs with repeated measurements to quantify the interaction effects of composite patterns; and integrate shooting-strategy optimization with the quality of the final MVS results. Such follow-up studies are expected to further develop and generalize the findings of this study, enhancing the reliability and applicability of UAV-SfM technology.

Author Contributions

Conceptualization, I.J. and N.H.; methodology, I.J. and J.-J.K.; validation, I.J. and Y.L.; formal analysis, I.J.; investigation, N.H. and J.K.; resources, J.K.; data curation, Y.L. and J.K.; writing—original draft preparation, I.J.; writing—review and editing, N.H. and Y.L.; visualization, Y.L.; supervision, J.-J.K. and J.K.; project administration, I.J.; funding acquisition, N.H. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by a National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (IRIS RS-2025-00557724).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data that support the findings of this study are available from the corresponding author upon reasonable request. The data are not publicly available due to privacy.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Table A1. SfM results for single flight pattern.
Pattern | Offset Distance | Radian | Forward Overlap (m) | Side Overlap (m) | Tilt Angle | Reprojection Error | Final Points | Registered Images | Mean Track Length | Point Density | Reconstruction Completeness | Computation Time (min) | Total Features
Circular2010-163.4124.393933,2103590.320.12670.94593.63666
Circular2020-145132.68527,28034−17.63510.94444.16618
Circular4010-176126.648535,12036−10.17140.94744.53703
Circular4020-163.4126.05830,30334−11.30260.94444.47630
Circular4030-153.1136.333829,45434−15.23060.94444.31630
Circular4040-145136.333829,45434−15.23060.94444.31630
Circular4050-138.7156.228425,03934−15.02110.94448.32549
Circular6010-180.5132.811334,0303453.720.05710.94446.93630
Circular6020-171.6129.700134,58434728.140.87840.94446.31630
Circular6030-163.4129.700134,58434728.140.87840.94446.31630
Circular6040-156.3143.367233,52434−16.97190.94445.08630
Circular6050-150.2152.498331,64534−14.51980.94444.78622
Circular8020-176130.346936,15534144.440.55530.94447.7630
Circular8030-169.4135.73837,65934568.941.30560.94445.54630
Circular8040-163.4145.792935,09234−14.16290.94444.78630
Circular8050-158145.792935,09234−14.16290.94444.78630
Surface10-110126.2504244,95638611459.980.02660.994830.1247,431
Surface10-1115126.2504244,95638611,459.980.02660.994830.1247,431
Surface10-1130116.0709260,80538510,790.980.02580.994832.9549,122
Surface10-1145103.7865250,4603816348.130.0240.994823.6253,334
Surface10-120136.0838136,2742057161.620.04790.990325.5813,152
Surface10-1215132.4321130,5942004092.170.02110.990129.4712,335
Surface10-1230122.4061139,1541993241.230.02780.9933.6212,739
Surface10-1245108.9702142,0871972155.590.02660.9924.2414,054
Surface10-140139.875570,1661092547.650.03360.98225.253696
Surface10-1415138.270264,1261061702.080.01340.981529.533441
Surface10-1430138.270264,1261061702.080.01340.981529.533441
Surface10-1445112.504580,870109210.860.04030.98226.64193
Surface10-210134.116119,4381922777.120.01870.989730.1511,767
Surface10-2115134.116119,4381922777.120.01870.989730.1511,767
Surface10-2130122.886129,4851921215.990.02190.989733.6112,224
Surface10-2145108.5725134,3001901124.010.04490.989624.9113,288
Surface10-220143.986956,2311021204.690.02490.980826.843251
Surface10-2215142.618450,242971016.290.00740.979830.622939
Surface10-2230129.087455,60899437.280.01210.980235.093188
Surface10-2245112.25465,82498393.110.01380.9825.183479
Surface10-240148.538923,57654313.270.02630.964326.57923
Surface10-2415150.36518,56552325.120.00520.981130.89804
Surface10-2430188.207833265−11.21150.981124.01918
Surface10-2445114.812428,82054−10.03250.964327.651016
Surface10-410145.315857,3071011974.90.04860.980625.253185
Surface10-4115141.863550,91995242.250.00850.979431.212910
Surface10-4130128.474655,60595−10.01470.979434.813044
Surface10-4145112.270264,45694−10.05220.979225.683322
Surface10-420151.647317,17746−10.01190.938833.56720
Surface10-4215151.647317,17746−10.01190.938833.56720
Surface10-4230136.013718,47947−10.01020.9438.04778
Surface10-4245113.464726,66550−10.01720.961526.38907
Surface10-440157.4342651824−10.00790.888933.18199
Surface10-4415158.6881488024−10.00470.888937.25204
Surface10-4430164.559127930−10.25220.888926.62216
Surface10-4445114.8897968727−10.17670.93128.06261
Surface20-110117.4807215,94637037,253.420.04770.99468.7162,637
Surface20-1115110.8154230,87636935,045.160.05230.994613.1461,923
Surface20-113091.6698228,50837226,412.670.03470.994721.6958,153
Surface20-114576.4481205,33337012,493.580.05790.994613.5459,541
Surface20-120122.3159136,40919219,439.120.04090.98979.3416,750
Surface20-1215116.0804145,10819417,347.840.03820.989813.9817,057
Surface20-123096.3049143,97219418,102.060.07540.989822.3315,841
Surface20-124581.1682128,7091925803.540.12790.989713.6916,014
Surface20-140126.827682,4411048368.590.04830.981110.374833
Surface20-1415119.855188,1411067969.870.03050.981514.75044
Surface20-1430100.44588,6801047972.340.03150.981123.044495
Surface20-144585.620579,1421031971.810.06340.98114.114619
Surface20-210122.694912,482818415,036.380.0310.98929.1315,618
Surface20-2115122.694912,482818415,036.380.0310.98929.1315,618
Surface20-213096.243213,42371856888.160.06520.989321.7814,467
Surface20-214596.243213,42371856888.160.06520.989321.7814,467
Surface20-220128.888772,343955084.250.02440.9794104155
Surface20-2215122.011874,507962119.560.02140.979614.064251
Surface20-2230109.80353679159−11.56830.97968.193907
Surface20-224585.136172,99195−10.05140.979414.243967
Surface20-240136.533536,370523465.170.01120.96311.531229
Surface20-2415128.404138,392521159.090.05250.96315.381247
Surface20-2430110.10343558114−11.61570.9634.941101
Surface20-244589.199639,46651−10.02530.962315.251155
Surface20-410130.384169,471914512.230.07870.97859.493859
Surface20-4115122.344770,516914758.610.02850.978513.883848
Surface20-4130100.515677,388921705.850.06360.978722.623613
Surface20-414584.597470,65691−10.05340.978514.63690
Surface20-420136.594135,24247714.780.07110.959210.631041
Surface20-4215128.371633,18846−10.0310.958314.94997
Surface20-4230128.371633,18846−10.0310.958314.94997
Surface20-424588.128739,46248−10.09350.9615.41026
Surface20-440147.647412,5802514.040.03490.925914.69297
Surface20-4415139.520212,65025−10.01080.925919.01303
Surface20-4430111.592613,40624544.540.03990.923127.23256
Surface20-4445111.592613,40624544.540.03990.923127.23256
Surface30-11099.8357140,08122914,082.350.03180.99133.0826,561
Surface30-111599.5791133,38722918,462.120.02180.99134.6426,454
Surface30-113095.0395135,6522299239.270.04540.991310.8724,852
Surface30-114591.6557123,57022813,476.390.12310.99135.8224,920
Surface30-120103.971588,7341197106.930.01180.98353.27260
Surface30-1215104.119485,16311914,956.720.02120.98354.847231
Surface30-1230104.119485,16311914,956.720.02120.98354.847231
Surface30-1245104.119485,16311914,956.720.02120.98354.847231
Surface30-140108.049651,2596419.770.02360.96973.732145
Surface30-1415109.49849,515647069.70.01380.96975.792126
Surface30-1430109.49849,515647069.70.01380.96975.792126
Surface30-1445100.073344,627631800.060.0850.96926.721929
Surface30-210103.333883,0671145171.950.02830.98283.196670
Surface30-2115103.691879,6771145787.550.03810.98284.826644
Surface30-213098.317780,4411142173.110.11690.982811.356229
Surface30-214595.122972,8721133914.180.03550.98266.066184
Surface30-220107.124856,04164869.590.0440.96973.92145
Surface30-2215107.547754,533644926.830.08080.96975.962129
Surface30-2230101.097353,22164564.330.08520.969713.671936
Surface30-224598.069742,136552619.230.09930.96496.231502
Surface30-240112.979127,05734813.640.03930.94445.08630
Surface30-2415112.979127,05734813.640.03930.94445.08630
Surface30-2430107.30525,4753436.110.09340.944413.86565
Surface30-2445101.730121,24929743.060.0450.93558.09433
Surface30-410109.482742,376563125.720.03010.96553.931653
Surface30-4115109.789542,050561420.880.08420.96555.671641
Surface30-4130102.095441,935561342.170.02460.965512.341541
Surface30-414598.470839,761561560.790.01260.96556.681557
Surface30-420113.993923,81431793.30.03780.93945.57528
Surface30-4215113.993923,81431793.30.03780.93945.57528
Surface30-4230106.315923,28131−10.04150.939414.87473
Surface30-4245100.960119,53927725.380.07120.9318.12383
Surface30-440117.225411,80316−10.0160.888911.52153
Surface30-4415120.227211,64816−10.03450.888916.56152
Surface30-4430110.288212,75616−10.02050.888924.48138
Surface30-4445102.013112,64416−10.07630.888918.72139
Aerial20-116095.211480,012118632.210.24560.975210.194108
Aerial40-117595.628931,75544−13.07710.95654.57904

Figure 1. Research flowchart.
Figure 2. Shooting patterns list.
Figure 3. Collected Data Sample. (a) Target Building, (b) Circular Dataset, (c) Surface Dataset, (d) Aerial Dataset.
Figure 4. PCA K-means scatter plot and cluster.
Figure 5. Circular pattern RSM visualization. (a) Offset and Tilt angle. (b) Radian and Tilt angle.
Figure 6. Surface pattern RSM visualization. (a) Forward overlap and Side overlap. (b) Forward overlap and Tilt angle. (c) Side overlap and Tilt angle.
Figure 7. Pareto optimization visualization. (a) Error vs. Points, (b) Time vs. Completeness, (c) Time vs. Points, (d) Time vs. Error.
Figure 8. Pareto optimization visualization of multiple shooting patterns. (a) Error vs. Points, (b) Time vs. Completeness, (c) Time vs. Error, (d) Time vs. Points.
Figure 9. Results of 3D reconstruction with optimized parameters.
Table 1. SfM Approach Comparison.
Feature | Incremental SfM | Global SfM | Latest Trends
Accuracy | Generally high | Initially low, improving | Aims for incremental SfM level or higher
Robustness | High (iterative RANSAC/BA) | Low (outlier sensitive, weak translation averaging) | Aims for incremental SfM robustness
Scalability | Low (sequential, iterative BA) | High (parallelizable) | Maintains high scalability
Cost | High (repeated BA) | Low (fewer BA) | Maintains low cost (faster than incremental)
Error Accum. | Possible (drift) | Relatively low | Suppressed by global approach
Challenges | Cost, drift | Translation avg. instability, outlier sensitivity | Solving translation avg., ensuring robustness
Implement. | COLMAP | Theia, OpenMVG | GLOMAP
Table 2. Defined shooting patterns and parameters.
Pattern Type | Key Variables | Parameter Range/Values | Notes
Circular | Altitude (AGL) | 20~80 m (relative to rooftop) | Tilt angle oriented towards building center
Circular | Radius (B-Rad) | 10~50 m (from center point) |
Circular | Tilt Angle | Adjusted to face center |
Surface | Distance (A-Offset) | 10~30 m | Designed for facade scanning
Surface | Forward Overlap | 1 m, 2 m, 4 m |
Surface | Side Overlap | 1 m, 2 m, 4 m |
Surface | Tilt Angle | 0°, 15°, 30°, 45° |
Aerial | Altitude (AGL) | 20 m, 40 m | Nadir/oblique grid pattern
Aerial | Overlap (Forward/Side) | Fixed at 80% |
Aerial | Tilt Angle | 60°~75° |
All Patterns | Positioning Accuracy (GNSS-RTK) | Horizontal and vertical < 2~3 cm | Verified via pre-flight simulation
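The Circular pattern above is parameterized by an orbit radius, an altitude, and a tilt that keeps the camera aimed at the building center. As a rough illustration of how such a pattern can be generated (a minimal sketch, not the authors' flight-planning software; the function name and signature are hypothetical), evenly spaced camera stations on an orbit can be computed as follows:

```python
import math

def circular_waypoints(center_x, center_y, radius, altitude, n_shots):
    """Place n_shots camera stations evenly on a circle around a building
    footprint center, each yawed back towards the center (hypothetical
    helper for illustration only)."""
    waypoints = []
    for i in range(n_shots):
        theta = 2.0 * math.pi * i / n_shots
        x = center_x + radius * math.cos(theta)
        y = center_y + radius * math.sin(theta)
        # Yaw towards the center so the facade stays in frame.
        yaw_deg = math.degrees(math.atan2(center_y - y, center_x - x))
        waypoints.append((x, y, altitude, yaw_deg))
    return waypoints

# Example: radius 40 m, altitude 60 m AGL, 36 stations (10 deg spacing).
wps = circular_waypoints(0.0, 0.0, 40.0, 60.0, 36)
```

In practice the tilt angle would be set from the altitude difference between the camera and the facade midpoint; the sketch only covers the horizontal geometry.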
Table 3. Implementation Environment Definition.
Category | Parameter | Specification/Value | Purpose/Note
Hardware | UAV Model | DJI Mavic 3 Enterprise | Commercial-grade drone
Hardware | Sensor Resolution | 5280 × 3956 pixels (~20.9 MP) | High-resolution imaging
Hardware | Focal Length | 12.3 mm (24 mm equiv.) | Wide-angle lens suitable for facade mapping
Hardware | Aperture | f/2.8 (fixed) | Balance light intake and depth of field
Camera Settings | Shutter Speed | 1/2000 s | Minimize motion blur during flight
Camera Settings | ISO | 400 | Fixed to maintain consistent noise levels
Camera Settings | Color Space | sRGB | Standard color representation
Camera Settings | White Balance | Manual | Ensure color consistency
Camera Settings | Exposure Mode | Manual | Consistent exposure across images
Camera Settings | Exposure Compensation | 0 EV | No automatic brightness adjustment
Environmental Conditions | Lighting | Overcast sky, <20,000 lux | Minimize shadows and harsh lighting variations
Environmental Conditions | Weather | No direct sunlight, cloudy | Consistent ambient lighting
Data Acquisition Summary | Target Building | Similar characteristics to apartment buildings | Representative structure
Data Acquisition Summary | Total Flight Paths | 126 | Covering diverse scenarios
Data Acquisition Summary | Avg. Flight Time per Path | ~10 min |
Data Acquisition Summary | Total Images Collected | 13,416 | Sufficient data for analysis
Data Acquisition Summary | Image Format | JPEG | Common image format
Data Acquisition Summary | Average File Size | ~7.8 MB per image |
Data Acquisition Summary | Metadata Recorded | Timestamp, GPS coordinates, camera model, exposure info, etc. | Essential for processing and analysis reproducibility
Table 4. Summary of multiple regression analysis of the effect of shooting variables on SfM result metrics.
Shooting Pattern | Surface | Surface | Surface | Surface | Circular | Circular | Circular | Circular
Dependent Variable | Final Points | Reconstruction Completeness | Reprojection Error | Computation Time | Final Points | Reconstruction Completeness | Reprojection Error | Computation Time
Adj. R-squared | 0.6960 | 0.7620 | 0.3780 | 0.0150 | 0.7750 | 0.4930 | 0.7520 | −0.0400
Prob | 0.0000 | 0.0000 | 0.0000 | 0.2110 | 0.0001 | 0.0105 | 0.0002 | 0.5140
Coef. Intercept | 211,300 | 1 | 112 | 12 | 14,250 | 1 | 112 | 7
Prob Intercept | 0.000 | 0.000 | 0.000 | 0.000 | 0.104 | 0.000 | 0.000 | 0.353
Coef. Forward Overlap | −30,560 | −0.0150 | 3.8176 | 1.2686 | 0.4770 | −0.0001 | −0.0589 | 0.0471
Prob Forward Overlap | 0.0000 | 0.0000 | 0.0020 | 0.1070 | 50.637 | 0.0070 | 0.7570 | 0.4010
Coef. Side Overlap | −27,830 | −0.0126 | 4.2159 | 0.5805 | 0.6550 | 0.0001 | 0.6943 | −0.0399
Prob Side Overlap | 0.0000 | 0.0000 | 0.0010 | 0.4580 | 236.21 | 0.0420 | 0.0520 | 0.6770
Coef. Tilt angle | −11.886 | 0.0000 | −0.6057 | 0.0683 | 0.1020 | 0.0001 | 0.1200 | −0.0429
Prob Tilt angle | 0.9520 | 0.7320 | 0.0000 | 0.2420 | 47.058 | 0.0110 | 0.7620 | 0.7100
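The adjusted R² values reported above (including the negative −0.0400 for Circular computation time) follow the standard correction that penalizes raw R² for the number of predictors; a negative value simply means the penalty outweighs the explained variance. A minimal sketch of the formula (the function name is ours, not from the paper):

```python
def adjusted_r2(r2, n_obs, n_predictors):
    """Adjusted R^2 = 1 - (1 - R^2) * (n - 1) / (n - p - 1),
    penalizing the raw fit for the number of predictors p."""
    return 1.0 - (1.0 - r2) * (n_obs - 1) / (n_obs - n_predictors - 1)

# With few observations, a modest raw R^2 shrinks noticeably:
print(adjusted_r2(0.50, 11, 2))  # -> 0.375
```

With a very weak raw fit and few observations (e.g. `adjusted_r2(0.05, 16, 3)`), the result goes below zero, which is consistent with the Circular computation-time column.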
Table 5. Summary of Kruskal–Wallis test on the effect of shooting variables on SfM result metrics.
Dependent Variable | Kruskal–Wallis (H) | Kruskal–Wallis (p)
Reprojection Error | 20.2338 | 0.0001
Final Points | 7.4992 | 0.0235
Computation Time | 25.8498 | 0.0001
Reconstruction Completeness | 15.7081 | 0.0004
Table 6. Random forest feature importance ranking and distribution by shooting type and result indicator.
Shooting Type | Dependent Variable | 1st Feature (Importance ≈) | 2nd Feature (Importance ≈) | 3rd Feature (Importance ≈) | Importance Distribution
Circular | Computation Time | Tilt angle (0.59) | Distance (0.30) | Radian (0.12) | High-Mid-Low
Circular | Final Points | Tilt angle (0.71) | Distance (0.22) | Radian (0.07) | High-Mid-Low
Circular | Reconstruction Completeness | Radian (0.60) | Distance (0.28) | Tilt angle (0.14) | High-Mid-Low
Circular | Reprojection Error | Radian (0.82) | Tilt angle (0.12) | Distance (0.06) | Very High Dominance
Surface | Computation Time | Tilt angle (0.39) | Forward overlap (0.31) | Side overlap (0.30) | Similar High-Mid
Surface | Final Points | Forward overlap (0.52) | Side overlap (0.45) | Tilt angle (0.03) | Two High, One Low
Surface | Reconstruction Completeness | Forward overlap (0.50) | Side overlap (0.46) | Tilt angle (0.04) | Two High, One Low
Surface | Reprojection Error | Tilt angle (0.59) | Side overlap (0.21) | Forward overlap (0.20) | High, Two Mid
Importance scale: High (>0.40), Mid (0.10~0.40), Low (<0.10).
Table 7. Explanation of top principal components.
Principal Component | Individual Explained Variance (%) | Cumulative Explained Variance (%)
PC1 | 43.5 | 43.5
PC2 | 22.6 | 66.1
PC3 | 16.8 | ≈82.8
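The explained-variance percentages above are obtained by normalizing the eigenvalues of the covariance (or correlation) matrix of the standardized metrics. A minimal sketch of that bookkeeping step (the eigenvalues in the example are illustrative, not the study's actual spectrum):

```python
def explained_variance(eigenvalues):
    """Per-component and cumulative explained variance (%) from the
    eigenvalues of a covariance or correlation matrix."""
    total = sum(eigenvalues)
    individual = [100.0 * ev / total for ev in eigenvalues]
    cumulative = []
    running = 0.0
    for pct in individual:
        running += pct
        cumulative.append(running)
    return individual, cumulative

# Illustrative spectrum:
ind, cum = explained_variance([4.0, 2.0, 2.0])
# ind == [50.0, 25.0, 25.0]; cum == [50.0, 75.0, 100.0]
```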
Table 8. Summary of Key RSM Results.
Analysis Combination | Adjusted R2 | Key Feature of Response Surface/Contour | Key Implication
Circular: Final Points vs. (Offset, Tilt angle) | 0.876 | Gentle ridge shape; points tend to increase towards top-right (high values) | Tilt angle influence relatively large; optimum for max points found
Circular: Reprojection Error vs. (Radian, Tilt angle) | 0.856 | Elliptical valley shape; error minimum in specific zone (B-Rad ≈ 10–15 m, Tilt ≈ 55–65°) | Interaction effect is important; identifies 'sweet spot' for min error
Surface: Computation Time vs. (Forward Overlap, Side Overlap) | 0.003 | Parallel straight contours; time increases linearly as overlap spacing decreases (overlap area increases) | Increased overlap causes more computation time; model fit very low
Surface: Final Points vs. (Forward Overlap, Tilt angle) | 0.422 | Nearly vertical contours; points increase sharply as Forward Overlap decreases | Forward Overlap is decisive for points; Tilt angle effect minimal
Surface: Reconstruction Completeness vs. (Side Overlap, Tilt angle) | 0.286 | Near-vertical contours; completeness increases as Side Overlap decreases | Side Overlap mainly affects completeness; Tilt angle effect negligible
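The "elliptical valley" and "sweet spot" language above refers to the full second-order RSM model z = b0 + b1·x + b2·y + b3·x² + b4·y² + b5·xy fitted per variable pair. As an illustration of how such a fitted surface can be interrogated for its minimum (a crude grid search standing in for the stationary-point analysis an RSM package would perform; the coefficients below are hypothetical, not the study's fit):

```python
def quadratic_response(b, x, y):
    """Full second-order RSM model:
    z = b0 + b1*x + b2*y + b3*x^2 + b4*y^2 + b5*x*y."""
    return b[0] + b[1] * x + b[2] * y + b[3] * x**2 + b[4] * y**2 + b[5] * x * y

def grid_minimum(b, x_range, y_range, steps=101):
    """Locate the response minimum on a dense grid over the design region."""
    best = None
    for i in range(steps):
        x = x_range[0] + (x_range[1] - x_range[0]) * i / (steps - 1)
        for j in range(steps):
            y = y_range[0] + (y_range[1] - y_range[0]) * j / (steps - 1)
            z = quadratic_response(b, x, y)
            if best is None or z < best[0]:
                best = (z, x, y)
    return best

# Hypothetical valley (x - 1)^2 + (y - 2)^2 expanded into b-coefficients:
best = grid_minimum([5.0, -2.0, -4.0, 1.0, 1.0, 0.0], (0.0, 2.0), (0.0, 4.0))
```

For a true interior optimum, solving the stationary-point equations of the quadratic directly would be more precise; the grid search merely makes the "valley" interpretation concrete.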
Table 9. Summary of Pareto Optimization Analysis for Key Objective Pairs.
Objective Pair Analyzed | N Points on Frontier | Key Frontier Characteristics and Implication
Time vs. Error | 4 | Weakly concave shape (bottom-left); diminishing returns for error reduction over time
Time vs. Points | 5 | Concave rising shape; diminishing returns for point increase over time
Error vs. Points | 4 | Concave shape (top-left); clear trade-off exists, optimal balance selection needed
Time vs. Completeness | 4 | Nearly horizontal shape (top-left); diminishing returns for completeness improvement over time
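The frontiers summarized above are the non-dominated subsets of the tested configurations. A minimal sketch of that filter for two minimized objectives (e.g., computation time and reprojection error; maximized objectives such as point count would be negated first):

```python
def pareto_frontier(points):
    """Return the non-dominated subset of (obj1, obj2) tuples when BOTH
    objectives are minimized. A point is dominated if some other point is
    no worse in both objectives and strictly better in at least one."""
    frontier = []
    for p in points:
        dominated = any(
            q[0] <= p[0] and q[1] <= p[1] and q != p
            for q in points
        )
        if not dominated:
            frontier.append(p)
    return frontier

# Toy example: (time, error) pairs; three configurations survive.
front = pareto_frontier([(1, 5), (2, 3), (3, 4), (4, 1), (5, 6)])
```

This O(n²) formulation is adequate for the dozens of configurations tested here; sorting-based sweeps would be preferable at larger scale.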
Table 10. Performance indicators of composite shooting patterns ranked by TOPSIS.
TOPSIS Rank | Pattern 1 | Pattern 2 | Dist. to Ideal | Dist. to Neg. Ideal | TOPSIS Score
1 | C_O40 | L_O30 | 0.039644 | 0.088516 | 0.69067
2 | C_O20 | L_O30 | 0.041811 | 0.091929 | 0.687373
3 | L_O30 | P_O40 | 0.041934 | 0.09167 | 0.68613
4 | C_O40 | P_O20 | 0.046117 | 0.099455 | 0.683203
5 | P_O20 | P_O40 | 0.046888 | 0.099512 | 0.679727
6 | L_O30 | P_O20 | 0.039896 | 0.084661 | 0.679699
7 | C_O20 | P_O20 | 0.047719 | 0.099743 | 0.6764
8 | C_O20 | C_O40 | 0.051008 | 0.104072 | 0.671084
9 | C_O40 | P_O40 | 0.050765 | 0.103508 | 0.670941
10 | C_O20 | P_O40 | 0.052928 | 0.103664 | 0.661999
11 | C_O20 | L_O10 | 0.041451 | 0.080858 | 0.661095
12 | C_O40 | L_O10 | 0.041381 | 0.080343 | 0.660041
13 | L_O10 | P_O40 | 0.041964 | 0.080544 | 0.657458
14 | L_O10 | P_O20 | 0.046425 | 0.077117 | 0.624216
15 | C_O20 | L_O20 | 0.048799 | 0.070943 | 0.592463
16 | L_O20 | P_O40 | 0.048934 | 0.070758 | 0.591167
17 | C_O40 | L_O20 | 0.0494 | 0.069785 | 0.585517
18 | L_O20 | P_O20 | 0.053078 | 0.066345 | 0.555547
19 | L_O10 | L_O30 | 0.062454 | 0.064512 | 0.508104
20 | L_O20 | L_O30 | 0.074386 | 0.052379 | 0.413198
21 | L_O10 | L_O20 | 0.104304 | 0.051267 | 0.329539
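The TOPSIS scores in the table follow the standard relative-closeness formula C = d⁻ / (d⁺ + d⁻), where d⁺ and d⁻ are the distances to the ideal and negative-ideal solutions. A minimal sketch of that final step (the upstream normalization and weighting of the decision matrix are omitted):

```python
def topsis_score(dist_ideal, dist_neg_ideal):
    """Relative closeness C = d- / (d+ + d-): 1.0 at the ideal solution,
    0.0 at the negative-ideal solution."""
    return dist_neg_ideal / (dist_ideal + dist_neg_ideal)

# Reproduces the top-ranked composite pattern in the table:
score = topsis_score(0.039644, 0.088516)  # -> ~0.69067
```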
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Jo, I.; Lee, Y.; Ham, N.; Kim, J.; Kim, J.-J. A Quantitative Evaluation of UAV Flight Parameters for SfM-Based 3D Reconstruction of Buildings. Appl. Sci. 2025, 15, 7196. https://doi.org/10.3390/app15137196

