1. Introduction
A building inspection is an essential practice in civil engineering that evaluates a building’s structural, functional, safety, and conservation conditions. This process aims to identify pathologies, which, according to [1], are manifestations resulting from natural or anthropogenic actions that potentially compromise the performance of civil, urban, or rural infrastructure. Its application ranges from inspections of residential properties for purchase and sale purposes to inspections of complex infrastructures such as bridges and tunnels [1,2,3,4,5].
The façades of a building, due to exposure to weathering agents such as rain and wind, undergo accelerated degradation compared to the more protected parts of the structure [6,7,8,9]. The construction systems comprising the façades are subjected to various stresses and phenomena, including movements caused by mechanical and thermal forces, fatigue, thermal shocks, expansion due to moisture in ceramic elements, infiltration, and hygrothermal stresses. These conditions can affect the overall structure and specific parts, leading to the emergence of pathologies [10]. The analysis of changes in civil infrastructure is essential for engineering, and structural pathology focuses on the study of the origins, manifestations, consequences, and mechanisms of failure and degradation [1,11,12,13,14,15].
The study of pathologies in buildings involves analyzing pathological manifestations observed on façades and their interaction with the environment, linking the factors causing these manifestations to the degradation phenomena identified during visual inspections [16,17,18]. In Brazil, the inspection of medium- and high-rise structures is often conducted using the traditional method involving industrial climbers [19]. This procedure involves technicians descending along the façade using climbing and rappelling equipment, allowing for a detailed visual assessment of cracks, displacements, and other signs of wear. However, in addition to its high cost and accident risk, this method restricts inspections to small teams, making the survey of large buildings a long, labor-intensive process [1,20,21].
Modern technologies, such as Unmanned Aerial Vehicles (UAVs), have become a viable and safe alternative for building inspections. These tools enable the capture of high-resolution images and videos, covering large areas in less time and with greater safety, as they eliminate the need for professionals to work at height [21,22]. These aircraft can be classified as fixed-wing, similar to conventional airplanes but smaller, or rotary-wing, including helicopters and multi-rotors. With technological advances in UAVs, improvements in battery life, image transmission, and GPS systems have expanded their applications across various engineering fields, as seen in works such as [22,23,24,25,26,27,28,29,30].
In addition to being versatile tools, UAVs are highly effective in building pathology, mainly due to their ability to access hard-to-reach locations and the wide range of sensors they can carry onboard for flights requiring more accurate assessments of structural elements. According to [31], these sensors convert energy captured from objects into images or graphics, enabling the correlation of a surface’s physical, chemical, biological, or geometric properties with the detected radiance, emissivity, or backscattering. For instance, optical devices such as RGB and multispectral cameras have been effectively employed to detect surface discoloration, cracks, and vegetation growth indicative of moisture infiltration. Thermal sensors, in turn, have proven helpful in identifying thermal anomalies related to insulation defects or water intrusion. LiDAR sensors, an active technology, have been used to generate high-resolution 3D models, enabling the detection of structural deformations or façade displacements with sub-centimeter accuracy. In the context of UAVs, these devices can be classified as passive, such as optical devices, or active, such as LiDAR [2,21,22,32,33,34].
Passive optical sensors capture light reflected or emitted from the surface of an object. Among them, RGB sensors are the most common, capturing images in three bands of the visible spectrum: red, green, and blue. The color of each pixel is determined by the combination of the red, green, and blue intensities stored in each color plane at the pixel location. Depending on the shutter type (global or rolling), cameras may be susceptible to distortion. These sensors are widely used in façade inspections to identify pathologies such as ceramic detachment, moisture stains, and cracks, providing images with high photographic detail. However, they have limitations, such as sensitivity to lighting conditions and difficulty detecting minor anomalies [21,35,36]. The use of these sensors is demonstrated in [37,38,39].
Multispectral and hyperspectral sensors are also passive, but they differ in that they capture images at multiple wavelengths, extending beyond the visible spectrum. Due to this characteristic, images obtained by multispectral sensors provide information on how electromagnetic radiation interacts with surface objects at different wavelengths [31,40,41]. Most space programs currently in operation use multispectral imaging systems with bands in the visible (VIS), near-infrared (NIR), shortwave infrared (SWIR), and thermal infrared (TIR) spectra [31]. Multispectral sensors capture a smaller number of specific bands, whereas hyperspectral sensors capture several contiguous bands, enabling more detailed analysis. This technology helps analyze radiation’s interaction with surfaces, identify material properties, and detect damage not visible to the naked eye. Despite advantages such as high precision and non-destructive analysis, these sensors require significant investment and expertise to handle the complex data they generate. Their use can be observed in [21,42,43].
Another category of passive sensors is thermal sensors, which capture images of infrared radiation emitted by objects based on their temperature. These sensors are used to identify anomalies related to energy efficiency, infiltration, and thermal insulation, and they can operate even in low-light conditions. However, the interpretation of their images can be affected by environmental factors, requiring rigorous calibration [44]. Infrared radiation is a type of electromagnetic energy that travels at the speed of light, with all objects at temperatures above absolute zero (−273 °C) emitting and absorbing it. The greater the radiation emitted, the higher the surface temperature [45]. The use of these sensors is evident in [25,46,47].
On the other hand, active sensors, such as LiDAR (Light Detection and Ranging), work by emitting laser pulses and measuring the time it takes for the light to return, mapping the three-dimensional structures of the environment. In recent years, this technology has been used on UAVs, enabling the production of high-density point clouds and their use in developing 3D products [48]. The LiDAR instrument consists of a control system, a transmitter, and a receiver. As the aircraft moves along the flight path, pulses of laser light are directed at the terrain by a mirror that scans perpendicular to the trajectory. Most LiDAR systems for topographic mapping utilize near-infrared laser light in the 1040 to 1060 nm range [49]. This technology is ideal for detailed mapping, enabling accurate inspections even in adverse weather conditions or low light. Despite its advantages, such as high precision and the ability to generate 3D point clouds, LiDAR is expensive and has a limited range. The use of these sensors can be seen in [50,51,52].
Despite the great potential of using sensors to identify pathological manifestations, there is still a significant gap in understanding the true capabilities of each type of equipment to detect various anomalies, as well as their advantages and limitations. This article analyzes the capabilities and limitations of RGB, thermal, and multispectral sensors onboard UAVs for detecting pathologies on building façades. This study evaluates the performance of these sensors under different conditions, including environmental variations, material characteristics, and degradation contexts. Furthermore, this article seeks to provide a critical perspective to support the selection and application of these technologies, enhancing building inspection practices. In doing so, it aims to contribute to developing more effective and integrated approaches to analyzing structural pathologies and supporting the definition of preventive maintenance strategies.
2. Materials and Methods
The study area for this article encompasses the building housing the Institute of Chemistry (IQ), located on the Darcy Ribeiro Campus of the University of Brasília (UnB). The building is situated at the geographic coordinates 15°46′6.77″ S and 47°51′53.52″ W.
Figure 1 provides a detailed representation of the building’s location, highlighting its integration within the university’s urban and natural context.
The Chemistry Institute building is distinguished by its bold and innovative architecture, which combines elements of a mixed structure comprising reinforced concrete and metal components. This combination provides resistance, durability, flexibility, and aesthetics, resulting in a project that meets the functional and architectural requirements of a building intended for teaching, research, and extension activities. With a total built area of 6387 square meters, the building features spacious and thoughtfully designed facilities, including laboratories, classrooms, administrative offices, and common areas. The building’s design adheres to contemporary standards of functionality and sustainability, reflecting the university’s commitment to high-quality academic infrastructure and efficient resource utilization.
2.1. Climatic Assessment to Define the Optimal Flight Period
During this study’s building mapping, a detailed climatic analysis was conducted to ensure the safety and efficiency of operations using the DJI Matrice 350 RTK Unmanned Aerial Vehicle (UAV), manufactured in China. This analysis focused on two key aspects: rainfall, given the equipment’s limitations in operating under rainy conditions, and average wind speed, considering the maximum flight resistance specified by the manufacturer.
To achieve this objective, a five-year historical analysis (2019–2023) of rainfall and average wind speed data in the study region was conducted. Rainfall data were obtained from the Hidroweb platform, maintained by the National Water Agency (ANA), while wind speed data were sourced from measurements provided by the National Institute of Meteorology (INMET). A statistical analysis of the historical data series was performed, considering the average monthly values of both variables throughout the year. The results of these analyses are presented in Figure 2.
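The statistical treatment described above, monthly means over a multi-year series, can be sketched as follows. The rainfall values below are placeholders, not the ANA/INMET data used in the study:

```python
from collections import defaultdict
from statistics import mean

# (year, month) -> monthly rainfall total in mm (illustrative values only)
rainfall = {
    (2019, 1): 210.0, (2020, 1): 180.0, (2021, 1): 240.0,
    (2019, 7): 0.0,   (2020, 7): 2.0,   (2021, 7): 1.0,
}

# Group the series by calendar month across all years...
by_month = defaultdict(list)
for (year, month), value in rainfall.items():
    by_month[month].append(value)

# ...then average each month to characterize the climatic seasons.
monthly_mean = {month: mean(values) for month, values in sorted(by_month.items())}
print(monthly_mean)  # {1: 210.0, 7: 1.0}
```

The same aggregation applied to wind speed gives the monthly averages compared against the manufacturer's wind-resistance limits.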
The average wind speed data analysis indicated no significant weather-related impediments to conducting flights with the UAV used throughout the year. The highest average wind speed recorded in the region was 9.5 km/h, well within the maximum resistance limits specified by the equipment manufacturer, which range from 28.8 km/h to 36 km/h. These results confirm the feasibility of flights under the prevailing wind conditions.
Regarding rainfall, two well-defined climatic seasons were identified: a rainy season, spanning from October to April, and a dry season, spanning from May to September. Given the equipment’s restrictions on operating in rainy conditions, it was concluded that the optimal period for conducting aerial operations corresponds to the dry season, from May to September.
Based on the analysis of average wind speed and rainfall, it was determined that field flights will be conducted between May and September. This time frame provides the ideal conditions for UAV operations, minimizing operational risks and ensuring the quality of the data collected.
2.2. Methodology for Selecting Buildings to Perform Façade Pathology Mapping
A preliminary visual field inspection was conducted to define the pathological manifestations to be analyzed on the façades and to select the building for complete inspection using a UAV. This survey encompassed all buildings on the Darcy Ribeiro Campus of the University of Brasília (UnB), primarily identifying the most significant façade pathologies in each structure. The procedure involved a systematic walkthrough of the campus, during which visual observations were made to recognize signs of degradation, such as cracks, discoloration, or material detachment. In addition, a photographic survey was performed using a high-resolution camera to ensure detailed documentation of the observed conditions.
Each building was assigned a code for identification, and the location, dimensions, and severity of the detected anomalies were recorded in field logs. These records were then compiled into a database to compare façade conditions across different buildings. The collected data served as the basis for selecting the structure with the most representative pathologies for a complete inspection using UAV, ensuring that the chosen building offered valuable insights for further analysis.
As a result of this field survey, 96 buildings were identified, representing various structural types, including reinforced concrete, metallic structures, mixed structures, and wood. Among the most recurrent pathological manifestations, moisture and infiltration stains, cracks, fissures, and the degradation of the paint coating (peeling) were prominent, as illustrated in Figure 3. In addition, other pathologies were also identified, including the detachment of ceramic coatings, the degradation of mortar coatings, damage to opening elements (doors and windows), roof damage, and graffiti. The analysis of this information provided an initial understanding of the predominant pathologies on the campus, supporting the selection of the building to be mapped.
In parallel with the identification of pathologies, the presence of extensive vegetation near the buildings was assessed, considering its potential impact on façade mapping using an Unmanned Aerial Vehicle (UAV). As shown in Figure 4, approximately 77% of the buildings had significant vegetation in their surroundings, which could hinder or prevent flights for data capture.
Based on the types of pathological manifestations observed and the presence of significant vegetation, the Chemistry Institute building was selected to undergo the mapping procedure. This choice was motivated by at least five distinct types of pathologies on its façade and the absence of significant vegetation that could interfere with the survey using the UAV.
Finally, the decision to limit the analysis to a single building was driven by the high density of data to be collected, considering the use of three distinct types of sensors. This approach enabled a detailed analysis of the selected building’s pathologies, ensuring greater accuracy and depth in this study.
2.3. Equipment and Sensors Used for Façade Mapping
For mapping pathological manifestations present in the selected building, two models of UAVs were used: the Matrice 300 RTK and the Matrice 350 RTK (Figure 5). It is essential to highlight that both devices are equipped with an RTK (Real-Time Kinematic) system, enabling centimeter-level positional accuracy and thus ensuring the capture of even the thinnest pathologies, such as cracks. These aircraft are certified with an IP55 rating and can operate from −20 °C to 50 °C. Additionally, both aircraft have sufficient payload capacity to carry the RGB, thermal, and multispectral sensors, enabling the use of all sensors on each building façade.
Regarding sensor usage, the RGB data were obtained using the DJI P1 photogrammetric sensor, which features a 45 MP full-frame CMOS sensor. This sensor is compatible with various fixed lenses (24, 35, and 50 mm), adapting to different mapping needs and area coverage, and is equipped with a triaxial stabilized gimbal system. It is worth noting that this sensor operates integrated with Matrice 300 RTK or higher platforms, enabling the acquisition of products with centimeter-level positional accuracy due to the RTK system.
The DJI Zenmuse H20T sensor, which includes optical zoom and wide-angle cameras, a thermal camera, and a laser rangefinder, was designed for inspection operations and used to obtain thermal data. The zoom camera offers up to 23× magnification, while the wide-angle camera provides a broader view to situate the scene better. The thermal camera has a resolution of 640 × 512, allowing for the detection of temperature variations, and the laser rangefinder has a range of up to 1200 m.
The Sequoia multispectral sensor was used to acquire the multispectral data. This sensor enables the capture of four spectral bands: green, red, red-edge, and near-infrared (NIR). It also features a 16 MP RGB camera, which allows for high-resolution captures for visual context, with a 70.6° field of view and fast capture rate.
Figure 6 shows images of each sensor used in the inspection procedure.
2.4. Adopted Executive Procedure for Façade Inspection
The methodological procedure for acquiring the data from the sensors used in the identification of pathological manifestations on the building façades was divided into three fundamental stages: (i) flight path definition, (ii) data acquisition, and (iii) obtaining the processed products.
In the first stage, a vertical flight plan was executed approximately 6 m from each façade to ensure the precise and representative capture of the areas of interest. The flight path was configured in parallel lines, with an equidistant spacing of 1 m between them. This distance was chosen to ensure sufficient image overlap, translating into the most accurate and detailed representation of the mapped façades. This flight approach was planned to optimize the quality and comprehensiveness of the data collected during the capture process.
Figure 7 shows the flight path lines planned for the inspection of each façade.
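The overlap implied by this flight geometry (a 6 m offset from the façade with 1 m spacing between lines) can be estimated from the camera footprint. The sensor dimension and focal length below are assumed full-frame/35 mm values (one of the P1 lens options listed earlier), not parameters reported for the flight plan:

```python
# Sidelap estimate for parallel flight lines along a vertical facade.
def footprint(distance_m: float, sensor_dim_mm: float, focal_mm: float) -> float:
    """Facade coverage of one image along one sensor axis, in metres."""
    return distance_m * sensor_dim_mm / focal_mm

def overlap_fraction(line_spacing_m: float, footprint_m: float) -> float:
    """Fraction of one image shared with the image on the next line."""
    return 1.0 - line_spacing_m / footprint_m

# Assumed: 24 mm sensor short side, 35 mm lens, 6 m from the facade.
cover = footprint(distance_m=6.0, sensor_dim_mm=24.0, focal_mm=35.0)
print(f"footprint: {cover:.2f} m")
print(f"overlap between adjacent lines: {overlap_fraction(1.0, cover):.0%}")
```

Under these assumptions each image covers roughly 4 m of façade, so 1 m line spacing yields on the order of 75% sidelap, consistent with the goal of sufficient image overlap for photogrammetric alignment.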
Image capture was configured at one shot per second for each sensor to maximize the data obtained. This configuration was standardized for all three sensors in the field, ensuring that images from each sensor type were acquired at the same temporal frequency. This enhanced the consistency of the data collected during the inspection process. In the second stage, the photogrammetric images obtained by each sensor were processed using specialized software, resulting in orthophotos of each façade of the mapped building. This processing was essential to transform the raw data captured by the sensors into precise geospatial products, which served as the basis for analyzing pathological manifestations on the façades.
In the final stage of the methodological procedure, the identification of pathological manifestations on each façade began with the generated orthophotos. The pathologies were analyzed both individually, considering the characteristics of each type of manifestation, and collectively, to compare the information generated by the different sensors used. The analysis was essential for evaluating the characteristics of each sensor, considering image quality and the ability to detect and represent the various pathologies. A schematic representation of the inspection procedure can be seen in Figure 8, where the three stages of the process are illustrated, from flight definition to the analysis of the results obtained.
2.5. Processing of Data Obtained Through Sensors
Image processing to obtain the orthomosaics in a flat projection of each surface was performed using Agisoft Metashape, Version 2.2.0, under a license acquired by the Topography, Cartography, and Geodesy Laboratory (LATOGEO) at the Institute of Geosciences, University of Brasília. A high-performance computer was used to ensure efficient processing, featuring an Intel® Core™ i9-14900K processor (3.20 GHz), 64 GB of RAM, and an RTX 4060 graphics card. Metashape is known for its user-friendly interface, well-defined methodological steps, and streamlined workflow. Its use in processing images from UAVs is extensive, as applied in [54,55,56,57].
This study applied a specific method to generate orthomosaics in a flat projection, an approach widely used for vertical surfaces such as façades. The workflow follows the software’s main 3D reconstruction steps: creating masks (Create Masks), photo alignment (Align Photos), generating a dense point cloud (Build Point Cloud), building the 3D model (Build Model), and, finally, constructing the orthomosaic in flat projection (Build Orthomosaic). A schematic flowchart of the adopted processing steps is shown in Figure 9.
Creating masks was essential to delimit the areas of interest in the original images and reduce potential noise during processing. During photo alignment, the software identifies matches between the captured images and calculates their relative positions in three-dimensional space. For this purpose, key points (features) were detected, which, after comparison, enabled the generation of a sparse point cloud representing the basic structure of the analyzed surface. This step was crucial for establishing the geometric foundation for subsequent products.
After the alignment step, a dense point cloud (Figure 10a) was generated, consisting of millions of points that accurately detail the geometric characteristics of the surface. This cloud was the foundation for creating the 3D model and the orthomosaic. In the 3D model construction step (Figure 10b), the point cloud data were interpolated to create a continuous surface composed of polygons, enabling a faithful representation of the façade’s geometry. This model was then used as the basis for generating the orthomosaics.
For the flat projection, projection planes were defined using markers that delimited the area to be extracted. Three markers were applied to each façade, ensuring an accurate and distortion-free representation of the vertical surfaces (Figure 11).
Data processing varied according to the type of sensor. For thermal images, preprocessing was required in external software, where radiometric “.jpeg” files were converted into the single-band “.tiff” format. Each pixel in this format represents floating-point temperature values, ensuring greater accuracy. This conversion was performed using the open-source software IRimage, which was used to prepare the thermal data to be integrated into the Metashape workflow based on the methodological procedure described in [58].
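The conversion from raw radiometric counts to per-pixel temperatures follows a Planck-curve calibration of the general form used by radiometric thermal cameras. This is a hedged sketch only: the calibration constants below are illustrative placeholders, not the H20T's actual calibration, and tools such as IRimage read the real constants from the image metadata:

```python
import math

# Illustrative Planck calibration constants (placeholders, not H20T values).
R1, R2, B, F, O = 21106.77, 0.012545258, 1501.0, 1.0, -7340.0

def raw_to_celsius(raw_count: float) -> float:
    """Convert a raw radiometric count to an approximate temperature in C."""
    radiance = R1 / (R2 * (raw_count + O)) + F
    return B / math.log(radiance) - 273.15

# Hotter surfaces produce larger counts, hence higher temperatures;
# a float array of such values is what the single-band .tiff stores.
for raw in (15000, 17500, 20000):
    print(f"{raw} -> {raw_to_celsius(raw):.1f} C")
```

Storing the converted floating-point temperatures, rather than the color-mapped JPEG, is what lets Metashape and downstream analyses treat each pixel as a physical measurement.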
For multispectral images, processing followed a workflow similar to RGB images. Nevertheless, it included spectral band analysis using Metashape’s band calculator to generate the Normalized Difference Vegetation Index (NDVI). The NDVI is particularly relevant for façade pathology mapping as it helps to identify areas affected by biological growth, such as moss, algae, or mold, which are common signs of water infiltration or poor maintenance. By analyzing the reflectance differences between the red and near-infrared bands, the NDVI can highlight areas with abnormal vegetation growth, which may indicate underlying moisture issues or building material deterioration. The Parrot Sequoia multispectral sensor captured four leading bands (green—550 nm, red—660 nm, red edge—735 nm, and near-infrared—790 nm) and RGB images. With an integrated sunlight sensor, Sequoia ensured compensation for ambient light variations, improving the consistency of the collected data.
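The band arithmetic performed in Metashape's calculator reduces to the standard NDVI formula, NDVI = (NIR − RED) / (NIR + RED). A minimal sketch with illustrative reflectance values, not Sequoia measurements:

```python
import numpy as np

# Per-pixel reflectance in the red and near-infrared bands
# (illustrative values for a 2x2 patch of facade).
red = np.array([[0.10, 0.40], [0.08, 0.35]])
nir = np.array([[0.60, 0.45], [0.55, 0.36]])

# Normalized Difference Vegetation Index, computed per pixel.
ndvi = (nir - red) / (nir + red)

# Biological growth (moss, algae) pushes NDVI toward +1, while bare
# mortar or ceramic stays near 0, which is the contrast used for mapping.
print(np.round(ndvi, 3))
```

The left-hand pixels above (strong NIR reflectance relative to red) behave like biological growth; the right-hand pixels behave like inert façade material.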
The described workflow proved highly efficient in generating orthomosaics and derived products, meeting the technical requirements for analyzing pathological manifestations on vertical surfaces. The adaptations made, particularly in processing thermal and multispectral data, were essential to ensure the accuracy and quality of the results. Thus, the proposed method proved robust and adaptable to various vertical mapping demands.
3. Results and Discussions
This section presents the results from the models generated for the four lateral façades of the Institute of Chemistry at the University of Brasília (UnB), as illustrated in Figure 2. The façades were classified into two categories based on the lighting conditions during the drone flights: façades under direct solar irradiation and façades in shaded areas. For each façade, data were collected using three distinct sensors: the P1, responsible for capturing RGB images; the H20T, used for thermal imaging; and the Sequoia, employed for multispectral data collection.
3.1. Data Collection and Generation of Orthophotos
The methodological procedure adopted for the flights allowed for the acquisition of orthophotos with a Ground Sample Distance (GSD) ranging from 0.867 mm/pixel to 9.8 mm/pixel, depending on the sensor used. This resolution enabled the identification of smaller pathologies, such as cracks and fissures, which are essential for a detailed analysis of the façade conditions.
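The resolutions reported above are governed by the Ground Sample Distance relation, GSD = distance × pixel pitch / focal length. The pixel pitch and focal length in the sketch are assumed full-frame/35 mm values for illustration, not calibration data from the study:

```python
# GSD: the footprint of a single pixel on the facade.
def gsd_mm(distance_m: float, pixel_pitch_um: float, focal_mm: float) -> float:
    """Facade coverage of one pixel, in millimetres."""
    return distance_m * 1000.0 * (pixel_pitch_um / 1000.0) / focal_mm

# Assumed: 6 m from the facade, 4.4 um pixel pitch, 35 mm lens.
value = gsd_mm(6.0, 4.4, 35.0)
print(f"{value:.3f} mm/pixel")
```

Under these assumptions each pixel covers well under a millimetre of the surface, which is the regime needed to resolve fine cracks; the coarser pitches and shorter focal lengths of the thermal and multispectral sensors push their GSD toward the several-millimetre end of the reported range.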
For all the mapped façades, more than 230 images were collected per sensor, resulting in highly accurate orthophotos. However, this process generated considerable amounts of data, requiring more storage and processing capacity. It is essential to highlight that the multispectral sensor (Sequoia) was responsible for the most significant volume of data collected due to its ability to capture images in four different spectral bands.
Table 1 presents the number of images and the GSD values recorded for each sensor considering the four façades analyzed.
Figure 12, Figure 13, Figure 14 and Figure 15 present the results of the orthophotos generated after processing the images and data collected during the field inspections for each type of sensor used. The results highlight that, in the case of the P1 sensor, it was possible to obtain orthomosaics with adequate geometric accuracy without significant distortions, making it the most consistent for creating the damage map on the façades. In contrast, the Sequoia and H20T sensors showed visual deformations in the geometry of the generated orthomosaic models. However, despite these limitations, the individual images captured by the sensors mounted on the UAV allow for a detailed analysis of the pathological manifestations on the façades.
The distortions identified in the orthophotos generated by the multispectral (Sequoia) and, especially, the thermal (H20T) sensors can be attributed to the intrinsic characteristics of these sensors and the specific conditions under which the data were captured. First, compared to the RGB sensor (P1), the lower spatial resolution of these sensors may compromise the precise identification of standard features during the image alignment process in photogrammetry. This factor is especially critical because the lower density of visual details makes it difficult to automatically match points in adjacent images, resulting in geometric failures in the orthomosaic.
Additionally, environmental factors play a significant role in the quality of the data collected. In the case of the multispectral sensor, variations in lighting, such as shadows or differences in sunlight incidence, can interfere with the uniformity of the images, reducing the precision of the alignment process [59]. On the other hand, the thermal sensor is susceptible to the emissivity characteristics of the materials captured, which can vary substantially between different surfaces of a façade. This variation can lead to inconsistencies in the thermal patterns recorded, complicating geometric processing [60]. This condition is observed when comparing the orthophotos obtained from façades with direct sunlight exposure to those without such characteristics. Another aspect that may have contributed to this geometric condition could be associated with intrinsic distortions in the sensors’ lenses, which are more pronounced in specialized cameras, such as those used for multispectral and thermal captures.
Despite these limitations, it is essential to highlight that the individual images captured by these sensors still provide valuable data for specific analyses, such as identifying spectral and thermal patterns related to pathological manifestations. These sensors serve as a viable alternative to complement the results generated by the RGB sensor. Their ability to capture additional spectral and thermal information enhances the overall diagnostic potential, offering insights that might be overlooked in traditional visual analysis. As such, while the orthophotos may exhibit some geometric distortions, the data from these sensors remain crucial for a comprehensive understanding of the façade’s condition.
3.2. Identification and Analysis of Pathological Manifestations
Based on the orthomosaics generated during the mapping process, combined with the individually captured images, it was possible to perform a detailed analysis of the pathological manifestations on each façade of the building. This procedure allowed for a thorough evaluation of the surfaces considering the different aspects captured by each sensor used.
3.2.1. RGB Sensor
The products generated from processing images captured by the RGB sensor enabled the identification of several pathological manifestations, displayed in Figure 16, Figure 17, Figure 18 and Figure 19. As shown in these figures, it was possible to map pathologies such as cracks and fissures, moisture stains, mold and fungi, the degradation of the plaster covering, blistering, the detachment of ceramic tiles, and efflorescence. With the exception of efflorescence, these anomalies were detected on all the façades analyzed, demonstrating the broad applicability of this sensor for detailed visual inspections.
One of the significant advantages of the analysis was the high spatial resolution provided by the RGB sensor, with a Ground Sampling Distance (GSD) ranging from 0.867 to 0.985 mm/pixel. This resolution was crucial for identifying pathologies of small dimensions, such as fine cracks and fissures, allowing for a detailed visual diagnosis. Additionally, the quality of the generated orthomosaics contributed to the continuous and precise mapping of the façade surfaces.
However, when evaluating cases of ceramic tile detachment, it was observed that the RGB sensor could only identify situations where the tiles had completely fallen off, exposing the substrate. In contrast, tiles that were still adhered but exhibited the “hollow” sound characteristic of partial detachment were not detected. This behavior was expected, as the RGB sensor only provides a surface-level view of the elements and has limitations in identifying pathologies that develop inside materials or those that do not have visible manifestations to the naked eye.
These limitations highlight the importance of combining complementary sensors for more comprehensive and detailed inspections. However, the RGB sensor proved to be a robust tool for the initial detection of visible pathological manifestations, providing essential information for diagnosis and intervention planning on the façades.
3.2.2. Thermal Sensor
The images generated through the survey conducted with the thermal sensor enabled a detailed analysis of pathological manifestations on the façades, as illustrated in Figure 20, Figure 21, Figure 22 and Figure 23. Among the identified pathologies, the following stand out: the detachment of ceramic pieces without falling (with the characteristic “hollow” sound) and areas with infiltration, cracks, and plaster-coating degradation. These results highlight the thermal sensor’s ability to identify thermal variations in different materials and surface conditions.
The thermal orthophotos showed fewer detected pathologies than the RGB sensor results. This is due to the thermal sensor’s specificity: it captures only thermal variations and provides no visual detail about surfaces. Pathologies such as stains or superficial cracks, when not pronounced, may not noticeably alter the thermal behavior and therefore remain invisible in thermal images.
In the case of cracks and fissures, the limitation in identifying all occurrences is directly related to the Ground Sampling Distance (GSD) obtained during the flight with the thermal sensor, which ranged from 6.13 to 8.72 mm/pixel. Since fissures typically have openings of up to 1 mm, this resolution was insufficient to capture such fine features. Sensors achieving a GSD below 1 mm/pixel could improve this detection, allowing for a more detailed analysis.
On the other hand, the thermal sensor efficiently identified hollow ceramic tiles. This result is attributed to its ability to detect variations in heat transfer between tightly adhered tiles and those with adhesion failures. While RGB and multispectral sensors rely on optical properties for identification, the thermal sensor detects thermal differences that reveal internal changes not visible on the surface, such as cavities or voids beneath coatings.
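The contrast between well-adhered and hollow tiles suggests a simple automated screening step: flagging pixels whose temperature deviates markedly from the façade’s baseline. The sketch below is a hypothetical illustration (the `delta_k` threshold is an assumption, and the study itself relied on visual interpretation of the thermal orthophotos):

```python
import numpy as np

def thermal_anomalies(temp_c: np.ndarray, delta_k: float = 1.5) -> np.ndarray:
    """Flag pixels deviating from the facade's median temperature.

    Hollow tiles trap an air gap that alters heat transfer, so they appear
    warmer or cooler than the surrounding cladding. `delta_k` is an
    illustrative threshold in kelvin, not a calibrated value.
    """
    baseline = np.median(temp_c)
    return np.abs(temp_c - baseline) > delta_k

# Synthetic example: a 40x40 facade at 25 C with a 5x5 "hollow" patch at 28 C.
facade = np.full((40, 40), 25.0)
facade[10:15, 10:15] = 28.0
mask = thermal_anomalies(facade)
print(mask.sum())  # 25 anomalous pixels
```

In practice the threshold would have to account for solar loading, emissivity differences between materials, and the time of day of the flight.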
Additionally, the thermal images allowed for the delimitation of structural elements, such as beams and columns, on the façades. This identification is associated with differences in the thermal conductivity coefficient between structural materials (such as reinforced concrete) and enclosure materials. This feature is especially relevant for practical applications, such as structural integrity inspections and energy efficiency diagnostics. Thermal mapping of structural elements can, for example, assist in planning predictive maintenance, identifying thermal bridges, and analyzing building energy performance.
3.2.3. Multispectral Sensor
Based on the images collected by the multispectral sensor, orthophotos of the façades were generated, focusing on the identification of pathological manifestations related to the formation of moss and mold, an application of this type of sensor presented in [
1]. The Normalized Difference Vegetation Index (NDVI) was used for this analysis, and the results are shown in
Figure 24,
Figure 25,
Figure 26 and
Figure 27. The NDVI is widely used in vegetation analysis due to its ability to detect photosynthesizing organisms, such as moss and mold, which contain chlorophyll. The index proved suitable because it exploits the high reflectance of chlorophyll in the near-infrared (NIR) band and its absorption in the red band, characteristics that differentiate these organisms from inert materials.
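For reference, the NDVI is computed per pixel from the near-infrared and red reflectance bands as (NIR - Red)/(NIR + Red). A minimal sketch with numpy follows (the 0.3 vegetation threshold and the reflectance values are illustrative, not figures from the study):

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray, eps: float = 1e-9) -> np.ndarray:
    """Normalized Difference Vegetation Index, in [-1, 1].

    Chlorophyll-bearing organisms (moss, mold, grasses) reflect strongly in
    the NIR band and absorb in the red band, pushing NDVI toward +1, while
    inert facade materials stay near 0.
    """
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + eps)  # eps avoids division by zero

# Illustrative reflectances: one pixel with biological growth (NIR 0.60,
# Red 0.20) among bare render pixels, then an assumed 0.3 threshold.
nir_band = np.array([[0.60, 0.30], [0.28, 0.31]])
red_band = np.array([[0.20, 0.29], [0.30, 0.30]])
growth_mask = ndvi(nir_band, red_band) > 0.3
print(growth_mask.tolist())  # [[True, False], [False, False]]
```

Applied to each pixel of a multispectral orthophoto, the same operation yields the affected-area delineations discussed below.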
From the generated orthophotos, it was possible to identify moisture stains associated with mold and grasses, highlighting the efficiency of the index in delineating these pathologies. The ability to identify moss and mold through the NDVI stems from the detection of spectral differences that reflect the presence of chlorophyll, even in the early stages of growth. This capability is particularly useful for planning preventive interventions before these manifestations cause more severe damage to the structure.
However, the choice of the NDVI as the sole spectral index was influenced by the GSD values obtained for the multispectral sensor, which ranged from 6.49 to 9.80 mm/pixel. While these values were suitable for detecting larger features, such as vegetated areas, they did not allow for the identification of smaller pathologies, such as cracks and fissures, which typically have openings smaller than 1 mm. This limitation emphasizes the importance of considering higher resolutions in studies focused on surface details.
Despite these limitations, the multispectral sensor excelled in identifying moss and mold, allowing for the precise delineation of the affected areas. The higher accuracy in detecting these pathologies can be attributed to the spectral specificity of the NDVI, which effectively maps variations related to the presence of living organisms. This level of detail is valuable for prioritizing interventions on façades and optimizing resources for preventive and corrective maintenance.
3.3. Comparative Analysis Between Sensors
Despite the limitations of the RGB, thermal, and multispectral sensors in identifying specific types of pathological manifestations (as shown in
Table 2), the integration and complementarity of these devices can be of the utmost importance for more comprehensive and accurate analyses. For example, the RGB sensor effectively captures surface-level pathologies, whereas the thermal sensor identifies alterations not visible to the naked eye and related to thermal variations. Meanwhile, the multispectral sensor, using indices such as the NDVI, complements the analysis by mapping pathologies associated with living organisms such as moss and mold, allowing affected and vegetated areas to be identified.
A practical example of this synergy is the detachment of ceramic tiles. While the RGB sensor only detects elements that are already detached or missing, the thermal sensor identifies hollow tiles due to differences in thermal conductivity. Similarly, while the RGB sensor does not detect signs related to living organisms, the multispectral sensor accurately identifies areas colonized by mold or grasses. The combination of this information generates a multifaceted mapping, allowing for greater accuracy in interventions.
In addition, the integration of analyses allows data collected by different sensors to be compared, resulting in a more detailed understanding of the underlying causes of the pathologies. For example, the comparison between thermal and RGB images can assist in correlating infiltrations with surface damage, while multispectral data can confirm the presence of mold associated with moisture stains. These cross-references increase the reliability of the diagnosis and enhance the ability to predict new failures by identifying recurring patterns.
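As a hypothetical sketch of such cross-referencing (the mask names and label scheme below are assumptions for illustration, not part of the study’s workflow), co-registered boolean layers derived from the three sensors can be fused into a single pathology map:

```python
import numpy as np

def fuse_sensor_masks(rgb_defect: np.ndarray,
                      thermal_anomaly: np.ndarray,
                      ndvi_biological: np.ndarray) -> np.ndarray:
    """Fuse co-registered boolean masks (same shape, same grid) into labels.

    0 = sound surface
    1 = visible defect (RGB): cracks, stains, missing tiles
    2 = hidden defect (thermal only): e.g. hollow tiles still adhered
    3 = biological growth (NDVI): moss, mold, grasses
    """
    labels = np.zeros(rgb_defect.shape, dtype=np.uint8)
    labels[thermal_anomaly & ~rgb_defect] = 2  # anomaly with no visible sign
    labels[rgb_defect] = 1
    labels[ndvi_biological] = 3                # biology takes precedence
    return labels

# Tiny 2x2 example: one visible crack, one hollow tile, one mossy pixel.
rgb = np.array([[True, False], [False, False]])
thermal = np.array([[False, True], [False, False]])
bio = np.array([[False, False], [True, False]])
print(fuse_sensor_masks(rgb, thermal, bio).tolist())  # [[1, 2], [3, 0]]
```

The hard part in practice is upstream of this step: resampling the three orthophotos, with their very different GSDs, onto a common grid so that the masks actually align pixel by pixel.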
Another significant benefit of integration is the adaptation to different contexts and the specific needs of each project. In pathologies that do not present apparent signs, such as early infiltrations or microscopic cracks, the combined use of sensors enhances early identification, enabling less costly interventions. Additionally, integrated use can address various objectives, such as the conservation of historical heritage, the preventive maintenance of large structures, or environmental impact assessments.
Despite the advantages, it is essential to consider that sensor integration requires proper planning and technical expertise for data processing and interpretation. The fusion of the collected information demands robust software and advanced geoprocessing and photogrammetry techniques to align and correlate the results from each sensor. The process may involve challenges, such as the need for inter-sensor calibration, adjustments in flight planning, and the standardization of capture conditions, ensuring that the data are consistent and comparable.
In addition, other factors also require attention. The financial costs associated with acquiring sensors, drones, and specialized software are high, making this approach restrictive for some organizations. Furthermore, data processing requires high-performance computers due to the large volume of high-resolution images and the complexity of calculations. Storage also presents a challenge, as the number of images collected and final products, such as orthomosaics and three-dimensional models, take up significant memory capacity. Therefore, careful planning is essential to balance costs and the necessary technological infrastructure with the benefits of using these tools.
4. Conclusions and Recommendations for Future Research
Using different sensors mounted on UAVs to identify various pathological manifestations represents a promising approach in building pathology, particularly in the context of façade inspections. The RGB sensor performs excellently in identifying visually apparent pathologies, such as cracks, fissures, moisture stains, efflorescence, and the detachment of ceramic coatings. This is due to the high spatial resolution achieved, with a GSD ranging from 0.867 to 0.985 mm/pixel, which enabled the capture of the precise details essential for mapping geometrically and visually evident pathologies. However, the sensor showed limitations in identifying hidden defects, such as ceramic pieces that have detached without falling, as its analysis is restricted to the surface conditions of materials.
On the other hand, the thermal sensor proved to be an effective tool for detecting subsurface pathologies, such as the detachment of ceramic tiles with a “hollow” sound, and for identifying infiltrations due to variations in thermal emissivity. Despite these advantages, its lower spatial resolution (GSD ranging from 6.13 to 8.72 mm/pixel) limited the identification of microcracks, a critical aspect in detailed assessments. Additionally, when comparing its results with those from the RGB sensor, it was observed that the number of identified pathologies was lower, which can be explained by the specific nature of the thermal sensor. It does not detect purely visual characteristics and relies on thermal contrasts not always present in manifestations.
The multispectral sensor proved effective in identifying pathologies related to living organisms, such as moss, mold, and grasses, using spectral indices such as the NDVI, which exploit the reflectance signature of chlorophyll. This capability allowed for the precise delineation of affected areas, although the achieved GSD (ranging from 6.49 to 9.80 mm/pixel) was unsuitable for identifying cracks or fissures. This limitation, however, does not undermine the sensor’s effectiveness for identifying vegetation or micro-organisms on façade surfaces.
It was also concluded that integrating RGB, thermal, and multispectral sensors, despite the challenges, offers invaluable potential for façade mapping and diagnostics. This would enable a detailed and holistic analysis that meets a wide range of needs in engineering projects. In conclusion, future studies could investigate methodologies to integrate data from RGB, thermal, and multispectral sensors, using machine learning and artificial intelligence for data fusion and predictive analysis.
Expanding the use of RGB, thermal, and multispectral sensors to other types of structures, such as bridges, tunnels, and dams, offers considerable potential but requires adaptations to the specific challenges of each environment. For bridges, the sensors can be applied to assess surface conditions of decks, beams, and supports, with particular attention to corrosion, cracking, and material degradation, which are crucial for structural integrity. Tunnels, with their confined spaces and variations in humidity and temperature, would benefit from thermal and multispectral sensors for detecting water infiltration, mold growth, and hidden cracks in the tunnel lining. A combination of these technologies could be used for dams to monitor the stability of both the surface and submerged areas, helping to detect erosion, leaks, and shifts in material composition. In each case, optimizing sensor configurations, such as adjusting spatial resolution and sensor calibration, will be necessary to address the distinct characteristics of these structures and the specific types of pathologies they may present.
Another point suggested for future studies concerns incorporating vibrational data into UAV-based diagnostics, representing a promising comprehensive structural-health-monitoring approach. By integrating lightweight accelerometers or vibration sensors with UAVs, it would be possible to assess the dynamic responses of structures such as bridges, dams, and tall buildings to various stress factors, including wind loads, traffic, and seismic activity. Vibrational analysis can help identify issues such as resonance, fatigue, or material degradation that are not visually apparent but critically impact structural integrity. Future studies should explore the feasibility of synchronizing vibrational data with imaging sensors, enabling a combined analysis of surface conditions and dynamic performance. This approach could significantly enhance the capabilities of UAV-based systems in detecting and predicting structural vulnerabilities, contributing to more proactive maintenance and safety protocols.
Finally, exploring techniques to improve the spatial resolution of the RGB and multispectral sensors is recommended, allowing for the detection of microcracks and other small-scale pathologies.