Review

UAV-Embedded Sensors and Deep Learning for Pathology Identification in Building Façades: A Review

by Gabriel de Sousa Meira *, João Victor Ferreira Guedes and Edilson de Souza Bias
Applied Geosciences and Geodynamics (Geoprocessing and Environmental Analysis), Institute of Geosciences, Campus Universitário Darcy Ribeiro, University of Brasília, Brasília 70919-970, Brazil
* Author to whom correspondence should be addressed.
Drones 2024, 8(7), 341; https://doi.org/10.3390/drones8070341
Submission received: 3 April 2024 / Revised: 8 June 2024 / Accepted: 11 June 2024 / Published: 22 July 2024

Abstract: The use of geotechnologies in the field of diagnostic engineering has become ever more present in the identification of pathological manifestations in buildings. The implementation of Unmanned Aerial Vehicles (UAVs) and embedded sensors has stimulated the search for new data processing and validation methods, considering the magnitude of the data collected during fieldwork and the absence of specific methodologies for each type of sensor. Regarding data processing, the use of deep learning techniques has become widespread, especially for the automation of processes that involve a great amount of data. However, just as with the increasing use of embedded sensors, deep learning necessitates the development of studies, particularly those focusing on neural networks that better represent the data to be analyzed. It also requires the enhancement of practices to be used in fieldwork, especially regarding data processing. In this context, the objective of this study is to review the existing literature on the use of embedded technologies in UAVs and deep learning for the identification and characterization of pathological manifestations present in building façades in order to develop a robust knowledge base that is capable of contributing to new investigations in this field of research.

1. Introduction

The use of technologies based on Geographic Information Systems (GISs) in the field of diagnostic engineering has become ever more present in the visual inspection of pathologic manifestations in civil buildings [1,2,3,4,5,6]. These technologies facilitate the identification and characterization of structural elements, among other things, considering aspects such as temperature, anomalies, geometry, and topography [7]. Their main objective is to support the adoption of maintenance measures, both preventive and corrective in nature.
The degree of deterioration of different structures significantly influences the performance of a building, considering that structures directly exposed to environmental agents develop an acute tendency towards degradation over time. Therefore, the execution of periodic inspections is a fundamental component in guaranteeing the physical integrity of buildings.
In the Brazilian context, the inspection of pathologic manifestations in mid-sized to tall structures is routinely conducted through the traditional method of industrial climbing [8]. This method of visual inspection is executed through rope access, with individual technicians using working seats to descend a building and examine its façade in full. A crucial concern with this type of inspection is the substantial risk faced by professionals, particularly the risk of falls. Additionally, due to the low productivity of this activity—given the limited field of vision that a professional has when inspecting a façade—this type of inspection not only leads to inefficient interventions but can also be quite costly.
Considering this scenario, the use of Unmanned Aerial Vehicles (UAVs) provides an extremely interesting solution [9]. Due to their compact size, good flexibility, and low cost, UAVs offer a promising alternative for high-rise building inspection. In this context, the use of this technology provides a valuable tool, particularly for analyzing structures in hard-to-reach areas, with the added advantage of their capability to integrate various sensors, such as RGB, thermal, and multispectral sensors, significantly contributing to more in-depth studies on this matter.
At present, the use of UAVs, as well as embedded sensors, demands the use of software, procedures, and machinery that offer more robust data collection and processing capabilities, considering the vast magnitude of information that can be obtained through these technologies. However, the lack of specific methodologies in this area has made the selection and use of sensors difficult when considering the process of mapping pathologic manifestations, as well as hindering their implementation for structural inspections, which demand a higher degree of precision.
In this line, the search for more robust and precise information-processing methodologies—which allow not only for the management of large volumes of data but also for the validation of new technologies applied to the identification and characterization of pathologic manifestations in civil infrastructure—has become a pressing need. In this context, Ref. [10] emphasized the use of deep learning techniques, which have risen to prominence in the field of engineering due to their capability to automate processes, as well as the viability of processing and validating a great amount of data through deep convolutional neural networks (CNNs).
Deep learning methods have been extensively used in various scenarios related to building pathology, including the inspection of façades. These approaches range from the visual inspection of residential penthouses to the thermal inspection of art installations, as well as bridges and viaducts [11,12,13,14,15,16]. This gamut of applications demonstrates the potential of deep learning in promoting new methodological procedures for building inspections, permitting a more detailed and less costly level of analysis and identification of pathologies.
When compared to conventional methods, deep learning endeavors to optimize and enhance the precision and efficiency of the inspection process, enabling swifter and more accurate identification of anomalies in a structure’s façade. These techniques can swiftly and accurately process extensive data sets, facilitating the early detection of structural failures that might evade manual inspections. These models continuously learn and improve with new data, increasing their accuracy over time. For instance, CNNs facilitate the detailed analysis of high-resolution images, identifying cracks, infiltrations, and other issues more precisely than the human eye [17,18,19,20,21,22,23]. Another significant advantage is the reduction in costs and time, as automation reduces the need for specialized labor and enables continuous monitoring [24,25,26,27,28]. The use of such an approach provides the potential to develop more effective methods for the pathologic inspection of buildings.
When contemplating the implementation of deep learning in a scientific domain, such as civil engineering, several aspects merit attention to fully harness the potential of this technology. These include the best network architecture to be used in a mapping activity (which affects processing time and accuracy), the best procedure in order to train CNNs, the identification of neural networks capable of extracting information in order to classify pathologic manifestations, and the most efficient structuring of databases to feed a CNN. When considering the field of building pathology, selecting the best sensors and equipment, as well as identifying their potential uses and limitations, are crucial factors to consider when developing a deep learning implementation [10,29,30,31].
In this manner, the objective of this study is to offer a broad literature review on the usage of embedded technologies in UAVs and deep learning in order to identify and characterize the pathologic manifestations displayed in a building’s façade. The main idea of this study is to offer a solid knowledge base that can assist future researchers in this field. This review thus considers the historical evolution of such themes as building pathologies, the use of UAVs for façade inspection, and deep learning in a concise manner, with the intention of providing a more comprehensive understanding of the contributions and advances in each researched theme.

2. Building Pathology

Building pathology is a greatly relevant object of study in the field of engineering, particularly concerning its impact on the performance and durability of civil structures. Throughout its construction process and service life, a structure can be subjected to varied damaging agents, including rain, temperature fluctuations, wind, and anthropic interventions. Depending on the duration of the exposure or the magnitude of these factors, these agents can promote the degradation of constructive elements, which, if not treated adequately, can evolve to further stages and may even ruin the structure [32].
The concept of pathology was taken from the health sciences, described by [33] as “the science that studies the origin, the mechanisms, the symptoms, and the nature of diseases”. However, throughout the decades, this term has been used in different knowledge fields, embracing areas such as engineering and architecture and always being adapted to suit each field’s context. Therefore, the term building pathology, according to [34], is defined as “a systematic treatment of construction defects and their correction”. Considering its historical evolution, this concept has been subject to modifications; however, it still maintains its original essence.
According to [35], building pathology can be described as “the engineering field responsible for investigating pathologic manifestations that can occur in a construction”. For [33], “it is the science that systematically studies the defects that occur in construction materials, components, and elements, or in the building as a whole, investigating their origin and trying to understand the onset and evolution of the pathologic process, as well as its forms of manifestation”. For [36], “it is the field of engineering responsible for the study of the causes and mechanisms of anomalies and problems in structures”.
Considering these definitions and that the term’s original conceptual essence remains preserved to this day, in this study, the concept of building pathology will be defined as the field of engineering that studies the occurrence of pathologic manifestations, resulting from either natural or anthropic actions, which have the potential to compromise, even minimally, the performance of civil, urban, or rural infrastructure. Complementarily, Table 1 presents the main terms associated with building pathology, as well as their respective conceptualizations.
Pathologic manifestations, in general, can be classified as simple or compound. The former covers pathologies that can be diagnosed and corrected through simpler surface analysis, such as rebar frame corrosion, render detachment, and paint coating deterioration, which do not require a more robust investigation. The latter concerns manifestations for which diagnosis and treatment require a more thorough analysis, such as problems resulting from water infiltration and foundation settlement, requiring a more in-depth and comprehensive investigation [36].
According to [40], in order to detect and characterize a pathology, it is common to execute a visual or close-range inspection. During this stage, it is important to gather information on the building and its respective state of conservation. According to the same author, when regarding a structure, one of the most important procedures for the study of pathologies is façade inspection. This type of procedure is based on the visual analysis and understanding of an anomalous phenomenon (anomaly) on the external structure of a building.
According to [41], the main pathologies that can be detected through a visual façade inspection (as depicted in Figure 1) include the following:
  • Cracks—characterized by the opening of structural and sealing elements, they can occur due to movements caused by thermal expansion, hygroscopic variation, overload, excessive deformation, foundation settlement, material shrinkage, and chemical alterations. In general, these openings are an important indication of structural damage and can be classified by their size: fissure, up to 1 mm; crack, 1–3 mm; fracture, above 3 mm [42] (a simple thresholding sketch of this classification is given after this list).
  • Humidity stains—characterized by an excess of dampness in a certain point or extension of a construction. In general, this manifestation is associated with a lack of waterproofing or existing deficiencies in drainage and plumbing elements. Regarding the presence of humidity stains in façades, Ref. [41] highlighted that, aside from the causes previously mentioned, they can also occur due to excess dirtiness, the growth of micro-organisms, deposition of calcium carbonate over surfaces, and vandalism.
  • Detachment of ceramic and render—characterized by the separation of a coating from its surface. In cases regarding ceramic pieces, this type of detachment occurs when the system’s adhesive resistance is inferior to the tension acting on it. When an anomaly occurs in the render, its causes can be associated with external agents, execution problems, and the end of the service life of that material [43].
  • Degradation of paint covering—a pathology commonly associated with the end of the service life of material and problems related to paint dosage [40].
  • Damage in opening elements (windows and doors)—manifestations associated with damage to elements such as windows and doors, mainly involving the degradation of their component materials and defects in their installation [41].
  • Damage to the top of the building (exposed slab and roof)—characterized by the deterioration of slabs and roof tiles, which mainly culminates in infiltrations. These pathologies occur through the appearance of cracks, broken tiles, problems in the installation of rain drainage elements and in the waterproofing system, and the action of damaging agents [16].
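The width thresholds cited for cracks can be turned into a simple screening rule. The Python sketch below is purely illustrative and assumes that opening widths have already been measured (e.g., from calibrated imagery); the function name is hypothetical, while the thresholds follow the fissure/crack/fracture classification in [42].

```python
def classify_opening(width_mm: float) -> str:
    """Classify a façade opening by measured width, following the
    fissure/crack/fracture thresholds cited in the text (illustrative sketch)."""
    if width_mm <= 1.0:
        return "fissure"    # up to 1 mm
    elif width_mm <= 3.0:
        return "crack"      # 1-3 mm
    return "fracture"       # above 3 mm

# Example: widths (in mm) estimated from inspection imagery
for w in (0.4, 2.1, 5.0):
    print(w, classify_opening(w))
```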

3. Theory of Building Pathologies

Although the theme of building pathology may seem recent, its existence dates back centuries. Historical records reveal that concern with construction defects and problems began in remote times, highlighting the importance of this field of study. One example of this can be found in the Code of Hammurabi, a collection of laws compiled in Babylonian Mesopotamia around 2200 B.C. In this code, it is possible to note the provision of punishments for construction errors that could lead to a building’s ruin [34,44].
Still within this historical context, references to the pathology of constructions can be found in the Bible. In the Gospel of Matthew (Matt. 7:24-27), there is a parable about builders, in which one of them builds his house upon rock, while the other builds upon sand. In this passage, it is possible to see references to good building practices concerning the foundations of constructions, intended to withstand damaging agents (e.g., rain and wind, among others) that could promote the manifestation of pathologies and lead a building to ruin. These historical examples show that the theme of building pathology has been a relevant topic for centuries, highlighting the necessity of continuously undertaking studies and developing practices that can guarantee the safety and durability of constructed structures.
In the international sphere, with the advancement of engineering, new enterprises with different degrees of complexity and technical challenges have developed. Therefore, with the idea of guaranteeing the maintenance and durability of not only historical buildings but also newer constructions, there has also been a rising interest in the theme of building conservation and maintenance.
According to [45], in the year 1877, William Morris—through the Society for the Protection of Ancient Buildings (SPAB)—wrote a document called “The SPAB Manifesto”, with the main objective of establishing better practices to be adopted by architects and engineers in order to guarantee a longer service life for historical monuments, particularly those that had been suffering from serious degradation problems due to long periods of exposure to damaging agents.
Later, in 1964, with more studies dedicated to the conservation and maintenance of historical monuments and archeological sites, the International Council on Monuments and Sites (ICOMOS) published “The Venice Charter”. This document’s objective was to introduce the principles of monument conservation in order to establish directives and norms for this specialized area [45].
In 1982, the Building Research Establishment (BRE) developed an encompassing database named Defect Action Sheets, which provided a solid source of information regarding the prevention and correction of possible building anomalies to professionals at the time. According to [46], this database was composed of 144 files that contained information concerning the description and characterization of anomalies, preventive measures to be adopted, and complementary information on particularities that a building could present.
In 1985, after an event called 1° Encontro de Conservação e Reabilitação de Edifícios Habitacionais, the Laboratório Nacional de Engenharia Civil (LNEC) published a report on construction pathologies, and this report established the methodologies used to evaluate pathologic manifestations in buildings. It also offered a series of technical assessments classified into three categories: defects related to structure, defects not related to structure, and problems in systems and equipment [46]. After this period, new methodologies and procedures were developed by different institutions in order to fulfill the demand for new materials and constructions that emerged with the progress of civil engineering. Table 2 provides a brief timeline of the main characteristic methods introduced in the historical context of building pathology.
In Brazil, the field of building pathology has been significantly influenced by international regulations. In this context, the main methodological processes related to this theme have been developed by technical and regulatory institutes, such as Instituto Brasileiro de Avaliações e Perícias de Engenharia (IBAPE) and Associação Brasileira de Normas Técnicas (ABNT). These institutes play an important role in establishing directives and regulations that guide the practice of diagnostic engineering.

4. Inspection of Building Façades

A building inspection is a technical procedure in the field of civil engineering with the objective of evaluating the structural and functional conditions of a building, as well as its security and conservation. This method allows for the identification of possible pathologies or anomalies, with the intention of elaborating a maintenance plan—either preventive or corrective—in order to avoid the worsening of already existing manifestations and guarantee the security and durability of a structure. Building inspection has many applications, from the pre-sale examination of properties to audits of complex structures such as bridges and tunnels [40].
According to [37], building inspection can be classified into three levels, according to the complexity of the object evaluated and the level of specificity in the execution process. The first level of inspection is characterized by the execution of a simple survey of facts and constructive systems, identifying the anomalies and failures that are visibly present. At this level, an isolated or combined verification of the building’s systems is conducted, classifying the pathologies found according to their risk level with respect to user safety, the building’s habitability, and the preservation of constructed assets.
A Level 2 inspection can occur in buildings of average complexity concerning the maintenance and operation of their construction systems. In general, this type of inspection is conducted in buildings with multiple levels, with or without a maintenance plan. Level 3 refers to an inspection conducted in buildings of high technical complexity, again concerning the maintenance and operation of its construction system, but normally considered to be more sophisticated. In general, this type of inspection is applied to buildings with a great number of levels or automated construction systems.
According to [36], based on the type of inspection conducted, anomalies are categorized as low, moderate, or high risk, depending on the extent of recovery needed and the risk posed to users, the environment, and the asset. They can be described as follows:
  • High risk—significant damage to the health and safety of users and the environment, leading to substantial loss of performance and functionality, possible shutdowns, high maintenance and recovery costs, and noticeable compromise in service life.
  • Moderate risk—partial loss of performance and functionality in the structure without directly impacting system operations, accompanied by early signs of deterioration.
  • Low risk—potential for minor, aesthetic impact or disruption to planned activities, with minimal likelihood of critical or regular risks, as well as little to no impact on real estate value.
Among the various existing types of inspection, façade inspection is widely used for civil constructions, such as buildings. According to [40], this type of inspection refers to the process of evaluating the external condition of constructions in order to identify and correct pathologic manifestations that could eventually hinder the durability of a structure.
Regardless of the risk level, various methodologies can be used to execute a façade inspection. In this context, conventional methods, such as rappelling down a façade to verify possible points with adherence issues, or more modern alternatives, such as the use of an Unmanned Aerial Vehicle (UAV) and sensors to identify detached ceramic or areas of infiltration [46], can be highlighted. Figure 2 depicts both methods.
The process of a conventional inspection involves qualified rope technicians rappelling down a structure and requires a series of precautions. Initially, these professionals make use of personal protective equipment and rappelling techniques in order to access different points on the façade of a building. During the inspection, a visual analysis is carried out in order to search for elements such as cracks, detachment of ceramic, and degradation of render. The main advantages of this approach include the possibility of inspecting each specific detail in a precise manner and the ability to identify problems that are not easily noticeable, such as the detachment of render. Despite these benefits, this method works better for small areas of investigation and can demand a longer period of execution when evaluating a mid- to large-sized structure. Furthermore, the rappelling process must be executed by experienced professionals with the required qualifications in climbing and rappelling, and such professionals are scarce in the marketplace. Another important issue is the high risk of accidents during the inspection, considering the heights involved and possible falls [47].
Regarding modern methods of inspection, the advancement of technology has facilitated the use of UAVs and sensors for façade inspection. These tools allow high-resolution images and videos to be captured, offering new scenarios and conditions of investigation. In brief, the process of investigation through this method starts with the technical inspection of the aircraft and ends with the configuration of the image/scenery being captured through sensors. According to [41], the definition of this scenery is considered a critical step of the process, defined by the angle of capture, the flight trajectory, the distance for data acquisition, and the density of captured data points.
According to the same author, this flight must be executed in a way that generates a spatial view of the whole building that is both precise and accurate in such a manner that the density of capture points is sufficient to avoid distortions or the absence of information. It is also recommended that the angle of the sensors is situated between 45° and 90° during data acquisition, with an overlap of images of at least 80% [48].
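As a rough illustration of how the 80% overlap recommendation translates into flight planning, the sketch below computes the spacing between successive image centres from the ground footprint of one frame. The sensor and stand-off values are hypothetical placeholders, not parameters taken from the cited studies.

```python
def ground_footprint(sensor_width_mm: float, focal_length_mm: float,
                     distance_m: float) -> float:
    """Width of the façade area covered by a single image (in metres)."""
    return sensor_width_mm / focal_length_mm * distance_m

def capture_spacing(footprint_m: float, overlap: float = 0.80) -> float:
    """Distance between consecutive image centres for a given overlap ratio."""
    return footprint_m * (1.0 - overlap)

# Hypothetical example: 13.2 mm sensor width, 8.8 mm lens, 15 m stand-off
fp = ground_footprint(13.2, 8.8, 15.0)    # ≈ 22.5 m of façade per frame
print(capture_spacing(fp, overlap=0.80))  # ≈ 4.5 m between exposures
```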
The use of UAVs and sensors permits the identification of the same pathologic manifestations found through the traditional method while providing a wider field of spatial vision and a shorter execution time, thus offering higher productivity at a lower cost. Besides these advantages, it also reduces risks for professionals, considering that the entire inspection process is conducted using the UAV.
However, it is possible to notice that the use of these technologies has limitations, especially concerning aircraft flight autonomy and the possibility of not detecting small pathologies due to the resolution of the embedded sensors [47]. Figure 3 shows a comparative analysis of the advantages and limitations of façade inspections executed with UAVs and sensors and conventional means involving rope technicians.
As depicted in Figure 3, traditional methods allow for a thorough assessment, thus identifying subtle signs of structural issues and deterioration. Moreover, these traditional methods often enable physical tests and sample collection for laboratory analysis, providing more precise data on the building’s health.
However, traditional methods can be time-consuming, expensive, and, in some cases, dangerous. The need for physical access to hard-to-reach areas can increase the risk for inspectors, especially when inspecting elevated or hazardous structures. Additionally, subjective interpretation of the collected data can lead to inaccurate or incomplete assessments of the building’s condition.
On the other hand, the use of drones for building inspection offers several advantages. Drones can easily and safely access hard-to-reach areas, reducing the risk posed to inspectors. Furthermore, they can capture high-resolution images and precise data quickly and efficiently, allowing for a comprehensive assessment of the building’s conditions within a short period of time. This can result in cost and time savings for building owners and managers.
However, drones also have their limitations. Their use is dependent on the weather conditions, as they can be affected by strong winds or rain, which may limit their effectiveness in certain situations. Additionally, interpreting the data collected by drones still requires human skills and expertise in order to ensure an accurate assessment of the building’s conditions.
In summary, both traditional methods and the use of drones have their place in building inspection. The choice between them will depend on the specific needs of the project, site conditions, and resource availability. Ideally, an integrated approach that combines the best of both methods can offer a comprehensive and accurate assessment of the building’s conditions, thus ensuring its safety and durability in the long term.

5. The Use of Embedded Sensors in UAVs for Façade Inspections

UAVs have become more commonly used in building inspections, offering higher productivity and better quality in the products developed. Among their main applications, it is possible to observe the use of these aircraft for inspections in the areas of renewable energy, infrastructure, and environmental monitoring. Table 3 shows examples of research in different engineering fields in which UAVs have been used for inspection.
Due to their wide range of applications and possibilities, UAVs have gained a good reputation as a promising technology for the evaluation and monitoring of building façades. Research in this area has shown a marked rise in their use throughout the period from 2015 to 2022, as can be seen in Figure 4, indicating a future projection of growth. It is important to emphasize that the number of publications in Figure 4 (vertical axis) reflects data obtained from Web of Science, using the following search string: “((UAV OR unmanned aerial vehicle) AND deep learning) OR ((UAV OR unmanned aerial vehicle) AND inspection) OR ((UAV OR unmanned aerial vehicle) AND (thermal sensor OR ultrasonic sensor))”.
According to [82], sensors are systems responsible for converting energy from objects, registering it either as an image or a graphic, permitting the association of radiance, emissivity, and ambient backscatter distribution with their physical, chemical, biological, or geometric properties. Embedded sensors in UAVs can be classified as optical or “passive”, capturing the “light” reflected from targets through visible light, multispectral, hyperspectral, or thermal cameras. Alternatively, they can be of the laser or “active” type (LiDAR, “light detection and ranging”), emitting laser pulses in order to measure the return time of reflections. In this case, these sensors emit “light” in order to conduct the necessary measurements. According to [83], embedded technologies in UAVs (sensors) have been adopted for inspection and monitoring procedures in order to evaluate anomalies in civil enterprises.
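For the “active” case, the ranging principle mentioned above can be written explicitly. The sensor measures the round-trip travel time Δt of each emitted laser pulse and converts it into a range using

R = c · Δt / 2

where c is the speed of light (≈ 3 × 10⁸ m/s) and the factor of two accounts for the pulse travelling to the target and back. Accumulating many such range measurements at known scan angles is what produces the point cloud discussed later in this section.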
Figure 5 shows both types of sensors mentioned previously, while Figure 6 details the pathologic manifestations that each has the potential to identify in building façades.
A detailed explanation of the main applications of UAVs and embedded sensors in façade inspection is given in the following subsections.

5.1. Tridimensional Mapping of Buildings

Images acquired using the embedded sensors in UAVs allow not only visual inspection but also the configuration of tridimensional models of the façades of a building (Figure 7). According to [86], in order for this product to be developed, a few guidelines should be followed for mapping the field of investigation: space delimitation and flight height (the recommendation is to follow the cast-shadow principle), the definition of control points throughout the mapped space, and choosing a sensor with a resolution compatible with the object to be identified. For studies that demand a higher level of precision, the use of a LiDAR sensor with an image overlap rate of at least 80% is recommended.
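One practical way to check whether “a sensor with a resolution compatible with the object to be identified” has been chosen is to estimate the ground sampling distance (GSD) of the imagery using the standard photogrammetric relation GSD = pixel size × distance / focal length. The sketch below is illustrative only, and the camera values are hypothetical.

```python
def gsd_mm(pixel_size_um: float, focal_length_mm: float, distance_m: float) -> float:
    """Ground sampling distance (mm per pixel) at a given camera-to-façade distance."""
    return pixel_size_um * 1e-3 * (distance_m * 1e3) / focal_length_mm

# Hypothetical camera: 2.4 µm pixels, 8.8 mm lens, 10 m from the façade
gsd = gsd_mm(2.4, 8.8, 10.0)
print(f"GSD ≈ {gsd:.1f} mm/pixel")   # ≈ 2.7 mm per pixel
# A 1 mm fissure would span less than one pixel here, so a closer flight
# or a higher-resolution sensor would be needed to detect it reliably.
```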
After fieldwork acquisitions, the collected images are processed through specialized software that allows for the use of photogrammetry techniques in order to reconstruct the tridimensional geometry of a structure. This reconstruction offers a more holistic view of the structure, permitting a more in-depth analysis of its constructive elements and patterns of degradation.
LiDAR sensors have gained interest in the field of civil engineering, especially due to their precision and the quality of the generated product. According to [48], besides these characteristics, these sensors capture a great quantity of data samples (point cloud), which permits the reconstitution of structures to the centimeter level. In this manner, it is possible to identify—among other characteristics—the existence of pathologies of smaller dimensions, which would hardly be detected when using sensors with lower resolution, such as Red, Green, and Blue (RGB) sensors.
According to the same authors, another important benefit of LiDAR is the ability to obtain a three-dimensional point cloud even during night-time. As it is an active sensor, the absence of a natural source of sunlight does not hinder its field use, and so, the occurrence of climatic events (e.g., cloudy days) or a low incidence of sunlight are not limiting factors for this type of sensor.
Despite its many advantages, the use of LiDAR for mapping building façades also has limitations, including its high cost and the short range of operation between the sensor and mapped object [88]. Table 4 lists other advantages and limitations regarding the use of LiDAR-embedded UAVs for inspections.
Examples of LiDAR-embedded UAVs for building façade inspection can be found in the studies developed by [86,89,90,91,92].

5.2. Thermal Inspection

Thermal sensors permit the capture of infrared images, registering radiation that is invisible to the human eye. This approach has been widely used for infrastructure inspection, as it allows for the identification of different layers, highlighting alterations caused by the natural degradation of materials. This method offers significant advantages, not only in terms of the thermal evaluation of the urban environment but also by facilitating the inspection of pathologies and the assessment of the thermal insulation of different types of buildings [93].
Infrared radiation is a form of electromagnetic energy travelling at the speed of light, emitted and absorbed by all objects with temperatures above absolute zero (−273 °C). According to [94], the greater the emitted radiation, the higher the surface temperature.
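This relationship between emitted radiation and surface temperature is described by the Stefan–Boltzmann law, which gives the total power radiated per unit area of a surface with emissivity ε at absolute temperature T (in kelvin):

M = ε · σ · T⁴, with σ ≈ 5.67 × 10⁻⁸ W·m⁻²·K⁻⁴.

Thermographic cameras exploit this relation, together with an assumed emissivity for the façade material, to convert the measured infrared radiance into apparent surface temperatures.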
Thermal inspection through the use of UAVs offers an efficient technique for building analysis. In this type of inspection, a thermographic camera is embedded in a UAV, permitting the acquisition of infrared images of a building’s façade. These images illustrate the distribution of temperature over a building’s surface (Figure 8). The identification of pathologic manifestations is then carried out, considering the variations in temperature over this surface [95].
Among the main pathologic manifestations that can be observed in a thermal image, the presence of areas with low energy efficiency—which can result in energy loss to the environment—can be highlighted. Additionally, thermal inspection permits the detection of humidity—be it from water infiltration or leaks—which can lead to the proliferation of fungi and mold, potentially compromising the durability of a structure. These manifestations appear in thermal images as anomalous hot or cold spots, indicating the necessity of interventions or repairs [4].
As with any other method of investigation, thermal inspection offers advantages and limitations (Table 5). According to [6], the use of a thermal sensor permits the identification of pathologic manifestations in a non-destructive manner. Therefore, it is possible to discover the origin of a structurally internal anomaly that otherwise would not be detected through a visual inspection. The authors also highlighted that, likewise, it is possible to inspect hard-to-access areas, considering the mobility of investigation when using a sensor-embedded UAV.
However, it should be emphasized that thermal sensors in UAVs also have limitations that can hinder the development of an investigation. According to [4], the interpretation of thermal images requires specialized technical knowledge in order to distinguish normal temperature variations from anomalous temperatures in a façade. Another factor worthy of attention concerns aspects connected to environmental issues, such as solar radiation or wind influence. In order to identify the temperature variation indicating a structural anomaly, a strong emission of heat from the evaluated surface is necessary, limiting the use of sensors in periods of intense solar radiation.
Examples of thermal sensors embedded in UAVs for façade inspection can be found in the studies by [86,96,97,98,99].

5.3. Inspection with RGB Sensors

RGB sensors have been widely used for building façade inspection. This type of sensor permits the acquisition of images formed from the three main bands in the visible light spectrum: red, green, and blue (Figure 9). Combining these bands permits detailed photographic images to be captured, which can assist in the identification of different characteristics in each façade [100].
When capturing images, two main categories of RGB sensors can be considered: global shutter and rolling shutter. According to [101], a global shutter captures all of the lines in an image simultaneously, guaranteeing a precise representation of a scene at a given instant in time. Due to this characteristic, aside from the capture of fixed objects, its use is also recommended for registering moving elements, as this method of capture lessens the occurrence of distortions. On the other hand, a rolling shutter sensor captures the lines in an image in sequence; that is, unlike the previous sensor, the pixels in an image are not captured simultaneously. Due to this characteristic, its use is not recommended for capturing moving objects, considering that this dynamic action may lead to image distortions.
The use of RGB sensors for façade inspection permits the identification of a gamut of pathologic manifestations, including the detachment of ceramic plates [102], humidity stains [46], paint degradation [103], and cracks [104], among others. When analyzing images obtained through RGB sensors, it is possible to evaluate the severity of pathologic manifestations and propose the necessary corrective measures.
Among the advantages of using RGB sensors is their capacity for capturing clear images at high resolution, which facilitates the identification of façade anomalies. Additionally, this type of sensor can be readily purchased at an accessible price compared with other sensors, such as LiDAR, thermal, and multispectral sensors. However, RGB sensors present limitations that should be taken into consideration during an inspection; for example, they are sensitive to variations and contrast under different light conditions. Another point worthy of attention concerns the resolution of the spatial sensor, which can limit the identification of pathologic manifestations with smaller dimensions [46].

5.4. Inspection with Multispectral and Hyperspectral Sensors

A multispectral sensor captures images in different bands of the electromagnetic spectrum besides visible light (red, green, and blue), permitting the analysis of different wavelength bands. The principle that governs this sensor’s operation is established through the detection and registration of electromagnetic radiation reflected by objects, which is then processed in order to offer information about those inspected objects [99].
According to [83], there are two types of sensors that operate beyond the band of visible light and can be used in UAVs: multispectral and hyperspectral sensors. As they explain, a multispectral sensor acquires images in a limited number of narrow bands separated by spectral segments, while a hyperspectral sensor—while working in a similar manner to the multispectral one—allows for the capture of images with a far superior number of spectral bands, thus facilitating a more detailed analysis of material characteristics. These sensors are capable of identifying subtle changes in reflectance patterns and can be used for more precise and specialized analyses.
Due to these characteristics, images acquired using multispectral sensors offer data on the interaction between an object’s surface and electromagnetic radiation in various wavelengths. Most space programs in operation integrate multispectral imaging systems in the visible light (VIS), near-infrared (NIR), shortwave infrared (SWIR), and thermal infrared (TIR) bands [82]. These sensors permit an encompassing analysis of a target’s spectral characteristics, contributing to the acquisition of detailed information on the properties of an object on Earth’s surface.
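As a purely illustrative example of how such bands can be combined to reveal subtle changes in reflectance patterns, the sketch below computes a generic normalized-difference index between two co-registered bands (e.g., a visible band and a near-infrared band). The band choice, random input data, and flagging threshold are hypothetical and are not taken from the cited studies.

```python
import numpy as np

def normalized_difference(band_a: np.ndarray, band_b: np.ndarray) -> np.ndarray:
    """Generic normalized-difference index, (A - B) / (A + B), in [-1, 1]."""
    a = band_a.astype(np.float32)
    b = band_b.astype(np.float32)
    return (a - b) / np.clip(a + b, 1e-6, None)   # avoid division by zero

# Hypothetical co-registered reflectance bands of a façade (e.g., green and NIR)
green = np.random.rand(256, 256).astype(np.float32)
nir = np.random.rand(256, 256).astype(np.float32)
index = normalized_difference(green, nir)
anomaly_mask = index > 0.3      # illustrative threshold for closer inspection
print(anomaly_mask.mean())      # fraction of pixels flagged
```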
According to [105], the use of a multispectral sensor permits the identification of pathologic manifestations in a non-destructive manner. Therefore, it is possible to discover the origin of a structurally internal anomaly that otherwise would not be detected through visual inspection. The authors also highlight that, likewise, it is possible to inspect hard-to-access areas, considering the mobility of investigation when using a sensor-embedded UAV.
When discussing the benefits of utilizing multispectral and hyperspectral sensors, ref. [106] noted that hyperspectral sensors offer a broader range of spectral data compared to traditional sensors. This broader spectrum can be particularly advantageous in identifying pathologies that may not be visible to the naked eye. The ability to capture data across multiple spectral bands allows for a more nuanced and precise analysis when compared to sensors such as RGB sensors.
However, despite their advantages, hyperspectral sensors also come with significant drawbacks. The substantial volume of data generated by these sensors can overwhelm analysis and processing systems, necessitating sophisticated methods to manage this complexity. Additionally, the higher costs associated with hyperspectral sensors may restrict their economic feasibility, particularly in sectors with limited budgets. The complexity of both data acquisition and analysis further complicates matters, requiring users to possess a higher level of training. Moreover, despite the potential of hyperspectral sensors, the limited amount of research in this field suggests that there are still obstacles to be overcome before these technologies can be widely adopted and applied [106,107,108,109].
Table 6 provides a summary of the benefits and drawbacks associated with using these sensors in data acquisition processes related to building pathology.
Examples of the use of multispectral and hyperspectral sensors in UAVs can be found in the studies developed by [110,111,112]. In general, many studies have focused on the theme of façade pathology, which demonstrates an important scientific field to be explored in order to provide opportunities for the development of new façade inspection techniques.

6. Cost Comparison between Conventional and UAV Inspections

The use of UAVs has gained attention in the construction field due to the associated economic advantage over other conventional methods of building inspection. Using previous studies, for example, those executed by [113,114,115,116], it is possible to evaluate the impact of UAV use for inspection when examining topics such as personnel, security, cost, and precision in comparison to traditional methods that use scaffolding and rappelling (i.e., industrial climbing).
According to [117], façade inspection with UAVs results in a cost at least three times lower than rappelling, while also offering reductions in working time, personnel, and risks of accidents. When comparing precision, the result depends on the model of the UAV and the quality of the sensor; in the case of rope technicians, precision depends on physical ability and technical qualification.
Furthermore, traditional methods such as scaffolding involve significant labor and material costs. The erection and dismantling of scaffolding can be time-consuming and require a skilled workforce, leading to higher labor costs. Additionally, the materials themselves can be expensive, and their usage impacts the overall project budget. In contrast, while UAVs require an initial investment in technology, the operational costs are relatively low. The reduction in the need for extensive labor and the speed at which UAVs can perform inspections translate into substantial cost savings [113].
In terms of operational efficiency, UAVs can cover large areas in a fraction of the time it takes for traditional methods. This rapid data collection is not only more cost-effective but also allows for more frequent inspections, leading to better maintenance and the early detection of potential issues. The use of advanced sensors and imaging technology further enhances the quality and accuracy of the data collected, which can be processed and analyzed using sophisticated software, thus providing detailed insights that are often superior to those obtained through manual inspections [114].
Moreover, the safety benefits cannot be overstated. Traditional methods pose significant risks to personnel, including falls from height and other physical hazards. UAVs mitigate these risks by removing the need for human workers to operate in dangerous environments. This not only enhances worker safety but also reduces costs associated with insurance and accident-related downtime [115].

7. Artificial Intelligence

The presence of artificial intelligence (AI) in our daily lives has become increasingly evident, whether in the development of more complex technical activities such as vehicle automation or industrial processes or in simpler tasks such as writing texts and creating content. Examples of AI use have been reported in studies developed by [61,118,119,120].
According to [121], the main objective of artificial intelligence is to develop systems capable of executing activities that, in theory, require a human to perform. Thus, the key prerogative is to create and develop machines and systems that can assist in decision-making, allow for self-learning, and autonomously solve problems. Despite this context, ref. [122] emphasized that the idea of creating an AI is not to replace human intelligence but, instead, to apply its problem-solving and task-undertaking characteristics in a more efficient and precise manner, besides making it possible to execute activities with a degree of complexity that would be too high for a human to carry out individually.
Although AI has experienced recent growth, both technological and media-related, artificial intelligence dates back to the 19th century. In this period, the search for a way to represent and understand human thinking and action began, initially with the studies of George Boole, who, through the development of his Boolean logic, attempted to represent human reasoning in mathematical terms. The modern idea of AI emerged around the 1940s, when Warren McCulloch and Walter Pitts demonstrated the possibility of calculating any computable function using a series of neural networks and that any logic gate could be represented through a simple neural network [123].
In the 1950s, Marvin Minsky, co-founder of MIT’s Artificial Intelligence Laboratory, created the first neural network simulator called SNARC. This simulator proved valuable in adjusting its synaptic weights automatically, even though it could not process information considered to be relevant. In the 1980s, there was significant advancement and improvement in the field of neural networks. During this period, the first research on computational neuroscience was conducted by the scientist Skurnick [123].
According to [123], computational neuroscience became especially relevant in 1986 due to a study conducted by psychologist David Rumelhart alongside Professor James McClelland. They proposed a mathematical and computational model that allowed for the supervised training of artificial neurons, creating an unrestricted global optimization algorithm.
From the 1990s onwards, with the advancement of computational neuroscience, many research institutes and companies began to designate specific departments for the development and improvement of artificial intelligence. With its refinement, new concepts and approaches were developed in order to answer the global demand for ever more complex problem-solving [124]. In this context, areas such as deep learning emerged, influencing constant updates in the methodological scenario of fields such as civil engineering.

8. Deep Learning

According to [121], an artificial neural network can be defined as a computational structure connected by simple processing elements (neurons) that attempt to simulate information processing in a manner similar to that occurring in the human brain. In general, these networks communicate through synaptic connections and execute operations such as data processing. In terms of computational architecture, the number of layers used in a neural network characterizes, and constrains, the type of problem it can address. Among the main characteristics inherent to an artificial neural network, the possibility of implementing learning algorithms can be highlighted.
These algorithms, according to [125], allow a network to make new deductions after training executed on a collection of information (database). As the database is complemented, the neural network is refined, resulting in higher accuracy. According to the same authors, deep learning is a subdivision of the artificial intelligence field, which uses a group of artificial neural networks that require a great volume of data in order to carry out the training process. With this type of network, a machine is able to learn its own representative characteristics in an automatic manner; however, a greater computational processing effort is required.
According to [24], deep learning techniques are primarily based on the use of Deep CNNs to process information, such as numeric data and recognized objects. This type of network architecture typically comprises three levels of layers: the first is the input layer, where information is inserted; the second is an intermediate or hidden layer, in which the activation functions of the system are defined; and the third and final layer is the output layer, where the final result of the processing is presented.
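A minimal PyTorch sketch of the three-level structure just described (input layer, hidden layer with an activation function, output layer) is shown below; the layer sizes and number of output classes are arbitrary placeholders.

```python
import torch
import torch.nn as nn

# Input layer -> hidden layer (with activation) -> output layer,
# mirroring the three levels described in the text (sizes are illustrative).
model = nn.Sequential(
    nn.Linear(128, 64),   # input layer: 128 features in, 64 hidden units
    nn.ReLU(),            # activation function of the hidden layer
    nn.Linear(64, 4),     # output layer: e.g., 4 pathology classes
)

x = torch.randn(8, 128)   # a batch of 8 illustrative feature vectors
print(model(x).shape)     # torch.Size([8, 4])
```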
Unlike simple neural networks, deep learning allows developers to create networks with a greater number of hidden layers, enabling more complex information processing at different levels of hierarchical abstractions with a larger volume of interactions. Among the main methods, convolutional neural networks (CNNs) can be highlighted. CNNs are particularly effective for tasks involving image and video recognition, classification, and segmentation due to their ability to automatically and adaptively learn spatial hierarchies of features through backpropagation [126].
Figure 10 illustrates the typical architecture of conventional neural networks and deep learning networks, highlighting the differences in layer complexity and depth. This distinction facilitates the enhanced capability of deep learning models to handle intricate and voluminous data, enabling breakthroughs in AI-driven applications.
Furthermore, the success of deep learning approaches can also be attributed to the availability of large data sets and advancements in computational power, particularly with the use of graphics processing units (GPUs). This has enabled the training of very large and deep networks, which would have been computationally prohibitive in the past [106].
In the context of practical applications, deep learning has significantly impacted various industries. For example, in the field of civil engineering, it has been used for infrastructure inspection through image analysis (e.g., identifying cracks in bridges and buildings) [128,129,130,131,132]. In construction projects, deep learning algorithms are critical for interpreting sensor data and predicting material deterioration [133,134,135,136,137,138]. In urban planning, these techniques have been used to enhance traffic management systems by analyzing circulation patterns and optimizing traffic flows [139,140,141,142,143].

9. Convolutional Neural Network (CNN)

A convolutional neural network (CNN) is a machine learning model designed to identify objects in images. To ensure its efficiency, a robust database is required. According to [104], CNNs process image input data through a series of filters, determining the similarity between parts of an image and the applied filters. This involves activation functions for non-linear data processing. Figure 11 illustrates the schematic of a convolutional network. Given their ability to process image data, CNNs have been extensively studied in the geoprocessing and remote sensing fields, as noted by [10].
CNNs are a class of deep neural networks that are specialized in processing data with a grid-like topology, such as images. They operate by passing the input image through multiple layers of convolution, pooling, and fully connected layers, each extracting increasingly complex features. Convolution layers apply a set of filters to the input, detecting edges, textures, and patterns. Pooling layers reduce the spatial dimensions, making computation more efficient while preserving essential features. Activation functions, such as ReLU (Rectified Linear Unit), introduce non-linearity, enabling the network to learn more complex representations [130].
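The layer types listed above (convolution, ReLU activation, pooling, and fully connected layers) can be assembled into a small illustrative classifier, as in the PyTorch sketch below. The architecture, input size, and class count are hypothetical and far simpler than the networks discussed in the cited studies.

```python
import torch
import torch.nn as nn

class SmallFacadeCNN(nn.Module):
    """Illustrative CNN: two convolution/pooling stages and a fully connected head."""
    def __init__(self, num_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # detect edges/textures
            nn.ReLU(),                                    # non-linearity
            nn.MaxPool2d(2),                              # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # higher-level patterns
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))

model = SmallFacadeCNN()
patches = torch.randn(8, 3, 64, 64)   # a batch of RGB façade patches
print(model(patches).shape)           # torch.Size([8, 4])
```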
Due to their capability to process image data, CNNs have been the subject of extensive study over the years, evolving through various architectural types with the advancement of technology, as depicted in Figure 12.
These machine learning models offer several advantages over traditional machine learning and other AI models. They automatically extract features from raw image data, effectively capture spatial hierarchies in images, and have fewer parameters due to parameter sharing and sparse interactions, making them less prone to overfitting. Due to these characteristics, CNNs are consistently recommended for image and video analysis, geospatial and remote sensing applications, and in various other fields regarding object identification. However, they have limitations, including the need for large data sets and significant computational resources, often requiring GPUs for effective training [145].
In civil engineering, CNNs have been used to detect structural defects by analyzing images of infrastructure to identify cracks, corrosion, and other issues. They continuously monitor structural health using sensor data to predict potential failures and maintenance needs. CNNs also help to predict material deterioration over time, providing insights for preventive maintenance. An example of CNN application in the field of building pathology is presented in Figure 13.

10. Deep Learning Applied to Pathology of Building Façades

According to [146], in the field of civil engineering, deep learning approaches have been used in the context of building pathology, especially concerning the inspection of structures. When associated with the analysis of data from UAVs, deep learning allows for the automation and enhanced efficiency of processes such as the identification of pathologic manifestations (e.g., cracks, humidity stains, spalling, and detachment of ceramic plates). Furthermore, deep learning allows for the continuous analysis of monitoring data, making the detection of early changes or deterioration possible [147]. Table 7 provides examples of scientific research that has referenced the use of deep learning and UAVs for building inspection.
Numerous scientific studies have explored the use of deep learning and UAVs for building inspection. The exploration of this theme has experienced exponential growth from 2015 to 2022, as evidenced by Figure 14. It is worth noting that the quantity of publications depicted in this figure (vertical axis) pertains to data gathered from the Web of Science database.
Concerning the methodological aspect of deep learning used for façade inspection, the images acquired during fieldwork are initially divided in such a way that parts are used for training the neural network, while the others are reserved for network validation. After that, the collected images go through a process of segmentation, in which the input images are divided into smaller segments. These segmented images are then passed through the network’s hidden learning layers, where data processing and the extraction of relevant characteristics occur. Finally, the neural network offers a classification of the type of pathologic manifestation present in each image, permitting efficient and automated analysis of the apparent state of a façade.
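A minimal sketch of the workflow just described (splitting the acquired images into training and validation sets and cutting them into smaller segments) is given below; the directory name, patch size, and split ratio are hypothetical placeholders.

```python
import random
from pathlib import Path

import numpy as np
from PIL import Image

def split_dataset(image_dir: str, val_fraction: float = 0.2, seed: int = 0):
    """Randomly split façade images into training and validation subsets."""
    paths = sorted(Path(image_dir).glob("*.jpg"))
    random.Random(seed).shuffle(paths)
    n_val = int(len(paths) * val_fraction)
    return paths[n_val:], paths[:n_val]        # (training, validation)

def extract_patches(image_path: Path, patch: int = 256):
    """Cut one image into non-overlapping square segments for the network."""
    img = np.asarray(Image.open(image_path).convert("RGB"))
    h, w, _ = img.shape
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            yield img[y:y + patch, x:x + patch]

# Hypothetical usage: "facade_images/" is a placeholder directory
train_paths, val_paths = split_dataset("facade_images/")
# Patches from train_paths would feed network training; val_paths feed validation.
```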
In order to elaborate on the application of deep learning in the field of building pathology, the application of this technology regarding the main pathologies present in structures is detailed in the following.

10.1. Deep Learning for Crack Detection

The application of deep learning approaches for crack detection has become increasingly prominent in the field of monitoring the structural health of buildings. By employing advanced neural network architectures, such as convolutional neural networks (CNNs), it is possible to efficiently analyze images and identify patterns indicative of cracks within structural components (Figure 15).
When assessing the automatic identification of cracks in structures such as buildings and bridges using CNNs, Ref. [149] noted that the automation of this process yielded tangible benefits, such as cost and time reductions in assessment, as well as increased safety and objectivity in concrete inspections. However, the associated limitations include reliance on the quality of captured images and the need for extensive data sets to effectively train the deep learning models. Moreover, environmental factors such as humidity and temperature can impact the accuracy of the detection method.
Additionally, while evaluating the use of YOLOv7 for automatically identifying various types of pathologies on building façades, including cracks, Ref. [148] found advantages similar to those reported by [149] concerning field execution and data processing. The authors also emphasized the crucial need for high-resolution images, as the detection of cracks becomes unfeasible, depending on their thickness, when resolution decreases. They further pointed to the lack of extensive research on the use of certain CNN architectures, such as YOLOv7, for pathology identification, highlighting an existing gap in the literature.
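For illustration only, the sketch below shows how such a detector could be fine-tuned and applied using the ultralytics YOLOv8 package (one of the architectures listed in Table 7). The dataset file, image name, and training settings are assumptions and do not reproduce the pipelines of [148] or [149].

```python
# Hedged sketch of fine-tuning an off-the-shelf object detector on annotated
# façade images. "facade_defects.yaml" and its class labels are hypothetical.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                                       # small pretrained detector
model.train(data="facade_defects.yaml", epochs=50, imgsz=1280)   # larger inputs help thin cracks

results = model("facade_photo.jpg")          # hypothetical UAV image of a façade
for box in results[0].boxes:
    print(box.cls, box.conf, box.xyxy)       # class id, confidence, bounding box
```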
In other studies, as shown in Table 7, similar advantages and limitations have also been reported. In this context, employing CNNs for crack identification in structures provides numerous benefits, including reduced data analysis time, heightened accuracy and precision in identifying pathologies, and diminished costs linked to field execution. Implementing CNNs allows for faster and more efficient analyses, providing a quicker response to structural inspection needs. Furthermore, the accuracy of CNNs in crack detection contributes to the more precise and early identification of potential structural issues, enabling preventive interventions and reducing costs associated with emergency repairs.
However, utilizing CNNs for crack identification also comes with some limitations. It is crucial to obtain high-quality images for both training and crack identification, as data quality directly impacts the model’s effectiveness. Additionally, the computational analysis of large data sets may require significant computing resources, especially in cases involving complex structures or extensive data. Moreover, the need for a robust and representative data set for CNN training can be a barrier, as obtaining and preparing such data may be labor-intensive and costly.

10.2. Deep Learning for Corrosion Detection

Similar to crack identification, images containing corrosion pathology are segmented and subjected to the hidden layers of the neural network, where data processing and relevant feature extraction occur. Finally, the neural network provides the classification of the type of pathological manifestation present in each image, allowing for efficient and automated analysis of the apparent condition of the façade. Figure 16 illustrates the schematic of this process aimed at identifying corrosion in metallic structures.
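The sketch below illustrates this per-pixel idea with a deliberately small fully convolutional network that scores every pixel of an image for corrosion and thresholds the scores into a mask. The architecture, input size, and 0.5 threshold are illustrative assumptions, not the networks used in the studies discussed here.

```python
# Minimal per-pixel (segmentation-style) sketch: a small fully convolutional
# network scores every pixel and a threshold turns the scores into a mask.
import torch
import torch.nn as nn

corrosion_net = nn.Sequential(                 # hidden layers: feature extraction
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 1),                       # one output channel: corrosion score per pixel
)

image = torch.randn(1, 3, 512, 512)            # stand-in for a UAV photograph
with torch.no_grad():
    scores = torch.sigmoid(corrosion_net(image))
mask = scores > 0.5                            # boolean corrosion mask, same size as the image
print(mask.float().mean().item())              # fraction of pixels flagged as corroded
```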
While developing a CNN to identify oxidation in industrial metallic structures using UAV images, Ref. [134] found that implementing the CNN not only reduced operational costs but also improved key aspects such as the safety and maintenance of telecommunications towers. Despite these benefits, the authors highlighted the importance of lighting conditions and the image capture angle, as issues with these factors can hinder the identification process.
In other studies, as detailed in Table 7, comparable advantages and limitations are also apparent. In this context, the benefits parallel those outlined in the discussion on cracks: the integration of AI has shortened the duration of field activities and enhanced the accuracy and precision of pathology identification.
However, the use of CNNs also presents some limitations. The effectiveness of CNNs depends on the quality of the images used, requiring high-resolution images to be captured in order to ensure that accurate results are obtained. The structural complexity of metallic surfaces can negatively influence the network’s performance, making corrosion detection more challenging. Additionally, depending on the magnitude of the object being analyzed, data analysis can impose high computational costs, which can be an obstacle in resource-limited environments. Finally, having a robust and representative data set for effective CNN training is crucial, but it can be difficult and costly to obtain, especially with respect to specific applications [160].

10.3. Deep Learning for Detachment of Ceramic Pieces

To evaluate the displacement of ceramic pieces on building façades, Ref. [157] used aerial images obtained by drones for field data acquisition and deep learning for the automated identification of these elements. The study demonstrated an accuracy of 94% in identifying the displaced elements, highlighting the effectiveness of the proposed approach. Additionally, the application of deep learning significantly reduced the engineering analysis time, resulting in a notable productivity gain.
During the analyses, the author identified some critical points for the method’s effectiveness. Firstly, the quantity of image data available limited a more robust training of the convolutional neural network. Secondly, attention was needed regarding the quality of the acquired images, as good resolution is essential for the correct identification of displaced pieces by the network. Table 7 lists other authors who have reported similar findings regarding the application of deep learning for identifying the detachment of ceramic pieces on building façades.
In addition, Ref. [167] evaluated the use of deep learning on thermal camera images to identify detached ceramic pieces (hollow sound) in a laboratory setting designed to simulate a building façade (Figure 17). The technology proved promising, as it was able to identify areas with settlement defects. However, the authors emphasized the need to conduct tests on real buildings to verify the effectiveness of the model.
Similar to the other façade pathologies analyzed in Section 10.1 and Section 10.2, significant advantages have been observed in the use of deep learning for identifying the displacement of ceramic pieces on building façades. However, some gaps still need to be explored further, including the availability of sufficient data for training artificial intelligence algorithms, the resolution of the collected images, and, in the case of displaced pieces that have not yet fallen, the need for more studies on real façades to verify the accuracy of deep learning approaches.

11. Conclusions and Recommendations for Future Research

The combined use of UAV-embedded sensors and machine learning represents a promising approach in the field of building pathology, especially in the context of façade inspection. Although these technologies are relatively recent, their great potential for application and the benefits they offer are already evident. As demonstrated in this article, this approach has gained considerable attention and interest, has furthered new research, and has proved to be an important diagnostic and monitoring tool for pathologic manifestations in building façades.
Although these new technologies, such as UAV-embedded sensors and deep learning, have brought significant advancements to the field of building pathology, some areas still require special attention. One of the main challenges is the need for more robust databases for the effective training of artificial intelligence algorithms. Additionally, it is crucial to deepen and expand the study of deep learning for identifying specific pathologies, such as the displacement of ceramic pieces and corrosion. Another important point is the consolidation of a minimum resolution and a standard acquisition angle for images used in façade assessment. Furthermore, there is a need to extend the use of these coupled technologies to a wider range of building types and climatic scenarios, ensuring their effectiveness and applicability across various contexts.
It is important to emphasize that the combined use of drones with embedded sensors and deep learning offers a vast field of possibilities for new studies. Even though advancements have been made, such as the optimization of segmentation and classification algorithms, the acquisition of comprehensive and diversified databases, and the establishment of best practices for each type of sensor, challenges remain. To help overcome them, promoting collaborations among construction companies, research institutions, and universities can be an effective way to enrich the available databases and facilitate access to information. Strategies such as crowdsourcing can encourage professionals in the field and the general public to contribute data on observed pathologies, thereby expanding the volume and diversity of available information.
Furthermore, the implementation of open data platforms can be crucial to promote access and collaboration in the scientific and technical community. Shared data repositories and public access can facilitate cross-validation of models and encourage innovation, benefiting the entire community involved in the development of AI for civil engineering and building maintenance.
To conclude, for future studies, it is suggested that advanced image processing techniques, such as computer vision, be explored for the automated analysis of pathologies on façades. Furthermore, investigations into the integration of multimodal data, such as visual images and sensor data, can provide a more comprehensive understanding of façade conditions. Additionally, conducting case studies in different geographical and climatic regions is important to assess the generalization and robustness of the developed models. These approaches can contribute to the ongoing advancement of the field of identifying pathologies on building façades.

Author Contributions

Conceptualization, G.d.S.M., J.V.F.G. and E.d.S.B.; methodology, G.d.S.M. and J.V.F.G.; validation, G.d.S.M., J.V.F.G. and E.d.S.B.; formal analysis, G.d.S.M. and J.V.F.G.; investigation, G.d.S.M. and J.V.F.G.; resources, G.d.S.M., J.V.F.G. and E.d.S.B.; data curation, G.d.S.M.; writing—original draft preparation, G.d.S.M. and J.V.F.G.; writing—review and editing, G.d.S.M. and J.V.F.G.; visualization, G.d.S.M.; supervision, E.d.S.B.; project administration, E.d.S.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Acknowledgments

The authors would like to thank the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) and University of Brasília, Institute of Geosciences, Graduate Program in Applied Geosciences and Geodynamics for making this research possible.

Conflicts of Interest

The authors declare no conflict of interest.

Correction Statement

This article has been republished with a minor correction to the Acknowledgments Statement. This change does not affect the scientific content of the article.

References

  1. Kim, H.; Lee, J.; Ahn, E.; Cho, S.; Shin, M.; Sim, S. Concrete Crack Identification Using a UAV Incorporating Hybrid Image Processing. Sensors 2018, 17, 2052. [Google Scholar] [CrossRef] [PubMed]
  2. Shakhatreh, H.; Sawalmeh, A.H.; Al-Fuqaha, A.; Dou, Z.; Almaita, E.; Khalil, I.; Othman, N.S.; Khreishah, A.; Guizani, M. Unmanned Aerial Vehicles (UAVs): A Survey on Civil Applications and Key Research Challenges. IEEE Access 2019, 7, 48572–48634. [Google Scholar] [CrossRef]
  3. Sreenath, S.; Malik, H.; Husnu, N.; Kalaichelavan, K. Assessment and Use of Unmanned Aerial Vehicle for Civil Structural Health Monitoring. Procedia Comput. Sci. 2020, 170, 656–663. [Google Scholar] [CrossRef]
  4. Silva, W.P.A.; Lordsleen Júnior, A.C.; Ruiz, R.D.B.; Rocha, J.H.A. Inspection of pathological manifestations in buildings by using a thermal imaging camera integrated with an Unmanned Aerial Vehicle (UAV): A documental research. Rev. ALCONPAT 2021, 11, 123–139. [Google Scholar]
  5. Mandirola, M.; Casarotti, C.; Peloso, S.; Lanese, I.; Brunesi, E.; Senaldi, I. Use of UAS for damage inspection and assessment of bridge infrastructures. Int. J. Disaster Risk Reduct. 2022, 72, 21. [Google Scholar] [CrossRef]
  6. Mayer, Z.; Epperlein, A.; Vollmer, E.; Volk, R.; Schultmann, F. Investigating the Quality of UAV-Based Images for the Thermographic Analysis of Buildings. Remote Sens. 2023, 15, 301. [Google Scholar] [CrossRef]
  7. Bauer, E.; Castro, E.K.; Silva, M.N.B. Estimate of the facades degradation with ceramic cladding: Study of master buildings. Ceramica 2015, 61, 151–159. [Google Scholar] [CrossRef]
  8. Ballesteros, R.D.; Lordsleem Junior, A.C. Veículos Aéreos Não Tripulados (VANT) para inspeção de manifestações patológicas em fachadas com revestimento cerâmico. Ambiente Construído 2021, 21, 119–137. [Google Scholar] [CrossRef]
  9. Almeida, A.S.F.C.E.; Ornelas, A.J.A.; Cordeiro, A.R. Termografia passiva no diagnóstico de patologias e desempenho térmico em fachadas de edifícios através de câmara térmica instalada em drone. Abordagem preliminar em Coimbra (Portugal). Cad. Geogr. 2020, 42, 27–41. [Google Scholar]
  10. Borba, P. Extração Automática de Edificações para a Produção Cartográfica Utilizando Inteligência Artificial. Master’s Thesis, University of Brasília, Brasília, Brazil, 2022. [Google Scholar]
  11. Jiang, S.; Zhang, J. Real-Time Crack Assessment Using Deep Neural Networks with Wall-Climbing Unmanned Aerial System. Comput.-Aided Civ. Infrastruct. Eng. 2020, 35, 549–564. [Google Scholar] [CrossRef]
  12. Calantropio, A.; Chiabrando, F.; Codastefano, M.; Bourke, E. Deep learning for Automatic Building Damage Assessment: Application in Post-Disaster Scenarios Using UAV Data. In Proceedings of the XXIV International Society for Photogrammetry and Remote Sensing Congress, XXIV ISPRS Congress, Nice, France, 5–9 July 2021. [Google Scholar]
  13. Chen, K.; Reichard, D.; Xu, X.; Akanmu, A. Automated Crack Segmentation in Close-Range Building Façade Inspection Images Using Deep Learning Techniques. J. Build. Eng. 2021, 43, 16. [Google Scholar] [CrossRef]
  14. Kumar, P.; Batchu, S.; Swamy, N.S.; Kota, S.R. Real-Time Concrete Damage Detection Using Deep Learning for High Rise Structures. IEEE Access 2021, 9, 112312–112331. [Google Scholar] [CrossRef]
  15. Sousa, A.D.P.; Sousa, G.C.L.; Maués, L.M.F. Using Digital Image Processing and Unmanned Aerial Vehicle (UAV) for Identifying Ceramic Cladding Detachment in Building Facades. Ambiente Construído 2022, 22, 199–213. [Google Scholar] [CrossRef]
  16. Santos, L.M.A.; Zanoni, V.A.G.; Bedin, E.; Pistori, H. Deep learning Applied to Equipment Detection on Flat Roofs in Images Captured by UAV. Case Stud. Constr. Mater. 2023, 18, 18. [Google Scholar] [CrossRef]
  17. Teng, S.; Chen, G. Deep Convolution Neural Network-Based Crack Feature Extraction, Detection and Quantification. J. Fail. Anal. Prev. 2022, 22, 1308–1321. [Google Scholar] [CrossRef]
  18. Ali, R.; Chuah, J.H.; Talip, M.S.A.; Mokhtar, N.; Shoaib, M.A. Structural crack detection using deep convolutional neural networks. Autom. Constr. 2022, 133, 28. [Google Scholar] [CrossRef]
  19. Gonthina, M.; Chamata, R.; Duppalapudi, J.; Lute, V. Deep CNN-based concrete cracks identification and quantification using image processing techniques. Asian J. Civ. Eng. 2022, 24, 727–740. [Google Scholar] [CrossRef]
  20. Fu, R.; Xu, H.; Wang, Z.; Shen, L.; Cao, M.; Liu, T.; Novák, D. Enhanced Intelligent Identification of Concrete Cracks Using Multi-Layered Image Preprocessing-Aided Convolutional Neural Networks. Sensors 2020, 20, 2021. [Google Scholar] [CrossRef] [PubMed]
  21. Słoński, M.; Tekiele, M. 2D Digital Image Correlation and Region-Based Convolutional Neural Network in Monitoring and Evaluation of Surface Cracks in Concrete Structural Elements. Materials 2020, 13, 3527. [Google Scholar] [CrossRef]
  22. Sjölander, A.; Belloni, V.; Anseli, A.; Nordström, E. Towards Automated Inspections of Tunnels: A Review of Optical Inspections and Autonomous Assessment of Concrete Tunnel Linings. Sensors 2023, 23, 3189. [Google Scholar] [CrossRef]
  23. Lee, J.; Kim, H.; Kim, N.; Ryu, E.; Kang, J. Learning to Detect Cracks on Damaged Concrete Surfaces Using Two-Branched Convolutional Neural Network. Sensors 2019, 19, 4796. [Google Scholar] [CrossRef] [PubMed]
  24. Alzubaidi, L.; Zhang, J.; Humaidi, A.J.; Al-Dujaili, A.; Duan, Y.; Al-Shamma, O.; Santamaria, J.; Fadhel, M.A.; Al-Amidie, M.; Farhan, L. Review of Deep learning: Concepts, CNN Architectures, Challenges, Applications, Future Directions. J. Big Data 2021, 53, 74. [Google Scholar] [CrossRef] [PubMed]
  25. Yesilmen, S.; Tatar, B. Efficiency of convolutional neural networks (CNN) based image classification for monitoring construction related activities: A case study on aggregate mining for concrete production. Case Stud. Constr. Mater. 2022, 17, 11. [Google Scholar] [CrossRef]
  26. Taherkhani, A.; Cosma, G.; McGinnity, T.M. A Deep Convolutional Neural Network for Time Series Classification with Intermediate Targets. SN Comput. Sci. 2023, 4, 24. [Google Scholar] [CrossRef]
  27. Hussain, M.; Bird, J.J.; Faria, D.R. A Study on CNN Transfer Learning for Image Classification. Adv. Comput. Intell. Syst. 2018, 840, 191–202. [Google Scholar]
  28. Bahmei, B.; Birmingham, E.; Arzanpour, S. CNN-RNN and Data Augmentation Using Deep Convolutional Generative Adversarial Network for Environmental Sound Classification. IEEE Signal Process. Lett. 2022, 29, 682–686. [Google Scholar] [CrossRef]
  29. Sarker, I.H. Deep Learning: A Comprehensive Overview on Techniques, Taxonomy, Applications and Research Directions. SN Comput. Sci. 2021, 2, 20. [Google Scholar] [CrossRef] [PubMed]
  30. Tong, Z.; Gao, J.; Yuan, D. Advances of deep learning applications in ground-penetrating radar: A survey. Constr. Build. Mater. 2020, 258, 14. [Google Scholar] [CrossRef]
  31. Qayyum, R.; Ehtisham, R.; Bahrami, A.; Camp, C.; Mir, J.; Ahmad, A. Assessment of Convolutional Neural Network Pre-Trained Models for Detection and Orientation of Cracks. Materials 2023, 16, 826. [Google Scholar] [CrossRef]
  32. Watt, D. Building Pathology: Principles and Practice; Wiley-Blackwell: Malden, MA, USA, 2009. [Google Scholar]
  33. Bolina, F.L.; Tutikian, B.F.; Helene, P. Patologia de Estruturas; Oficina de Textos: São Paulo, Brazil, 2019. [Google Scholar]
  34. International Council for Research and Innovation in Building and Construction (CIB). Building Pathology: A State-of-The-Art Report; Building Pathology Department: Delft, The Netherlands, 1993. [Google Scholar]
  35. Caporrino, C.F. Patologias em Alvenarias; Oficina de Textos: São Paulo, Brazil, 2018. [Google Scholar]
  36. Sena, G.O. Conceitos Iniciais. In Automating Cities. Patologia das Construções; Sena, G.O., Nascimento, M.L.M., Nabut Neto, A.C., Eds.; Editora 2B: Salvador, Bahia, 2020. [Google Scholar]
  37. Instituto Brasileiro de Avaliações e Perícias de Engenharia (IBAPE). Norma de Inspeção Predial–2021; IBAPE: São Paulo, Brazil, 2021; 27p. [Google Scholar]
  38. ABNT NBR 16747; Inspeção Predial–Diretrizes, Conceitos, Terminologia e Procedimento. Associação Brasileira de Normas Técnicas (ABNT): Rio de Janeiro, Brazil, 2021; 14p.
  39. ABNT NBR 14653-5; Avaliação de Bens—Parte 5: Máquinas, Equipamentos, Instalações e Bens Industriais em Geral. Associação Brasileira de Normas Técnicas (ABNT): Rio de Janeiro, Brazil, 2006; 19p.
  40. Petersen, A.B.B. Vida Útil, Manutenção e Inspeção: Foco em Fachadas Revestidas com Reboco e Pintura; Blucher: São Paulo, Brazil, 2022. [Google Scholar]
  41. Melo Júnior, C.M. Metodologia para Geração de Mapas de Danos de Fachadas a Partir de Fotografias Obtidas por Veículo Aéreo Não Tripulado e Processamento Digital de Imagens. Ph.D. Thesis, University of Brasília, Brasília, Brazil, 2016. [Google Scholar]
  42. Thomaz, E. Trincas em Edificações: Causas, Prevenção e Recuperação; Oficina de Textos: São Paulo, Brazil, 2020. [Google Scholar]
  43. Silvestre, J.D.; Brito, J. Ceramic Tiling in Building Facades: Inspection and Pathological Characterization Using an Expert System. Constr. Build. Mater. 2011, 25, 1560–1571. [Google Scholar] [CrossRef]
  44. Harper, R.F. The Code of Hammurabi, King of Babylon, about 2250 B.C.; The University of Chicago Press Callaghan & Company: Chicago, IL, USA, 1904. [Google Scholar]
  45. Macdonald, S. Concrete; Blackwell Science: Washington, DC, USA, 2003. [Google Scholar]
  46. Ferraz, G.; Brito, J.; Freitas, V.P.; Silvestre, J.D. State-of-the-Art Review of Building Inspection Systems. J. Perform. Constr. Facil. 2016, 30, 04016018-1–04016018-8. [Google Scholar] [CrossRef]
  47. Ruiz, R.D.B.; Lordsleem Júnior, A.C.; Rocha, J.H.A. Inspection of Facades with Unmanned Aerial Vehicles (UAV): An Exploratory Study. Rev. ALCONPAT 2022, 11, 88–104. [Google Scholar]
  48. Liang, H.; Lee, S.; Bae, W.; Kim, J.; Seo, S. Towards UAVs in Construction: Advancements, Challenges, and Future Directions for Monitoring and Inspection. Drones 2023, 7, 202. [Google Scholar] [CrossRef]
  49. Chaudhari, S.R.; Bhavsar, A.S.; Ranjwan, H.P.; Yadav, P.S.; Shaikh, S.S. Utilizing Drone Technology in Civil Engineering. Int. J. Adv. Res. Sci. Commun. Technol. 2022, 2, 765–780. [Google Scholar]
  50. Pruthviraj, U.; Kashyap, Y.; Baxevanaki, E.; Kosmopoulos, P. Solar Photovoltaic Hotspot Inspection Using Unmanned Aerial Vehicle Thermal Images at a Solar Field in South India. Remote Sens. 2023, 15, 1914. [Google Scholar] [CrossRef]
  51. Ramírez, I.S.; Chaparro, J.R.P.; Márquez, F.P.G. Unmanned Aerial Vehicle Integrated Real Time Kinematic in Infrared Inspection of Photovoltaic Panels. Remote Sens. 2023, 15, 20. [Google Scholar]
  52. Park, J.; Lee, D. Precise Inspection Method of Solar Photovoltaic Panel Using Optical and Thermal Infrared Sensor Image Taken by Drones. Mater. Sci. Eng. 2019, 6115, 7. [Google Scholar] [CrossRef]
  53. Morando, L.; Recchiuto, C.T.; Calla, J.; Scuteri, P.; Sgorbissa, A. Thermal and Visual Tracking of Photovoltaic Plants for Autonomous UAV Inspection. Drones 2022, 6, 347. [Google Scholar] [CrossRef]
  54. Aghaei, M.; Madukanya, U.E.; de Oliveira, A.K.V.; Ruther, R. Fault Inspection by Aerial Infrared Thermography in A PV Plant after a Meteorological Tsunami. In Proceedings of the VII Congresso Brasileiro de Energia Solar, VII CBENS, Gramado, Brazil, 17–20 April 2018. [Google Scholar]
  55. Beniaich, A.; Silva, M.L.N.; Guimaraes, D.V.; Avalos, F.A.P.; Terra, F.S.; Menezes, M.D.; Avanzi, J.C.; Candido, B.M. UAV-Based Vegetation Monitoring for Assessing the Impact of Soil Loss in Olive Orchards in Brazil. Geoderma Reg. 2022, 30, 15. [Google Scholar] [CrossRef]
  56. Lane, S.N.; Gentile, A.; Goldenschue, L. Combining UAV-Based SfM-MVS Photogrammetry with Conventional Monitoring to Set Environmental Flows: Modifying Dam Flushing Flows to Improve Alpine Stream Habitat. Remote Sens. 2020, 12, 3868. [Google Scholar] [CrossRef]
  57. Ioli, F.; Bianchi, A.; Cina, A.; de Michele, C.; Maschio, P.; Passoni, D.; Pinto, L. Mid-Term Monitoring of Glacier’s Variations with UAVs: The Example of the Belvedere Glacier. Remote Sens. 2022, 14, 28. [Google Scholar] [CrossRef]
  58. Yuan, S.; Li, Y.; Bao, F.; Xu, H.; Yang, Y.; Yan, Q.; Zhong, S.; Yin, H.; Xu, J.; Huang, Z.; et al. Marine Environmental Monitoring with Unmanned Vehicle Platforms: Present Applications and Future Prospects. Sci. Total Environ. 2023, 858, 15. [Google Scholar] [CrossRef] [PubMed]
  59. Næsset, E.; Gobakken, T.; Jutras-Perreault, M.; Ramtvedt, E.N. Comparing 3D Point Cloud Data from Laser Scanning and Digital Aerial Photogrammetry for Height Estimation of Small Trees and Other Vegetation in a Boreal–Alpine Ecotone. Remote Sens. 2021, 13, 2469. [Google Scholar] [CrossRef]
  60. Congress, S.; Puppala, A. Eye in The Sky: Condition Monitoring of Transportation Infrastructure Using Drones. Civ. Eng. 2022, 176, 40–48. [Google Scholar] [CrossRef]
  61. Chai, Y.; Choi, W. Deep Learning-Based Crack Damage Detection Using Convolutional Neural Networks. J. Build. Eng. 2023, 64, 21. [Google Scholar] [CrossRef]
  62. Astor, Y.; Nabesima, Y.; Utami, R.; Sihombing, A.V.R.; Adli, M.; Firdaus, M.R. Unmanned Aerial Vehicle Implementation for Pavement Condition Survey. Transp. Eng. 2023, 12, 11. [Google Scholar] [CrossRef]
  63. Roberts, R.; Inzerillo, L.; Mino, G.D. Using UAV Based 3D Modelling to Provide Smart Monitoring of Road Pavement Conditions. Information 2020, 11, 568. [Google Scholar] [CrossRef]
  64. Pan, Y.; Chen, X.; Sun, Q.; Zhang, X. Monitoring Asphalt Pavement Aging and Damage Conditions from Low-Altitude UAV Imagery Based on a CNN Approach. Can. J. Remote Sens. 2021, 47, 432–449. [Google Scholar] [CrossRef]
  65. Rauhala, A.; Tuomela, A.; Davids, C.; Rossi, P.M. UAV Remote Sensing Surveillance of a Mine Tailings Impoundment in Sub-Arctic Conditions. Remote Sens. 2017, 9, 1318. [Google Scholar] [CrossRef]
  66. Nappo, N.; Mavrouli, O.; Nex, F.; Westen, C.V.; Gambillara, R.; Michetti, A.M. Use of UAV-Based photogrammetry products for semi-automatic detection and classification of asphalt road damage in landslide-affected areas. Eng. Geol. 2021, 294, 29. [Google Scholar] [CrossRef]
  67. Zhang, H.; Li, Q.; Wang, J.; Fu, B.; Duan, Z.; Zhao, Z. Application of Space–Sky–Earth Integration Technology with UAVs in Risk Identification of Tailings Ponds. Drones 2023, 7, 222. [Google Scholar] [CrossRef]
  68. Nasategay, F.F.U. UAV Applications in Road Monitoring for Maintenance Purposes. Master’s Thesis, University of Nevada, Reno, NV, USA, 2020. [Google Scholar]
  69. Khaloo, A.; Lattanzi, D.; Jachimowicz, A.; Devaney, C. Utilizing UAV and 3D Computer Vision for Visual Inspection of a Large Gravity Dam. Front. Built Environ. 2018, 4, 16. [Google Scholar] [CrossRef]
  70. Martinez, J.; Gheisari, M.; Alarcon, L.F. UAV Integration in Current Construction Safety Planning and Monitoring Processes: Case Study of a High-Rise Building Construction Project in Chile. J. Manag. Eng. 2020, 36, 15. [Google Scholar] [CrossRef]
  71. Jiang, Y.; Bai, Y. Estimation of Construction Site Elevations Using Drone-Based Orthoimagery and Deep learning. J. Comput. Civ. Eng. 2020, 146, 04020086-1–04020086-18. [Google Scholar] [CrossRef]
  72. Jacob-Loyola, N.; Rivera, F.M.; Herrera, R.F.; Atencio, E. Unmanned Aerial Vehicles (UAVs) for Physical Progress Monitoring of Construction. Sensors 2021, 21, 4227. [Google Scholar] [CrossRef] [PubMed]
  73. Li, Y.; Liu, C. Applications of Multirotor Drone Technologies in Construction Management. Int. J. Constr. Manag. 2018, 19, 12. [Google Scholar] [CrossRef]
  74. Gopalakrishnan, K.; Gholami, H.; Vidyadharan, A.; Choudhary, A.; Agrawal, A. Crack Damage Detection in Unmanned Aerial Vehicle Images of Civil Infrastructure Using Pre-Trained Deep Learning Model. Int. J. Traffic Transp. Eng. 2018, 8, 14. [Google Scholar]
  75. Ellenberg, A.; Kontsos, A.; Moon, F.; Bartoli, I. Bridge Deck Delamination Identification from Unmanned Aerial Vehicle Infrared Imagery. Autom. Constr. 2016, 72, 155–165. [Google Scholar] [CrossRef]
  76. Sankarasrinivasan, S.; Balasubramanian, E.; Karthik, K.; Chandrasekar, U.; Gupta, R. Health Monitoring of Civil Structures with Integrated UAV and Image Processing System. Procedia Comput. Sci. 2015, 54, 508–515. [Google Scholar] [CrossRef]
  77. Rossi, G.; Tanteri, L.; Tofani, V.; Vannocci, P.; Moretti, S.; Casagli, N. Multitemporal UAV Surveys for Landslide Mapping and Characterization. Landslides 2018, 15, 1045–1052. [Google Scholar] [CrossRef]
  78. Maza, I.; Caballero, F.; Capitan, J.; Martínez-de-Dios, J.R.; Ollero, A. Experimental Results in Multi-UAV Coordination for Disaster Management and Civil Security Applications. J. Intell. Robot. Syst. 2011, 61, 563–585. [Google Scholar] [CrossRef]
  79. Greenwood, F.; Nelson, E.L.; Greenough, P.G. Flying into The Hurricane: A Case Study of UAV Use in Damage Assessment During the 2017 Hurricanes in Texas and Florida. PLoS ONE 2020, 15, 30. [Google Scholar] [CrossRef] [PubMed]
  80. Amaral, A.K.N.; de Souza, C.A.; Momoli, R.S.; Cherem, L.F.S. Use of Unmanned Aerial Vehicle to Calculate Soil Loss. Pesquisa. Agropecuária Trop. 2021, 51, 9. [Google Scholar] [CrossRef]
  81. Xiao, W.; Ren, H.; Sui, T.; Zhang, H.; Zhao, Y.; Hu, Z. A Drone and Field-Based Investigation of The Land Degradation and Soil Erosion at An Opencast Coal Mine Dump after 5 Years Evolution. Int. J. Coal Sci. Technol. 2022, 9, 17. [Google Scholar] [CrossRef]
  82. Novo, E.M.L. Sensoriamento Remoto: Princípios e Aplicações; Blucher: São Paulo, Brazil, 2010. [Google Scholar]
  83. Zhang, Z.; Zhu, L. A Review on Unmanned Aerial Vehicle Remote Sensing: Platforms, Sensors, Data Processing Methods, and Applications. Drones 2023, 7, 398. [Google Scholar] [CrossRef]
  84. Types of Loads. DJI. Available online: https://www.dji.com/br/products/enterprise?site=enterprise&from=nav#payloads (accessed on 28 May 2024).
  85. Sentera 6X DJI Skyport. Flying Eye. Available online: https://www.flyingeye.fr/product/sentera-6x-pour-matrice-200-210-300/ (accessed on 28 May 2024).
  86. Nex, F.; Remondino, F. UAV for 3D mapping applications: A review. Appl. Geomat. 2014, 6, 1–15. [Google Scholar] [CrossRef]
  87. Wang, D.; Shu, H. Accuracy Analysis of Three-Dimensional Modeling of a Multi-Level UAV without Control Points. Buildings 2022, 13, 592. [Google Scholar] [CrossRef]
  88. Kaartinen, E.; Dunphy, K.; Sadhu, A. LiDAR-Based Structural Health Monitoring: Applications in Civil Infrastructure Systems. Sensors 2022, 22, 4610. [Google Scholar] [CrossRef]
  89. Peng, K.C.; Feng, L.; Hsieh, Y.C.; Yang, T.H.; Hsiung, S.H.; Tsai, Y.D.; Kuo, C. Unmanned Aerial Vehicle for Infrastructure Inspection with Image Processing for Quantification of Measurement and Formation of Facade Map. In Proceedings of the International Conference on Applied System Innovation, ICASI, Sapporo, Japan, 13–17 May 2017. [Google Scholar]
  90. Jarzabek-Rychard, M.; Lin, D.; Maas, H. Supervised Detection of Façade Openings in 3D Point Clouds with Thermal Attributes. Remote Sens. 2020, 12, 543. [Google Scholar] [CrossRef]
  91. Bolourian, N.; Hannad, A. LiDAR-equipped UAV path Planning Considering Potential Locations of Defects for Bridge Inspection. Autom. Constr. 2020, 117, 16. [Google Scholar] [CrossRef]
  92. Liu, Y.; Lin, Y.; Yeoh, J.K.W.; Chua, D.K.H.; Wong, L.W.C.; Ang Júnior, M.H.; Lee, W.L.; Chew, M.Y.L. Framework for Automated UAV-Based Inspection of External Building Façades. In Automating Cities. Advances in 21st Century Human Settlements; Wang, B.T., Wang, C.M., Eds.; Springer: Singapore, 2021. [Google Scholar]
  93. Meola, C.; Carlomagno, G.M.; Giorleo, L. The Use of Infrared Thermography for Materials Characterization. J. Mater. Process. Technol. 2004, 155–156, 1132–1137. [Google Scholar] [CrossRef]
  94. Silva, F.A.M. Diagnóstico da Envolvente de um Edifício Escolar com Recurso a Análise Termográfica. Master’s Thesis, Instituto Politécnico de Viana do Castelo, Viana do Castelo, Portugal, 2016. [Google Scholar]
  95. Resende, M.M.; Gambare, E.B.; Silva, L.A.; Cordeiro, Y.S.; Almeida, E.; Salvador, R.P. Infrared Thermal Imaging to Inspect Pathologies on Façades of Historical Buildings: A Case Study on the Municipal Market of São Paulo, Brazil. Case Stud. Constr. Mater. 2022, 16, 12. [Google Scholar] [CrossRef]
  96. Carrio, A.; Pestana, J.; Sanchez-Lopoes, J.; Suarez-Fernandez, R.; Campoy, P.; Tendero, R.; García-de-Viedma, M.; González-Rodrigo, B.; Bonatti, J.; Rejas-Ayuga, J.G.; et al. UBRISTES: UAV-Based Building Rehabilitation with Visible and Thermal Infrared Remote Sensing. In Robot 2015: Second Iberian Robotics Conference, Advances in Intelligent Systems and Computing; Reis, L., Moreira, A., Lima, P., Montano, L., Muñoz-Martinez, V., Eds.; Springer: Cham, Switzerland, 2016; Volume 417. [Google Scholar]
  97. Lin, D.; Jarzabek-Rychard, M.; Schneider, D.; Maas, H.G. Thermal Texture Selection and Correction for Building Façade Inspection Based on Thermal Radial Characteristics. Remote Sens. Spat. Inf. Sci. 2018, XLII, 585–591. [Google Scholar]
  98. Ortiz-Sanz, J.; Gil-Docampo, M.; Arza-Garcia, M.; Cañas-Guerrero, I. IR Thermography from UAVs to Monitor Thermal Anomalies in the Envelopes of Traditional Wine Cellars: Field Test. Remote Sens. 2019, 11, 1424. [Google Scholar] [CrossRef]
  99. Luis-Ruiz, J.M.; Sedano-Cibrián, J.; Pérez-Álvarez, R.; Pereda-Garcia, R.; Malagón-Picón, B. Metric contrast of thermal 3D models of large industrial facilities obtained by means of low-cost infrared sensors in UAV platforms. Int. J. Remote Sens. 2022, 43, 457–483. [Google Scholar] [CrossRef]
  100. Wolff, F.; Kolari, T.H.M.; Villoslada, M.; Tahvanainen, T.; Korprlainen, P.; Zamboni, P.A.P.; Kumpula, T. RGB vs Multispectral Imagery: Mapping Aapa Mire Plant Communities with UAVs. Ecol. Indic. 2023, 148, 14. [Google Scholar] [CrossRef]
  101. Rolling Shutter vs Obturador Global Modo de Câmera CMOS. Oxford Instruments. Available online: https://andor.oxinst.com/learning/view/article/rolling-and-global-shutter (accessed on 15 January 2023).
  102. Lisboa, D.W.B.; da Silva, A.B.S.; de Souza, A.B.A.; da Silva, M.P. Utilização do Vant na Inspeção de Manifestações Patológicas em Fachadas de Edificações. In Proceedings of the Congresso Técnico Científico da Engenharia e da Agronomia, CONTECC 2018, Maceió, Brazil, 21–24 August 2018. [Google Scholar]
  103. Jordan, S.; Moore, J.; Hovet, S.; Box, J.; Perry, J.; Kirsche, K.; Lewis, D.; Tse, Z.T.H. State-of-the-art technologies for UAV Inspections. IET Radar Sonar Navig. 2018, 12, 151–164. [Google Scholar] [CrossRef]
  104. Chen, L.; Li, S.; Bai, Q.; Yang, J.; Jiang, S.; Miao, Y. Review of Image Classification Algorithms Based on Convolutional Neural Networks. Remote Sens. 2021, 13, 4712. [Google Scholar] [CrossRef]
  105. Adão, T.; Hruska, J.; Pádua, L.; Bessa, J.; Peres, E.; Morais, R.; Sousa, J.J. Hyperspectral Imaging: A Review on UAV-Based Sensors, Data Processing and Applications for Agriculture and Forestry. Remote Sens. 2017, 9, 1110. [Google Scholar] [CrossRef]
  106. Pandey, M.; Fernandez, M.; Gentile, F.; Isayev, A.; Tropsha, A.; Stern, A.C.; Cherkasov, A. The transformational role of GPU computing and deep learning in drug discovery. Nat. Mach. Intell. 2022, 4, 211–221. [Google Scholar] [CrossRef]
  107. Jia, J.; Wang, Y.; Chen, J.; Guo, R.; Shu, R.; Wang, J. CNN-RNN and Data Augmentation Using Deep Convolutional Generative Adversarial Network for Environmental Sound Classification. Infrared Phys. Technol. 2020, 104, 16. [Google Scholar]
  108. Zhang, H.; Zhang, B.; Wei, Z.; Wang, C.; Huang, Q. Lightweight Integrated Solution for a UAV-Borne Hyperspectral Imaging System. Remote Sens. 2020, 12, 657. [Google Scholar] [CrossRef]
  109. Proctor, C.; He, Y. Workflow for Building a Hyperspectral UAV: Challenges and Opportunities. International Archives of the Photogrammetry. Remote Sens. Spat. Inf. Sci. 2015, XL-1/W4, 415–419. [Google Scholar]
  110. Marcial-Pablo, M.J.; Gonzalez-Sanchez, A.; Jimenez-Jimenez, S.I.; Ontiveros-Capurata, R.E.; Ojeda-Bustamantem, W. Estimation of vegetation fraction using RGB and multispectral images from UAV. Int. J. Remote Sens. 2019, 40, 420–438. [Google Scholar] [CrossRef]
  111. Habili, N.; Kwan, E.; Webers, C.; Oorloff, J.; Armin, M.A.; Petersson, L. A Hyperspectral and RGB Dataset for Building Façade Segmentation. In Computer Vision–ECCV 2022 Workshops. ECCV 2022; Karlinsky, L., Michaeli, T., Nishino, K., Eds.; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2022; Volume 13807. [Google Scholar]
  112. Olivetti, D.; Cicerelli, R.; Martinez, J.; Almeida, T.; Casari, R.; Borges, H.; Roig, H. Comparing Unmanned Aerial Multispectral and Hyperspectral Imagery for Harmful Algal Bloom Monitoring in Artificial Ponds Used for Fish Farming. Drones 2023, 7, 410. [Google Scholar] [CrossRef]
  113. Rocha, M.S.; Bezerra, M.R.C.S.; da Silva, G.M.; Brito, D.R.N. Inspeção de fachadas utilizando tecnologias da Indústria 4.0: Uma comparação com métodos tradicionais. Braz. J. Dev. 2023, 9, 23835–23848. [Google Scholar] [CrossRef]
  114. Falorca, J.F.; Miraldes, J.P.N.D.; Lanzinha, J.C.G. New trends in visual inspection of buildings and structures: Study for the use of drones. Open Eng. 2021, 11, 734–743. [Google Scholar] [CrossRef]
  115. Oliveira, A.A.; Graças, G.M.; Lopes, L.K.; Rezende, P.H.F. Inspeção de Manifestações Patológicas em Fachadas Utilizando Aeronaves Remotamente Pilotadas. Monography; Mackenzie Presbiterian University: São Paulo, Brazil, 2020. [Google Scholar]
  116. Ciampa, E.; de Vito, L.; Pecce, M.R. Practical issues on the use of drones for construction inspections. J. Phys. Conf. Ser. 2019, 1249, 12. [Google Scholar] [CrossRef]
  117. Wang, J.; Wang, P.; Qu, L.; Pei, Z.; Ueda, T. Automatic detection of building surface cracks using UAV and deep learning-combined approach. Struct. Concr. 2024; early view. [Google Scholar] [CrossRef]
  118. El-Mir, A.; El-Zahab, S.; Sbartai, Z.M.; Homsi, F.; Saliba, J.; El-Hassan, H. Machine Learning Prediction of Concrete Compressive Strength Using Rebound Hammer Test. J. Build. Eng. 2023, 64, 21. [Google Scholar] [CrossRef]
  119. Huang, B.; Zhao, B.; Song, Y. Urban Land-use Mapping Using a Deep Convolutional Neural Network with High Spatial Resolution Multispectral Remote Sensing Imagery. Remote Sens. Environ. 2018, 214, 73–86. [Google Scholar] [CrossRef]
  120. Thangavel, K.; Spiller, D.; Sabatini, R.; Amici, S.; Sasidharan, S.T.; Fayek, H.; Marzocca, P. Autonomous Satellite Wildfire Detection Using Hyperspectral Imagery and Neural Networks: A Case Study on Australian Wildfire. Remote Sens. 2023, 15, 720. [Google Scholar] [CrossRef]
  121. Farizawani, A.G.; Puteh, M.; Marina, Y.; Rivaie, A. A Review of Artificial Neural Network Learning Rule Based on Multiple Variant of Conjugate Gradient Approaches. J. Phys. Conf. Ser. 2020, 1529, 13. [Google Scholar] [CrossRef]
  122. Sichman, J.S. Inteligência Artificial e Sociedade: Avanços e Riscos. Estud. Av. 2021, 35, 37–49. [Google Scholar] [CrossRef]
  123. Modelos de Neurônios em Redes Neurais Artificiais. Available online: https://www.researchgate.net/publication/340166849_Modelos_de_neuronios_em_redes_neurais_artificiais (accessed on 15 January 2023).
  124. Kaur, R.; Singh, S. A Comprehensive Review of Object Detection with Deep learning. Digit. Signal Process. 2023, 132, 17. [Google Scholar] [CrossRef]
  125. Emmert-Streib, F.; Yang, Z.; Feng, H.; Tripathi, S.; Dehmer, M. An Introductory Review of Deep learning for Prediction Models with Big Data. Front. Artif. Intell. 2020, 3, 23. [Google Scholar] [CrossRef] [PubMed]
  126. Deep Learning Made Easy with Deep Cognition. Available online: https://becominghuman.ai/deep-learning-made-easy-with-deep-cognition-403fbe445351 (accessed on 15 January 2023).
  127. Kumar, V.; Azamathulla, H.M.; Sharma, K.V.; Mehta, D.J.; Maharaj, K.T. The State of the Art in Deep Learning Applications, Challenges, and Future Prospects: A Comprehensive Review of Flood Forecasting and Management. Sustainability 2023, 15, 10543. [Google Scholar] [CrossRef]
  128. Zhang, G.; Wang, B.; Li, J.; Xu, Y. The application of deep learning in bridge health monitoring: A literature review. Adv. Bridge Eng. 2022, 3, 27. [Google Scholar] [CrossRef]
  129. Jia, J.; Li, Y. Deep Learning for Structural Health Monitoring: Data, Algorithms, Applications, Challenges, and Trends. Sensors 2023, 23, 8824. [Google Scholar] [CrossRef] [PubMed]
  130. Cha, Y.; Ali, R.; Lewis, J.; Büyüköztürk, O. Deep learning-based structural health monitoring. Autom. Constr. 2024, 161, 38. [Google Scholar] [CrossRef]
  131. Xu, D.; Xu, X.; Forde, M.C.; Caballero, A. Concrete and steel bridge Structural Health Monitoring—Insight into choices for machine learning applications. Constr. Build. Mater. 2023, 402, 16. [Google Scholar] [CrossRef]
  132. Bai, Y.; Demir, A.; Yilmaz, A.; Sezen, H. Assessment and monitoring of bridges using various camera placements and structural analysis. J. Civ. Struct. Health Monit. 2023, 14, 321–337. [Google Scholar] [CrossRef]
  133. Nash, W.; Drummond, T.; Birbilis, N. A review of deep learning in the study of materials degradation. NPJ Mater. Degrad. 2018, 37, 12. [Google Scholar] [CrossRef]
  134. Nash, W.; Zheng, L.; Birbilis, N. Deep learning corrosion detection with confidence. NPJ Mater. Degrad. 2022, 26, 13. [Google Scholar] [CrossRef]
  135. Wang, Y.; Oyen, D.; Guo, W.; Mehta, A.; Scott, C.B.; Panda, N.; Fernández-Godino, M.G.; Srinivasan, G.; Yue, X. StressNet—Deep learning to predict stress with fracture propagation in brittle materials. NPJ Mater. Degrad. 2021, 6, 10. [Google Scholar] [CrossRef]
  136. Gao, Y.; Yu, Z.; Yu, S.; Sui, H.; Feng, T.; Liu, Y. Metal intrusion enhanced deep learning-based high temperature deterioration analysis of rock materials. Eng. Geol. 2024, 335, 12. [Google Scholar] [CrossRef]
  137. Gao, Y.; Yu, Z.; Chen, W.; Yin, Q.; Wu, J.; Wang, W. Recognition of rock materials after high-temperature deterioration based on SEM images via deep learning. J. Mater. Res. Technol. 2023, 25, 273–284. [Google Scholar] [CrossRef]
  138. Zhu, J.; Wang, Y. Feature Selection and Deep Learning for Deterioration Prediction of the Bridges. J. Perform. Constr. Facil. 2021, 35, 04021078-1–04021078-13. [Google Scholar] [CrossRef]
  139. Wu, P.; Zhang, Z.; Peng, X.; Wang, R. Deep learning solutions for smart city challenges in urban development. Sci. Rep. 2024, 14, 19. [Google Scholar] [CrossRef] [PubMed]
  140. Zheng, Y.; Lin, Y.; Zhao, L.; Wu, T.; Jin, D.; Li, Y. Spatial planning of urban communities via deep reinforcement learning. Nat. Comput. Sci. 2023, 3, 748–762. [Google Scholar] [CrossRef] [PubMed]
  141. Alghamdi, M. Smart city urban planning using an evolutionary deep learning model. Soft Comput. 2024, 28, 447–459. [Google Scholar] [CrossRef]
  142. Li, Y.; Yabuki, N.; Fukuda, T. Integrating GIS, deep learning, and environmental sensors for multicriteria evaluation of urban street walkability. Landsc. Urban Plan. 2023, 28, 17. [Google Scholar] [CrossRef]
  143. Pan, Z.; Xu, J.; Guo, Y.; Hu, Y.; Wang, G. Deep Learning Segmentation and Classification for Urban Village Using a Worldview Satellite Image Based on U-Net. Remote Sens. 2020, 12, 1574. [Google Scholar] [CrossRef]
  144. Ren, Y.; Yang, J.; Zhang, Q.; Guo, Z. Multi-Feature Fusion with Convolutional Neural Network for Ship Classification in Optical Images. Appl. Sci. 2019, 9, 4209. [Google Scholar] [CrossRef]
  145. Wang, T.; Wen, C.; Wang, H.; Gao, F.; Jiang, T.; Jin, S. Deep learning for wireless physical layer: Opportunities and challenges. IEEE 2017, 14, 92–111. [Google Scholar] [CrossRef]
  146. Lagaros, N.D. Artificial Neural Networks Applied in Civil Engineering. Appl. Sci. 2023, 13, 1131. [Google Scholar] [CrossRef]
  147. Ye, X.W.; Jin, T.; Yun, C.B. A Review on Deep Learning-Based Structural Health Monitoring of Civil Infrastructures. Smart Struct. Syst. 2019, 24, 567–586. [Google Scholar]
  148. Wei, G.; Wan, F.; Zhou, W.; Xu, C.; Ye, Z.; Liu, W.; Lei, G.; Xu, L. BFD-YOLO: A YOLOv7-Based Detection Method for Building Façade Defects. Electronics 2023, 12, 3612. [Google Scholar] [CrossRef]
  149. Moreh, F.; Lyu, H.; Rizvi, Z.H.; Wuttke, F. Deep neural networks for crack detection inside structures. Sci. Rep. 2024, 4439, 15. [Google Scholar] [CrossRef]
  150. Su, P.; Han, H.; Liu, M.; Yang, T.; Liu, S. MOD-YOLO: Rethinking the YOLO architecture at the level of feature information and applying it to crack detection. Expert Syst. Appl. 2024, 237, 16. [Google Scholar] [CrossRef]
  151. Yuan, J.; Ren, Q.; Jia, C.; Zhang, J.; Fu, J.; Li, M. Automated pixel-level crack detection and quantification using deep convolutional neural networks for structural condition assessment. Structures 2024, 59, 15. [Google Scholar] [CrossRef]
  152. Zhu, W.; Zhang, H.; Eastwood, J.; Qi, X.; Jia, J.; Cao, Y. Concrete crack detection using lightweight attention feature fusion single shot multibox detector. Knowl.-Based Syst. 2023, 261, 12. [Google Scholar] [CrossRef]
  153. Tang, W.; Mondal, T.G.; Wu, R.; Subedi, A.; Jahanshahi, M.R. Deep learning-based post-disaster building inspection with channel-wise attention and semi-supervised learning. Smart Struct. Syst. 2023, 31, 365–381. [Google Scholar]
  154. Mohammed, M.A.; Han, Z.; Li, Y.; Al-Huda, Z.; Li, C.; Wang, W. End-to-end semi-supervised deep learning model for surface crack detection of infrastructures. Front. Mater. 2022, 9, 19. [Google Scholar] [CrossRef]
  155. Cumbajin, E.; Rodrigues, N.; Costa, P.; Miragaia, R.; Frazão, L.; Costa, N.; Fernández-Caballero, A.; Carneiro, J.; Buruberri, L.H.; Pereira, A. A Real-Time Automated Defect Detection System for Ceramic Pieces Manufacturing Process Based on Computer Vision with Deep Learning. Sensors 2023, 24, 232. [Google Scholar] [CrossRef] [PubMed]
  156. Wan, G.; Fang, H.; Wang, D.; Yan, J.; Xie, B. Ceramic tile surface defect detection based on deep learning. Ceram. Int. 2022, 48, 11085–11093. [Google Scholar] [CrossRef]
  157. Cao, M. Drone-assisted segmentation of tile peeling on building façades using a deep learning model. J. Build. Eng. 2023, 80, 15. [Google Scholar] [CrossRef]
  158. Nguyen, H.; Hoang, N. Computer vision-based classification of concrete spall severity using metaheuristic-optimized Extreme Gradient Boosting Machine and Deep Convolutional Neural Network. Autom. Constr. 2022, 140, 14. [Google Scholar] [CrossRef]
  159. Arafin, P.; Billah, A.H.M.M.; Issa, A. Deep learning-based concrete defects classification and detection using semantic segmentation. Struct. Health Monit. 2023, 23, 383–409. [Google Scholar] [CrossRef] [PubMed]
  160. Forkan, A.R.M.; Kang, Y.; Jayaraman, P.P.; Liao, K.; Kaul, R.; Morgan, G.; Ranjan, R.; Sinha, S. CorrDetector: A framework for structural corrosion detection from drone images using ensemble deep learning. Expert Syst. Appl. 2022, 193, 116461. [Google Scholar] [CrossRef]
  161. Ha, M.; Kim, Y.; Park, T. Stain Defect Classification by Gabor Filter and Dual-Stream Convolutional Neural Network. Appl. Sci. 2023, 13, 4540. [Google Scholar] [CrossRef]
  162. Goetzke-Pala, A.; Hola, A.; Sadowski, L. A non-destructive method of the evaluation of the moisture in saline brick walls using artificial neural networks. Arch. Civ. Mech. Eng. 2018, 18, 1729–1742. [Google Scholar] [CrossRef]
  163. Hola, A. Methodology of the quantitative assessment of the moisture content of saline brick walls in historic buildings using machine learning. Arch. Civ. Mech. Eng. 2023, 23, 15. [Google Scholar] [CrossRef]
  164. Hola, A.; Sadowski, L. A method of the neural identification of the moisture content in brick walls of historic buildings on the basis of non-destructive tests. Autom. Constr. 2019, 106, 15. [Google Scholar] [CrossRef]
  165. Zhao, F.; Chao, Y.; Li, L. A Crack Segmentation Model Combining Morphological Network and Multiple Loss Mechanism. Sensors 2023, 23, 1127. [Google Scholar] [CrossRef] [PubMed]
  166. Ameli, Z.; Nesheli, S.J.; Landis, E.N. Deep Learning-Based Steel Bridge Corrosion Segmentation and Condition Rating Using Mask RCNN and YOLOv8. Infrastructures 2024, 9, 3. [Google Scholar] [CrossRef]
  167. Garrido, I.; Barreira, E.; Almeida, R.M.S.F.; Laguela, S. Introduction of active thermography and automatic defect segmentation in the thermographic inspection of specimens of ceramic tiling for building façades. Infrared Phys. Technol. 2022, 121, 20. [Google Scholar] [CrossRef]
Figure 1. Main pathologies present in building façades.
Figure 2. Types of façade inspection.
Figure 3. Advantages and limitations of different inspection methods: traditional and using embedded sensors (adapted from [47]).
Figure 4. Number of publications concerning the use of UAVs in façade inspection throughout the period from 2015 to 2022.
Figure 5. Types of sensors embedded in UAVs: (a) RGB; (b) LiDAR; (c) thermal; (d) multispectral (adapted from [84,85]).
Figure 6. Types of pathologies that can be identified using various sensors embedded in UAVs.
Figure 7. Data acquisition through UAV for the generation of tridimensional models [87].
Figure 8. Use of UAV for thermal inspection of building façades [6].
Figure 9. Example of an aerial image obtained from a UAV using an RGB sensor (personal archive).
Figure 10. Schematic representation of a simple neural network and deep learning network [127].
Figure 11. Schematic representation of a convolutional network [144].
Figure 12. Popular architectures over the years for CNNs [130].
Figure 13. Example of a convolutional neural network-based building pathology application [130].
Figure 14. Number of publications concerning the use of deep learning and UAVs in building inspection throughout the period from 2015 to 2022.
Figure 15. Example of a schematic CNN structure for automatic crack identification [165].
Figure 16. Structure of a convolutional neural network for the detection of corrosion [166].
Figure 17. Structure of a convolutional neural network for the detachment of ceramic pieces [167].
Table 1. Main definitions regarding building pathology.

| Term | Definition | Author |
| --- | --- | --- |
| Anomaly | Phenomenon that hampers the utilization of a system or constructive elements, prematurely resulting in the downgrade of performance due to constructive irregularities or degradation processes. | [37] |
| Degradation | Deterioration of construction systems, components, and building equipment due to the action of damaging agents throughout time, considering given periodic maintenance activities. | [37] |
| Building Performance | The behavior of a building and its systems when subjected to exposure and usage, which normally occurs throughout its service life, considering its maintenance operations as anticipated during its project and construction. | [37] |
| Deterioration | Decomposition or early loss of performance in constructive systems, components, and building equipment due to anomalies or usage, operation, and/or maintenance inaccuracy. | [37] |
| Durability | The capacity of a building (or its systems) to perform functions throughout time and under conditions of exposure, usage, and maintenance as anticipated during its project construction, according to its use and maintenance manual. | [38] |
| Pathologic Manifestation | Signs or symptoms occurring due to existence of mechanisms or processes of degradation for materials, components, or systems that contribute to or influence the loss of performance. | [38] |
| Maintenance | Preventive or corrective actions necessary to preserve the normal conditions of a property’s use. | [39] |
| Prophylaxis | Actions and procedures necessary for the prevention, diminution, or correction of pathologic manifestations based on diagnostics. | [37] |
| Service Life of a Project | The period of time in which a building and its systems can be used as projected and built, fulfilling its performance requirements as anticipated previously, considering the correct execution of its maintenance programs. | [37] |
Table 2. International history of building pathology (items described in the works [40,45,46]).

| Year | Method | Author |
| --- | --- | --- |
| 1877 | The SPAB Manifesto | W. Morris and P. Webb |
| 1964 | The Venice Charter | ICOMOS |
| 1982 | Defect Action Sheets | BRE |
| 1985 | Anomalies Repair Forms (in Portuguese) | LNEC |
| 1993 | Cases of Failure Information Sheet CIB | CIB |
| 1995 | Building Pathology Sheets (in French) | AQC |
| 2003 | Construdoctor | OZ-Diagnosis |
| 2004 | Learning from Mistakes (in Italian) | BEGroup |
| 2008 | Severity of Degradation | Gaspar and Brito |
| 2009 | Web-Based Prototype System | P. Fong and K. Wong |
| 2010 | Maintainability Website | Y. L. Chew |
| 2013 | Building Medical Record | C. Chang and M. Tsai |
| 2016 | Methodologies for Service Life Prediction | Silva, Brito, and Gaspar |
| 2020 | Expert Knowledge-Based Inspections Systems | Brito, Pereira, Silvestre, and Flores-Cohen |
Table 3. Examples of scientific research regarding the use of UAVs in engineering.

| Application | Authors |
| --- | --- |
| Photovoltaic power plant | [49,50,51,52,53,54] |
| Environmental | [55,56,57,58,59] |
| Roads and Highways | [60,61,62,63,64] |
| Dams and Mining | [65,66,67,68,69] |
| Civil Construction | [48,70,71,72,73] |
| Pathologic Manifestations | [1,4,74,75,76] |
| Geological, Hydrological, and Environmental Risks | [77,78,79,80,81] |
Table 4. Advantages and disadvantages regarding the use of LiDAR-embedded UAVs in engineering.

| Advantages | Limitations |
| --- | --- |
| Easy access to difficult areas | Dependency on meteorological conditions |
| Tree canopies are easily overcome | High cost of acquisition |
| Generation of products with centimeter-level resolution | Long processing times, considering the volume of generated data |
| Shorter execution time and higher productivity when inspecting | Loss of performance at greater scan flight heights |
| Ability to identify pathologic manifestations of smaller dimensions | Requires specialized personnel for data acquisition and interpretation |
Table 5. Advantages and disadvantages regarding the use of thermal sensor-embedded UAVs in engineering.

| Advantages | Limitations |
| --- | --- |
| Easy access to harder-to-reach areas | Limited flight autonomy |
| Real-time data access | Dependency on ideal conditions of surface heat emission |
| Reduction in risk to technician’s life | Impossibility of measuring depth and thickness of pathologies |
| Faster inspection time | Necessity of specific software for thermal imaging processing |
| Detection of non-apparent pathologies such as infiltration, loose ceramic, and so on | Low accuracy when utilized to evaluate mirrored surfaces |
| Identification of mortar render with adherence issues | Necessity of qualified personnel for interpreting thermal data |
Table 6. Advantages and disadvantages regarding the use of multispectral/hyperspectral sensor-embedded UAVs in engineering.

| Advantages | Limitations |
| --- | --- |
| Wider range of spectral information compared to RGB-type sensors | High complexity in the process of data acquisition and analysis |
| Ability to identify elements invisible to the human eye | Low availability of studies focused on building pathology |
| Enables capture of large amounts of data | High acquisition cost |
Table 7. Application of deep learning and UAV for the identification of pathologic manifestations in building façades.

| Application | Reference | Authors | Technology |
| --- | --- | --- | --- |
| Crack Detection | [148] | Wei et al. (2023) | YOLOv7; BFD-YOLO |
| | [149] | Moreh et al. (2024) | DenseNet; CNN |
| | [17] | Teng and Chen (2022) | DeepLab_v3+; MATLAB; CNN |
| | [117] | Wang et al. (2024) | ResNet50; YOLOv8 |
| | [18] | Ali et al. (2022) | AlexNet; ZFNet; GoogLeNet; YOLO; Faster R-CNN; and others |
| | [150] | Su et al. (2024) | MOD-YOLO; MODSConv; YOLOX |
| | [151] | Yuan et al. (2024) | ResNet-50; FPN-DB |
| | [152] | Zhu et al. (2023) | SSD; Faster-RCNN; EfficientDet; YOLOv3; YOLOv4; CenterNet |
| | [153] | Tang et al. (2023) | SSL; U-Net++; DeepLab-AASPP |
| | [154] | Mohammed et al. (2022) | U-Net; CNN |
| Detachment of Ceramic Pieces | [15] | Sousa et al. (2022) | YOLOv2 |
| | [155] | Cumbajin et al. (2023) | ResNet; VGG; AlexNet |
| | [156] | Wan et al. (2022) | YOLOv5s |
| | [157] | Cao (2023) | YOLOv7; R-CNN; and others |
| Spalling Detection | [158] | Nguyen and Hoang (2022) | XGBoost; DCNN |
| | [159] | Arafin et al. (2023) | InceptionV3; ResNet50; VGG19 |
| Corrosion Detection | [160] | Forkan et al. (2022) | CNN; R-CNN |
| Stain Defect | [161] | Ha et al. (2023) | GLCM; CNN |
| | [162] | Goetzke-Pala et al. (2018) | ANN |
| | [163] | Hola (2023) | ANN; RF; SVM |
| | [164] | Hola and Sadowski (2019) | ANN |
