
Evaluation and Selection of Video Stabilization Techniques for UAV-Based Active Infrared Thermography Application

by Shashank Pant 1,*, Parham Nooralishahi 2, Nicolas P. Avdelidis 2,3, Clemente Ibarra-Castanedo 2, Marc Genest 1, Shakeb Deane 3, Julio J. Valdes 1, Argyrios Zolotas 3 and Xavier P. V. Maldague 2

1 National Research Council Canada, Ottawa, ON K1A 0R6, Canada
2 Computer Vision and Systems Laboratory (CVSL), Department of Electrical and Computer Engineering, Laval University, Quebec City, QC G1V 0A6, Canada
3 School of Aerospace, Transport and Manufacturing, Cranfield University, Cranfield MK43 0AL, UK
* Author to whom correspondence should be addressed.
Sensors 2021, 21(5), 1604; https://doi.org/10.3390/s21051604
Submission received: 23 December 2020 / Revised: 3 February 2021 / Accepted: 15 February 2021 / Published: 25 February 2021
(This article belongs to the Special Issue Sensors for Unmanned Aircraft Systems and Related Technologies)

Abstract
Unmanned Aerial Vehicles (UAVs) that can fly around an aircraft carrying several sensors, e.g., thermal and optical cameras, to inspect the parts of interest without removing them can have a significant impact in reducing inspection time and cost. One of the main challenges in UAV-based active InfraRed Thermography (IRT) inspection is the UAV's unexpected motions. Since active thermography is mainly concerned with the analysis of thermal sequences, unexpected motions can disturb the thermal profiling and cause data misinterpretation, especially in an automated processing pipeline for such inspections. Additionally, in scenarios where post-analysis is to be applied by an inspector, the UAV's unexpected motions can increase the risk of human error, data misinterpretation, and incorrect characterization of possible defects. Therefore, post-processing is required to minimize or eliminate such undesired motions using digital video stabilization techniques. A number of video stabilization algorithms are readily available; however, selecting the best-suited one is also challenging. This paper therefore evaluates video stabilization algorithms to minimize or mitigate undesired UAV motion and proposes a simple method to find the best-suited stabilization algorithm as a fundamental first step towards a fully operational UAV-IRT inspection system.

1. Introduction

1.1. General UAV Applications

The use of Unmanned Aerial Vehicles (UAVs) for the remote inspection of large and/or difficult-to-access areas has witnessed significant growth in the last few years thanks to their flexibility of movement and their ability to carry multiple sensors. Constant technological evolution has made UAVs more affordable and easier and safer to deploy. Moreover, recent developments in sensors for UAV applications offer lower weight, lower power consumption, and improved performance, allowing multiple sensors to be flown at the same time. The UAV-related scientific literature is extensive, with a wide variety of applications ranging from precision agriculture [1], traffic analysis [2], 3D mapping/modeling [3], archeological exploration [4], surveillance [5], and public safety [6], to mining and air pollution monitoring [7]. The list is vast and rapidly growing. In some cases, compared to traditional technologies, UAV-based survey systems offer better image spatial resolution (e.g., compared to satellites) and/or are much faster (e.g., compared to ground surveys in remote areas). Moreover, UAV flight operation is becoming more automated, and image processing and data fusion tools are continuously evolving [8].

1.2. Passive Thermography and UAV Applications

UAVs and Infrared Thermography (IRT) are a perfect match for the contactless survey of thermal phenomena over large and/or otherwise inaccessible areas and for easing the inspection of large structures that are difficult to access. The use of UAV-IRT systems has been explored for numerous applications. In most cases, the passive approach has been employed, i.e., the observation of thermal phenomena without the use of external energy stimulation, with the assumption that the features of interest (plants, building materials, photovoltaic cells, people, etc.) will naturally produce thermal gradients that can be isolated from the background. This assumption is valid in many cases, such as building inspection [9] (e.g., detection of thermal bridges, air leakages, moisture, or humidity); precision agriculture [10] (e.g., monitoring nutrient levels or lack of water in crop fields); quality assessment of large structures [7] (e.g., inspection of photovoltaic panel farms, wind turbines); or surveillance applications [11] (e.g., people or animal tracking).
Heat transfer is a complex and transient phenomenon that depends on a combination of factors. This can be used advantageously in some cases, such as for finding the presence of certain anomalies (e.g., porosity, fracturing and weathering of rocks, soil slopes, landslide hazards, etc.) [12]. In other cases, the presence of anomalies could be missed if the inspection is not performed at the "correct" time. Building inspection is a good example of this, where the temperature of building materials fluctuates following day/night and seasonal cycles, as well as with the weather. Solar Loading Thermography (SLT) [13] exploits the periodic solar irradiation (day and night fluctuations) to retrieve in-depth information about surface and sub-surface anomalies (e.g., cracks, delaminations, thermal bridges, etc.). However, it does so at the expense of a long acquisition time, at least 24 h, to obtain a complete "view" of the thermal behavior of the different materials.
Another example of passive thermography is the detection of water ingress in honeycomb aircraft structures after landing [14], based on the principle that, if water is present inside the honeycomb cells, it takes longer to warm up (thaw) after landing than other materials (aluminum, Nomex, composites) and appears as cold spots in thermal images.
A different situation is encountered when the feature of interest is at approximately the same temperature as its background. This is the situation found during the inspection of aeronautical components during production or in-service, where typical anomalies can be difficult to detect visually (e.g., cracks, impact damage) and/or can be situated at a certain depth inside the materials (e.g., delaminations, internal damage, liquid ingress). In such cases, the active thermography approach is better suited as explained in the following section.

1.3. Active Thermography and UAV Applications

In the case of structures where the features of interest are under the surface and there is no naturally occurring thermal gradient, the passive approach is seldom useful. In those cases, it is far more practical and effective to stimulate the structures to be inspected with a controlled energy source and to use data processing to improve the results [15].
On one hand, the inspection of large structures for defect detection and characterization (i.e., determination of the size, depth, or thermo-physical properties) requires careful control of the input energy (duration, waveform type) and data recording (frame rate, time window) to exploit the relationship between the heat transfer rate and the appearance of possible sub-surface anomalies (shallow defects appear earlier and with higher thermal contrast than deep defects). On the other hand, classical IRT experimental setups allow the inspection of relatively small surfaces at a time (the larger the area, the lower the spatial resolution). A map of the complete inspected surface can be reconstructed from several individual inspections (i.e., mosaicking) [16].
Alternatively, large structures can be inspected using a dynamic configuration such as Line Scan Thermography (LST) [17], where the camera and the heat source move in tandem (the camera records thermograms right after heating) with respect to the surface of the component, which is normally static while being inspected. This can be performed, for example, by mounting a camera and a source on a robot or a 2-axis actuator. LST allows inspecting large and/or complex-shaped components faster than classical static IRT and is an excellent option for quality control during production. For in-service inspection, the ideal situation would be to inspect an aircraft without the need to remove any component. An LST system would in this case require a huge robotic arm or several smaller robots properly installed and distributed to cover all the areas that need to be inspected. Alternatively, a dynamic system moving "freely" around the aircraft and performing the inspection of all the areas of interest in a fast and effective manner can be conceived. This is where a UAV-based active IRT system becomes interesting. Mavromatidis et al. investigated the use of UAVs with active thermography for the inspection and estimation of thermophysical properties of building materials [18]. The authors demonstrated the feasibility of a flash-based system and pointed out the need to improve UAV stability during the acquisition or to develop stabilization post-processing methods.
Although unexpected motions like sudden spikes may have little to no impact on the detection of large and/or shallow damage, the detection of small and/or deeper damage often requires further processing. This is illustrated in Figure 1, in which the raw temperature data sequence is processed by pulsed phase thermography or PPT [19] (pixel by pixel through time) to obtain phase profiles that are put together to reconstruct phase images (phasegrams), which significantly improves defect detection. Any undesirable motions can disturb the pixel-wise alignment of consecutive frames, which is often already noisy, adding errors to the analysis of the temperature evolution used for damage detection. Therefore, video stabilization is required as a first step to minimize/mitigate any undesired motions prior to the application of signal processing techniques (e.g., PPT as exemplified in Figure 1), thereby improving the damage detection capabilities of the UAV-based active thermography inspection technique.
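To make the role of pixel-wise alignment concrete, the following is a minimal sketch of the frequency transform at the heart of PPT [19]: the FFT of each pixel's temperature evolution yields per-frequency phase images (phasegrams). The array shape, names, and synthetic data are illustrative assumptions, not the authors' implementation.

```python
# Minimal PPT sketch, assuming a thermal sequence stored as a NumPy array
# of shape (frames, height, width); names are ours, not from the paper.
import numpy as np

def ppt_phasegrams(seq: np.ndarray) -> np.ndarray:
    """Return one phasegram per FFT frequency bin."""
    spectrum = np.fft.fft(seq, axis=0)   # FFT along the time axis, per pixel
    phase = np.angle(spectrum)           # the phase delay carries depth information
    return phase[: seq.shape[0] // 2]    # independent bins of a real-valued signal

seq = np.random.rand(100, 64, 64).astype(np.float32)  # toy thermal sequence
phasegrams = ppt_phasegrams(seq)                      # shape (50, 64, 64)
```

Misaligned frames corrupt the per-pixel time series that this transform operates on, which is why stabilization must precede it.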

1.4. Video Stabilization for UAV Applications

Video stabilization methods are primarily based on mechanical/optical and digital techniques. In the mechanical/optical stabilization techniques, the camera motion is detected and measured by internal sensors such as accelerometers, gyroscopes, etc. Motion compensation is done by mechanical/optical means, i.e., by using a microcontroller to direct small linear motors to move the image sensor, or optically by shifting the lens [20]. Mechanical/optical stabilization is usually built in as a part of the camera system. Digital video stabilization, on the other hand, compares the motion between two consecutive frames and shifts the frames to compensate for the undesired motion. The advantages of digital image stabilization techniques are that there are no moving components and that different algorithms can be applied to improve the stabilization.
Digital image stabilization for UAV applications is not new: Shen et al. used a block-matching technique with polynomial smoothing for stabilization [21]. Wang et al. used corner point detection and matching with a cubic spline for smoothing [22]. In both cases, translations and rotation were used for evaluation. Hong et al. proposed a multiresolution video stabilization algorithm based on the Scale Invariant Feature Transform (SIFT) and the Haar wavelet decomposition algorithm; processing time and accuracy were used to quantify the improvement [23]. Rahmaniar et al. used Speeded-Up Robust Features (SURF) for motion estimation and a Kalman filter to reduce motion in unstable UAV videos used to detect moving objects [24]. Walha et al. used SIFT and a Kalman filter with a median filter for smoothing and stabilization [25]. Zhou and Ansari [26] compared SIFT and SURF for motion estimation between frames and used Motion Vector Integration (MVI) with the adaptive damping proposed in [27] for smoothing. Marcenaro et al. used grid- and feature-based methods to estimate motion between two consecutive frames [28].
To quantify stabilized videos, the Peak Signal-to-Noise Ratio (PSNR) and Interframe Transformation Fidelity (ITF) were used by Walha, Zhou, Marcenaro, and others [27,28,29,30,31,32], just to name a few. PSNR and ITF are image quality measurements based on the Mean Squared Error (MSE), a pixel-by-pixel comparison of two images that does not take into account changes in luminance and contrast, which are expected to vary due to heating and cooling of the specimen during an active thermography inspection. To evaluate the performance of the video stabilization algorithms, the Multi-Scale Structural Similarity (MS-SSIM) index is used in this work instead of the commonly used PSNR and ITF. MS-SSIM was chosen because it takes into account luminance, contrast, and structural information between two images and compares them at different scales, where at each additional scale the images are passed through a low-pass filter and down-sampled by 2 from the previous scale, providing a more advanced image quality measure [31] than PSNR and ITF.
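As an illustration of the multi-scale idea, the sketch below approximates MS-SSIM by combining single-scale SSIM scores across dyadic down-samplings. The full metric of [31] applies separate luminance, contrast, and structure exponents per scale, so this is a simplified stand-in using scikit-image; the five weights are the conventional ones and the equal-size 8-bit grayscale inputs are assumptions.

```python
# Simplified multi-scale SSIM sketch in the spirit of [31]: plain SSIM at
# five scales (low-pass + down-sample by 2 between scales), combined as a
# weighted geometric mean.
import numpy as np
from skimage.metrics import structural_similarity
from skimage.transform import downscale_local_mean

def ms_ssim(img_a, img_b, weights=(0.0448, 0.2856, 0.3001, 0.2363, 0.1333)):
    a, b = img_a.astype(np.float64), img_b.astype(np.float64)
    score = 1.0
    for w in weights:
        s = max(structural_similarity(a, b, data_range=255.0), 1e-6)  # clip negatives
        score *= s ** w
        a = downscale_local_mean(a, (2, 2))  # low-pass and down-sample by 2
        b = downscale_local_mean(b, (2, 2))
    return score

# Example on two noisy 256 x 256 frames
rng = np.random.default_rng(0)
f1 = rng.integers(0, 256, (256, 256)).astype(np.uint8)
f2 = np.clip(f1 + rng.normal(0, 5, f1.shape), 0, 255).astype(np.uint8)
print(ms_ssim(f1, f2))
```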

2. Experimental Setup

Two sets of experiments were performed; in both, the thermal and optical videos were acquired from a DJI Matrice 210 RTK UAV equipped with a Zenmuse X4S (FC6510) optical camera and a Zenmuse XT thermal camera. In the first experiment, the UAV was flown over three carbon fiber specimens that were flat, curved, and trapezoidal in shape, as shown in Figure 2. The UAV was navigated manually by an experienced pilot in an indoor facility, maintaining a height of approximately 1.5 m above the specimen while acquiring optical and thermal footage at 1920 × 1080 pixels and 720 × 480 pixels, respectively, for approximately 25 s. The optical and thermal videos were acquired at 24 frames per second (fps) and 30 fps, respectively; however, the thermal videos were down-sampled to 24 fps to match the optical videos' frame rate for future image registration purposes (a sketch of this down-sampling follows this paragraph). The first experimental data set was used to develop and validate the video stabilization and selection method.
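One simple way to perform such a frame-rate match is to pick, for each output timestamp, the nearest source frame; a hedged OpenCV sketch follows, where the file names and codec are placeholders and not from the experiment.

```python
# Down-sample a 30 fps thermal video to 24 fps by nearest-timestamp frame
# selection; a sketch, not the authors' pipeline.
import cv2

src = cv2.VideoCapture("thermal_30fps.avi")  # placeholder file name
fps_in, fps_out = 30.0, 24.0
w = int(src.get(cv2.CAP_PROP_FRAME_WIDTH))
h = int(src.get(cv2.CAP_PROP_FRAME_HEIGHT))
dst = cv2.VideoWriter("thermal_24fps.avi",
                      cv2.VideoWriter_fourcc(*"MJPG"), fps_out, (w, h))

frames = []
ok, frame = src.read()
while ok:
    frames.append(frame)
    ok, frame = src.read()

n_out = int(len(frames) * fps_out / fps_in)
for j in range(n_out):
    # Nearest source frame for the output timestamp j / fps_out
    i = min(round(j * fps_in / fps_out), len(frames) - 1)
    dst.write(frames[i])

src.release()
dst.release()
```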
The second experiment was conducted on a Nomex honeycomb core, carbon-fiber skin sandwich aircraft part containing manually crafted under-surface defects (holes) at different depths and with various shapes and sizes, as described in Table 1. The aircraft part was inspected using UAV-based active thermography in an indoor environment with the same UAV and camera setup as the first experiment. The specimen was constantly heated using two halogen flash lamps, as shown in Figure 3. The UAV was navigated manually by an expert pilot, following a predefined flight pattern at three different altitudes of 1.5, 2, and 3 m above the specimen. Only the thermal videos were processed, as the purpose of this experimental set was to evaluate a preliminary drone-based active thermography inspection technique, as well as to further validate the video stabilization algorithm and selection method.

3. Methodology

In this paper, a comparative analysis of various smoothing techniques is conducted to develop a method for finding the most suitable stabilization option for reducing/minimizing the effect of undesired UAV motions. For this purpose, the video stabilization pipeline described in Figure 4 was implemented in Python 3.7.6 using the OpenCV library, based on the flow suggested by Thakur [32].
The process flow shown in Figure 4 can be broken down into seven major steps. First, the desired number of strongest corners or features, shown as (×) in Figure 5, is extracted from frame (fi) using the Shi-Tomasi method [33]. Second, the extracted features from frame (fi) are matched and tracked in the consecutive frame (fi+1) using Lucas-Kanade optical flow [34], as shown by the dotted lines connecting the features in Figure 5. The number of features is selected such that no frames are skipped. For the first experimental set, 100 features could be reliably tracked in the optical videos compared to 50 features in the thermal videos, a result of the lower spatial resolution of the thermal camera capturing fewer details than the optical one, as can also be seen in Figure 5. For both optical and thermal videos, a minimum distance of 20 pixels was set to minimize feature clustering around a single or a few strong features. For the second experiment, 40 features could be reliably tracked with a minimum feature distance of 20 pixels at all three heights. Third, an affine transformation matrix is constructed from the features' movement to find the overall inter-frame motion in the x-direction, y-direction, and rotation between frames (fi) and (fi+1). Fourth, all the inter-frame motions are compiled to retrieve the global trajectory of the UAV in the x-direction, y-direction, and rotation. Fifth, different algorithms are used to smooth the global trajectory. Sixth, individual frames are shifted based on the difference between the original and the smoothed global trajectory. Seventh, a stabilized video is constructed from the shifted frames.
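As a rough illustration of steps one through four, the following OpenCV sketch tracks Shi-Tomasi corners [33] with Lucas-Kanade optical flow [34] and accumulates the inter-frame affine motion into a global trajectory, in the spirit of the flow suggested by Thakur [32]. The file name, the quality level, and the use of cv2.estimateAffinePartial2D are our assumptions; the 100 features and 20-pixel minimum distance follow the optical setting reported above.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("input.avi")  # placeholder file name
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

transforms = []  # one (dx, dy, d_angle) per frame pair
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Step 1: strongest corners in frame f_i
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=100,
                                  qualityLevel=0.01, minDistance=20)
    # Step 2: track them into frame f_{i+1}
    nxt, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    keep = status.flatten() == 1
    # Step 3: affine (here: rigid) inter-frame motion in x, y, and rotation
    m, _ = cv2.estimateAffinePartial2D(pts[keep], nxt[keep])
    transforms.append((m[0, 2], m[1, 2], np.arctan2(m[1, 0], m[0, 0])))
    prev_gray = gray

# Step 4: the global trajectory is the cumulative sum of inter-frame motion;
# steps five to seven smooth it and shift each frame by the difference.
trajectory = np.cumsum(np.array(transforms), axis=0)
```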
As a result of shifting a frame for stabilization to fit within the desired video size, undefined regions of black pixels, known as Blank Borders (BB), such as the one shown in Figure 6, are generated. For simplicity, the optical and thermal stabilized frames from both experimental sets were enlarged to 130% of their initial size to minimize BB in this work.
The BB was calculated for each frame by first converting the color image into grayscale, and then into a binary image by setting the threshold to zero. The binary image was used to count the number of black pixels, which was divided by the total number of pixels to find the BB of each frame. The average BB was then calculated to quantify the overall BB content of the stabilized video.
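A minimal sketch of this BB measurement follows, assuming BGR frames as returned by OpenCV; the function names are ours.

```python
# Blank-border fraction per frame: grayscale, threshold at zero, and take
# the black-pixel fraction; averaging over frames gives the video's BB.
import cv2
import numpy as np

def blank_border_fraction(frame_bgr: np.ndarray) -> float:
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    black = np.count_nonzero(gray == 0)  # pixels equal to zero form the BB
    return black / gray.size

def average_bb(frames) -> float:
    return float(np.mean([blank_border_fraction(f) for f in frames]))
```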
Several smoothing techniques, namely the Simple Moving Average (SMA) [35], Exponential Moving Average (EMA) [35], Gaussian Filter (GF) [36], Linear Regression (LR) [37], Support Vector Regression-Linear Regression (SVR-LR) [38], and a low-pass Butterworth filter (LBW) [39], were used to stabilize both the optical and thermal videos. For SMA and EMA, three different window sizes of 1 s (second), 3 s, and 5 s were used. Since both the optical and thermal videos were processed at 24 frames per second (fps), 1 s, 3 s, and 5 s refer to window sizes of 24, 72, and 120 frames, respectively. Similarly, for GF, standard deviations of 24, 72, and 120 frames were used, referred to as GF-1s, GF-3s, and GF-5s, respectively. Default settings were chosen for SVR-LR and LR. As for the LBW, a 5th-order filter with a cut-off frequency of 1 Hz was selected to remove any high-frequency motion. These represent only a handful of smoothing techniques with limited parameter settings; since the focus of this work is to develop a methodology for selecting an optimal smoothing algorithm for stabilizing UAV-based active IRT videos, these algorithms are deemed sufficient to demonstrate the concept.
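The sketch below shows how these smoothing options might be applied to one channel of the global trajectory (e.g., the x-translation at 24 fps) using pandas, SciPy, and scikit-learn. The toy trajectory, the pandas `span` reading of the EMA window, and the linear-kernel reading of SVR-LR are assumptions; the 24/72/120-frame settings mirror the paper's 1 s / 3 s / 5 s windows.

```python
import numpy as np
import pandas as pd
from scipy.ndimage import gaussian_filter1d
from scipy.signal import butter, filtfilt
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR

t = np.arange(600)                    # ~25 s of frames at 24 fps
x = np.cumsum(np.random.randn(600))   # toy x-translation trajectory

sma_1s = pd.Series(x).rolling(window=24, min_periods=1).mean().to_numpy()
ema_1s = pd.Series(x).ewm(span=24).mean().to_numpy()
gf_1s = gaussian_filter1d(x, sigma=24)   # GF-1s: std. dev. of 24 frames

lr_fit = LinearRegression().fit(t.reshape(-1, 1), x).predict(t.reshape(-1, 1))
svr_fit = SVR(kernel="linear").fit(t.reshape(-1, 1), x).predict(t.reshape(-1, 1))

# LBW: 5th-order low-pass Butterworth, 1 Hz cut-off, 24 Hz sampling rate
b, a = butter(N=5, Wn=1.0, btype="low", fs=24)
lbw = filtfilt(b, a, x)
```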

4. Results

For the first experimental run, typical extracted motions between the original and the stabilized frames are shown in Figure 7 and Figure 8 for the optical and thermal videos, respectively.
From Figure 7, it can be seen that the stabilization algorithm managed to reduce the vibration in both the x and y directions for the optical video. As for the stabilization results for the thermal video shown in Figure 8, sudden spikes can be seen around frame number 225, caused by the thermal video pausing for approximately one second while undergoing automatic Flat Field Correction (FFC) [40]. FFC is performed during power-up and periodically during operation to compensate for errors that may build up during operation; it requires a shutter or a similar uniform-temperature device to cover the camera's field of view [41]. The pausing of the thermal video during FFC can be seen between frame numbers 203 and 226, where the x- and y-translations were zero, followed by a sudden spike (both are highlighted by vertical lines in Figure 8). During this self-calibration period, the thermal camera did not acquire any new frames; however, the UAV continued to fly along its trajectory, and acquisition restarted upon completion of the FFC process. As Figure 8 shows, the translations and rotation can be prone to outliers caused by sudden shifts or by a lack of sufficient tracking features between two consecutive frames. To mitigate errors due to outliers, the motions are characterized using an Upper Bound (UB) and a Lower Bound (LB) calculated with Tukey's fence method, where data outside the LB and UB are considered outliers [42]. The overall performance of the smoothing techniques for both the optical and thermal videos from the first experiment is summarized in Table 2 and Table 3, respectively.
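For reference, a small sketch of Tukey's fence [42] as used here to obtain the LB and UB whose difference defines the Range of Motion; the fence constant k = 1.5 is the conventional choice and an assumption on our part.

```python
# Tukey's fence: values outside [Q1 - k*IQR, Q3 + k*IQR] are outliers, and
# the fences serve as the Lower/Upper Bounds of the Range of Motion.
import numpy as np

def tukey_bounds(values, k=1.5):
    q1, q3 = np.percentile(values, [25, 75])
    iqr = q3 - q1
    return q1 - k * iqr, q3 + k * iqr

x_translations = np.random.randn(600)  # toy per-frame x-translations
lb, ub = tukey_bounds(x_translations)
rom = ub - lb  # robust to spikes such as the FFC pause in Figure 8
```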
From the summary of the first experimental set provided in Table 2 and Table 3, it can be seen that most of the stabilization algorithms worked well for both the optical and thermal videos when the average MS-SSIM was used for comparison, i.e., there was an increase in the average MS-SSIM compared to the original. When the actual range of translation was used for comparison, the stabilization algorithms worked equally well for both optical and thermal videos, with some of the algorithms reducing the Range of Motion (RoM), defined as the Upper Bound (UB) minus the Lower Bound (LB). No significant rotations were present in either the optical or thermal videos. It can also be noted that the outlier shown in Figure 8, due to the temporary pause in thermal video acquisition for auto-calibration, did not affect the RoM. As for the BB, the larger the SMA and EMA windows and the GF standard deviations, the higher the BB in the stabilized video. From Table 2 and Table 3, it can also be noticed that some algorithms performed well when the average MS-SSIM was used for comparison, while others performed well when the RoM and BB were used for comparison. Thus, it is evident that in addition to an image quality measure, other parameters should be considered, such as the reduction in RoM, which indicates how much of the unwanted motion has been removed, and the blank border, which indicates how much the frames were shifted for stabilization, thereby preserving or losing information.
The method proposed in this work takes these additional features into account to provide a single metric for evaluating the overall outcome of the different stabilization algorithms. The proposed method to identify the best overall algorithm is a weighted Overall Stabilization Metric (OSM). The OSM is based on the Range of MS-SSIM (RoMS_SSIM), defined as the maximum MS-SSIM minus the minimum MS-SSIM, $\max \mathrm{MS\_SSIM} - \min \mathrm{MS\_SSIM}$. Additional terms in the OSM are the reduction in the RoM, which is what the stabilization algorithm tries to minimize, and the average BB content, which indicates how much of the information in the frames is preserved (0 means no BB, and 1 means the entire video contains only black pixels). The OSM is expressed as:

$$\mathrm{OSM} = W_{\mathrm{MS\_SSIM}} \left( \frac{\mathrm{RoMS\_SSIM}_{\mathrm{ori}} - \mathrm{RoMS\_SSIM}_{\mathrm{stab}}}{\mathrm{RoMS\_SSIM}_{\mathrm{ori}}} \right) + \sum_{i=x,y,rot} W_{i} \left( \frac{\mathrm{RoM}_{\mathrm{ori}} - \mathrm{RoM}_{\mathrm{stab}}}{\mathrm{RoM}_{\mathrm{ori}}} \right)_{i} - W_{\mathrm{BB}} \, \mathrm{BB}_{\mathrm{average}}$$

where ori refers to original and stab refers to stabilized. RoM is the Range of Motion and is calculated individually for the x-translation, y-translation, and rotation. $W_{\mathrm{MS\_SSIM}}$, $W_{i=x,y,rot}$, and $W_{\mathrm{BB}}$ are the weights associated with RoMS_SSIM, the individual motions, and BB, respectively.
For simplicity, equal weights were assigned for both experimental sets: out of 100 weight points, 33.3, 33.3, and 33.4 were assigned to $W_{\mathrm{MS\_SSIM}}$, $W_{\mathrm{BB}}$, and $W_{x,y,rot}$, respectively. Since there was minimal to no rotation in both experimental sets, $W_{rot}$ was set to zero, whereas $W_{x}$ and $W_{y}$ were each assigned 16.7. These weights can be adjusted by the user depending on what they consider important; for example, if minimizing BB is important, the user can assign a higher weight to BB, and vice versa. Results of the OSM for the first experiment are presented in Table 4 for both the optical and thermal videos, where OSM values greater than zero signify overall improvements in the stabilized videos compared to the original ones. The greater the OSM value, the better the stabilization algorithm.
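Putting the pieces together, the following sketch evaluates the OSM of the equation above with the equal weights just described; the argument names and structure are illustrative, not the authors' code.

```python
# OSM sketch: 33.3 for the MS-SSIM range, 33.3 for BB, 16.7 each for x and
# y motion, rotation weighted zero, matching the weights used in the paper.
def osm(ro_msssim_ori, ro_msssim_stab, rom_ori, rom_stab, bb_average,
        w_msssim=33.3, w_bb=33.3, w_motion=None):
    if w_motion is None:
        w_motion = {"x": 16.7, "y": 16.7, "rot": 0.0}
    score = w_msssim * (ro_msssim_ori - ro_msssim_stab) / ro_msssim_ori
    for axis, w in w_motion.items():
        if w > 0.0:  # skip zero-weight axes (avoids 0/0 for rotation)
            score += w * (rom_ori[axis] - rom_stab[axis]) / rom_ori[axis]
    return score - w_bb * bb_average

# Toy numbers: a positive OSM signifies an overall improvement
print(osm(0.30, 0.15,
          {"x": 6.8, "y": 9.7, "rot": 0.0},
          {"x": 2.7, "y": 3.1, "rot": 0.0},
          bb_average=0.002))
```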
From Table 4, it can be seen that for the first experiment most of the smoothing techniques used for the optical video produced an overall improvement in the stabilized video (OSM greater than zero); however, some made it worse (OSM less than zero). As for the thermal video, all the smoothing techniques produced an overall improvement. The lowest-performing algorithm, with the lowest OSM, was EMA-5s for both the optical and thermal videos (highlighted in Table 4). Upon closer inspection of the summary provided in Table 2 for the optical video, it can be seen that the lowest-performing algorithm significantly increased the RoM instead of reducing it and produced significant BB. Conversely, the best-performing algorithms with the highest OSM were GF-5s for the optical video and LR for the thermal video (highlighted in Table 4), which showed the opposite behavior: a reduction in the RoM and the range of MS-SSIM, as well as low to no BB.
The video stabilization and selection method was further evaluated on the second experimental set, along with a preliminary demonstration of a UAV-based active thermography inspection technique, where the UAV was flown above an aircraft part at three different heights of 1.5, 2, and 3 m while acquiring optical and thermal videos. Figure 9 shows optical and thermal frames taken at the different heights, where the damage (drilled holes) can be seen in the thermal frames but not in the optical frames, demonstrating UAV-based active thermography for detecting damage. For evaluating the smoothing techniques using the developed OSM approach, the same weight scores as in the first experimental set were used. As mentioned earlier, only the thermal video was analyzed from the second experimental set; the outcome is provided in Table 5. For brevity, only the OSMs are provided.
From Table 5 it can be seen that SVR-LR provided the best overall results, i.e., the highest OSM value, when the UAV was flown at heights of 1.5 and 3 m above the specimen, whereas LR provided the best results when the UAV was flown at 2 m (highlighted in Table 5). This also highlights that, since UAV motions are unpredictable, a single smoothing technique may not always provide the best results, even for similar applications. Results of the stabilization in the x and y translations are provided in Figure 10 and Figure 11, respectively, for the best-performing algorithms at the different heights.
An outlier in the x-translation can be seen in Figure 10 (left); since Tukey's fence technique [42] is adopted for the OSM, outliers like these have no effect on the OSM calculation. No significant improvements can be noted in the x-translation (Figure 10); however, for the y-translation (Figure 11), the stabilization algorithm managed to reduce the overall range of motion, which was also observed during the first experiment, as shown in Figure 7 and Figure 8.

5. Discussion

The video stabilization and selection method presented in this paper was developed for UAV-based active thermography inspection, where it was deemed essential for improving damage detection capabilities. The video stabilization method is based on the one suggested by Thakur [32]. In terms of quantifying the improvement of the stabilized videos, several new measures are proposed. First, instead of PSNR and ITF, the range of MS-SSIM is used, because MS-SSIM considers luminance, contrast, and structural information between two images and compares them at different scales, providing a more advanced image quality measure. Additionally, active thermography relies on processing the temperature evolution over time of the acquired image sequence; therefore, the MS-SSIM range provides a better indication of how close or far apart all the images are from one another: the smaller the range of MS-SSIM, the more similar the images in the video are to one another and, hence, the better the stabilization algorithm. Second, the RoM, the difference between Tukey's fence UB and LB, is included to capture the reduction in overall motion while removing any outliers, providing a robust way to quantify stabilization. Third, the BB, created by excessive shifting of the frames during stabilization, is included to offer a quantitative indication of how much information is retained or lost due to shifting.
It was also found that the majority of the research in the literature was conducted on a single or a handful of stabilization algorithms, where the improvements were quantified using individual comparison metrics such as translations, rotations, processing time, accuracy, PSNR, ITF, etc. In this work, the range of MS-SSIM, the reduction in undesired motion, and the BB were all combined into a single quality metric to provide a complete evaluation of various stabilization algorithms. The stabilization and selection methods were applied to optical and thermal videos from two different experimental sets; the outcomes are provided in the Results section. The highest-scoring OSM was selected as the best-performing algorithm. The enhancements from the higher-scoring OSM algorithms can be seen as a significant reduction in the RoM, lower MS-SSIM ranges, and low BB, all of which signify overall improvements in the stabilized videos.
The OSM calculation includes the BB, which can be further reduced by enlarging the stabilized frame size; however, this can result in information loss, so care must be taken. There are ways to fill these BBs using techniques such as mosaicking [43], finding a match in the neighboring frames [44], or interpolating matching sharper pixels from the neighboring frames for stitching [45]. These BB-filling techniques are applicable to optical videos but may not be applicable to thermal videos used for pulse/step-heating-based active thermography inspection, because such inspections rely on the pixel temperature evolution over time, which is expected to vary between neighboring frames. The OSM presented in this paper neatly captures the important aspects of video stabilization and provides a simple means to evaluate different algorithms and identify the best one. The user can also add processing time to the OSM if it is deemed critical.
It can also be noted that in the second experiment the specimen was heated constantly using halogen lamps for active thermography inspection; therefore, the damage can be seen in the thermal frames, although the depths and sizes of the defects are difficult to determine. To detect and size damage at different depths, infrared thermography post-processing techniques can be employed [46], where the pixel-wise thermal evolution is processed using techniques such as pulsed phase thermography (shown in Figure 1) [19], Thermographic Signal Reconstruction (TSR) [47], TSR combined with 1st- and 2nd-derivative approaches [48], principal component analysis [49], etc. Methods to stabilize videos and to select an optimal stabilization algorithm, such as the one presented in this paper, are required for such applications to reduce/eliminate pixel shifts that contribute to post-processing errors. Continuation of this work includes developing a UAV-based pulse/step-heating inspection technique and applying the methods developed here to stabilize the videos and select the optimal smoothing techniques.

6. Conclusions

An Unmanned Aerial Vehicle (UAV) based inspection system that can move "freely" around an aircraft to inspect all the areas of interest in a fast and effective manner can have a significant impact in reducing inspection time and cost. However, UAV inspection is challenging because the UAV carrying the optical and thermal cameras is subject to vibration and undesired motion. To reduce such undesired motion, a digital video stabilization technique, along with a methodology to select the best smoothing technique, is presented in this paper. The stabilization method is based on finding the motion between two consecutive frames using a feature-based approach. To evaluate the performance of the video stabilization algorithms, the Multi-Scale Structural Similarity (MS-SSIM) index, the reduction in undesired motion, and the Blank Border (BB) were used. Some algorithms performed better when MS-SSIM was used for comparison, while others performed better when the range of motion and BB were used. Instead of using three different comparison metrics, a simple weighted Overall Stabilization Metric (OSM), based on the reduction in the range of MS-SSIM and motion as well as the average BB content, was proposed for an overall evaluation of the stabilization algorithms. The stabilization and selection methods were evaluated on two different experimental sets. The first experimental set was used to develop and test the methodologies, whereas the second experiment was conducted to demonstrate a UAV-based active thermography technique, as well as to evaluate the methods developed to stabilize the videos and select the best smoothing techniques. The OSM showed that different smoothing techniques produced different stabilization results: some improved the videos and some made them worse. The highest OSM was used as the criterion to find the best-suited algorithm. The highest-scoring smoothing techniques all had a low range of MS-SSIM and motion, as well as low BB content, all of which are characteristics of better overall stabilization. Therefore, the method presented in this paper provides a simple means to stabilize videos and to evaluate different stabilization algorithms in order to select the one best suited to the application, which is a fundamental first step towards developing a fully operational UAV-based active thermography inspection system.

Author Contributions

Conceptualization and methodology, S.P., M.G., P.N., X.P.V.M., N.P.A., J.J.V., and C.I.-C.; experimental data acquisition, S.P., M.G., P.N.; formal analysis, S.P., M.G.; resources, X.P.V.M., A.Z., M.G.; writing—original draft preparation, S.P., M.G., and C.I.-C.; writing—review and editing, A.Z., S.D., X.P.V.M., N.P.A., J.J.V., S.P., M.G., and C.I.-C. All authors have read and agreed to the published version of the manuscript.

Funding

This research is part of the MultiAcT (Multiple robotic inspection of composite aircraft structures using Active Thermography) collaborative R&D proposal jointly funded by the NRC (National Research Council Canada) and InnovateUK (Project No. 105625) through the Eureka UK-Canada Competition.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

References

  1. Zarco-Tejada, P.J.; Diaz-Varela, R.; Angileri, V.; Loudjani, P. Tree height quantification using very high resolution imagery acquired from an unmanned aerial vehicle (UAV) and automatic 3D photo-reconstruction methods. Eur. J. Agron. 2014, 55, 89–99. [Google Scholar] [CrossRef]
  2. Salvo, G.; Caruso, L.; Scordo, A. Urban Traffic Analysis through an UAV. Procedia Soc. Behav. Sci. 2014, 111, 1083–1091. [Google Scholar] [CrossRef] [Green Version]
  3. Siebert, S.; Teizer, J. Mobile 3D mapping for surveying earthwork projects using an Unmanned Aerial Vehicle (UAV) system. Automat. Constr. 2014, 41, 1–14. [Google Scholar] [CrossRef]
  4. Cassana, J.; Kantner, J.; Wiewel, A.; Cothren, J. Archaeological aerial thermography: A case study at the Chaco-era Blue J community, New Mexico. J. Archaeol. Sci. 2014, 45, 207–219. [Google Scholar] [CrossRef]
  5. Nigam, N. The Multiple Unmanned Air Vehicle Persistent Surveillance Problem: A Review. Machines 2014, 2, 13–72. [Google Scholar] [CrossRef] [Green Version]
  6. Clarke, R.; Bennett Moses, L. The Regulation of Civilian UAVs’ Impacts on Public Safety. CLSR. 2014, 30, 263–285. [Google Scholar] [CrossRef]
  7. Kim, D.; Youn, J.; Kim, C. Automatic Fault Recognition of Photovoltaic Modules Based on Statistical Analysis of Uav Thermography. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, 42, 179–182. [Google Scholar] [CrossRef] [Green Version]
  8. Barbedo, J.G.A. A Review on the Use of Unmanned Aerial Vehicles and Imaging Sensors for Monitoring and Assessing Plant Stresses. Drones 2019, 3, 40. [Google Scholar] [CrossRef] [Green Version]
  9. Ortiz-Sanz, J.; Gil-Docampo, M.; Arza-García, M.; Cañas-Guerrero, I. IR Thermography from UAVs to Monitor Thermal Anomalies in the Envelopes of Traditional Wine Cellars: Field Test. Remote Sens. 2019, 11, 1424. [Google Scholar] [CrossRef]
  10. Tsouros, D.C.; Bibi, S.; Sarigiannidis, P.G. A Review on UAV-Based Applications for Precision Agriculture. Information 2019, 10, 349. [Google Scholar] [CrossRef] [Green Version]
  11. Burke, C.; McWhirter, P.R.; Veitch-Michaelis, J.; McAree, O.; Pointon, H.A.G.; Wich, S.; Longmore, S. Requirements and Limitations of Thermal Drones for Effective Search and Rescue in Marine and Coastal Areas. Drones 2019, 3, 78. [Google Scholar] [CrossRef] [Green Version]
  12. Melis, M.T.; Da Pelo, S.; Erbì, I.; Loche, M.; Deiana, G.; Demurtas, V.; Meloni, M.A.; Dessì, F.; Funedda, A.; Scaioni, M.; et al. Thermal Remote Sensing from UAVs: A Review on Methods in Coastal Cliffs Prone to Landslides. Remote Sens. 2020, 12, 1971. [Google Scholar] [CrossRef]
  13. Ibarra-Castanedo, C.; Sfarra, S.; Klein, M.; Maldague, X. Solar loading thermography: Time-lapsed thermographic survey and advanced thermographic signal processing for the inspection of civil engineering and cultural heritage structures. Infrared Phys. Technol. 2017, 82, 56–74. [Google Scholar] [CrossRef]
  14. Ibarra-Castanedo, C.; Brault, L.; Genest, M.; Farley, V.; Maldague, X.P. Detection and characterization of water ingress in honeycomb structures by passive and active infrared thermography using a high resolution camera. In Proceedings of the 11th International Conference on Quantitative InfraRed Thermography, Naples, Italy, 11–14 June 2012. [Google Scholar] [CrossRef]
  15. Zhang, H.; Avdelidis, N.P.; Osman, A.; Ibarra-Castanedo, C.; Sfarra, S.; Fernandes, H.; Matikas, T.E.; Maldague, X.P. Enhanced infrared image processing for impacted carbon/glass fiber-reinforced composite evaluation. Sensors 2017, 18, 45. [Google Scholar] [CrossRef] [Green Version]
  16. Aghaei, M.; Leva, S.; Grimaccia, F. PV power plant inspection by image mosaicing techniques for IR real-time images. In Proceedings of the 2016 IEEE 43rd Photovoltaic Specialists Conference (PVSC), Portland, OR, USA, 5–10 June 2016; pp. 3100–3105. [Google Scholar]
  17. Oswald-Tranta, B.; Sorger, M. Scanning pulse phase thermography with line heating. Quant. InfraRed Thermogr. J. 2013, 9, 103–122. [Google Scholar] [CrossRef]
  18. Mavromatidis, L.E.; Dauvergne, J.-L.; Saleri, R.; Batsale, J.-C. First experiments for the diagnosis and thermophysical sampling using impulse IR thermography from Unmanned Aerial Vehicle (UAV). In Proceedings of the QIRT Conference, Bordeaux, France, 7–11 July 2014. [Google Scholar] [CrossRef]
  19. Ibarra-Castanedo, C.; Maldague, X. Pulsed phase thermography reviewed. Quant. InfraRed Thermogr. J. 2004, 1, 47–70. [Google Scholar] [CrossRef]
  20. Sachs, D.; Nasiri, S.; Goehl, D. Image Stabilization Technology Overview; InvenSense Whitepaper, 2006. Available online: https://www.digikey.gr/Web%20Export/Supplier%20Content/invensense-1428/pdf/invensense-image-stabilization-technology.pdf (accessed on 22 February 2021).
  21. Shen, H.; Pan, Q.; Cheng, Y.; Yu, Y. Fast video stabilization algorithm for UAV. In Proceedings of the IEEE International Conference on Intelligent Computing and Intelligent Systems, Shanghai, China, 20–22 November 2009; pp. 542–546. [Google Scholar]
  22. Wang, Y.; Hou, Z.; Leman, K.; Chang, R. Real-time video stabilization for Unmanned Aerials Vehicles. In Proceedings of the Conference on Machine Vision Applications, Nara, Japan, 13–15 June 2011; pp. 336–339. [Google Scholar]
  23. Hong, S.; Hong, T.; Yang, W. Multi-resolution unmanned aerial vehicle video stabilization. In Proceedings of the IEEE 2010 National Aerospace and Electronics Conference (NAECON), Dayton, OH, USA, 14–16 July 2010; pp. 126–131. [Google Scholar]
  24. Rahmaniar, W.; Wang, W.-J.; Chen, H.-C. Real-Time Detection and Recognition of Multiple Moving Objects for Aerial Surveillance. Electronics 2019, 8, 1373. [Google Scholar] [CrossRef] [Green Version]
  25. Walha, A.; Wali, A.; Alimi, A.M. Video Stabilization for Aerial Video Surveillance. AASRI Procedia 2013, 4, 72–77. [Google Scholar] [CrossRef]
  26. Zhou, M.; Ansari, V.K. A fast video stabilization system based on speeded-up robust features. In Advances in Visual Computing, Proceedings of the International Symposium on Visual Computing; Las Vegas, NV, USA, 26–28 September 2011; Bebis, G., Boyle, R., Parvin, B., Koracin, D., Wang, S., Kyungnam, K., Benes, B., Moreland, K., Borst, C., DiVerdi, S., et al., Eds.; Springer: Berlin/Heidelberg, Germany, 2011; pp. 428–435. [Google Scholar]
  27. Auberger, S.; Miro, C. Digital Video Stabilization Architecture for Low Cost Devices. In Proceedings of the 4th International Symposium on Image and Signal Processing and Analysis, Zagreb, Croatia, 15–17 September 2005; pp. 474–479. [Google Scholar]
  28. Marcenaro, L.; Vernazza, G.; Regazzoni, C.S. Image stabilization algorithms for video-surveillance applications. In Proceedings of the 2001 International Conference on Image Processing (Cat. No.01CH37205), Thessaloniki, Greece, 7–10 October 2001; Volume 1, pp. 349–352. [Google Scholar] [CrossRef]
  29. Morimoto, C.; Chellappa, R. Evaluation of image stabilization algorithms. In Proceedings of the 1998 IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP ’98 (Cat. No.98CH36181), Seattle, WA, USA, 15 May 1998; Volume 5, pp. 2789–2792. [Google Scholar] [CrossRef]
  30. Souza, M.; Pedrini, H. Digital video stabilization based on adaptive camera trajectory smoothing. J. Image Video Proc. 2018, 37. [Google Scholar] [CrossRef] [Green Version]
  31. Wang, Z.; Simoncelli, E.P.; Bovik, A.C. Multiscale structural similarity for image quality assessment. In Proceedings of the Thirty-Seventh Asilomar Conference on Signals, Systems & Computers, Pacific Grove, CA, USA, 9–12 November 2003; Volume 2, pp. 1398–1402. [Google Scholar] [CrossRef] [Green Version]
  32. Thakur, A.S. Video Stabilization Using Point Feature Matching in OpenCV. Learn OpenCV. Available online: https://www.learnopencv.com/author/abhi-una12/ (accessed on 12 July 2020).
  33. Shi-Tomasi Corner Detector. OpenCV. Available online: https://docs.opencv.org/3.4/d8/dd8/tutorial_good_features_to_track.html (accessed on 19 January 2021).
  34. Optical Flow. OpenCV. Available online: https://docs.opencv.org/3.4/d4/dee/tutorial_optical_flow.html (accessed on 19 January 2021).
  35. Klinker, F. Exponential moving average versus moving exponential average. Math. Semesterber. 2011, 58, 97–107. [Google Scholar] [CrossRef] [Green Version]
  36. Zhang, X. Gaussian Distribution. In Encyclopedia of Machine Learning; Sammut, C., Webb, G.I., Eds.; Springer: Boston, MA, USA, 2011. [Google Scholar] [CrossRef]
  37. Linear Regression. Scikit-Learn. Available online: https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html (accessed on 10 July 2020).
  38. Support Vector Regression. Scikit-Learn. Available online: https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVR.html (accessed on 13 July 2020).
  39. Butterworth Filter. SciPy.org. Available online: https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.butter.html (accessed on 18 July 2020).
  40. ZENMUSE XT. User Manual, v1.2; FFC Calibration. 2016, p. 14, DJI. Available online: https://dl.djicdn.com/downloads/zenmuse_xt/en/Zenmuse_XT_User_Manual_en_v1.2.pdf (accessed on 22 February 2021).
  41. What Calibration Terms are Applied in the Camera? There is the FFC and also the Gain Calibration. Are there others? Can I Do My Own Calibration? FLIR. Available online: https://www.flir.ca/support-center/oem/what-calibration-terms-are-applied-in-the-camera-there-is-the-ffc-and-also-the-gain-calibration.-are-there-others-can-i-do-my-own-calibration/ (accessed on 7 January 2021).
  42. Tukey, J.W. Exploratory Data Analysis; Addison-Wesley: Reading, MA, USA, 1977; ISBN 978-0-201-07616-5. [Google Scholar]
  43. Litvin, A.; Konrad, J.; Karl, W. Probabilistic Video Stabilization Using Kalman Filtering and Mosaicing. In Proceedings of the SPIE 5022, Image and Video Communications and Processing, Santa Clara, CA, USA, 7 May 2003; pp. 663–674. [Google Scholar]
  44. Cheung, V.; Frey, B.J.; Jojic, N. Video Epitomes. Int. J. Comput. Vis. 2008, 76, 141–152. [Google Scholar] [CrossRef]
  45. Matsushita, Y.; Ofek, E.; Ge, W.; Tang, X.; Shum, H.-Y. Full-frame video stabilization with motion inpainting. IEEE Trans. Patt. Anal. Mach. Intell. 2006, 28, 1150–1163. [Google Scholar] [CrossRef] [PubMed]
  46. Maldague, X.P.V. Theory and Practice of Infrared Technology for Nondestructive Testing; John Wiley-Interscience: New York, NY, USA, 2001; p. 684. [Google Scholar]
  47. Shepard, S.M.; Lhota, J.R.; Rubadeux, B.A.; Wang, D.; Ahmed, T. Reconstruction and enhancement of active thermographic image sequences. Opt. Eng. 2003, 42, 1337–1342. [Google Scholar] [CrossRef]
  48. Martin, R.E.; Gyekenyesi, A.L.; Shepard, S.M. Interpreting the results of pulsed thermography data. Mater. Eval. 2003, 61, 611–616. [Google Scholar]
  49. Rajic, N. Principal component thermography for flaw contrast enhancement and flaw depth characterisation in composite structures. Compos. Struct. 2002, 58, 521–528. [Google Scholar] [CrossRef]
Figure 1. Example of defect detection improvement through signal processing (starting at top left following clockwise motion): raw temperature data sequence; temperature profiles of defective (red) and non-defective (blue) areas; phase profiles of the corresponding areas; and reconstructed phasegrams at selected frequencies, showing the defects.
Figure 2. Image of the UAV along with the specimen for the first experimental setup.
Figure 3. Second experimental setup (left), underside defects (middle), non-defective side (right).
Figure 4. Process flow for video stabilization.
Figure 5. Example of motion estimation using movement of features between two consecutive frames (fi) and (fi+1) for optical (left) and thermal (right). Note that the optical and thermal frames shown are not to scale. Optical frames and thermal frames had resolutions of 1920 × 1080 pixels and 720 × 480 pixels respectively.
Figure 6. Example of a Blank Border (BB) highlighted within a rectangle. Frame shown is from an optical video with a resolution of 1920 × 1080 pixels.
Figure 7. Comparison of extracted motion between original and stabilized frames for optical videos (result shown is that of GF-5s based smoothing).
Figure 8. Comparison of extracted motion between original and stabilized frames for thermal videos (result shown is that of LR based smoothing).
Figure 9. Frames extracted from optical video (top) and corresponding thermal video (bottom) at increasing UAV heights of 1.5 m, 2 m, and 3 m (from left to right, respectively) during active thermography inspection experiment.
Figure 10. Results of x-translation at increasing UAV heights of 1.5, 2, and 3 m (from left to right) for SVR-LR, LR, and SVR-LR, respectively.
Figure 11. Results of y-translation at increasing UAV heights of 1.5, 2, and 3 m (from left to right) for SVR-LR, LR, and SVR-LR, respectively.
Table 1. Aircraft part with underside defects placed in five rows; the defects are partly filled with silicone to simulate different depths.

| Rows | Defect Type | Quantity | Size | Depth |
|---|---|---|---|---|
| 1 | Circular | 4 | 5 cm (diameter) | 2.2 cm |
| 2 | Circular | 4 | 5 cm (diameter) | 1 cm (filled) |
| 3 | Circular | 4 | 5 cm (diameter) | (filled) |
| 4 | Circular | 4 | 5 cm (diameter) | (filled) |
| 5 | Rectangular | 1 | 2 × 24 cm | 2 cm |
Table 2. Summary of optical video stabilization using different trajectory smoothing techniques for the first experimental set.

| Smoothing Algorithm | x-Trans. LB (pixels) | x-Trans. UB (pixels) | y-Trans. LB (pixels) | y-Trans. UB (pixels) | Rotation LB (radians) | Rotation UB (radians) | MS-SSIM Min | MS-SSIM Max | MS-SSIM Average | Average BB (%) |
|---|---|---|---|---|---|---|---|---|---|---|
| Original | −4.54 | 2.28 | −1.98 | 7.74 | 0.00 | 0.00 | 0.72 | 1.00 | 0.86 | 0.00 |
| EMA-1s | −3.18 | 1.50 | 0.25 | 10.97 | 0.00 | 0.00 | 0.75 | 0.99 | 0.88 | 0.08 |
| EMA-3s | −4.32 | 1.42 | −0.97 | 8.55 | 0.00 | 0.01 | 0.80 | 0.99 | 0.90 | 4.85 |
| EMA-5s | −24.55 | 1.36 | 0.00 | 42.68 | 0.00 | 0.00 | 0.84 | 0.99 | 0.92 | 12.67 |
| GF-1s | −3.08 | 1.81 | 1.31 | 7.86 | 0.00 | 0.00 | 0.74 | 0.96 | 0.88 | 0.00 |
| GF-3s | −1.06 | 1.14 | 1.33 | 5.00 | 0.00 | 0.00 | 0.81 | 0.95 | 0.88 | 0.00 |
| GF-5s | −1.87 | 0.85 | 1.22 | 4.29 | 0.00 | 0.00 | 0.83 | 0.96 | 0.90 | 0.20 |
| LBW | −5.94 | 2.65 | −2.42 | 10.43 | 0.00 | 0.00 | 0.71 | 1.00 | 0.87 | 0.00 |
| LR | −1.23 | 5.00 | 0.68 | 4.20 | 0.00 | 0.00 | 0.78 | 0.91 | 0.86 | 0.12 |
| SMA-1s | −4.71 | 2.05 | −1.20 | 9.94 | 0.00 | 0.00 | 0.73 | 0.99 | 0.87 | 0.00 |
| SMA-3s | −3.13 | 6.10 | 0.03 | 9.94 | 0.00 | 0.00 | 0.70 | 0.99 | 0.88 | 1.05 |
| SMA-5s | −2.28 | 1.40 | −0.04 | 7.64 | 0.00 | 0.00 | 0.75 | 0.99 | 0.88 | 3.56 |
| SVR-LR | −0.92 | 7.71 | 0.74 | 4.19 | 0.00 | 0.00 | 0.79 | 0.91 | 0.86 | 0.34 |
Table 3. Summary of thermal video stabilization using different trajectory smoothing techniques for the first experimental set.

| Smoothing Algorithm | x-Trans. LB (pixels) | x-Trans. UB (pixels) | y-Trans. LB (pixels) | y-Trans. UB (pixels) | Rotation LB (radians) | Rotation UB (radians) | MS-SSIM Min | MS-SSIM Max | MS-SSIM Average | Average BB (%) |
|---|---|---|---|---|---|---|---|---|---|---|
| Original | −1.74 | 1.55 | −1.90 | 4.59 | 0.00 | 0.00 | 0.86 | 1.00 | 0.97 | 0.00 |
| EMA-1s | −1.37 | 1.11 | −0.46 | 3.71 | 0.00 | 0.00 | 0.92 | 1.00 | 0.98 | 0.40 |
| EMA-3s | −1.14 | 0.88 | −0.48 | 3.99 | 0.00 | 0.00 | 0.89 | 1.00 | 0.98 | 8.64 |
| EMA-5s | −1.09 | 0.83 | −0.74 | 3.99 | 0.00 | 0.00 | 0.85 | 1.00 | 0.99 | 19.33 |
| GF-1s | −1.29 | 0.99 | −0.07 | 3.45 | 0.00 | 0.00 | 0.92 | 1.00 | 0.98 | 0.00 |
| GF-3s | −0.76 | 0.45 | 0.16 | 3.34 | 0.00 | 0.00 | 0.93 | 0.99 | 0.98 | 0.10 |
| GF-5s | −0.70 | 0.42 | 0.33 | 2.93 | 0.00 | 0.00 | 0.93 | 0.99 | 0.99 | 0.55 |
| LBW | −2.34 | 2.14 | −2.00 | 5.49 | 0.00 | 0.00 | 0.92 | 1.00 | 0.98 | 0.00 |
| LR | −0.70 | 0.39 | 1.37 | 2.45 | 0.00 | 0.00 | 0.92 | 0.99 | 0.98 | 0.57 |
| SMA-1s | −1.90 | 1.71 | −1.36 | 4.74 | 0.00 | 0.00 | 0.92 | 1.00 | 0.98 | 0.00 |
| SMA-3s | −1.38 | 1.10 | −0.36 | 3.57 | 0.00 | 0.00 | 0.92 | 1.00 | 0.98 | 2.05 |
| SMA-5s | −0.98 | 0.79 | −0.77 | 4.65 | 0.00 | 0.00 | 0.93 | 1.00 | 0.98 | 6.64 |
| SVR-LR | −0.73 | 0.35 | 1.26 | 2.43 | 0.00 | 0.00 | 0.88 | 0.99 | 0.98 | 1.05 |
Table 4. Summary of OSM for different smoothing algorithms for the first experimental set.

| Smoothing Algorithm | Optical Video OSM | Thermal Video OSM |
|---|---|---|
| Original | 0.00 | 0.00 |
| EMA-1s | 7.92 | 25.34 |
| EMA-3s | 11.80 | 16.15 |
| EMA-5s | −92.48 | 2.64 |
| GF-1s | 17.28 | 29.12 |
| GF-3s | 37.85 | 37.09 |
| GF-5s | 39.84 | 39.63 |
| LBW | −10.64 | 5.23 |
| LR | 30.52 | 42.26 |
| SMA-1s | −0.57 | 14.22 |
| SMA-3s | −8.26 | 24.76 |
| SMA-5s | 14.47 | 24.33 |
| SVR-LR | 25.15 | 31.60 |
Table 5. Summary of OSM for different smoothing techniques applied to the thermal video from the second experimental set, where the UAV was flown at three different heights.

| Smoothing Algorithm | OSM at 1.5 m Height | OSM at 2 m Height | OSM at 3 m Height |
|---|---|---|---|
| Original | 0.00 | 0.00 | 0.00 |
| EMA-1s | −18.90 | 14.53 | 17.46 |
| EMA-3s | 0.38 | 13.28 | 27.57 |
| EMA-5s | −2.15 | 0.86 | 29.51 |
| GF-1s | 14.15 | 28.98 | 25.75 |
| GF-3s | 18.92 | 25.37 | 38.20 |
| GF-5s | 8.33 | 23.60 | 29.75 |
| LBW | −19.82 | −4.14 | −4.66 |
| LR | 20.24 | 33.97 | 43.82 |
| SMA-1s | −10.61 | 3.03 | 2.07 |
| SMA-3s | −29.68 | 18.08 | 10.40 |
| SMA-5s | −6.69 | 10.32 | 20.54 |
| SVR-LR | 22.93 | 32.79 | 44.75 |