Article

Automated Aircraft Dent Inspection via a Modified Fourier Transform Profilometry Algorithm

Integrated Vehicle Health Management Centre, Cranfield University, Cranfield MK43 0AL, UK
* Author to whom correspondence should be addressed.
Sensors 2022, 22(2), 433; https://doi.org/10.3390/s22020433
Submission received: 18 November 2021 / Revised: 23 December 2021 / Accepted: 5 January 2022 / Published: 7 January 2022

Abstract
The search for dents is a consistent part of the aircraft inspection workload. The engineer is required to find, measure, and report each dent over the aircraft skin. This process is not only hazardous, but also highly subject to human factors and environmental conditions. This study discusses the feasibility of automated dent scanning via a single-shot triangular stereo Fourier transform algorithm, designed to be compatible with the use of an unmanned aerial vehicle. The original algorithm is modified by introducing two main contributions. First, the automatic estimation of the pass-band filter removes the user interaction in the phase filtering process. Secondly, the employment of a virtual reference plane reduces unwrapping errors, leading to improved accuracy independently of the chosen unwrapping algorithm. Static experiments reached a mean absolute error of 0.1 mm at a distance of 60 cm, while dynamic experiments showed 0.3 mm at a distance of 120 cm. On average, the mean absolute error decreased by 34%, proving the validity of the proposed single-shot 3D reconstruction algorithm and suggesting its applicability for future automated dent inspections.

1. Introduction

Maintenance, repair and overhaul (MRO) companies have an interest in the progressive automation of aircraft inspections. Inspections are not only hazardous, but also costly, time-consuming, labour-intensive, and subject to human error [1], yet they are critical for the airworthiness assessment. The potential advantages of automation are evident: improved safety, higher productivity, reduced aircraft downtime, and a standard output. Therefore, the general problem of automating inspections has been attracting the attention of many researchers lately, targeting damage on the aircraft skin because of its extent and its desirable characteristic of being directly accessible. Understandably, unmanned aerial vehicles (UAVs) are often considered to reach areas of the skin at a certain height, relieving the engineer of the associated safety risks [2]: in 2018, Airbus launched an indoor inspection drone [3] and recently approved UAVs for lightning checks [4]. Similarly, other companies, such as easyJet and Air France Industries-KLM, are pioneering the adoption of UAVs for inspections.
Among the inspection technologies that enable the use of UAVs [5], pattern recognition is at the basis of several commercial products recently introduced in MRO. It has the advantage of being easy to implement, not requiring particular hardware except for a high-definition camera, and it has been successfully applied to the detection of scratches and lightning damage. Thermal imaging has also been proposed for its capability to detect delamination and corrosion [6]. Despite the relevance and frequency of occurrence of dents on the aircraft skin, little has been done for this type of damage. Maintenance procedures require periodic and comprehensive inspections during which dents must be found, measured, and reported. However, the automation of dent inspections poses new challenges due to the nature of this damage.
A dent is smooth and without well-defined boundaries, with a depth easily lower than 1 mm. As the aircraft skin is generally single-coloured and lacking in texture, a dent may not be clearly distinguishable even by trained engineers, who usually rely on reflections, or on the light of a torch that seeps under a ruler placed on the surface, to highlight depth variations and spot dent locations during general visual inspections. Furthermore, measures of depth and width are required to classify a dent as allowable damage or not [7]. In traditional inspections, measures are collected by means of a depth gauge and a ruler. Although high-accuracy handheld 3D scanning tools are available, quick but less accurate traditional methods still account for the most part [8,9], and 3D scanners are reserved for special inspections. The reason is probably to be found in the relatively short operating distances of these devices and in the fact that the aircraft skin includes locations at height, normally reached by platforms and climbing equipment. As such, handheld scanning devices not only still require the engineer to reach those locations, but they also add the practical difficulty of carrying them, as well as increased setup time and scan-data analysis effort.
Among the few works addressing dent inspections, Jovancevic et al. [10] proposed the use of Air-Cobot equipped with commercial 3D scanners and a region-growing algorithm for the automatic segmentation of dented areas. However, it left open the problem of reaching the upper part of the aircraft. Doğru et al. [11] proposed a convolutional neural network to detect dents, suggesting the use of UAVs to achieve full automation. Still, the use of monocular images not only prevents measuring the damage but also results in low accuracy. The capability to detect a dent with this method is virtually zero for shallow dents over nontextured skin, which are the most common and nevertheless must be found and reported. Hence, the acquisition of 3D data cannot be avoided.
In the abovementioned context, this article discusses the feasibility of automated dent scanning via a single-shot structured-light 3D scanner. With respect to multiple-shot approaches, the choice of a single-shot algorithm makes the system resilient to fast movements and vibration during scanning. With further development, such a system could be integrated on a UAV, enabling quick and effective automated dent inspections. After a brief review of the available structured-light codification strategies, an algorithm based on Fourier transform profilometry (FTP) is tested for this purpose. The use of a virtual reference plane is proposed for:
  • The automatic band-pass estimation (Section 2.2), thus eliminating the user interaction in the filtering process.
  • The reduction of errors in the phase unwrapping process via a virtual reference image (Section 2.4).
Simulations (Section 3) and experiments (Section 4) on both static and moving surfaces presenting artificial dents prove the validity of the proposed algorithm and show that the use of FTP is promising for the automation of aircraft dent inspections. Discussion and Conclusions follow in Section 5 and Section 6.

Structured-Light Codification Strategies

Considering the purpose of this work, only the codification strategies applicable to a minimal structured-light system, composed of a camera and a projector, are taken into account. Therefore, strategies involving the use of multiple cameras, additional constraints in the system geometry, or restricted depth range are omitted.
Coding approaches can usually be classified as time-multiplexing or spatial neighbourhood. Time-multiplexing (or multiple-shot) is capable of obtaining higher resolution and accuracy. Techniques in this group can be roughly divided into Gray coding and temporal phase shifting, or a combination of the two [12]. The main advantage of phase-based coding over Gray coding is to provide sub-pixel accuracy, thanks to the use of continuous functions. The large number of projected patterns makes time-multiplexing unsuitable for moving objects [13]. Methods using a limited number of patterns (≥2) and high-speed devices can be found in the literature [14,15,16]; however, their employment on moving objects, outside laboratory conditions, and without high-end hardware is still an active field of research.
Spatial neighbourhood (or single-shot) enables the measurement of moving surfaces and is insensitive to vibrational noise [17]. This comes at the price of lower accuracy, as the coding information must be contained in one pattern only, and local smoothness must be assumed for the neighbourhood to be correctly decoded. Spatial methods can use colour or greyscale values. In colour-based approaches, a compromise is to be found between the number of colours and noise sensitivity, while the use of greyscale intensity values results in more robustness [13]. Similarly to time-multiplexing, spatial coding can also be divided into discrete and continuous. The De Bruijn colour sequence [18] is probably the most representative discrete method, from which several techniques have followed [19,20,21]. Discrete methods extract codewords from local areas, which in most cases require several pixels, limiting resolution. Continuous spatial coding, instead, uses a smooth pattern to obtain a dense surface reconstruction.
Among the latter, Fourier transform profilometry (FTP) aims at calculating the continuous phase value from a single projected pattern through a filter in the frequency domain [22,23]. The filtering is a critical step, and aliasing can prevent the extraction of the fundamental spectrum. It also poses a limit to the phase derivative along the fringe direction, but not directly to the maximum measurable height [22]. With nonstationary signals, the classic FTP may have difficulties in isolating the first harmonic. To solve this, the windowed FTP and the wavelet transform provide a space–frequency representation, at the price of a high computational cost [17,24,25,26] and the choice of additional system parameters, although approaches for the automatic selection of the window size have been proposed [27,28].
Finally, in both time-multiplexing approaches and FTP, the phase is used to calculate the object 3D coordinates. However, while time-multiplexing is generally able to obtain the absolute phase directly (via temporal unwrapping), FTP can only provide a wrapped phase that needs to be processed through a spatial unwrapping algorithm and offset to obtain the absolute phase, not without difficulties [29,30]. The absolute phase can then be used to find a correspondence between camera and projector, and thus proceed with stereo triangulation [31,32], or the depth can be seen as a function of the phase difference from a physical reference plane [22,30,33].

2. Proposed Method

In this study, a modified triangular stereo FTP method is implemented to discuss its compatibility and performance towards automated dent inspections. In addition, two contributions are presented.
First, the automatic estimation of the band-pass filter is proposed (Section 2.2). This feature is essential, as the first-spectrum band varies with the distance from the scanned surface and the system should be able to proceed without user interaction.
Secondly, a strategy for the reduction of phase unwrapping errors is shown (Section 2.4). All phase-based single-shot methods initially output a wrapped (or modulo 2π) phase. After spatial unwrapping, two methods can be found in the literature to relate phase with depth. In systems calibrated using the pinhole model [34], the object phase can be directly used to solve the stereo correspondence problem, thus proceeding with triangulation [31,32]. Alternatively, the depth can be seen as a function of the phase shift of the object, calculated with respect to a reference plane, as commonly performed in phase-height mapping methods [22,33]. While triangular stereo methods are generally more accurate, suitable for extended depth range measurements, and easier to calibrate out of the lab [30,35], here it is shown that the employment of the phase shift has the benefit of reducing unwrapping errors, and that its advantage can be brought to triangular stereo methods by means of a virtual reference plane.

2.1. Notation and System Geometry

Throughout the article, the pinhole camera model is used for both the camera and the projector [34]. Without loss of generality, the camera optical centre is placed in the world origin O_c = (0, 0, 0) and oriented so that the world z-axis corresponds to the camera optical axis. K_c is the camera intrinsic matrix, while its rotation matrix and translation vector, due to its position, are simply R_c = I (the 3 × 3 identity matrix) and t_c = 0, respectively. The projector optical centre is placed in O_p = (x_p, y_p, z_p), freely oriented. Its intrinsic matrix, rotation matrix, and translation vector are identified by K_p, R_p, and t_p, respectively. Lens distortion correction is managed using the radial and tangential model [34], but is not explicitly reported in the following formulae. Camera–projector calibration is assumed to be known, following one of the several methods available in the literature [31,36,37,38]. The described system geometry is represented in Figure 1.
I_c and I_p are the camera and projector image planes, respectively. In particular, I_c is the plane where the image acquired by the camera lies after distortion correction has been applied, while the distortion-free fringe image chosen as projector input lies on I_p. On each image, a pixel coordinate system is defined, with (u_c, v_c) ∈ I_c and (u_p, v_p) ∈ I_p being two generic points lying on the camera and projector images, respectively.
A sinusoidal fringe pattern is chosen with fringes perpendicular to the u_p-axis, having period T_p (in pixels) and frequency f_p = 1/T_p. A red-coloured stripe is drawn for one central period, following the intensity trend (Figure 2). After projection, the corresponding points can be identified in the camera image using a weighted average of red intensity values over the u-axis, producing sub-pixel accuracy. The use of a similar centerline was also proposed in [31].
The virtual reference plane R is taken as the plane z = l_c, thus lying directly in front of the camera, where l_c is chosen as the average depth of the red-stripe world points, readily calculated by triangulation.
The aim of FTP is to find the generic 3D point H belonging to the object, observed as A_c ∈ I_c and H_p ∈ I_p, with A being the intersection of the ray O_c H and R.
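To make this setup concrete, the sketch below (NumPy and OpenCV assumed) shows one possible way of locating the red stripe with a per-row weighted average and of estimating l_c by triangulation. The red-dominance heuristic, the threshold, the function names, and the epipolar pairing of stripe points are illustrative assumptions rather than the exact implementation of this work; lens distortion correction is omitted.

```python
import cv2
import numpy as np

def stripe_centroids(image_bgr, threshold=30):
    """Sub-pixel u-coordinate of the red stripe for each image row,
    computed as a weighted average of red-dominant intensities."""
    red = image_bgr[:, :, 2].astype(float) - image_bgr[:, :, 1].astype(float)
    red[red < threshold] = 0.0
    u_axis = np.arange(image_bgr.shape[1], dtype=float)
    u_c, v_c = [], []
    for v, w in enumerate(red):
        if w.sum() > 0:
            v_c.append(float(v))
            u_c.append(float((w * u_axis).sum() / w.sum()))
    return np.array(u_c), np.array(v_c)

def estimate_lc(camera_img, K_c, K_p, R_p, t_p, u_stripe_p):
    """Average depth l_c of the red-stripe world points, by triangulation.
    t_p is a length-3 translation vector; u_stripe_p is the known stripe
    column on the projector image."""
    u_c, v_c = stripe_centroids(camera_img)
    # Fundamental matrix camera -> projector (the camera frame is the world frame).
    tx = np.array([[0.0, -t_p[2], t_p[1]],
                   [t_p[2], 0.0, -t_p[0]],
                   [-t_p[1], t_p[0], 0.0]])
    F = np.linalg.inv(K_p).T @ tx @ R_p @ np.linalg.inv(K_c)
    # Projector correspondence: intersection of each epipolar line with the
    # known stripe column u = u_stripe_p.
    lines = (F @ np.vstack([u_c, v_c, np.ones_like(u_c)])).T     # rows (a, b, c)
    v_p = -(lines[:, 0] * u_stripe_p + lines[:, 2]) / lines[:, 1]
    P_c = K_c @ np.hstack([np.eye(3), np.zeros((3, 1))])          # camera at origin
    P_p = K_p @ np.hstack([R_p, t_p.reshape(3, 1)])               # projector pose
    X_h = cv2.triangulatePoints(P_c, P_p,
                                np.vstack([u_c, v_c]),
                                np.vstack([np.full_like(u_c, float(u_stripe_p)), v_p]))
    X = (X_h[:3] / X_h[3]).T
    return float(X[:, 2].mean())                                  # average z = l_c
```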

2.2. Automatic Band-Pass Filter

In FTP, a band-pass filter must be applied to isolate the fundamental spectrum and filter out the DC component and the upper harmonics [17,22,29]. The band of interest can be identified by the interval [f_c − r, f_c + r], where f_c is the fundamental frequency, as seen from the camera, and r is an appropriate radius. Experimentally, it can be shown that the fundamental spectrum varies significantly when scanning objects at different distances, requiring human interaction for the manual selection of the band-pass at each scan. This becomes impractical when dealing with a large number of scans at varying distances. The automatic band-pass filter solves this problem.
The estimation of f_c assumes that the area of major interest is at depth z = l_c (as calculated in Section 2.1). The intersection of the camera optical axis and R is simply C_w = (0, 0, l_c)^T, which is projected onto the projector image plane to find C_p = (u_p, v_p) ∈ I_p:
$$\begin{pmatrix} u_p \\ v_p \\ 1 \end{pmatrix} = \mathrm{Normalise}\!\left( K_p \cdot \left[ R_p \mid t_p \right] \begin{pmatrix} 0 \\ 0 \\ l_c \\ 1 \end{pmatrix} \right) \quad (1)$$
where Normalise(·) is a function that divides a homogeneous coordinate by its third element. On I_p, two points can be identified as
$$C_p^{left} = \begin{pmatrix} u_p - \frac{T_p}{2} \\ v_p \\ 1 \end{pmatrix} \qquad C_p^{right} = \begin{pmatrix} u_p + \frac{T_p}{2} \\ v_p \\ 1 \end{pmatrix} \quad (2)$$
covering a full period T_p. Then, C_p^{left} is deprojected back to R and, from there, projected onto the camera:
$$C_c^{left} = \mathrm{Normalise}\!\left( K_c \cdot M \cdot \mathrm{Normalise}\!\left( (K_p \cdot R_p)^{-1} \cdot \begin{pmatrix} u_p - \frac{T_p}{2} \\ v_p \\ 1 \end{pmatrix} \right) \right) \quad (3)$$
where:
$$M = \begin{pmatrix} l_c - z_p & 0 & x_p \\ 0 & l_c - z_p & y_p \\ 0 & 0 & l_c \end{pmatrix} \quad (4)$$
The same applies to C_p^{right}, giving C_c^{right}. Lens distortion is corrected accordingly in all of the steps. C_c^{left} and C_c^{right} lie on I_c at a distance corresponding to the projector period and its orientation. From Pythagorean geometry, T_c is calculated as the projection of the segment C_c^{left} C_c^{right} onto the u_c-axis. Finally, f_c = 1/T_c and, throughout this paper, r = f_c/2.
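A minimal sketch of this estimation, following Equations (1)–(4) under the geometry of Section 2.1, is given below (NumPy assumed). The function name is illustrative, and the lens distortion correction that the full method applies at every step is omitted here.

```python
import numpy as np

def normalise(x):
    """Divide a homogeneous coordinate by its last element."""
    return x / x[-1]

def estimate_band_pass(l_c, T_p, K_c, K_p, R_p, t_p):
    """Fundamental fringe frequency f_c seen by the camera and the filter
    radius r = f_c / 2, following Equations (1)-(4). t_p is a length-3 vector."""
    # Project C_w = (0, 0, l_c) onto the projector image plane, Equation (1).
    C_w = np.array([0.0, 0.0, l_c, 1.0])
    u_p, v_p, _ = normalise(K_p @ np.hstack([R_p, t_p.reshape(3, 1)]) @ C_w)

    # Projector optical centre O_p = (x_p, y_p, z_p) and the matrix M of Equation (4).
    x_p, y_p, z_p = -R_p.T @ t_p
    M = np.array([[l_c - z_p, 0.0, x_p],
                  [0.0, l_c - z_p, y_p],
                  [0.0, 0.0, l_c]])

    def back_to_camera(u):
        # Deproject a projector pixel to the plane z = l_c, then project it
        # onto the camera image plane, Equation (3).
        d = normalise(np.linalg.inv(K_p @ R_p) @ np.array([u, v_p, 1.0]))
        return normalise(K_c @ M @ d)[0]

    # Points half a period to the left and right of C_p, Equation (2).
    u_left = back_to_camera(u_p - T_p / 2.0)
    u_right = back_to_camera(u_p + T_p / 2.0)
    T_c = abs(u_right - u_left)        # period projected onto the u_c-axis
    f_c = 1.0 / T_c
    return f_c, f_c / 2.0
```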
The band-pass estimation does not require any user input and allows for some automation [29,35] and extended adaptability of FTP to different system geometries and varying distances from the object, particularly when scanning surfaces roughly placed in front of the camera, as in the application discussed here. The automatic band-pass can successfully isolate the main spectrum (Figure 3), and it was used in all the following simulations (Section 3) and experiments (Section 4).

2.3. Virtual Reference Image

On the domain of the points of R observed from the camera and illuminated by the projector, the relation between the projector and camera image planes is bijective. This makes it possible to virtually build a reference image as seen from the camera.
The generic point A_p = (u_p^A, v_p^A) ∈ I_p is projected to the world point A ∈ R, which is, in turn, observed from the camera as A_c = (u_c^A, v_c^A) ∈ I_c. A_p is calculated from A_c as
$$\begin{pmatrix} u_p^A \\ v_p^A \\ 1 \end{pmatrix} = \mathrm{Normalise}\!\left( K_p \cdot \left[\, l_c\, R_p K_c^{-1} \mid t_p \,\right] \begin{pmatrix} u_c^A \\ v_c^A \\ 1 \\ 1 \end{pmatrix} \right) \quad (5)$$
In other words, the effect of Equation (5) is to find the line passing through O_c and A_c that intersects R at a certain world point A, and then to project A onto I_p. As usual, camera and projector coordinates are also corrected for lens distortion.
With this simple strategy, each point of the camera image is mapped to its corresponding point, and intensity value, on the projected fringe image on I_p. As the mapping generally produces noninteger coordinates, the final camera image is derived with cubic interpolation. The red stripe is not needed in this image.
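One possible implementation of this mapping is sketched below (NumPy and SciPy assumed): every camera pixel is pushed through Equation (5) and the projector fringe image is sampled with cubic spline interpolation. The clamping of out-of-view pixels and the omission of distortion correction are simplifications of this sketch.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def virtual_reference_image(shape_c, fringe_p, l_c, K_c, K_p, R_p, t_p):
    """Render the greyscale fringe pattern fringe_p as it would be seen by the
    camera if a plane lay at z = l_c, by mapping every camera pixel to the
    projector image through Equation (5)."""
    h_c, w_c = shape_c
    u_c, v_c = np.meshgrid(np.arange(w_c, dtype=float),
                           np.arange(h_c, dtype=float))
    ones = np.ones(h_c * w_c)
    pix_c = np.stack([u_c.ravel(), v_c.ravel(), ones, ones])      # (u_c, v_c, 1, 1)
    # Equation (5): [l_c * R_p * K_c^-1 | t_p] applied to homogeneous camera pixels.
    H = np.hstack([l_c * R_p @ np.linalg.inv(K_c), t_p.reshape(3, 1)])
    proj = K_p @ H @ pix_c
    u_p = (proj[0] / proj[2]).reshape(h_c, w_c)
    v_p = (proj[1] / proj[2]).reshape(h_c, w_c)
    # Cubic interpolation of the fringe image at non-integer projector pixels;
    # coordinates outside the projector image are clamped to the border.
    return map_coordinates(fringe_p, [v_p, u_p], order=3, mode='nearest')
```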
The phase variation due to perspective is clearly visible in Figure 4. As for its frequency content, the virtual reference image is the same as the one that would be acquired by placing a real plane at the same position. However, the effects of light diffusion, object albedo, and environment illumination are not introduced.
The virtual reference image is then used to calculate the phase shift, similarly to phase-height methods [22,33] but without a physical plane. The usefulness of this is explained below.

2.4. Reduction of Noise Effect over Phase Jumps

With single-shot FTP, the choice of unwrapping algorithms is limited to the spatial ones. To calculate the absolute phase Φ, estimated as Φ̃, spatial phase unwrapping cannot rely on information other than the phase map Φ_w, which, due to the presence of noise, may present genuine or fake 2π-jumps [39]. The two are indistinguishable, which makes spatial unwrapping such a challenging task [40].
In both stereo triangulation and phase-height mapping methods, Φ̃ is related to the world z coordinate. While stereo triangulation directly relates Φ̃ to a point on I_p [31,32], phase-height mapping estimates z as a function of the phase shift ΔΦ̃ from a reference phase Φ_0 calculated over a plane [22,33].
Although the proposed method uses a stereo triangulation approach, it also exploits the calculation of ΔΦ̃ with respect to a virtual reference image (Section 2.3) to reduce the number of 2π-jumps, both genuine and fake ones. Consequently, unwrapping errors are reduced, as there is less probability for the noise to interfere with 2π-jumps.
The virtual reference and the object intensity values observed from the camera can be represented, respectively, as
$$g_0(u_c, v_c) = A_0 \left( 1 + \cos \Phi_0 \right) \quad (6)$$
$$= A_0 \left\{ 1 + \cos\left[ 2\pi f_c u_c + \varphi_0(u_c, v_c) \right] \right\} \quad (7)$$
and
$$g(u_c, v_c) = A \left( 1 + \cos \Phi \right) \quad (8)$$
$$= A \left\{ 1 + \cos\left[ 2\pi f_c u_c + \varphi(u_c, v_c) \right] \right\} \quad (9)$$
where f_c is the fundamental frequency (Section 2.2), A = A(u_c, v_c) and A_0 = A_0(u_c, v_c) are intensity modulations, φ_0(u_c, v_c) represents the phase modulation over the virtual reference image (caused by perspective only), and φ(u_c, v_c) is the phase modulation over the object, caused by perspective, object depth, and noise.
After the fast Fourier transform (FFT), filtering, and inverse FFT (without aliasing) of g, the wrapped object phase can be obtained as
$$\Phi_w(u_c, v_c) = \arg\left[ \frac{A}{2} e^{i \Phi(u_c, v_c)} \right] = W\left[ 2\pi f_c u_c + \varphi(u_c, v_c) \right] \quad (10)$$
where W[·] stands for the wrapping operation (or modulo 2π). Then, the classic direct method estimates the unwrapped object phase as
$$\tilde{\Phi}_r(u_c, v_c) = U\left[ \Phi_w(u_c, v_c) \right] \quad (11)$$
where U[·] represents a generic spatial unwrapping algorithm. Note that Φ̃_r is still a relative phase, with an uncertain fringe order.
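As an illustration, a minimal single-row version of the FTP filtering of Equation (10) can be written as follows (NumPy assumed); windowing and other countermeasures against spectral leakage are omitted in this sketch.

```python
import numpy as np

def wrapped_phase_row(row, f_c, r):
    """Wrapped phase of one fringe-image row via FTP: FFT, band-pass around
    the fundamental frequency f_c (keeping only the positive lobe), inverse
    FFT, and argument, as in Equation (10)."""
    freqs = np.fft.fftfreq(row.size)               # frequencies in cycles/pixel
    spectrum = np.fft.fft(row)
    band = (freqs > f_c - r) & (freqs < f_c + r)   # automatic band-pass of Section 2.2
    filtered = np.fft.ifft(np.where(band, spectrum, 0.0))
    return np.angle(filtered)                      # W[2*pi*f_c*u_c + phi(u_c, v_c)]
```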
Furthermore, as in phase-height mapping methods [22,33], the wrapped phase shift can be calculated from a new signal obtained as the filtered object signal multiplied by the complex conjugate of the filtered reference [22]:
$$\Delta\Phi_w(u_c, v_c) = \arg\left[ \frac{A}{2} e^{i \Phi(u_c, v_c)} \cdot \frac{A_0}{2} e^{-i \Phi_0(u_c, v_c)} \right] = W\left[ \varphi(u_c, v_c) - \varphi_0(u_c, v_c) \right] \quad (12)$$
and the unwrapped object phase can also be obtained as
$$\tilde{\Phi}_r(u_c, v_c) = \Phi_0(u_c, v_c) + U\left[ \Delta\Phi_w(u_c, v_c) \right] = \Phi_0(u_c, v_c) + \Delta\tilde{\Phi}_r(u_c, v_c) \quad (13)$$
where Φ_0(u_c, v_c) is the absolute reference phase and ΔΦ̃_r is the unwrapped phase shift.
Equations (11) and (13) should lead to the same result. However, the interaction of noise over 2π-jumps will be greater in Equation (11) than in Equation (13). In fact, under the general assumption of FTP, φ and φ_0 must vary very slowly compared to the fundamental frequency f_c [22]; as a direct consequence, ΔΦ_w presents fewer 2π-jumps than Φ_w. An example is shown in Figure 5.
Therefore, Equation (13) allows the unwrapping to deal with the signal ΔΦ_w, which contains a slower trend superimposed, modulo 2π, on the same noise distribution found in Φ_w (noise that originates from the object image alone, as the virtual reference is noise-free). This leads to a more faithful unwrapped phase.
To show the practical effect of this, white noise (σ = 0.6) is added to Φ in the same example of Figure 5. The wrapped object phase and phase shift are shown in Figure 6: it is clear that the task of unwrapping becomes more challenging due to noise interaction with jumps, and that the probability of interference is lower in the second case, due to fewer 2π-jumps.
The unwrapped phase using the algorithm by Itoh [41] is shown in Figure 7. Even in this very simple case, noise causes a genuine jump right after u_c = 20 to be missed, thus propagating the error to the right. The unwrapping of ΔΦ_w in Equation (13), instead, is more robust and outputs the correct result.
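The following self-contained simulation, in the spirit of Figures 5–7, contrasts the two routes over repeated noise draws; np.unwrap plays the role of Itoh's one-dimensional algorithm, and the phase profiles, noise level, and trial count are illustrative choices. With such settings, the direct unwrapping of Equation (11) typically fails far more often than the phase-shift unwrapping of Equation (13), although the exact counts depend on the noise realisations.

```python
import numpy as np

rng = np.random.default_rng(0)
u = np.arange(512)
f_c = 1 / 8                                        # illustrative fundamental frequency
phi_0 = 0.004 * u                                  # slow reference modulation (perspective)
phi = phi_0 + 2.0 * np.exp(-((u - 256) / 60.0) ** 2)   # object adds a smooth, dent-like bump

Phi_0 = 2 * np.pi * f_c * u + phi_0                # absolute reference phase (noise-free)
Phi_true = 2 * np.pi * f_c * u + phi               # noise-free absolute object phase

wrap = lambda x: np.angle(np.exp(1j * x))          # modulo-2*pi operator W[.]
failed = lambda est, ref: np.any(np.abs(est - ref) > np.pi)   # any residual 2*pi offset

fail_direct = fail_shift = 0
for _ in range(200):
    Phi = Phi_true + rng.normal(0.0, 0.6, u.size)         # noisy object phase
    direct = np.unwrap(wrap(Phi))                         # Equation (11)
    shifted = Phi_0 + np.unwrap(wrap(Phi - Phi_0))        # Equation (13)
    fail_direct += failed(direct, Phi)
    fail_shift += failed(shifted, Phi)

print(f"runs with unwrapping errors out of 200: "
      f"direct {fail_direct}, phase shift {fail_shift}")
```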
Basic single-shot structured-light applications cannot rely on additional knowledge to eliminate noise, and a spatial unwrapping algorithm cannot count on information other than the wrapped phase itself. Therefore, the reduction of 2π-jumps increases performance, regardless of the unwrapping algorithm used. This was confirmed by the following simulations (Section 3) and experiments (Section 4).
Furthermore, simulations in the absence of noise revealed that the use of Equation (13) in place of Equation (11) increases the numerical precision. This can be explained by the fact that, in computers, complex numbers are generally stored in algebraic form and the argument is extracted by means of the arctan2 function. Numerically, its asymptote regions are more sensitive to machine precision error, and Equation (13) leads to reduced error, as the slow-growing signal crosses those regions fewer times. Depending on sampling and period, the error Φ̃ − Φ decreased by up to 10^−6.

2.5. Stereo Correspondence

Before proceeding with triangulation, the relative unwrapped phase Φ̃_r must be shifted accordingly to obtain the absolute phase Φ̃, following
$$\tilde{\Phi} = \tilde{\Phi}_r + 2k\pi \quad (14)$$
where k ∈ ℤ is the difference from the correct fringe order [40].
Additional information is needed to find Φ̃, for example, the absolute phase value at the red stripe [31]:
$$\tilde{\Phi}(u_c, v_c) = \tilde{\Phi}_r(u_c, v_c) - \frac{\sum_{n=0}^{N} \tilde{\Phi}_n(u_c, v_c)}{N} \quad (15)$$
where N is the number of pixels of the red stripe and Φ̃_n(u_c, v_c) are the phase values at the stripe locations. Then, each Φ̃(u_c, v_c) value may be directly associated with its u_p coordinate, as
$$u_p = W_p \cdot \frac{\tilde{\Phi}(u_c, v_c)}{2\pi} \quad (16)$$
where W_p is the projector resolution along the u_p-axis.
Equivalently, Equation (14) can be rewritten as
$$2\pi f_p u_p = 2\pi f_p u_p^A + \Delta\tilde{\Phi}_r(u_c^A, v_c^A) + 2k\pi \quad (17)$$
where u_p^A is obtained through Equation (5). Applying the above relation to each of the red-stripe u-coordinates found on I_c allows solving for k. The average k is selected and rounded to the nearest integer. Numerically, this latter approach delivers increased accuracy, as k is correctly forced to be an integer.
Once k is known, Equation (17) can be used to retrieve all the u_p points with respect to each camera point. Finally, v_p is calculated via epipolar geometry and, having all the corresponding couples (u_c, v_c) ∈ I_c and (u_p, v_p) ∈ I_p, one may proceed with stereo triangulation.
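A compact sketch of this fringe-order recovery is shown below (NumPy assumed); the input names are illustrative, the per-pixel maps are sampled at the stripe by simple rounding, and the subsequent epipolar computation of v_p and the triangulation itself are omitted.

```python
import numpy as np

def absolute_up(delta_phi_r, u_pA_map, f_p, stripe_uc, stripe_vc, u_stripe_p):
    """Recover the projector u-coordinate for every camera pixel via Equation (17).

    delta_phi_r : unwrapped phase shift, one value per camera pixel
    u_pA_map    : u_p^A from Equation (5), one value per camera pixel
    f_p         : fringe frequency on the projector image (1 / T_p)
    stripe_uc/vc: sub-pixel camera coordinates of the red stripe
    u_stripe_p  : known stripe column on the projector image
    """
    # Sample the per-pixel maps at the stripe locations (nearest-neighbour rounding).
    rows = np.round(stripe_vc).astype(int)
    cols = np.round(stripe_uc).astype(int)
    dphi = delta_phi_r[rows, cols]
    u_pA = u_pA_map[rows, cols]
    # Solve Equation (17) for k at each stripe pixel, then force an integer.
    k = (2 * np.pi * f_p * (u_stripe_p - u_pA) - dphi) / (2 * np.pi)
    k = int(np.rint(k.mean()))
    # With k fixed, Equation (17) gives u_p for every camera pixel.
    return u_pA_map + (delta_phi_r + 2 * np.pi * k) / (2 * np.pi * f_p)
```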

3. Computer Simulations

In the aircraft structural repair manual, a dent is defined as a damaged area pushed in from its normal contour that presents smooth edges [7]. The length of a dent is the longest distance from one end to the other, while the width is the maximum distance measured at 90° from the direction of the length. The depth is measured at their intersection. A dent is usually classified by its width, depth, and width/depth ratio.
In the absence of a formal shape definition, the following was chosen as representative dent function:
$$z = \begin{cases} e^{-\frac{1}{1 - r^2}} & \text{if } |r| < 1 \\ 0 & \text{elsewhere} \end{cases} \quad (18)$$
where r = √(x² + y²). As with real dents, the function presents a smooth trend and vanishing boundaries (Figure 8). It can be rescaled and rotated to resemble different dents.
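For reference, a possible NumPy implementation of Equation (18), rescaled to a prescribed width and maximum depth, is given below; the rescaling and the sign convention (a dent as a depression) are assumptions made for illustration.

```python
import numpy as np

def dent_surface(x, y, width=30.0, depth=3.0):
    """Dent of Equation (18), rescaled to a given width and maximum depth (mm).
    x and y are coordinates (mm) from the dent centre; returns the (negative)
    surface deviation, i.e. a depression."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    r2 = (2.0 * x / width) ** 2 + (2.0 * y / width) ** 2
    z = np.zeros_like(r2)
    inside = r2 < 1.0
    # exp(-1/(1 - r^2)) peaks at exp(-1) = 0.368 for r = 0 (Figure 8); dividing
    # by exp(-1) makes the rescaled depth at the centre equal to `depth`.
    z[inside] = depth * np.exp(-1.0 / (1.0 - r2[inside])) / np.exp(-1.0)
    return -z
```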
A simulated dent with a maximum depth of 3 mm and a width of 30 mm was scanned from a distance of 1 m with a resolution of 1280 × 720 pixels and T_p = 8. White noise (σ = 0.6) was added, similarly to Section 2.4. The object phase maps and the middle-row signals are shown in Figure 9 and Figure 10, respectively.
Due to noise, Itoh’s unwrapping algorithm missed 2π-jumps on more occasions when using Equation (11) than when using Equation (13). In Figure 11, the detail of a missed genuine 2π-jump is shown.
The mean absolute error (MAE) was chosen as a metric, calculated as the distance between the expected mesh and the point cloud [42]. After 3D reconstruction, the MAE from the original simulated dent was 6.820 mm (standard deviation σ = 7.174 mm) with the direct method and 3.827 mm (σ = 7.090 mm) with the proposed method. The reconstructed 3D points are shown in Figure 12.
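As a side note, a simplified stand-in for this metric can be computed as follows (SciPy assumed), using the distance from each reconstructed point to the nearest vertex of a densely sampled ground-truth mesh; this point-to-vertex approximation is not the exact mesh-distance computation of [42].

```python
import numpy as np
from scipy.spatial import cKDTree

def mean_absolute_error(points, mesh_vertices):
    """Approximate MAE (and its standard deviation) of a reconstructed point
    cloud, as the distance from each point to the nearest vertex of a densely
    sampled ground-truth mesh."""
    d, _ = cKDTree(mesh_vertices).query(points)
    return d.mean(), d.std()
```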
The simulation was repeated using the noise-robust unwrapping algorithm by Estrada et al. [43]. With τ = 0.8, the MAE from the original 3D dent was 7.597 mm (σ = 4.822 mm) with the direct method and 1.591 mm (σ = 1.339 mm) with the proposed method.
In this case, the unwrapping was also more effective using the virtual reference. As shown in Figure 13, in the first case Estrada’s algorithm is unable to correctly unwrap the phase, leaving a visible artefact. This does not happen with the proposed method, due to the reduced number of 2π-jumps to deal with.
Decreasing to τ = 0.7 for higher noise suppression (or regularisation), the reconstruction was free of artefacts with both methods, at the price of a lower dynamic range [43]. The MAEs were 1.241 mm (σ = 1.047 mm) and 1.221 mm (σ = 1.032 mm), respectively. Although less evident, the improvement brought by the virtual reference still applies.
While using Estrada’s noise-robust unwrapping algorithm results in a better reconstruction than Itoh’s algorithm, the proposed virtual reference method visibly improves the final result independently of the choice of unwrapping algorithm.
The average running time for the Python implementation was 0.36 s for the direct method and 1.18 s for the proposed method, the latter being higher due to the reference-image generation and processing.

4. Experiments

4.1. Static Scenario

An inexpensive and lightweight experimental setup was composed of an ELP USB camera (resolution of 1920 × 1080 pixels at 30 FPS) and an AnyBeam class 1 laser projector (resolution of 1280 × 720 pixels). To correct lens distortion, three coefficients for the radial distortion and two for the tangential one were used. The system was calibrated using a method similar to [36], and the resulting baseline was b = 235.040 mm. Although not indicative of the accuracy over arbitrary objects [38], Table 1 shows the RMS reprojection errors.
Following the dent function defined above, four different dented surface samples were modelled and 3D printed as 15 cm × 15 cm tiles with a white matt finish. The samples were scanned from a distance of circa 60 cm using T_p = 8. Images were cropped to 760 × 640 pixels to select only the tile and remove the background. The characteristics of the samples are reported in Table 2. The captured image and the calculated virtual reference plane for sample A are shown in Figure 14.
Figure 15 reports the depth and its error along the middle row of sample A. The causes of the shown noise distribution include reflections, laser speckle, sampling, spectral leakage, border artefacts in the Fourier transform, and systematic error of the 3D printing process.
The MAE and its standard deviation are reported for each sample, comparing the direct mapping and the virtual reference methods using 2D Itoh’s (Table 3) and Estrada’s (Table 4) unwrapping algorithms.
Independently of the unwrapping algorithm, the proposed method can generally deliver more accurate 3D reconstructions. The difference is visible, for example, in Figure 16, showing the sample C reconstruction using Estrada’s unwrapping. Border artefacts were responsible for most of the outliers, accounting for approximately 10% of the MAE. In the future, windowed FTP or the wavelet transform could be considered to reduce these errors.

4.2. Dynamic Scenario

The class 1 laser projector employed above has the advantage of being focus-free, that is, the projected image is always in focus at any distance from the object. This technology is made possible by the use of microelectromechanical systems (MEMS) that drive laser beams with a certain refresh rate, independent of the camera exposure time.
This also causes a flickering effect in the camera image, which in the experiments above was compensated for by setting high exposure times. Consequently, such a system cannot be used to scan moving objects as is. To maintain the focus-free benefit, the best solution would be to employ synchronised hardware, so that the camera exposure time corresponds to one projector refresh cycle.
Unfortunately, such technology was not available during this work, thus the projector was replaced with a common LCD one of the same resolution, and the system was recalibrated to test the algorithm over moving objects. Table 5 shows the RMS reprojection errors in this second configuration (b = 266.784 mm).
Samples were scanned while moving at a speed of about 0.1 m/s along the x-axis, at a distance of circa 120 cm. As before, compared results between the two methods are reported in Table 6. This time, only Estrada’s unwrapping algorithm was used and, for reference, all the values from dynamic scenes were compared with the corresponding static ones of the current setup, to highlight the effect of movement. Figure 17 shows the results for sample C.
As expected, dynamic MAE does not consistently increase when compared with the static one, as the single-shot acquisition is not significantly affected by movement and vibration, provided that motion blur is suppressed by a short exposure time with respect to the speed of the object. MAE is much more affected by other factors, such as object albedo and distance, which are independent of movement.
Overall, compared with the first configuration, the reduced performance is due to the change of projector and the increased distance. In addition, for this setup, the use of the phase shift calculated with respect to the virtual reference performed better than the direct mapping in all cases. On average, the MAE difference was 0.138 mm, even larger than in the first configuration.

5. Discussion

To date, most of the UAV-based systems developed for aircraft inspections make use of a monocular camera as the primary acquisition device and are thus incapable of detecting shallow dents or collecting measurements [11]. As the acquisition of measurements is essential for damage evaluation [7], 3D scanning must be considered for proper automation. However, the employment of 3D scanning technology on UAVs is not straightforward: high-accuracy 3D scanners are generally based on multiple-shot algorithms, as the projection of many different patterns allows for detailed codification, and are incompatible with the fast movements and vibration that would be experienced while scanning from a flying drone.
This work showed that the FTP should be considered to automate aircraft dent inspections. The single-shot nature of the method, together with its high phase sensitivity to depth variations, constitutes an appropriate solution for the integration with UAVs, ultimately providing fast and effective inspections. Table 7 summarises existing dent inspection approaches, comparing their pros and cons against the proposed one.
FTP is naturally resilient to movement and vibration, although the codification information must be condensed into a single pattern, implying local smoothness. While the local smoothness assumption holds for most of the aircraft skin, the major challenge for effective application in the field is the increase of the signal-to-noise ratio, especially at longer distances. This translates into overcoming surface albedo and employing a projection technology capable of operating in common indoor or even outdoor light conditions, which are generally not dark enough.
The use of light sources outside the visible spectrum or of higher-class lasers might be considered. However, while the use of a laser projector greatly improves the contrast of the projected pattern, it normally introduces the speckle phenomenon, which appears as a granular pattern over the camera image, affects the filtered phase and, consequently, produces noise in the final measurements. Speckle can be corrected with different types of despecklers [44].

6. Conclusions

The aviation industry is increasingly making use of UAVs for aircraft inspections. To date, a UAV-based solution for the detection and measurement of dents has yet to be found, as high-accuracy 3D scanners usually work with multiple-shot algorithms and are thus incompatible with fast movements and vibration.
Triangular stereo Fourier transform profilometry (FTP) was proposed here to evaluate its potential towards automated aircraft dent inspections. Additionally, two modifications to the FTP algorithm were introduced. First, the automatic band-pass estimation was presented via a virtual reference plane, thus eliminating the user interaction in the filtering process. Secondly, after showing that the interaction of noise over 2π-jumps is lower using the phase shift, a virtual reference image was employed to increase accuracy.
Overall, the proposed method was able to faithfully reconstruct the 3D point clouds of demonstrative dents at distances of 60 cm and 120 cm, delivering sub-millimetre accuracy despite the use of unsophisticated hardware. Simulations and experiments with 3D-printed dent samples showed that the employment of the virtual reference delivers better results, independent of the choice of unwrapping algorithm.
With additional research, an FTP-based system could be integrated with a UAV, enabling quick and effective dent inspection. Solutions to deal with light conditions in common operational environments are to be found. It must also be noted that a successful 3D data acquisition is just the first step for comprehensive automation, as the difficulties in detecting dents in the physical world are somehow translated into the digital world. As such, an end-to-end automatic system for dent detection and measurement remains an open challenge in MRO.

Author Contributions

Conceptualization, P.L.; Formal analysis, P.L.; Investigation, P.L.; Software, P.L.; Supervision, I.-S.F. and N.P.A.; Validation, P.L.; Visualization, P.L.; Writing—original draft, P.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Cranfield IVHM Centre.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Latorella, K.; Prabhu, P. A review of human error in aviation maintenance and inspection. Int. J. Ind. Ergon. 2000, 26, 133–161.
  2. Papa, U.; Ponte, S. Preliminary design of an unmanned aircraft system for aircraft general visual inspection. Electronics 2018, 7, 435.
  3. Airbus Launches Advanced Indoor Inspection Drone. Available online: https://www.airbus.com/en/newsroom/press-releases/2018-04-airbus-launches-advanced-indoor-inspection-drone-to-reduce-aircraft (accessed on 19 December 2021).
  4. Drones Finally Approved For Lightning Checks. Available online: https://aviationweek.com/mro/drones-finally-approved-lightning-checks-dent-measurement-next (accessed on 19 December 2021).
  5. Lafiosca, P.; Fan, I.S. Review of non-contact methods for automated aircraft inspections. Insight-Non-Destr. Test. Cond. Monit. 2020, 62, 692–701.
  6. Deane, S.; Avdelidis, N.P.; Ibarra-Castanedo, C.; Zhang, H.; Nezhad, H.Y.; Williamson, A.A.; Mackley, T.; Davis, M.J.; Maldague, X.; Tsourdos, A. Application of NDT thermographic imaging of aerospace structures. Infrared Phys. Technol. 2019, 97, 456–466.
  7. The Boeing Company. Structural Repair Manual B737-400; The Boeing Company: Chicago, IL, USA, 2015.
  8. Civil Aviation Authority. CAP 716—Aviation Maintenance Human Factors; Civil Aviation Authority: Crawley, UK, 2003.
  9. Civil Aviation Authority. Paper 2013/03—Reliability of Damage Detection in Advanced Composite Aircraft Structures; Civil Aviation Authority: Crawley, UK, 2013.
  10. Jovančević, I.; Pham, H.H.; Orteu, J.J.; Gilblas, R.; Harvent, J.; Maurice, X.; Brèthes, L. 3D point cloud analysis for detection and characterization of defects on airplane exterior surface. J. Nondestruct. Eval. 2017, 36, 1–17.
  11. Doğru, A.; Bouarfa, S.; Arizar, R.; Aydoğan, R. Using convolutional neural networks to automate aircraft maintenance visual inspection. Aerospace 2020, 7, 171.
  12. Guehring, J. Dense 3-D surface acquisition by structured light using off-the-shelf components. In Proceedings of the SPIE—The International Society for Optical Engineering, San Jose, CA, USA, 21–24 January 2001.
  13. Salvi, J.; Pages, J.; Batlle, J. Pattern codification strategies in structured light systems. Pattern Recognit. 2004, 37, 827–849.
  14. Zuo, C.; Tao, T.; Feng, S.; Huang, L.; Asundi, A.; Chen, Q. Micro Fourier Transform Profilometry (μFTP): 3D shape measurement at 10,000 frames per second. Opt. Lasers Eng. 2017, 102, 70–91.
  15. An, Y.; Hyun, J.S.; Zhang, S. Pixel-wise absolute phase unwrapping using geometric constraints of structured light system. Opt. Express 2016, 24, 18445–18459.
  16. Chen, Y.; Song, B.; He, R.; Hu, H.; Chen, S. Fast two-step phase-shifting method for measuring the three-dimensional contour of objects. Opt. Eng. 2021, 60, 94104.
  17. Zhang, Z. Review of single-shot 3D shape measurement by phase calculation-based fringe projection techniques. Opt. Lasers Eng. 2012, 50, 1097–1106.
  18. Hügli, H.; Maître, G. Generation And Use Of Color Pseudo Random Sequences For Coding Structured Light In Active Ranging. In Proceedings of the 1988 International Congress on Optical Science and Engineering, Hamburg, Germany, 19–23 September 1988.
  19. Vuylsteke, P.; Oosterlinck, A. Range image acquisition with a single binary-encoded light pattern. IEEE Trans. Pattern Anal. Mach. Intell. 1990, 12, 148–164.
  20. Ulusoy, A.; Calakli, F.; Taubin, G. One-shot scanning using De Bruijn spaced grids. In Proceedings of the 2009 IEEE 12th International Conference on Computer Vision Workshops, Kyoto, Japan, 27 September–4 October 2009.
  21. Zhang, L.; Curless, B.; Seitz, S. Rapid Shape Acquisition Using Color Structured Light and Multi-pass Dynamic Programming. In Proceedings of the First International Symposium on 3D Data Processing Visualization and Transmission, Padova, Italy, 19–21 June 2002.
  22. Takeda, M.; Mutoh, K. Fourier transform profilometry for the automatic measurement of 3-D object shapes. Appl. Opt. 1983, 22, 3977–3982.
  23. Salvi, J.; Fernandez, S.; Pribanic, T.; Llado, X. A state of the art in structured light patterns for surface profilometry. Pattern Recognit. 2010, 43, 2666–2680.
  24. Kemao, Q.; Wang, H.; Wenjing, G. Windowed Fourier transform for fringe pattern analysis: Theoretical analyses. Appl. Opt. 2008, 47, 5408–5419.
  25. Huang, L.; Kemao, Q.; Pan, B.; Asundi, A. Comparison of Fourier transform, windowed Fourier transform, and wavelet transform methods for phase extraction from a single fringe pattern in fringe projection profilometry. Opt. Lasers Eng. 2010, 48, 141–148.
  26. Watkins, L. Phase Recovery Using The Wavelet Transform. In Proceedings of the AIP Conference Proceedings, Penang, Malaysia, 21–23 December 2010.
  27. Zhong, J.; Zeng, H. Multiscale windowed Fourier transform for phase extraction of fringe patterns. Appl. Opt. 2007, 46, 2670–2675.
  28. Fernandez, S.; Gdeisat, M.; Salvi, J.; Burton, D. Automatic window size selection in Windowed Fourier Transform for 3D reconstruction using adapted mother wavelets. Opt. Commun. 2011, 284, 2797–2807.
  29. Xu, J.; Zhang, S. Status, challenges, and future perspectives of fringe projection profilometry. Opt. Lasers Eng. 2020, 135, 106193.
  30. Feng, S.; Zuo, C.; Zhang, L.; Tao, T.; Hu, Y.; Yin, W.; Qian, J.; Chen, Q. Calibration of fringe projection profilometry: A comparative review. Opt. Lasers Eng. 2021, 143, 106622.
  31. Zhang, S.; Huang, P. Novel method for structured light system calibration. Opt. Eng. 2006, 45, 83601.
  32. Li, B.; Zhang, S. Structured light system calibration method with optimal fringe angle. Appl. Opt. 2014, 53, 7942–7950.
  33. Wen, Y.; Li, S.; Cheng, H.; Su, X.; Zhang, Q. Universal calculation formula and calibration method in Fourier transform profilometry. Appl. Opt. 2010, 49, 6563–6569.
  34. Zhang, Z. A Flexible New Technique for Camera Calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1330–1334.
  35. Zhang, S. High-speed 3D shape measurement with structured light methods: A review. Opt. Lasers Eng. 2018, 106, 119–131.
  36. Li, Z.; Shi, Y.; Wang, C.; Wang, Y. Accurate calibration method for a structured light system. Opt. Eng. 2008, 47, 53604.
  37. Martynov, I.; Kamarainen, J.K.; Lensu, L. Projector Calibration by “Inverse Camera Calibration”. In Proceedings of the Scandinavian Conference on Image Analysis, Ystad, Sweden, 1 May 2011.
  38. Moreno, D.; Taubin, G. Simple, Accurate, and Robust Projector-Camera Calibration. In Proceedings of the 2012 Second International Conference on 3D Imaging, Modeling, Processing, Visualization & Transmission, Zurich, Switzerland, 13–15 October 2012.
  39. Gdeisat, M.; Lilley, F. One-Dimensional Phase Unwrapping Problem. Available online: https://www.ljmu.ac.uk/~/media/files/ljmu/about-us/faculties-and-schools/fet/geri/onedimensionalphaseunwrapping_finalpdf.pdf (accessed on 17 November 2021).
  40. Zhang, S. Absolute phase retrieval methods for digital fringe projection profilometry: A review. Opt. Lasers Eng. 2018, 107, 28–37.
  41. Itoh, K. Analysis of the phase unwrapping algorithm. Appl. Opt. 1982, 21, 2470.
  42. Dickin, F.; Pollard, S.; Adams, G. Mapping and correcting the distortion of 3D structured light scanners. Precis. Eng. 2021, 72, 543–555.
  43. Estrada, J.C.; Servin, M.; Quiroga, J.A. Noise robust linear dynamic system for phase unwrapping and smoothing. Opt. Express 2011, 19, 5126–5133.
  44. Kompanets, I.; Zalyapin, N. Methods and Devices of Speckle-Noise Suppression. Opt. Photonics J. 2020, 10, 219–250.
Figure 1. System configuration: camera and projector are freely positioned (optical axes in solid blue). Without loss of generality, the camera is placed in the world origin and a virtual reference plane R lies in front of the camera.
Figure 2. A fringe pattern with period T_p, with a central red-coloured stripe.
Figure 3. Modulus of the FFT of both the reference (dotted red) and object (solid blue) images along the middle row for a real dent sample. The spectrum around the fundamental frequency (solid vertical) is isolated with the automatic band-pass filter (dotted vertical).
Figure 4. Virtually built reference image with sinusoidal fringes. Perspective affects the frequency components.
Figure 5. (Top) Illustrative reference Φ_0 (blue) and object Φ (red) phases. (Middle) The wrapped object phase Φ_w presents many more 2π-jumps compared to the (Bottom) wrapped phase shift ΔΦ_w.
Figure 6. (Top) The wrapped object phase Φ_w and (Bottom) wrapped phase shift ΔΦ_w with the same white noise (σ = 0.6) added.
Figure 7. The unwrapped object phase Φ̃_r(u_c, v_c) using (top) Equation (11) and (bottom) Equation (13) with respect to the original phase (dashed blue). In the first case, the noise causes the unwrapping algorithm to discard a genuine jump right after u_c = 20, introducing error. This does not happen in the second case due to the reduced presence of 2π-jumps.
Figure 8. Section of the reference dent function. The maximum depth is 0.368, and just 0.005 at x = 0.9.
Figure 9. Compared to the (a) object phase Φ_w, the slower trend of the (b) phase shift ΔΦ_w is clearly visible in a simulated dent with added white noise (σ = 0.6).
Figure 10. (Top) Wrapped phase Φ_w and (Bottom) phase shift ΔΦ_w at the middle row. Unwrapping of Φ_w has to manage many more 2π-jumps.
Figure 11. Unwrapping of the simulated dent signal with added white noise (σ = 0.6) using Itoh’s algorithm. Detail of one of the several genuine 2π-jumps missed using Equation (11) (top) but not using Equation (13) (bottom).
Figure 12. Comparison of the 3D reconstruction of the simulated dent (as a heatmap of MAE) using Itoh’s algorithm with the (a) direct mapping and the (b) virtual reference methods.
Figure 13. Comparison of the 3D reconstruction of the simulated dent (as a heatmap of MAE) using Estrada’s noise-robust algorithm (τ = 0.8) with the (a) direct mapping method and the (b) proposed virtual reference method.
Figure 14. Input images for sample A. (a) Captured object image. Red values are converted to greyscale before the FFT. (b) Virtual reference image.
Figure 15. Calculated (solid) and ground truth (dashed) depth of sample A (top) and error (bottom) along the middle section using the direct mapping (left) and the proposed virtual reference method (right). Overall MAE was 0.158 mm and 0.126 mm, respectively.
Figure 16. Comparison of the 3D reconstruction for sample C (as a heatmap of MAE) using Estrada’s unwrapping algorithm with the (a) direct mapping and the (b) proposed virtual reference methods. Blue areas correspond to lower error than green ones. The border artefacts of the Fourier transform are visible as high-error areas (yellow and red).
Figure 17. Comparison of the 3D reconstruction for sample C acquired during movement using Estrada’s unwrapping algorithm with the (a) direct mapping and the (b) proposed virtual reference methods. Blue areas correspond to lower error than green ones. The border artefacts of the Fourier transform are visible as high-error areas (yellow and red).
Table 1. RMS reprojection errors of the first configuration.
Camera    Projector    Stereo
0.477     1.231        2.686
Table 2. Dimensions of dent samples.
Sample    Length (mm)    Width (mm)    Depth (mm)
A         60             40            2
B         120            100           2
C         100            80            3
D         120            80            1
Table 3. Compared MAE (and standard deviation) of the direct mapping versus the proposed virtual reference method with 2D Itoh’s unwrapping algorithm. The last column reports the absolute and percentage difference.
Sample    Direct Mapping MAE (std) in mm    Proposed MAE (std) in mm    Difference in mm (%)
A         0.158 (0.176)                     0.126 (0.115)               0.032 (22.5%)
B         0.167 (0.202)                     0.144 (0.151)               0.023 (10.1%)
C         0.259 (0.616)                     0.154 (0.494)               0.105 (50.8%)
D         0.158 (0.566)                     0.144 (0.537)               0.014 (9.3%)
Table 4. Compared MAE (and standard deviation) of the direct mapping versus the proposed virtual reference method with Estrada’s unwrapping algorithm (τ = 0.8). The last column reports the absolute and percentage difference.
Sample    Direct Mapping MAE (std) in mm    Proposed MAE (std) in mm    Difference in mm (%)
A         0.153 (0.174)                     0.120 (0.110)               0.033 (24.2%)
B         0.165 (0.190)                     0.135 (0.130)               0.030 (20.0%)
C         0.202 (0.162)                     0.115 (0.114)               0.087 (54.9%)
D         0.213 (0.368)                     0.124 (0.131)               0.089 (52.8%)
Table 5. RMS reprojection errors of the second configuration.
Camera    Projector    Stereo
0.282     0.772        2.625
Table 6. Compared MAE (and standard deviation) of the direct mapping versus the proposed virtual reference method with Estrada’s unwrapping algorithm (τ = 0.8) over both static and dynamic samples.
Sample    Type       Direct Mapping MAE (std) in mm    Proposed MAE (std) in mm
A         Dynamic    0.381 (0.456)                     0.280 (0.307)
          Static     0.504 (0.486)                     0.285 (0.217)
B         Dynamic    0.419 (0.438)                     0.309 (0.396)
          Static     0.432 (0.329)                     0.287 (0.255)
C         Dynamic    0.436 (0.443)                     0.271 (0.265)
          Static     0.493 (0.546)                     0.320 (0.468)
D         Dynamic    0.411 (0.394)                     0.295 (0.294)
          Static     0.370 (0.357)                     0.293 (0.401)
Table 7. Detection and measures of dents in MRO. Summary of pros and cons for different inspection approaches.
Approach                 Pros                                                 Cons
Traditional              No special hardware requirements; straightforward   Hazardous; subjective output; time-consuming
Handheld 3D scanner      High accuracy; repeatable                            Hazardous; time-consuming
Monocular camera UAV     Fast; repeatable; safe                               No measures; shallow dents not detectable
UAV equipped with FTP    Fast; repeatable; safe; measures collected           Complex light control; smoothness assumption
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
