Article

Non-Target Structural Displacement Measurement Using Reference Frame-Based Deepflow

1 School of Civil and Environmental Engineering, Chung-Ang University, Dongjak, Seoul 06974, Korea
2 School of Civil Engineering, Chungbuk National University, Cheongju 28356, Korea
3 Department of Civil and Environmental Engineering, University of Hawaii at Manoa, Honolulu, HI 96822, USA
* Author to whom correspondence should be addressed.
Sensors 2019, 19(13), 2992; https://doi.org/10.3390/s19132992
Submission received: 29 May 2019 / Revised: 29 June 2019 / Accepted: 5 July 2019 / Published: 7 July 2019
(This article belongs to the Section Remote Sensors)

Abstract

Displacement is crucial for structural health monitoring, although it is very challenging to measure under field conditions. Most existing displacement measurement methods are costly, labor-intensive, and insufficiently accurate for measuring small dynamic displacements. Computer vision (CV)-based methods incorporate optical devices with advanced image processing algorithms to accurately, cost-effectively, and remotely measure structural displacement with easy installation. However, non-target-based CV methods are still limited by insufficient feature points, incorrect feature point detection, occlusion, and drift induced by tracking error accumulation. This paper presents a reference frame-based Deepflow algorithm integrated with masking and signal filtering for non-target-based displacement measurements. The proposed method allows the user to select points of interest for images with a low gradient for displacement tracking and directly calculate displacement without drift accumulated by measurement error. The proposed method is experimentally validated on a cantilevered beam under ambient and occluded test conditions. The accuracy of the proposed method is compared with that of a reference laser displacement sensor for validation. The significant advantage of the proposed method is its flexibility in extracting structural displacement in any region on structures that do not have distinct natural features.

1. Introduction

Structural health monitoring (SHM) increases a structure’s lifetime and ensures its safety; the continuous monitoring provided by SHM allows for early-stage damage detection and downtime reduction, as well as potentially preventing failure during operation. For efficient monitoring, accurate and precise acquisition of structural response data is critical for condition assessment and for decision-making based on the processed data. Structural displacement is one of the most important SHM quantities when evaluating a structure’s condition; traditionally, displacement is measured directly using a linear variable differential transformer (LVDT). One end of the LVDT is fixed to the structure and the other is attached to a stationary reference such as a scaffold. If the reference is fixed and stable, displacement can be measured to within a few micrometers. However, the use of LVDTs is hindered by practical difficulties in installing a reference point [1,2]; hence, measurement is limited to only several points on a structure. A laser Doppler vibrometer (LDV), another direct measurement method, can provide high-resolution noncontact displacement data [3,4] but is cost-inefficient and restricted to measuring displacement in the direction of the emitted laser.
Alternatively, indirect methods using a global positioning system (GPS) have been proposed for SHM [5,6,7,8,9]; by their nature, such methods do not require a stationary reference, and the sensors can be instrumented without much effort. The accuracy of general-purpose GPS is adequate for structures with large deformations, and GPS can be employed for long-term monitoring; however, it may not be sufficient for monitoring bridges, which require millimeter-level measurement accuracy. Displacement can also be obtained indirectly by double integration of acceleration data [1], but the measurement is limited to determining zero-mean dynamic displacement.
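The drift limitation of acceleration-based displacement estimation can be illustrated numerically. The sketch below double-integrates an acceleration record with a small constant bias; the sampling rate, signal, and bias values are illustrative assumptions, not taken from any cited study:

```python
import numpy as np

fs = 100.0                                             # Hz (assumed)
t = np.arange(0, 10, 1 / fs)
disp_true = 0.001 * np.cos(2 * np.pi * 2.0 * t)        # m, 2 Hz dynamic motion
acc = -(2 * np.pi * 2.0) ** 2 * disp_true              # exact acceleration
acc_biased = acc + 1e-4                                # small sensor bias (m/s^2)

def double_integrate(a, dt):
    """Rectangular double integration with zero initial conditions."""
    v = np.cumsum(a) * dt      # velocity
    return np.cumsum(v) * dt   # displacement

d_clean = double_integrate(acc, 1 / fs)
d_biased = double_integrate(acc_biased, 1 / fs)

# A bias of 1e-4 m/s^2 over 10 s drifts roughly 0.5 * 1e-4 * 10^2 = 0.005 m,
# i.e., five times the 1 mm true amplitude; only the zero-mean dynamic part
# of the displacement is recoverable in practice.
print(abs(d_biased[-1] - d_clean[-1]))
```

This is why acceleration-based estimates are typically high-pass filtered, recovering only the zero-mean dynamic displacement noted above.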
Recently, computer vision (CV)-based displacement measurement has become an active research area in SHM because of its cost-effectiveness, high resolution, and relatively simple instrumentation. CV-based displacement measurement can be grouped into target-based and non-target-based systems. Target-based systems employ custom-designed targets for accurate and robust displacement tracking [2,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24]. For example, a target with a known geometry containing four white dots on a black background, together with a tracking algorithm that detects the center of the target using adaptive binary thresholding, was employed for robust, real-time displacement measurement [2]. The target, designed with a high-contrast black-and-white pattern, allowed accurate displacement measurement but required access to the structure for installation, which can limit a system’s usefulness in the field.
Recently, non-target-based CV systems have been developed to capture a structure’s existing features [25,26,27,28,29,30,31,32,33,34,35,36,37] for tracking displacement and system identification [38,39,40,41,42,43]. However, detection of a structure’s natural features can be very difficult due to a lack of contrast and challenging background conditions. To address these issues, the Kanade–Lucas–Tomasi (KLT) tracker [44,45] is widely employed for non-target-based displacement measurement, as it detects features such as bolts and edges based on the magnitude of the image gradient. Once features are detected, the KLT algorithm calculates optical flow [46,47], which is the velocity field of the features across two input images; displacement is then obtained by integrating the Lucas–Kanade optical flow [47]. The performance of the KLT tracker in structural displacement measurement has been validated experimentally [28,42,48,49]. For example, a vision-based vibration monitoring system was proposed to track feature points in selected regions on a structure, enabling multiple points to be tracked simultaneously [48]. System identification of a shear building structure was conducted by measuring displacement in multiple regions using a Harris corner feature detector and the KLT tracker [49]. However, two main challenges remain in implementing the KLT tracker: the disappearance of feature points and their incorrect tracking, both resulting from low gradients in the given image.
This paper proposes a novel method for non-target structural displacement measurement. The proposed method uses Deepmatching and Deepflow to find dense correspondences between two image frames and calculates pixel-wise optical flow at points of interest (POIs) to measure displacement from images with sparse feature points. Additionally, a reference frame-based displacement method is proposed to provide drift-free displacement. The proposed method is validated through an experiment on a cantilever beam. The remainder of this paper is organized as follows: Section 2 briefly reviews optical flow and the KLT tracker, which are most widely used for displacement measurement. In Section 3, the proposed method, including POI selection via masking, Deepmatching, Deepflow, reference frame-based displacement measurement, and signal filtering, is explained. Section 4 describes experimental validation of the proposed method conducted on a cantilever beam under occluded conditions. Finally, Section 5 presents conclusions drawn from the experimental validation.

2. Background

2.1. Optical Flow

Optical flow [46,47] refers to the local displacement vector field of object motion between two consecutive frames, which arises from movement of the object or the camera. Calculating optical flow requires two basic assumptions: (1) brightness constancy, which assumes that the pixel intensities of an object do not change between consecutive frames, and (2) small motion between consecutive images. If a pixel in image frame I(x, y, t) moves by a distance (dx, dy) in the next frame, taken after a period of time dt, the following equation holds under these assumptions:

$$I(x, y, t) = I(x + dx, y + dy, t + dt) \tag{1}$$

Expanding the right-hand side of Equation (1) using the Taylor series and removing higher-order terms (H.O.T.) gives

$$I(x + dx, y + dy, t + dt) = I(x, y, t) + \frac{\partial I}{\partial x}dx + \frac{\partial I}{\partial y}dy + \frac{\partial I}{\partial t}dt + \text{H.O.T.} \tag{2}$$

Combining Equations (1) and (2) and dividing by dt leads to

$$\frac{\partial I}{\partial x}\frac{dx}{dt} + \frac{\partial I}{\partial y}\frac{dy}{dt} + \frac{\partial I}{\partial t} = 0 \quad \text{or} \quad I_x V_x + I_y V_y = -I_t \tag{3}$$

where $V_x$ and $V_y$ are the components of the velocity, or optical flow, of I(x, y, t), and $I_x$, $I_y$, and $I_t$ are the partial derivatives of the image at (x, y, t). Equation (3) is called the optical flow equation.

2.2. Lucas-Kanade Method and KLT Tracker

Equation (3) has two unknowns; thus, information from a single point in an image frame is not sufficient to determine the optical flow vector. The Lucas–Kanade tracker assumes that there is a region of interest (ROI) in which all points share the same constant optical flow vector, which leads to the following least-squares system:

$$\begin{bmatrix} \sum_i I_{x_i}^2 & \sum_i I_{x_i} I_{y_i} \\ \sum_i I_{x_i} I_{y_i} & \sum_i I_{y_i}^2 \end{bmatrix} \begin{bmatrix} V_x \\ V_y \end{bmatrix} = -\begin{bmatrix} \sum_i I_{x_i} I_{t_i} \\ \sum_i I_{y_i} I_{t_i} \end{bmatrix} \tag{4}$$

where i indexes the pixels in the window. Note that the matrix on the left-hand side is the Hessian of the first image I(x, y, t), which governs the stability of the solution to Equation (4): the Hessian becomes near-singular, and its inverse unreliable, if its minimum eigenvalue is very small. The main idea behind the KLT tracker is to select only good features, for which the Hessian is well conditioned, so that reliable tracking can be performed. The KLT tracker allows for fast computation of optical flow, as only sets of good features are tracked across frames. However, because of the sparsity of the feature points, tracking accuracy relies heavily on the quality of the feature points, whose appearance may change over time due to movement. Also, as displacement is obtained by integrating the optical flow V from two subsequent images, small errors may accumulate and result in drift.
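As an illustration of Equation (4), the following sketch implements a minimal, single-window Lucas–Kanade solver in NumPy on a synthetic image pair; the window size and test images are arbitrary choices for illustration, not part of the cited algorithm:

```python
import numpy as np

def lucas_kanade(I1, I2, ys, xs, half=7):
    """Solve the 2x2 system of Eq. (4) for square windows centered at (y, x)."""
    Iy, Ix = np.gradient(I1)          # spatial derivatives (central differences)
    It = I2 - I1                      # temporal derivative
    flows = []
    for y, x in zip(ys, xs):
        win = (slice(y - half, y + half + 1), slice(x - half, x + half + 1))
        ix, iy, it = Ix[win].ravel(), Iy[win].ravel(), It[win].ravel()
        # Structure tensor (the "Hessian" in Eq. (4)) and right-hand side.
        A = np.array([[np.sum(ix * ix), np.sum(ix * iy)],
                      [np.sum(ix * iy), np.sum(iy * iy)]])
        b = -np.array([np.sum(ix * it), np.sum(iy * it)])
        # A reliable point has a well-conditioned A (large minimum eigenvalue).
        flows.append(np.linalg.solve(A, b))
    return np.array(flows)            # rows of (Vx, Vy)

# Synthetic test: a smooth blob translated by (+1, 0) pixels between frames.
yy, xx = np.mgrid[0:64, 0:64]
I1 = np.exp(-((xx - 30.0) ** 2 + (yy - 32.0) ** 2) / 50.0)
I2 = np.exp(-((xx - 31.0) ** 2 + (yy - 32.0) ** 2) / 50.0)
v = lucas_kanade(I1, I2, ys=[32], xs=[30], half=7)
print(v)  # approximately [[1.0, 0.0]] for this smooth, small motion
```

Note that for a window inside a uniform region (near-zero gradient), A becomes singular and `np.linalg.solve` fails; this is exactly the low-gradient failure mode discussed above.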

3. Proposed Method

3.1. Overview

Despite its fast computation, applying the KLT tracker to displacement monitoring suffers from long-term drift and loss of feature points due to object occlusion and movement. The proposed method is composed of Deepflow [50] to obtain pixel-wise optical flow and POI selection to extract the displacement of interest. In addition, the proposed method resolves displacement drift by calculating the optical flow of each incoming image frame with respect to the initial reference frame, yielding a direct displacement field without the numerical integration of optical flow implemented in the KLT method. An overview of the proposed structural displacement measurement method is illustrated in Figure 1. Calibration is conducted in the first stage to compensate for lens distortion. During initialization, the initial image frame is set as the reference, and POIs denoting the pixel coordinates for displacement measurement are defined using a mask. Once the reference frame and POIs are set, the optical flow between the reference and subsequent input frames is computed using Deepmatching [51] and Deepflow [50], tracking the movement of the POIs. The measured displacements are filtered to eliminate noisy measurements and then averaged to provide high-accuracy displacement results.

3.2. Camera Calibration

Camera calibration compensates for image errors induced by lens distortion and viewing position by identifying the intrinsic, extrinsic, and distortion parameters. These parameters can be identified by employing a pinhole camera model to estimate the scale factor λ, intrinsic matrix K, translation vector T, and rotation matrix R:

$$\begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \lambda K \left[ R \,|\, T \right] \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix} \tag{5}$$

where the scale factor λ links pixels to the corresponding distances in global coordinates. The intrinsic matrix K is related to the camera’s internal properties, including the focal lengths, skew parameter, and principal point. The extrinsic parameters are related to the physical position of the camera’s view and include the rotation matrix and translation vector. The proposed method calibrates the internal parameters in the lab to compensate for distortion and obtains a scale factor by comparing the number of pixels spanned by the target structure in an image frame with the corresponding actual distances. The rotation matrix and translation vector are assumed to be the identity matrix and a zero vector, respectively, because only one stationary camera was used in the experimental validation.
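A minimal sketch of the pinhole model in Equation (5), under the same assumptions used here (identity rotation, zero translation) and with hypothetical intrinsic parameters:

```python
import numpy as np

# Hypothetical intrinsics: focal lengths fx, fy (pixels) and principal point (cx, cy).
K = np.array([[1400.0,    0.0, 960.0],
              [   0.0, 1400.0, 540.0],
              [   0.0,    0.0,   1.0]])
R = np.eye(3)           # rotation assumed identity (single stationary camera)
T = np.zeros((3, 1))    # translation assumed zero

def project(X_world):
    """Pinhole projection of Eq. (5): x = K [R|T] X, then divide by depth."""
    X = np.asarray(X_world, dtype=float).reshape(3, 1)
    x = K @ np.hstack([R, T]) @ np.vstack([X, [[1.0]]])
    return (x[:2] / x[2]).ravel()   # homogeneous -> pixel coordinates

# A point 5 mm off-axis at 1 m depth maps fx * 0.005 / 1 = 7 pixels from the
# principal point, showing how the scale factor follows from focal length
# and camera-to-structure distance.
u, v = project([0.005, 0.0, 1.0])
print(u - 960.0)
```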

3.3. POI Selection by Masking

Estimating dense optical flow using correspondences in a high-resolution input image or video is computationally expensive. Cropping the input image and selecting POIs as preprocessing provides greater efficiency, with the flow estimated at the POIs within the cropped image. In the KLT method, regions of interest must be larger than the structural features to extract enough feature points for reliable tracking; thus, it is possible to detect points outside the structural area. The proposed POI selection by masking efficiently extracts points for tracking in non-target-based CV applications, where detecting natural feature points or patterns can be challenging. Figure 2 illustrates the proposed masking method, which selects POIs using a binary mask; these points are tracked using the dense optical flow vector calculated by Deepflow, which is explained in Section 3.5. The main advantage of POI selection is the acquisition of dense POIs on the structure regardless of distinctive features, patterns, or textures, which is very challenging with the KLT method.
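A sketch of POI selection by masking, assuming a hypothetical cropped-ROI size and mask geometry (the real mask would be drawn over the structure in the image):

```python
import numpy as np

# Hypothetical cropped ROI and a rectangular binary mask over the beam.
H, W = 320, 440
mask = np.zeros((H, W), dtype=bool)
mask[40:280, 200:240] = True          # region assumed to lie on the structure

# POIs: a regular grid of pixel coordinates restricted to the masked region,
# selected regardless of gradient magnitude (unlike KLT feature detection).
step = 20
ys, xs = np.mgrid[0:H:step, 0:W:step]
grid = np.stack([ys.ravel(), xs.ravel()], axis=1)
pois = grid[mask[grid[:, 0], grid[:, 1]]]
print(len(pois))  # number of POIs landing on the structure
```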

3.4. Deepmatching

Deepmatching computes dense correspondences between the reference image and the target image. The matching algorithm is based on a multilayered architecture similar to deep convolutional networks (see Figure 3). Deepmatching splits the image at the ith frame into nonoverlapping 4 × 4-pixel atomic patches and convolves each patch with the image at the jth frame to obtain a response map for the corresponding image patch; this process is repeated for all patches. In the aggregation stage, the response maps are max-pooled with a 3 × 3 filter and downsampled by a factor of two to reduce computational complexity. Average pooling is then applied to the preprocessed response maps extracted from four neighboring patches. The final aggregation step is a nonlinear filtering that avoids fast convergence. Through aggregation, virtual response maps for 8 × 8, 16 × 16, and 32 × 32 patches are constructed, and the procedure is iterated to acquire a multiscale pyramid. Note that the pyramid is built using a bottom-up approach, whereas the corresponding matches are extracted top-down, by finding scale-space local maxima and backtracking the configuration to obtain quasi-dense correspondences.
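The first (patch-correlation) stage of this pipeline can be sketched as follows. The normalized cross-correlation below is a simplified stand-in for Deepmatching's response computation (which uses raw correlation plus the pooling and aggregation described above), and the image sizes are arbitrary:

```python
import numpy as np

def response_map(patch, image):
    """Normalized cross-correlation of one 4x4 atomic patch over a target image."""
    ph, pw = patch.shape
    H, W = image.shape
    out = np.zeros((H - ph + 1, W - pw + 1))
    p = (patch - patch.mean()) / (patch.std() + 1e-8)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            q = image[i:i + ph, j:j + pw]
            q = (q - q.mean()) / (q.std() + 1e-8)
            out[i, j] = np.mean(p * q)   # correlation score in [-1, 1]
    return out

rng = np.random.default_rng(0)
img = rng.random((32, 32))
patch = img[10:14, 6:10].copy()          # atomic patch taken from frame i
resp = response_map(patch, img)          # response map against frame j (= frame i here)
print(np.unravel_index(resp.argmax(), resp.shape))  # peak at the true location (10, 6)
```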

3.5. Deepflow

Deepflow is a variational optical flow method that combines color and gradient constancy constraints with a global smoothness assumption over the computed flow field, and blends the Deepmatching result into an energy minimization framework. The energy to be optimized is a weighted sum of a data term E<sub>D</sub>, a smoothness term E<sub>S</sub>, and a matching term E<sub>M</sub>, expressed as

$$E(w) = \int_\Omega \left( E_D + \alpha E_S + \beta E_M \right) d\mathbf{x} \tag{6}$$

where $w = (u, v)^T$ is the optical flow field, $\mathbf{x} := (x, y)^T$ denotes a point in the image domain Ω, and α and β are tuning parameters. The data term E<sub>D</sub> penalizes deviations from the brightness and gradient constancy assumptions; it is the sum of two terms balanced by the weights δ and γ:

$$E_D = \delta\, \psi\!\left( \left| I_2(\mathbf{x} + w(\mathbf{x})) - I_1(\mathbf{x}) \right|^2 \right) + \gamma\, \psi\!\left( \left| \nabla I_2(\mathbf{x} + w(\mathbf{x})) - \nabla I_1(\mathbf{x}) \right|^2 \right) \tag{7}$$

where ψ is a robust penalty function that handles occlusions. The smoothness term enforces regularity by penalizing the total variation of the flow field:

$$E_S = \psi\!\left( \left\| \nabla u(\mathbf{x}) \right\|^2 + \left\| \nabla v(\mathbf{x}) \right\|^2 \right) \tag{8}$$

The matching term steers the flow estimate toward a precomputed vector field by penalizing the difference between the computed flow w and the precomputed (Deepmatching) flow w′:

$$E_M = c\, \varphi\, \psi\!\left( \left\| w - w' \right\|^2 \right) \tag{9}$$

where c is a binary term equal to 1 where a match is available, and φ is a weight that takes a low value when the match is likely false. An incremental coarse-to-fine warping strategy is employed to minimize this nonconvex, nonlinear energy functional.

3.6. Reference Frame-Based Displacement Measurement

Reference frame-based displacement measurement takes the initial frame as the reference and computes its optical flow with the current frame, directly obtaining a displacement field without integrating the optical flow of two subsequent images, as KLT methods do. The conventional frame-to-frame approach can be disturbed by occlusion of the camera by obstacles, and tracking error accumulates at each subsequent frame, causing displacement drift. In contrast, reference frame-based measurement calculates the displacement of each input frame with respect to the reference frame. Figure 4 shows reference frame-based measurement, where dm represents the measured displacement between the ith frame and the reference frame at the m-th POI.
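The benefit of referencing the initial frame, rather than integrating frame-to-frame flow, can be illustrated with a simple simulation; the synthetic motion and noise levels below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
true = 2.0 * np.sin(2 * np.pi * 4.8 * np.arange(n) / 60.0)  # mm, 4.8 Hz at 60 fps
noise = 0.02                                                # per-measurement error (mm)

# KLT-style: integrate noisy frame-to-frame increments -> errors accumulate as drift.
increments = np.diff(true, prepend=0.0) + rng.normal(0, noise, n)
d_klt = np.cumsum(increments)

# Reference frame-based: each frame is matched directly to frame 0 -> bounded error.
d_ref = true + rng.normal(0, noise, n)

print(np.std(d_klt - true), np.std(d_ref - true))  # growing drift vs. bounded noise
```

The integrated signal wanders away from the true displacement like a random walk, while the reference frame-based measurement keeps a fixed error floor set by the per-frame matching noise.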

3.7. Outlier Filtering and Signal Averaging

Deepflow provides time-series displacement for each pixel in a selected ROI. However, the estimated flow may include outliers caused by noise, such as vanishing features or incorrectly matched feature and background points. This section proposes a filtering process for extracting accurate points related to the displacement of a target object.
Let D = [d1, d2, …, dm] be a matrix containing the offset-removed displacement time histories at the POIs. D is first threshold-filtered, using its median value as the threshold, to remove points related to the stationary background:

$$D_{filter} = \{ d_{f_1}, d_{f_2}, \ldots, d_{f_n} \}, \qquad d_m \in D_{filter} \ \text{if} \ \max\left( |d_m| \right) \geq \text{threshold}, \qquad d_m \notin D_{filter} \ \text{otherwise} \tag{10}$$

where D<sub>filter</sub> contains the filtered displacements and n is their number. To further improve measurement accuracy, outlier filtering using the correlation coefficient is adopted. The correlation coefficient R<sub>ij</sub> between the filtered displacements is

$$R_{ij} = E\!\left[ \left( \frac{d_{f_i} - \mu_i}{\sigma_i} \right)\left( \frac{d_{f_j} - \mu_j}{\sigma_j} \right) \right] \tag{11}$$

where μ<sub>i</sub> and σ<sub>i</sub> are the mean and standard deviation of d<sub>f<sub>i</sub></sub>. The mean cross-correlation MCR = {mcr<sub>1</sub>, mcr<sub>2</sub>, …, mcr<sub>n</sub>} is then computed, where mcr<sub>i</sub> is the mean of R<sub>ij</sub> over the other filtered points. Points whose MCR exceeds a defined threshold are selected, and the corresponding displacements at the POIs are extracted.
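The two-stage filtering and averaging described above can be sketched as follows, with illustrative thresholds and synthetic trajectories (the real thresholds would be tuned to the measurement):

```python
import numpy as np

def filter_and_average(D, amp_thresh, corr_thresh):
    """D: (n_points, n_frames) offset-removed displacements at the POIs."""
    # Stage 1: amplitude threshold removes near-stationary background points.
    keep = np.max(np.abs(D), axis=1) >= amp_thresh
    Df = D[keep]
    # Stage 2: keep points whose mean correlation with the others is high,
    # discarding incorrectly matched or noisy trajectories.
    R = np.corrcoef(Df)
    mcr = (R.sum(axis=1) - 1.0) / (len(Df) - 1.0)   # mean of off-diagonal entries
    Df = Df[mcr >= corr_thresh]
    # Stage 3: average the surviving trajectories for the final displacement.
    return Df.mean(axis=0)

# Synthetic check: structural motion plus noise at 8 points, one stationary
# background point, and one incorrectly matched (uncorrelated) point.
rng = np.random.default_rng(2)
t = np.arange(500) / 60.0
motion = np.sin(2 * np.pi * 4.8 * t)
good = motion + rng.normal(0, 0.05, (8, t.size))
background = rng.normal(0, 0.02, (1, t.size))
mismatch = rng.normal(0, 1.0, (1, t.size))
D = np.vstack([good, background, mismatch])
d = filter_and_average(D, amp_thresh=0.5, corr_thresh=0.8)
```

Here the background point fails the amplitude test, the mismatched point fails the correlation test, and the averaged result closely follows the true motion.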

4. Experimental Validation on a Steel Beam Model

4.1. Experimental Setup

An experiment was carried out to validate the proposed non-target-based structural displacement measurement method and to compare it with the KLT method and a laser displacement sensor. A subsequent experiment with an environmental disturbance was implemented by blocking the camera during measurement to simulate occlusion. An overview of the experimental setup is shown in Figure 5. In the experiment, a steel cantilever beam with a height of 1000 mm and a cross-section of 100 mm × 5 mm was used as the testbed. The three major natural frequencies of the beam were 4.8 Hz, 24.4 Hz, and 69 Hz. Video of the beam’s motion was taken with a Samsung Galaxy S9+ mobile phone camera at 1 m from the beam using the 4K UHD setting (60 fps, 3840 × 2160-pixel resolution). The camera was calibrated with 20 images of a checkerboard to obtain the intrinsic parameters, and the lens distortion was corrected. The reference displacement was measured using an ILD-1420 laser displacement sensor at a 1 kHz sampling rate. A logo was attached to the back of the beam to artificially introduce noise into feature tracking; only this region was cropped for efficient image processing. The scale factor (1 mm/2.8 pixels) was obtained by comparing the 5-mm beam thickness with the corresponding number of image pixels.

4.2. Feature Extraction

Figure 6 shows the cropped ROI from an image. The feature points tracked for displacement measurement across frames should be carefully selected for reliable tracking. The proposed masking-based POI selection allows any points to be chosen for tracking, so 77 points inside the structure were selected for displacement tracking (see Figure 6). In the KLT method, feature points were selected by feature detection algorithms, such as the Harris corner detector and the scale-invariant feature transform (SIFT), which use the gradient of the given image. Compared with the proposed method, the KLT method with Harris corner detection found only ten points, which included the edge of the beam and background features; no features were detected inside the structure because feature point detection is heavily affected by gradient magnitude.
Figure 7 shows that the region inside the structure has a very small gradient magnitude, resulting in no feature detection, whereas the edges and background, which have strong gradients, coincide with the points where features were extracted.

4.3. Displacement Measurement under Ambient Conditions

The KLT and proposed methods were used to measure the displacement of the cantilever beam model, and the results were compared with reference data measured by a laser displacement sensor. The displacements measured using the KLT and proposed methods are shown in Figure 8. To compare the results of the proposed and KLT methods with the reference displacement sensor, a third-order Butterworth low-pass filter with a 30 Hz cutoff was applied to the reference sensor data, and the measurements were synchronized. The maximum and root-mean-squared errors (RMSE) of the displacements are compared in Table 1.
The proposed method showed a maximum displacement error of 0.43% relative to the reference displacement sensor, whereas the KLT method showed an error of 19.77%. Comparing RMSEs, the proposed method had a very small error (0.07 mm), validating its accuracy in measuring displacements of less than 0.1 mm, whereas the KLT method showed an error of 0.29 mm. The ratio of the RMSE to the maximum reference displacement was 3.80% for the proposed method and 16% for the KLT method. The KLT method showed relatively low accuracy because of scaling errors and drift. Because KLT tracks feature points determined by feature detection algorithms such as Harris corner and SIFT, the detected features are likely to contain background features with no structural motion, or noisy features strongly affected by changes in brightness. The displacements from the detected features are simply averaged without filtering, so the resulting displacements become smaller than the true structural displacement due to the inclusion of the motionless background. Moreover, the detection of noisy features leads to displacement drift as errors accumulate through numerical integration. The scaling error and drift are clearly identified by comparing the frequency-domain results in Figure 9. The magnitude of the power spectral density (PSD) at the first natural frequency of 4.8 Hz from the proposed method agrees very well with that of the reference displacement sensor, indicating that the dynamic response was captured and the frequency peak successfully identified. However, the PSD magnitude at the first natural frequency from the KLT method was smaller than that of the reference displacement sensor, indicating an underestimated displacement measurement.
Furthermore, the proposed method had almost the same PSD magnitude in the 0–0.1 Hz region as the reference displacement sensor, whereas the KLT method showed a magnitude as high as that at the first natural frequency, indicating significant drift in the resulting displacement compared to the reference displacement sensor.
The computing times of the proposed and KLT methods were also compared. The proposed method computes displacement using a 440 × 320-pixel region cropped from the original 4K image (3840 × 2160 pixels), whereas the KLT method calculates displacement from the full-size image. All software was run on a PC with an Intel i7-8700 CPU and 32 GB of RAM. The displacement computation took an average of 0.8 s per frame for the proposed method and 0.25 s per frame for the KLT method. Although the proposed method provides an average of only 1.2 fps, it can be a good choice when precise non-target measurement is required.

4.4. Displacement Measurement under Disturbed Condition

A second experiment was conducted to determine the robustness of the proposed method to occlusion, with a view toward long-term measurements. In the field, vision systems are subject to many factors that block the camera’s sight, which causes significant measurement errors. To implement occlusion, the camera’s view was blocked with a sheet of white A4 paper for about 1 s and then cleared. Figure 10 shows the displacement measured by the KLT and proposed methods. In Figure 10b, the KLT method measured a very large displacement when the occlusion occurred. Since the KLT method tracks feature points detected from the difference between the structure and the surrounding background, errors were caused by mistakenly recognizing parts of the obstacle as feature points. Additionally, after the camera’s view was restored, the feature points could not be recovered properly, resulting in an offset error. In contrast, Figure 10a shows that the proposed method, which captures features inside the structure using the masking technique, continuously measured displacement by correctly recovering the feature points.

5. Conclusions

This paper proposed a non-target, CV-based structural displacement measurement system using reference frame-based Deepflow, POI selection with masking, and signal filtering and averaging techniques. The proposed method directly measures displacement by calculating optical flow with respect to a reference frame, providing a robust, drift-free tracking result. In addition, as Deepflow allows for pixel-wise optical flow calculation, feature points related to structural displacement can be abundantly populated; these points are filtered and averaged for accurate displacement measurement while removing background noise. The proposed method was experimentally validated on a cantilever beam, and its displacement results were compared with those of a laser displacement sensor. First, the proposed method was compared with KLT under stable conditions; owing to some incorrect matches by the KLT method, the proposed method showed the better RMSE, 0.07 mm versus 0.29 mm for KLT. Note that the KLT method showed drift over the measurement period because of erroneous feature point detection between the structure and the background. Second, displacement was measured under occluded conditions in which the camera was entirely blocked for about 1 s. During blocking, the proposed method tracked drift-free displacement under this abrupt disturbance, whereas the KLT method missed or incorrectly detected feature points, resulting in significant drift and offset measurement errors. In conclusion, the ability to measure non-target, drift-free displacement is the most significant advantage of the proposed method, implemented with Deepflow, masking, and signal filtering and averaging techniques. Future work will include long-term field experiments and multiple-point tracking for system identification.

Author Contributions

All authors contributed to the main idea of this paper. J.W. wrote the software and designed the experiments. K.P., D.-S.M. and J.-W.P. analyzed the experimental results. H.Y. and J.W. wrote the article, and J.-W.P. and K.P. supervised the overall research effort.

Funding

This research was supported by the National Research Foundation of Korea (NRF) Grant funded by the Ministry of Science and ICT for convergent research in Development program for convergence R&D over Science and Technology Liberal Arts (NRF-2017M3C1B6069981) and Chung-Ang University Graduate Research Scholarship in 2018.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Park, J.-W.; Sim, S.-H.; Jung, H.-J. Displacement Estimation Using Multimetric Data Fusion. IEEE/ASME Trans. Mechatron. 2013, 18, 1675–1682.
  2. Lee, J.; Lee, K.C.; Cho, S.; Sim, S.H. Computer vision-based structural displacement measurement robust to light-induced image degradation for in-service bridges. Sensors 2017, 17, 2317.
  3. Nassif, H.H.; Gindy, M.; Davis, J. Comparison of laser Doppler vibrometer with contact sensors for monitoring bridge deflection and vibration. NDT E Int. 2005, 38, 213–218.
  4. Staszewski, W.J.; Bin Jenal, R.; Klepka, A.; Szwedo, M.; Uhl, T. A Review of Laser Doppler Vibrometry for Structural Health Monitoring Applications. Key Eng. Mater. 2012, 518, 1–15.
  5. Yi, T.-H.; Li, H.-N.; Gu, M. Characterization and extraction of global positioning system multipath signals using an improved particle-filtering algorithm. Meas. Sci. Technol. 2011, 22, 075101.
  6. Jo, H.; Sim, S.H.; Tatkowski, A.; Spencer, B.F., Jr.; Nelson, M.E. Feasibility of displacement monitoring using low-cost GPS receivers. Struct. Control Health Monit. 2013, 20, 1240–1254.
  7. Yi, T.H.; Li, H.N.; Gu, M. Experimental assessment of high-rate GPS receivers for deformation monitoring of bridge. Measurement 2013, 46, 420–432.
  8. Yi, T.H.; Li, H.N.; Gu, M. Wavelet based multi-step filtering method for bridge health monitoring using GPS and accelerometer. Smart Struct. Syst. 2013, 11, 331–348.
  9. Kaloop, M.R.; Li, H. Multi input–single output models identification of tower bridge movements using GPS monitoring system. Measurement 2014, 47, 531–539.
  10. Choi, H.S.; Cheung, J.H.; Kim, S.H.; Ahn, J.H. Structural dynamic displacement vision system using digital image processing. NDT E Int. 2011, 44, 597–608.
  11. Feng, D.; Feng, M.Q.; Ozer, E.; Fukuda, Y. A vision-based sensor for noncontact structural displacement measurement. Sensors 2015, 15, 16557–16575.
  12. Narita, Y.; Fukuda, Y.; Feng, M.Q.; Kaneko, S.; Tanaka, T. Vision-Based Displacement Sensor for Monitoring Dynamic Response Using Robust Object Search Algorithm. IEEE Sens. J. 2013, 13, 4725–4732.
  13. Fukuda, Y.; Feng, M.Q.; Shinozuka, M. Cost-effective vision-based system for monitoring dynamic response of civil engineering structures. Struct. Control Health Monit. 2010, 17, 918–936.
  14. Jeon, H.; Kim, Y.; Lee, D.; Myung, H. Vision-based remote 6-DOF structural displacement monitoring system using a unique marker. Smart Struct. Syst. 2014, 13, 927–942.
  15. Park, J.-W.; Lee, J.-J.; Jung, H.-J.; Myung, H. Vision-based displacement measurement method for high-rise building structures using partitioning approach. NDT E Int. 2010, 43, 642–647.
  16. Shariati, A.; Schumacher, T. Eulerian-based virtual visual sensors to measure dynamic displacements of structures. Struct. Control Health Monit. 2017, 24, e1977.
  17. Ye, X.; Ni, Y.; Wai, T.; Wong, K.; Zhang, X.; Xu, F. A vision-based system for dynamic displacement measurement of long-span bridges: algorithm and verification. Smart Struct. Syst. 2013, 12, 363–379.
  18. Ye, X.; Yi, T.-H.; Dong, C.; Liu, T. Vision-based structural displacement measurement: System performance evaluation and influence factor analysis. Measurement 2016, 88, 372–384.
  19. Wahbeh, A.M.; Caffrey, J.P.; Masri, S.F. A vision-based approach for the direct measurement of displacements in vibrating systems. Smart Mater. Struct. 2003, 12, 785–794.
  20. Feng, D.; Feng, M.Q. Vision-based multipoint displacement measurement for structural health monitoring. Struct. Control Health Monit. 2016, 23, 876–890.
  21. Ho, H.-N.; Lee, J.-H.; Park, Y.-S.; Lee, J.-J. A Synchronized Multipoint Vision-Based System for Displacement Measurement of Civil Infrastructures. Sci. World J. 2012, 2012, 519146.
  22. Cigada, A.; Mazzoleni, P.; Zappa, E. Vibration monitoring of multiple bridge points by means of a unique vision-based measuring system. Exp. Mech. 2014, 54, 255–271.
  23. Chen, Z.; Zhang, X.; Fatikow, S. 3D robust digital image correlation for vibration measurement. Appl. Opt. 2016, 55, 1641.
  24. He, L.; Tan, J.; Hu, Q.; He, S.; Cai, Q.; Fu, Y.; Tang, S. Non-Contact Measurement of the Surface Displacement of a Slope Based on a Smart Binocular Vision System. Sensors 2018, 18, 2890.
  25. Ribeiro, D.; Calçada, R.; Ferreira, J.; Martins, T. Non-contact measurement of the dynamic displacement of railway bridges using an advanced video-based system. Eng. Struct. 2014, 75, 164–180.
  26. Feng, D.; Feng, M.Q. Identification of structural stiffness and excitation forces in time domain using noncontact vision-based displacement measurement. J. Sound Vib. 2017, 406, 15–28. [Google Scholar] [CrossRef]
  27. Pan, B.; Tian, L.; Song, X. Real-time, non-contact and targetless measurement of vertical deflection of bridges using off-axis digital image correlation. NDT E Int. 2016, 79, 73–80. [Google Scholar] [CrossRef]
  28. Yoon, H.; Elanwar, H.; Choi, H.; Golparvar-Fard, M.; Spencer, B.F.; Golparvar-Fard, M. Target-free approach for vision-based structural system identification using consumer-grade cameras. Struct. Control. Health Monit. 2016, 23, 1405–1416. [Google Scholar] [CrossRef]
  29. Khuc, T.; Catbas, F.N. Computer vision-based displacement and vibration monitoring without using physical target on structures. Struct. Infrastruct. Eng. 2017, 13, 505–516. [Google Scholar] [CrossRef]
  30. Ji, Y.F.; Chang, C.-C. Nontarget Image-Based Technique for Small Cable Vibration Measurement. J. Bridge Eng. 2008, 13, 34–42. [Google Scholar] [CrossRef]
  31. Feng, M.Q.; Fukuda, Y.; Feng, D.; Mizuta, M. Nontarget Vision Sensor for Remote Measurement of Bridge Dynamic Response. J. Bridge Eng. 2015, 20, 4015023. [Google Scholar] [CrossRef]
  32. Dong, C.-Z.; Celik, O.; Catbas, F.N. Marker-free monitoring of the grandstand structures and modal identification using computer vision methods. Struct. Health Monit. 2018. [Google Scholar] [CrossRef]
  33. Xu, Y.; Brownjohn, J.; Kong, D. A non-contact vision-based system for multipoint displacement monitoring in a cable-stayed footbridge. Struct. Control. Heal. Monit. 2018, 25, e2155. [Google Scholar] [CrossRef] [Green Version]
  34. Lydon, D.; Lydon, M.; Taylor, S.; Del Rincon, J.M.; Hester, D.; Brownjohn, J. Development and field testing of a vision-based displacement system using a low cost wireless action camera. Mech. Syst. Signal Process. 2019, 121, 343–358. [Google Scholar] [CrossRef]
  35. Yoon, H.; Shin, J.; Spencer, B.F., Jr. Structural displacement measurement using an unmanned aerial system. Comput. Aided Civ. Infrastruct. Eng. 2018, 33, 183–192. [Google Scholar] [CrossRef]
  36. Zhang, D.; Tian, B.; Wei, Y.; Hou, W.; Guo, J. Structural dynamic response analysis using deviations from idealized edge profiles in high-speed video. Opt. Eng. 2019, 58, 014106. [Google Scholar] [CrossRef]
  37. Kong, X.; Li, J. Vision-Based Fatigue Crack Detection of Steel Structures Using Video Feature Tracking. Comput. Civ. Infrastruct. Eng. 2018, 33, 783–799. [Google Scholar] [CrossRef]
  38. Javh, J.; Slavič, J.; Boltežar, M. The subpixel resolution of optical-flow-based modal analysis. Mech. Syst. Signal Process. 2017, 88, 89–99. [Google Scholar] [CrossRef]
  39. Sarrafi, A.; Mao, Z.; Niezrecki, C.; Poozesh, P. Vibration-based damage detection in wind turbine blades using Phase-based Motion Estimation and motion magnification. J. Sound Vib. 2018, 421, 300–318. [Google Scholar] [CrossRef] [Green Version]
  40. Chen, J.G.; Wadhwa, N.; Cha, Y.-J.; Durand, F.; Freeman, W.T.; Büyüköztürk, O. Modal identification of simple structures with high-speed video using motion magnification. J. Sound Vib. 2015, 345, 58–71. [Google Scholar] [CrossRef]
  41. Yang, Y.; Sanchez, L.; Zhang, H.; Roeder, A.; Bowlan, J.; Crochet, J.; Farrar, C.; Mascareñas, D. Estimation of full-field, full-order experimental modal model of cable vibration from digital video measurements with physics-guided unsupervised machine learning and computer vision. Struct. Control. Health Monit. 2019, 26, e2358. [Google Scholar] [CrossRef]
  42. Harmanci, Y.E.; Gülan, U.; Holzner, M.; Chatzi, E. A Novel Approach for 3D-Structural Identification through Video Recording: Magnified Tracking. Sensors 2019, 19, 1229. [Google Scholar] [CrossRef] [PubMed]
  43. Sarrafi, A.; Poozesh, P.; Mao, Z. A comparison of computer-vision-based structural dynamics characterizations. In Model Validation and Uncertainty Quantification; Springer: Berlin, Germany, 2017; Volume 3, pp. 295–301. [Google Scholar]
  44. Shi, J.; Tomasi, C. Good Features to Track; Cornell University: Ithaca, NY, USA, 1993. [Google Scholar]
  45. Tomasi, C.; Detection, T.K. Tracking of Point Features; Carnegie Mellon University: Pittsburgh, PA, USA, 1991. [Google Scholar]
  46. Beauchemin, S.S.; Barron, J.L. The computation of optical flow. ACM Comput. Surv. 1995, 27, 433–466. [Google Scholar] [CrossRef]
  47. Lucas, B.D.; Kanade, T. An iterative image registration technique with an application to stereo vision. In Proceedings of the 7th International Joint Conference on Artificial Intelligence, Vancouver, BC, Canada, 24–28 August 1981; pp. 674–679. [Google Scholar]
  48. Morlier, J.; Michon, G. Virtual Vibration Measurement Using KLT Motion Tracking Algorithm. J. Dyn. Syst. Meas. Control. 2010, 132, 011003. [Google Scholar] [CrossRef]
  49. Yoon, H.; Hoskere, V.; Park, J.-W.; Spencer, B.F. Cross-Correlation-Based Structural System Identification Using Unmanned Aerial Vehicles. Sensors 2017, 17, 2075. [Google Scholar] [CrossRef] [PubMed]
  50. Weinzaepfel, P.; Revaud, J.; Harchaoui, Z.; Schmid, C. DeepFlow: Large displacement optical flow with deep matching. In Proceedings of the IEEE International Conference on Computer Vision, Sydney, Australia, 1–8 December 2013; pp. 1385–1392. [Google Scholar]
  51. Revaud, J.; Weinzaepfel, P.; Harchaoui, Z.; Schmid, C. DeepMatching: Hierarchical Deformable Dense Matching. Int. J. Comput. Vis. 2016, 120, 300–323. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Flowchart of proposed method.
Figure 2. POI selection using masking.
Figure 3. Deepmatching flow diagram.
Figure 4. Reference frame-based displacement measurement.
Figure 5. Experimental setup.
Figure 6. Feature detection in the cropped images: proposed mask (Left) and Harris corner (Right).
Figure 7. Gradient magnitudes in the cropped image.
Figure 8. Optical displacement comparison: (a) displacement sensor vs. the proposed method; (b) displacement sensor vs. KLT; (c) displacement errors from (a) and (b).
Figure 9. Frequency domain comparison using PSD.
Figure 10. Comparison of optical displacement: (a) proposed method; (b) KLT.
Table 1. Comparison of System Identification Results.
Method | Maximum Displacement (mm) | RMSE (mm)
Proposed method | 1.7909 | 0.0753
KLT | 1.5017 | 0.2943
Reference displacement sensor | 1.7986 | -
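The two error metrics reported in Table 1 (peak displacement and root-mean-square error against the reference laser sensor) are straightforward to compute from synchronized displacement time histories. A minimal sketch, using a hypothetical damped free-vibration signal in place of the actual cantilever test data:

```python
import numpy as np

def error_metrics(measured, reference):
    """Return peak of the measured signal, peak of the reference signal,
    and the RMSE between the two (all in the signals' own units)."""
    measured = np.asarray(measured, dtype=float)
    reference = np.asarray(reference, dtype=float)
    rmse = np.sqrt(np.mean((measured - reference) ** 2))
    return np.max(np.abs(measured)), np.max(np.abs(reference)), rmse

# Hypothetical stand-in data: a decaying sinusoid sampled at 100 Hz (mm),
# with Gaussian noise representing vision tracking error.
t = np.arange(0.0, 5.0, 0.01)
reference = 1.8 * np.exp(-0.3 * t) * np.sin(2 * np.pi * 2.2 * t)
rng = np.random.default_rng(0)
measured = reference + rng.normal(0.0, 0.05, t.size)

peak_meas, peak_ref, rmse = error_metrics(measured, reference)
```

Both signals must share the same sampling instants before the subtraction; in practice this means resampling the vision-based displacement onto the reference sensor's time base first.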

Won, J.; Park, J.-W.; Park, K.; Yoon, H.; Moon, D.-S. Non-Target Structural Displacement Measurement Using Reference Frame-Based Deepflow. Sensors 2019, 19, 2992. https://doi.org/10.3390/s19132992