Article

Computer Vision-Based Structural Displacement Measurement Robust to Light-Induced Image Degradation for In-Service Bridges

by Junhwa Lee, Kyoung-Chan Lee, Soojin Cho and Sung-Han Sim

1 School of Urban and Environmental Engineering, Ulsan National Institute of Science and Technology (UNIST), Ulsan 44919, Korea
2 Korea Railroad Research Institute, Uiwang 16105, Korea
3 Department of Civil and Environmental Engineering, University of Seoul, Seoul 02504, Korea
* Authors to whom correspondence should be addressed.
Sensors 2017, 17(10), 2317; https://doi.org/10.3390/s17102317
Submission received: 2 July 2017 / Revised: 3 October 2017 / Accepted: 9 October 2017 / Published: 11 October 2017

Abstract: The displacement responses of a civil engineering structure provide important information on structural behavior for assessing safety and serviceability. Measuring displacement with conventional devices, such as the linear variable differential transformer (LVDT), is challenging because sensor installation is inconvenient and often requires additional temporary structures. A promising alternative is computer vision, which typically provides low-cost, non-contact displacement measurement by converting the movement of an object, usually an attached marker, in captured images into structural displacement. However, limited research has addressed the measurement error caused by the sunlight that is inevitable under field-testing conditions. This study presents a computer vision-based displacement measurement approach tailored to the field-testing environment, with enhanced robustness to strong sunlight. An image-processing algorithm with an adaptive region-of-interest (ROI) is proposed to reliably determine the marker's location even when the marker is indistinct due to unfavorable light. The performance of the proposed system is experimentally validated in both laboratory-scale and field experiments.

1. Introduction

Structural health monitoring (SHM) is an essential tool for the effective maintenance of civil infrastructure, with a number of SHM systems deployed in real-world applications [1,2,3,4,5]. Acquiring structural response data is a fundamental step in SHM, as the data are subsequently processed for condition assessment and decision-making. Displacement responses are particularly informative in evaluating a structure's current condition and safety. Because displacement is directly related to structural stiffness and loading, it can indicate structural changes and excessive external loads [6,7]. For example, the plastic deformation ratios of building structures can be estimated from drift displacement data [8]. Most modern design codes (e.g., the AASHTO LRFD Bridge Design Specifications) specify maximum allowable displacements for bridge structures to ensure structural safety and serviceability. Thus, displacement information is commonly employed for infrastructure maintenance purposes.
Displacement sensors, such as linear variable differential transformers (LVDTs) and strain-based displacement transducers, are widely used for displacement measurements in practice. These sensors are typically placed between a target point on a structure and a fixed reference point, measuring relative displacement. Installing such a sensor requires additional supporting structures to provide the fixed reference, which are often unavailable or difficult to erect in field testing on full-scale civil engineering structures. Furthermore, vibrations of the supporting structures can significantly degrade measurement accuracy. Thus, using traditional sensors to measure displacement responses from a full-scale structure is considered inefficient.
Recent research efforts have focused on addressing this issue and providing a practical means of displacement measurement. Specifically, extant research includes (1) the development of indirect displacement estimation algorithms that convert other physical quantities, such as acceleration and strain, into displacement; and (2) investigations of the applicability of relatively new sensors, including the laser Doppler vibrometer (LDV), global positioning systems (GPS), and computer vision-based approaches. Indirect estimation methods typically use acceleration and strain measurements that are independent of reference points [9,10,11,12,13,14,15,16,17,18,19]. However, their performance and accuracy depend strongly on the displacement conversion algorithms, which must be handled carefully to avoid unexpectedly large errors. The LDV is a representative noncontact sensor that measures displacement from the Doppler shift between the emitted and reflected laser beams [20,21]. Although the LDV exhibits excellent accuracy, its high cost and the limitation that it can measure displacement only along the direction of the emitted laser have prevented its widespread adoption in practice. Several studies have investigated GPS for structural health monitoring, as succinctly summarized by Im et al. [22]. GPS is an attractive and promising alternative for deflection monitoring; however, the positioning accuracy of current GPS technology is appropriate only for structures with large deflections, such as long-span bridges [23,24,25] and high-rise buildings [26], while most other civil structures, with their smaller deflections, require better alternatives [27].
Previous studies have reported that computer vision-based methods have the potential to address the limitations of existing techniques [28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53]. Existing vision-based methods can be distinguished by (1) the use or absence of target markers, (2) the feature detection algorithm, and (3) the coordinate transform. Non-target approaches track naturally noticeable features on a structure to measure displacement; example algorithms include orientation code matching (OCM) [28,29], Kanade–Lucas–Tomasi (KLT) tracking [30], Eulerian-based algorithms [31,32], and upsampled cross correlation (UCC) [33]. Target-based approaches use a target marker with specially designed features, such as a circle [34,35,36,37,38,39], a checkerboard [40,41,42,43,44], or a random pattern [45]. Once a feature is detected, its position is transformed to the physical domain using a coordinate transform. Several transformation methods have been employed, including simple scaling [28,29,30,31,32,33,34,35,40,44], the affine transform [36,37], extrinsic parameter acquisition [42,43], and the homography transform [46,47,48,49,50]. These studies have shown the immense potential of computer vision for displacement sensing and other SHM applications, such as system identification [51] and long-span bridge displacement measurement [52].
Several practical issues in computer vision-based displacement sensing have been identified in the literature, including the use of target markers, the selection of camera locations, and light-induced error. Non-target approaches are convenient in that they do not require the installation of target markers. Nevertheless, target-based measurement becomes particularly useful when combined with the homography transform, which greatly increases field applicability by allowing cameras to be placed arbitrarily [46,47,48,49,50]. Regarding light-induced error, few studies have examined feature detection in harsh field-testing environments, particularly under adverse light conditions [53]. Sunlight can blur or wash out the target marker in captured images, leading to significant errors in locating features.
This study presents a computer vision-based approach for displacement measurement tailored to field testing for civil engineering structures. Following the hardware configuration and coordinate transform used in previous studies, the proposed approach includes an image-processing scheme associated with an adaptive region-of-interest (ROI) process to reliably identify marker locations under the presence of light-induced image degradation. A laboratory-scale experiment is conducted to validate the proposed method in terms of light conditions. Field-testing results involving a 40-m-long steel box girder bridge are presented to validate the performance of the proposed approach.

2. Computer Vision-Based Displacement Measurement

2.1. Overview

Computer vision-based displacement measurement methods typically consist of hardware and software components (see Figure 1). The hardware part can be prepared with a commercial camera, a computer for data acquisition and processing, and a user-defined target marker to build a highly cost-effective system. The marker’s movements are recorded by the camera and simultaneously transferred to the computer that calculates the displacement using image-processing algorithms and coordinate transforms.
As the coordinate transforms relate the image and physical coordinates, the marker and the camera must be properly aligned and placed considering the limitations of the selected coordinate transform. However, finding a location where the camera can be securely placed without violating the assumptions of the coordinate transform is often challenging in field testing. In this section, the coordinate transforms introduced in previous research are briefly described to discuss issues in camera placement.
Four types of coordinate transforms are employed in existing vision-based methods: (1) simple scaling [28,29,30,31,32,33,34,35,40,44], (2) the affine transform [36,37], (3) the extrinsic parameters acquisition method [42,43], and (4) the homography transform [46,47,48,49,50]. Simple scaling multiplies the measured image-coordinate displacement by a scaling factor (unit: mm/pixel); thus, the direction of the target's movement must be aligned with the image displacement. The affine transform requires the camera and the target marker to face each other perpendicularly, because the assumptions adopted in defining the transform disregard perspective projection. Extrinsic camera parameters, which describe the six degree-of-freedom motion (three-dimensional (3D) translation and 3D rotation) of the target marker, can be acquired only with a short focal-length lens [54], which forces the camera to stay near the target marker. The homography transform can map the image plane to the marker plane regardless of the camera's position, as Figure 2 illustrates. As a result, the homography transform is an appropriate solution for unconstrained camera positioning in field testing, and is used in this study.
The displacement calculation using planar homography is based on the relationship between the physical coordinates in the marker plane and the image coordinates in the captured image as described in Equation (1).
$$ s\,\mathbf{i}_{3\times 1} = \mathbf{H}_{3\times 3}\,\mathbf{w}_{3\times 1} \quad (1) $$
where $s$ denotes a scaling factor, $\mathbf{i}_{3\times 1} = [u\ v\ 1]^T$ denotes an image coordinate, $\mathbf{H}_{3\times 3}$ denotes the homography matrix, and $\mathbf{w}_{3\times 1} = [x\ y\ 1]^T$ denotes a physical coordinate. First, the homography matrix is computed using the direct linear transformation (DLT) algorithm [52], which requires at least four physical coordinates in the marker plane and the corresponding image coordinates in the first image to determine the optimal transformation between the image and physical coordinate systems. Once the homography matrix is computed from the first image, the time history of the marker's movement is calculated by applying the inverse homography transform to the image coordinates in the sequentially acquired images.
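To make the mapping concrete, the following minimal sketch estimates the homography from four point correspondences and applies the inverse transform to a tracked image point. OpenCV's DLT-based estimator stands in for the paper's implementation, and all coordinate values are illustrative.

```python
# Sketch of Equation (1): estimate H from four physical<->image point pairs,
# then map a tracked image point back to the marker plane. OpenCV's
# findHomography stands in for the DLT step; coordinates are illustrative.
import cv2
import numpy as np

# Physical coordinates (mm) of four marker features and the corresponding
# pixel coordinates detected in the first captured image.
w = np.float32([[0, 0], [150, 0], [0, 100], [150, 100]])          # marker plane
i = np.float32([[312, 245], [498, 251], [309, 368], [495, 374]])  # image plane

# DLT estimate of H such that s * [u, v, 1]^T = H * [x, y, 1]^T.
H, _ = cv2.findHomography(w, i)

# Inverse homography: image coordinates of a tracked feature -> physical (mm).
pt = np.float32([[[405.0, 310.0]]])
xy = cv2.perspectiveTransform(pt, np.linalg.inv(H))
print(xy.ravel())  # location of the feature in the marker plane (mm)
```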
In full-scale civil structure applications, vision-based approaches face several practical issues, such as the selection of a camera installation point and light-induced error. For camera installation, the homography transform offers a solution by allowing the camera to be placed at an arbitrary point. Light conditions in a field-testing environment, in turn, can adversely affect the captured images used for displacement measurement. To enhance the field applicability of the computer vision-based method, this paper focuses on addressing light-induced image degradation while utilizing the homography transform to provide wide freedom in camera installation.

2.2. Light-Induced Image Degradation

The features on a target marker can be inaccurately detected under adverse light conditions, particularly in field testing. Direct sunlight or reflected light can significantly degrade images of the target marker. Such degradation can cause imbalanced brightness, a loss of definite edges, or a change in feature shapes, resulting in erroneous positioning of the features. Feng and Feng [53] addressed this issue with a lab-scale experiment conducted under dim light; however, the effects of excessive light exposure need further investigation. In this section, the effect of adverse light on the feature detection process is briefly described with an example.
For illustration, consider the marker images (197 pixels × 193 pixels) acquired in the laboratory experiment shown in Figure 3. A typical way of using the marker for displacement measurement is to find the centroid of the white circle surrounded by the black background, which can be obtained by image binarization. For the clear image shown in Figure 3a, the white circle is successfully separated from the background. In contrast, when the image has imbalanced brightness (i.e., some background pixels are whiter than the circle), the binarization process fails to isolate the circle, resulting in an incorrect estimate of the centroid. As cameras are typically directed upward at the structure, with strong sunlight behind it in the daytime, this light-related issue must be properly handled.
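For reference, a minimal sketch of this conventional centroid detection is given below (OpenCV/NumPy; the image file name is a placeholder). It is exactly this global binarization step that breaks down when part of the background becomes brighter than the circle.

```python
# Conventional feature detection: global Otsu binarization, then the centroid
# of the white pixels. A sketch; "marker.png" is a placeholder image.
import cv2
import numpy as np

gray = cv2.imread("marker.png", cv2.IMREAD_GRAYSCALE)
_, bw = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
ys, xs = np.nonzero(bw)        # pixels classified as "circle"
cx, cy = xs.mean(), ys.mean()  # centroid estimate; biased when bright
                               # background pixels survive the threshold
```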
Light-induced error must be carefully handled, as sunlight is inevitable in field testing. A strategy to overcome this issue is proposed in Section 3. The advantage of using the proposed strategy is experimentally verified in lab-scale and field testing environments in Section 4 and Section 5.

3. Displacement Measurement Using an Adaptive ROI

3.1. Adaptive ROI Algorithm

The proposed approach enables an accurate displacement measurement in a field-testing environment with adverse light exposure. To effectively address the light-induced feature detection error described in the previous section, this study proposes a computer vision-based displacement measurement strategy that focuses on a reliable feature detection algorithm with an adaptive ROI.
The adaptive ROI method is an automated and fast procedure for selecting the smallest ROI in each captured image. As shown in Figure 4, it consists of four steps. The first step acquires the boundary of the circle by applying an edge detection filter, such as the Sobel filter [56], to the original image. The filtered image typically contains a hollow ring, as shown in Figure 4b. This image is searched for the smallest rectangular box that tightly contains the ring, which is termed the adaptive ROI in this study. The image cropped by the adaptive ROI is shown in Figure 4c. Note that the cropped image contains a clear circle without the bright background, producing a clear distinction between the circle and the background. As shown in Figure 4d, the cropped image is then binarized using a threshold method that cleanly separates the circle from the background. Finally, the centroid of the circle is calculated by averaging the locations of the pixels in the circle. The overall adaptive ROI procedure takes 1.8 ms for a 200 × 200 pixel image in MATLAB, supporting feature detection rates of over 500 Hz. Hence, the adaptive ROI method reliably detects features under adverse light conditions with sufficiently fast computation.
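A minimal sketch of the four steps follows, assuming a grayscale image containing one feature circle. OpenCV and NumPy stand in for the authors' MATLAB implementation, and the 0.5 × max edge threshold is an illustrative choice, not the paper's.

```python
# Adaptive-ROI feature detection (a sketch of Figure 4; the authors' MATLAB
# implementation is approximated with OpenCV/NumPy, and the edge threshold
# is an illustrative assumption).
import cv2
import numpy as np

def adaptive_roi_centroid(gray):
    """Centroid of a white circle on a dark background in an 8-bit image,
    tolerant of a brightness gradient caused by backlight."""
    # Step 1: Sobel edge detection to outline the circle boundary.
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    edges = cv2.magnitude(gx, gy)
    mask = edges > 0.5 * edges.max()

    # Step 2: smallest rectangle tightly containing the boundary
    # (the adaptive ROI).
    ys, xs = np.nonzero(mask)
    x0, x1, y0, y1 = xs.min(), xs.max(), ys.min(), ys.max()

    # Step 3: crop to the adaptive ROI, excluding the bright background.
    roi = gray[y0:y1 + 1, x0:x1 + 1]

    # Step 4: Otsu binarization now cleanly separates circle from background;
    # the centroid is the mean location of the white pixels.
    _, bw = cv2.threshold(roi, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    cys, cxs = np.nonzero(bw)
    return x0 + cxs.mean(), y0 + cys.mean()  # full-image coordinates
```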
The adaptive ROI can handle the adverse effect of light on the image. The histogram of the original image with the whitened background, shown in Figure 5a, exhibits no clear separation between dark and bright pixel intensities. Thus, no threshold value, including the Otsu threshold as well as 70% and 80% of the pixel range, can fully isolate the circle from the background, as shown in Figure 5a. In contrast, the histogram of the image cropped by the adaptive ROI clearly contains two groups of pixel intensities, representing the circle and the background, respectively. Indeed, any threshold between the two groups successfully binarizes the cropped image. As such, the adaptive ROI provides a reliable means of feature detection tailored to the field-testing environment.
The flowchart in Figure 6 shows the overall process of the displacement measurement. The initialization step uses the feature positions (i.e., the centroids of the white circles) in the first image and the foreknown metric locations of the circles to compute the homography transform matrix. Here, the adaptive ROI method is employed in identifying the feature locations in the image to avoid the adverse effect of excessive exposure. The real-time displacement acquisition step commences when the direct linear transformation algorithm determines the homography matrix. The displacement acquisition step involves first detecting features with the adaptive ROI method from an incoming frame, and then using the homography transform to calculate the displacement. This process is repeated for each frame obtained from the camera.
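The per-frame pipeline of Figure 6 can be sketched as follows, reusing adaptive_roi_centroid from the previous sketch. The video source, the coarse per-circle search regions, and all numeric values are illustrative assumptions; the marker layout follows Table 1.

```python
# Sketch of the Figure 6 pipeline: initialize H from the first frame, then
# convert each incoming frame's feature positions to physical displacement.
import cv2
import numpy as np

marker_mm = np.float32([[0, 0], [150, 0], [0, 100], [150, 100]])  # Table 1 layout

def detect_features(gray, regions):
    """Adaptive-ROI centroid of each feature circle, in full-image pixels."""
    return np.float32([
        np.add(adaptive_roi_centroid(gray[y0:y1, x0:x1]), (x0, y0))
        for (x0, y0, x1, y1) in regions])

regions = [(250, 200, 350, 300), (450, 200, 550, 300),
           (250, 330, 350, 430), (450, 330, 550, 430)]  # coarse initial crops

cap = cv2.VideoCapture("bridge_test.avi")               # placeholder source
ok, frame = cap.read()
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Initialization: homography mapping image pixels directly to marker-plane mm.
H, _ = cv2.findHomography(detect_features(gray, regions), marker_mm)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    pts = detect_features(gray, regions).reshape(-1, 1, 2)
    phys = cv2.perspectiveTransform(pts, H).reshape(-1, 2)
    disp = phys - marker_mm  # per-circle displacement (mm) since frame 1
```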

3.2. Uncertainty Analysis

A numerical simulation was conducted as an uncertainty analysis to identify the effect of light on the accuracy of the adaptive ROI method. A 200 × 200 pixel image was numerically generated containing a clear white circle with a 30-pixel radius on a black background. Gaussian noise was added to the image for a realistic simulation; the noise was assumed to be a zero-mean process with a variance of 1.8588, determined from a typical charge-coupled device (CCD) camera. The image degradation due to excessive light exposure shown in Figure 5a was simulated by adding a pixel intensity gradation so that the dark background on the right side of the image becomes lighter, as shown in Figure 7. As a stronger gradation is added, the average pixel intensity of each image increases. For each gradation level, the circle is shifted 0.01 pixel to the right until it has moved a full pixel, and the displacement estimation error of the adaptive ROI method is calculated. The error is expressed in pixels, which can readily be converted to physical displacement using a known scaling factor (mm/pixel). Figure 7b shows the error versus the average pixel intensity of each image. The drastic change in error around an average pixel intensity of 157 occurs when the whitened area expands far enough to touch the circle at the center of the image; this is therefore the limiting condition for the adaptive ROI method.
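Under the stated parameters, the synthetic test images can be generated along the following lines (NumPy; the gradation strength and its sweep are illustrative, and the sub-pixel circle shifting of the study is summarized in a comment).

```python
# Sketch of the synthetic images used in the uncertainty analysis: a
# 30-pixel-radius white circle on a 200x200 black background, zero-mean
# Gaussian CCD noise (variance 1.8588), plus a left-to-right gradation.
import numpy as np

def synthetic_marker(gradation, size=200, radius=30, noise_var=1.8588):
    yy, xx = np.mgrid[0:size, 0:size]
    circle = (xx - size / 2) ** 2 + (yy - size / 2) ** 2 <= radius ** 2
    img = np.where(circle, 255.0, 0.0)
    img += np.random.normal(0.0, np.sqrt(noise_var), img.shape)  # CCD noise
    img += gradation * xx / (size - 1)  # background brightens to the right
    return np.clip(img, 0, 255).astype(np.uint8)

# For each gradation level, the study shifts the circle right in 0.01-pixel
# steps up to 1 pixel and records the adaptive-ROI centroid error.
img = synthetic_marker(gradation=120.0)  # illustrative gradation strength
```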

4. Experimental Validation: Laboratory-Scale

A laboratory-scale experiment was conducted using a shaking table to investigate the robustness of the adaptive ROI method to light-induced image degradation. Figure 8 illustrates the experimental setup, including the camera, target marker, computer, light source, and LDV. The camera was placed 2 m from the target marker, which was installed on a shaking table that provided harmonic excitation with a frequency of 1.5 Hz and an amplitude of 2 mm in the horizontal direction. An artificial light was placed behind the marker to produce image degradation: the right side of the marker appeared light gray in the captured image due to the backlight, making the white feature circles indistinguishable from the background (see Figure 8b). The displacements obtained with the adaptive ROI were compared with reference displacements measured by the laser Doppler vibrometer (LDV). The hardware configuration is summarized in Table 1.
The displacements measured with the adaptive ROI were compared to those obtained by the conventional approach (i.e., Otsu binarization without the adaptive ROI) and by the LDV. The displacements calculated from the upper-left and upper-right circles are shown individually in Figure 9 to clearly demonstrate the light-induced errors. All displacements calculated using the adaptive ROI agreed well with those from the LDV, even when the light significantly affected the marker images. In contrast, the conventional approach without the adaptive ROI produced considerable errors, particularly for the circles on the right, as expected from the image degradation on the right side of the background. Furthermore, the displacement from the left circles also exhibits considerable errors, as shown in Figure 9b, because the centroid detection failure on the right circles directly leads to an erroneous homography matrix. For further error analysis, the correlations between the displacements measured by the camera and the LDV are shown in Figure 10, which also confirms the advantage of the adaptive ROI. The regression lines for the adaptive ROI case have coefficients of determination, R2, of 0.9987 and 0.9988 for the upper-right and upper-left circles, respectively. Furthermore, no amplitude-dependent errors are observed, as the regression line and the correlation plot remain consistently close to each other over the entire amplitude range. The adaptive ROI-based feature detection method is thus expected to prevent the large measurement errors that could frequently occur due to sunlight in a field-testing environment.
The measurement uncertainty of the adaptive ROI method was experimentally examined under varying light conditions. Over 14,000 images were captured while the backlight on the right side of the marker was gradually brightened to the maximum level of the lighting equipment, as described in Figure 11a. The average pixel intensity of the marker images increased with stronger backlight. The trend of the measurement error with average pixel intensity is shown in Figure 11b. The captured images were categorized into nine groups according to the change in pixel intensity relative to the case without backlight, and the measured circle displacements in each group were averaged to identify the error trend. Here, the position of the feature point with the backlight turned off is taken as the ground truth. The measurement error is at most 0.23 pixels in the horizontal direction and 0.04 pixels in the vertical direction; the horizontal error is much larger because the backlight was placed on the right side of the marker in this experiment. The adaptive ROI method thus achieves sub-pixel accuracy even under an adverse light condition that would otherwise produce large, unacceptable errors.

5. Field Validation

A full-scale experiment was performed at the Samseung Bridge, a 40-m-long steel composite girder bridge in Korea, shown in Figure 12a. The Samseung Bridge was built for bridge-testing purposes and thus provides an ideal field-testing environment. The same target marker used in the laboratory-scale experiment was attached with magnetic bases to the bottom of the bridge deck at mid-span. As shown in Figure 12b, three different camera locations were considered to verify the benefit of the homography-based coordinate transform. An LDV located directly below the target marker provided reference displacements for comparison with those obtained from the camera. The LDV could be installed at the desired position because the area below the bridge was an open space without significant obstacles, which is not the case at most other bridges. A 29-ton truck running across the bridge served as the external load producing bridge deflections.
As the experiment was conducted during the daytime, the captured images were affected by sunlight and a bright background, as shown in Figure 13. Light from the sky severely changed the background brightness at the corner of the target marker. This degradation due to excessive exposure impaired feature detection by the conventional approach, whereas the adaptive ROI process correctly determined the feature circles in all three cases.
The displacements calculated by the proposed method are compared with those obtained by the conventional technique without the adaptive ROI, as well as with the LDV references. In all three cases, as shown in Figure 14, the displacements calculated with the adaptive ROI and the homography transform are consistently close to those measured by the LDV. Without the adaptive ROI, the feature detection failures in Cases 2 and 3 result in significant errors in the calculated displacement. For a quantitative assessment, two error indicators are defined as:
$$ E_{\max} = \left| \max\left(\left|u_{\mathrm{camera}}\right|\right) - \max\left(\left|u_{\mathrm{exact}}\right|\right) \right| \quad (2) $$

$$ E_{sd} = \left| \frac{\sigma_{\mathrm{camera}} - \sigma_{\mathrm{exact}}}{\sigma_{\mathrm{exact}}} \right| \quad (3) $$
where $|\cdot|$ denotes the absolute value, $u_{\mathrm{camera}}$ and $u_{\mathrm{exact}}$ are the displacements measured by the camera and the LDV, respectively, and $\sigma_{\mathrm{camera}}$ and $\sigma_{\mathrm{exact}}$ are the corresponding standard deviations. The error indicators calculated for each case are summarized in Table 2 and confirm the observations in Figure 14. Thus, the results indicate that the proposed computer vision-based approach provides accurate displacement measurements with robustness to unfavorable light conditions and flexibility in camera positioning.
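As code, the two indicators amount to the following direct transcription of Equations (2) and (3), where u_camera and u_exact are displacement time histories.

```python
# Error indicators of Equations (2) and (3) as NumPy one-liners;
# u_camera and u_exact are displacement time histories (mm).
import numpy as np

def e_max(u_camera, u_exact):
    return abs(np.abs(u_camera).max() - np.abs(u_exact).max())

def e_sd(u_camera, u_exact):
    return abs((u_camera.std() - u_exact.std()) / u_exact.std())
```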
In addition to the error measures, the correlation between the displacements from the camera and the LDV is shown in Figure 15 to further analyze the error characteristics. Case 1, with the clear marker, has a regression line with a slope of 1 and an R2 of 0.9873 when the adaptive ROI is used. For Cases 2 and 3, the correlation plots deviate more from the regression lines because of the strong backlight, resulting in lower R2 values; however, the adaptive ROI still corrects the slope of the regression line to 1. This observation can also be verified from the error histograms in Figure 15: the error distributions for the adaptive ROI have mean values close to zero and smaller standard deviations than those obtained without the adaptive ROI. Thus, the adaptive ROI effectively handles the adverse effect of light on the recorded marker images.

6. Conclusions

This study presented a reliable computer vision-based approach that provides a practical means of structural displacement measurement. To maximize the applicability of the vision-based system to full-scale civil engineering structures, the proposed approach focused on addressing image degradation due to excessive light exposure. To this end, an adaptive ROI process was developed to reliably detect the features on the target marker even when undesired light significantly affects the captured marker image.
The proposed structural displacement measurement method was validated in both laboratory and field-testing environments. A laboratory-scale experiment was conducted with artificial light generating the image degradation. Three field experiments were subsequently performed at the Samseung Bridge to validate the performance of the adaptive ROI method; these experiments also verified the unconstrained camera positioning provided by the homography transform. The images of the feature circles on the target marker were strongly illuminated, as the camera looked up at the bridge from the side, a condition expected to be common in this type of field experiment. The proposed method measured the displacement with sub-pixel accuracy despite the light-induced image degradation, and reliably tracked the target marker from three different camera locations using the homography transform. In conclusion, the structural displacement measurement method examined in this study is reliable and suitable for field applications that require robustness to adverse light and flexibility in selecting camera locations.

Conflicts of Interest

The authors declare no conflict of interest.

Acknowledgments

This research was supported by a grant from R&D Program of the Korea Railroad Research Institute, Republic of Korea.

Author Contributions

K.L. and S.-H.S. conceived the study. J.L. and S.C. proposed the adaptive ROI method and the use of the homography transform. J.L. and S.C. conducted the lab-scale and field experiments. J.L. and S.-H.S. wrote the manuscript.

References

1. Celebi, M.; Purvis, R.; Hartnagel, B.; Gupta, S.; Clogston, P.; Yen, P.; O'Connor, J.; Franke, M. Seismic instrumentation of the Bill Emerson Memorial Mississippi River Bridge at Cape Girardeau (MO): A cooperative effort. In Proceedings of the 4th International Seismic Highway Conference, Memphis, TN, USA, 9–11 February 2004.
2. Wong, K.Y. Instrumentation and health monitoring of cable-supported bridges. Struct. Control Health Monit. 2004, 11, 91–124.
3. Seo, J.; Hatfield, G.; Kimn, J.-H. Probabilistic structural integrity evaluation of a highway steel bridge under unknown trucks. J. Struct. Integr. Maint. 2016, 1, 65–72.
4. Newell, S.; Goggins, J.; Hajdukiewicz, M. Real-time monitoring to investigate structural performance of hybrid precast concrete educational buildings. J. Struct. Integr. Maint. 2016, 1, 147–155.
5. Park, J.W.; Lee, K.C.; Sim, S.H.; Jung, H.J.; Spencer, B.F. Traffic safety evaluation for railway bridges using expanded multisensor data fusion. Comput.-Aided Civ. Infrastruct. Eng. 2016, 31, 749–760.
6. Sanayei, M.; Scampoli, S.F. Structural element stiffness identification from static test data. J. Eng. Mech. 1991, 117, 1021–1036.
7. Wolf, J.P.; Song, C. Finite-Element Modelling of Unbounded Media; Wiley: Chichester, UK, 1996.
8. Nishitani, A.; Matsui, C.; Hara, Y.; Xiang, P.; Nitta, Y.; Hatada, T.; Katamura, R.; Matsuya, I.; Tanii, T. Drift displacement data based estimation of cumulative plastic deformation ratios for buildings. Smart Struct. Syst. 2015, 15, 881–896.
9. Park, K.T.; Kim, S.H.; Park, H.S.; Lee, K.W. The determination of bridge displacement using measured acceleration. Eng. Struct. 2005, 27, 371–378.
10. Gindy, M.; Nassif, H.; Velde, J. Bridge displacement estimates from measured acceleration records. Transp. Res. Rec. 2007, 2028, 136–145.
11. Gindy, M.; Vaccaro, R.; Nassif, H.; Velde, J. A state-space approach for deriving bridge displacement from acceleration. Comput.-Aided Civ. Infrastruct. Eng. 2008, 23, 281–290.
12. Lee, H.S.; Hong, Y.H.; Park, H.W. Design of an FIR filter for the displacement reconstruction using measured acceleration in low-frequency dominant structures. Int. J. Numer. Methods Eng. 2010, 82, 403–434.
13. Kim, H.I.; Kang, L.H.; Han, J.H. Shape estimation with distributed fiber Bragg grating sensors for rotating structures. Smart Mater. Struct. 2011, 20, 035011.
14. Shin, S.; Lee, S.U.; Kim, Y.; Kim, N.S. Estimation of bridge displacement responses using FBG sensors and theoretical mode shapes. Struct. Eng. Mech. 2012, 42, 229–245.
15. Park, J.W.; Sim, S.H.; Jung, H.J. Development of a wireless displacement measurement system using acceleration responses. Sensors 2013, 13, 8377–8392.
16. Park, J.W.; Jung, H.J.; Sim, S.H. Displacement estimation using multimetric data fusion. IEEE/ASME Trans. Mechatron. 2013, 18, 1675–1682.
17. Cho, S.; Sim, S.H.; Park, J.W. Extension of indirect displacement estimation method using acceleration and strain to various types of beam structures. Smart Struct. Syst. 2014, 14, 699–718.
18. Cho, S.; Yun, C.B.; Sim, S.H. Displacement estimation of bridge structures using data fusion of acceleration and strain measurement incorporating finite element model. Smart Struct. Syst. 2015, 15, 645–663.
19. Almeida, G.; Melício, F.; Biscaia, H.; Chastre, C.; Fonseca, J.M. In-plane displacement and strain image analysis. Comput.-Aided Civ. Infrastruct. Eng. 2015, 31, 292–304.
20. Nassif, H.H.; Gindy, M.; Davis, J. Monitoring of bridge girder deflection using laser Doppler vibrometer. In Proceedings of the 10th International Conference and Exhibition—Structural Faults and Repair Conference, London, UK, 1–3 July 2003; p. 6.
21. Castellini, P.; Martarelli, M.; Tomasini, E.P. Laser Doppler Vibrometry: Development of advanced solutions answering to technology's needs. Mech. Syst. Signal Process. 2006, 20, 1265–1285.
22. Im, S.B.; Hurlebaus, S.; Kang, Y.H. Summary review of GPS technology for structural health monitoring. J. Struct. Eng. 2013, 139, 1653–1664.
23. Celebi, M.; Prescott, W.; Stein, R.; Hudnut, K.; Behr, J.; Wilson, S. GPS monitoring of dynamic behavior of long-period structures. Earthq. Spectra 1999, 15, 55–66.
24. Fujino, Y.; Murata, M.; Okano, S.; Takeguchi, M. Monitoring system of the Akashi Kaikyo Bridge and displacement measurement using GPS. In Proceedings of the SPIE 3995, Nondestructive Evaluation of Highways, Utilities, and Pipelines IV, Newport Beach, CA, USA, 6–8 March 2000; p. 229.
25. Watson, C.; Watson, T.; Coleman, R. Structural monitoring of cable-stayed bridge: Analysis of GPS versus modeled deflections. J. Surv. Eng. 2007, 133, 23–28.
26. Kijewski-Correa, T.L.; Kareem, A.; Kochly, M. Experimental verification and full-scale deployment of global positioning systems to monitor the dynamic response of tall buildings. J. Struct. Eng. 2006, 132, 1242–1253.
27. Jo, H.; Sim, S.H.; Tatkowski, A.; Spencer, B.F.; Nelson, M.E. Feasibility of displacement monitoring using low-cost GPS receivers. Struct. Control Health Monit. 2013, 20, 1240–1254.
28. Fukuda, Y.; Feng, M.Q.; Narita, Y.; Kaneko, S.I.; Tanaka, T. Vision-based displacement sensor for monitoring dynamic response using robust object search algorithm. IEEE Sens. J. 2013, 13, 4725–4732.
29. Feng, M.Q.; Fukuda, Y.; Feng, D.; Mizuta, M. Nontarget vision sensor for remote measurement of bridge dynamic response. J. Bridge Eng. 2015, 20, 04015023.
30. Yoon, H.; Elanwar, H.; Choi, H.; Golparvar-Fard, M.; Spencer, B.F. Target-free approach for vision-based structural system identification using consumer-grade cameras. Struct. Control Health Monit. 2016, 23, 1405–1416.
31. Shariati, A.; Schumacher, T.; Ramanna, N. Eulerian-based virtual visual sensors to detect natural frequencies of structures. J. Civ. Struct. Health Monit. 2015, 5, 457–468.
32. Schumacher, T.; Shariati, A. Monitoring of structures and mechanical systems using virtual visual sensors for video analysis: Fundamental concept and proof of feasibility. Sensors 2013, 13, 16551–16564.
33. Feng, D.; Feng, M.Q.; Ozer, E.; Fukuda, Y. A vision-based sensor for noncontact structural displacement measurement. Sensors 2015, 15, 16557–16575.
34. Wahbeh, A.M.; Caffrey, J.P.; Masri, S.F. A vision-based approach for the direct measurement of displacements in vibrating systems. Smart Mater. Struct. 2003, 12, 785–794.
35. Choi, H.S.; Cheung, J.H.; Kim, S.H.; Ahn, J.H. Structural dynamic displacement vision system using digital image processing. NDT E Int. 2011, 44, 597–608.
36. Lee, J.J.; Shinozuka, M. A vision-based system for remote sensing of bridge displacement. NDT E Int. 2006, 39, 425–431.
37. Fukuda, Y.; Feng, M.Q.; Shinozuka, M. Cost-effective vision-based system for monitoring dynamic response of civil engineering structures. Struct. Control Health Monit. 2010, 17, 918–936.
38. Song, Y.Z.; Bowen, C.R.; Kim, A.H.; Nassehi, A.; Padget, J.; Gathercole, N. Virtual visual sensors and their application in structural health monitoring. Struct. Health Monit. 2014, 13, 251–264.
39. Zhao, X.; Liu, H.; Yu, Y.; Xu, X.; Hu, W.; Li, M.; Ou, J. Bridge displacement monitoring method based on laser projection-sensing technology. Sensors 2015, 15, 8444–8463.
40. Shariati, A.; Schumacher, T. Eulerian-based virtual visual sensors to measure dynamic displacements of structures. Struct. Control Health Monit. 2016, 24, e1977.
41. Chang, C.C.; Xiao, X.H. Three-dimensional structural translation and rotation measurement using monocular videogrammetry. J. Surv. Eng. 2009, 136, 840–848.
42. Ji, Y. A computer vision-based approach for structural displacement measurement. In Proceedings of the SPIE 7647, Sensors and Smart Structures Technologies for Civil, Mechanical, and Aerospace Systems 2010, San Diego, CA, USA, 7–11 March 2010.
43. Jeon, H.M.; Kim, Y.J.; Lee, D.H.; Myung, H. Vision-based remote 6-DOF structural displacement monitoring system using a unique marker. Smart Struct. Syst. 2014, 13, 927–942.
44. Kim, S.W.; Jeon, B.G.; Kim, N.S.; Park, J.C. Vision-based monitoring system for evaluating cable tensile forces on a cable-stayed bridge. Struct. Health Monit. 2013, 12, 440–456.
45. Ye, X.W.; Yi, T.H.; Dong, C.Z.; Liu, T. Vision-based structural displacement measurement: System performance evaluation and influence factor analysis. Measurement 2016, 88, 372–384.
46. Wu, L.J.; Casciati, F.; Casciati, S. Dynamic testing of a laboratory model via vision-based sensing. Eng. Struct. 2014, 60, 113–125.
47. Dworakowski, Z.; Kohut, P.; Gallina, A.; Holak, K.; Uhl, T. Vision-based algorithms for damage detection and localization in structural health monitoring. Struct. Control Health Monit. 2016, 23, 35–50.
48. Kohut, P.; Holak, K.; Martowicz, A. An uncertainty propagation in developed vision based measurement system aided by numerical and experimental tests. J. Theor. Appl. Mech. 2012, 50, 1049–1061.
49. Jeong, Y.; Park, D.; Park, K.H. PTZ camera-based displacement sensor system with perspective distortion correction unit for early detection of building destruction. Sensors 2017, 17, 430.
50. Sładek, J.; Ostrowska, K.; Kohut, P.; Holak, K.; Gąska, A.; Uhl, T. Development of a vision based deflection measurement system and its accuracy assessment. Measurement 2013, 46, 1237–1249.
51. Feng, D.; Feng, M.Q. Identification of structural stiffness and excitation forces in time domain using noncontact vision-based displacement measurement. J. Sound Vib. 2017, 406, 15–28.
52. Feng, D.; Feng, M.Q. Experimental validation of cost-effective vision-based structural health monitoring. Mech. Syst. Signal Process. 2017, 88, 199–211.
53. Feng, D.; Feng, M.Q. Vision-based multipoint displacement measurement for structural health monitoring. Struct. Control Health Monit. 2016, 23, 876–890.
54. Hartley, R.; Zisserman, A. Multiple View Geometry in Computer Vision, 2nd ed.; Cambridge University Press: New York, NY, USA, 2003.
55. Otsu, N. A threshold selection method from gray-level histograms. IEEE Trans. Syst. Man Cybern. 1979, 9, 62–66.
56. Sobel, I. An isotropic 3 × 3 image gradient operator. In Machine Vision for Three-Dimensional Scenes; Freeman, H., Ed.; Academic Press: New York, NY, USA, 1990; pp. 376–379.
Figure 1. Common configuration of vision-based displacement measurement approaches: (a) hardware; (b) software.
Figure 2. Illustration of the homography transform between the image and marker planes.
Figure 3. Image binarization using the Otsu threshold [55] for feature detection: (a) clear image; (b) degraded image with imbalanced brightness.
Figure 4. Flow of the adaptive region-of-interest (ROI) for the image in Figure 3b: (a) original image; (b) Sobel edge; (c) cropped ROI; (d) binarized image (Otsu threshold).
Figure 5. Histogram and the binarized image with different threshold levels for (a) original image; (b) cropped image along the adaptive ROI.
Figure 6. Flowchart of computer vision-based displacement measurement.
Figure 7. Error of the adaptive ROI method with respect to the light exposure. (a) Simulated marker images with added pixel intensity gradations; (b) error of the adaptive ROI method.
Figure 8. Laboratory-scale test: (a) experimental setup; (b) schematic view and captured frame. LDV: laser Doppler vibrometer.
Figure 9. Comparison of the displacements with and without the adaptive ROI and by LDV from the feature circles at the (a) upper right; (b) upper left.
Figure 10. Correlation between the displacement from the camera and the LDV by a linear regression line from the circle at the (a) upper right; (b) upper left.
Figure 11. Measurement errors with varying light conditions. (a) Images with different backlights; (b) error for each direction.
Figure 12. Experiment setup: (a) overview; (b) experimental cases with different camera locations to verify homography-based unconstrained camera positioning.
Figure 13. Feature detection with and without the adaptive ROI process: (a) Case 1; (b) Case 2; (c) Case 3.
Figure 14. Comparison of measured displacements: (a) Case 1; (b) Case 2; (c) Case 3.
Figure 15. Error analysis by means of a linear regression model and a histogram of the error for (a) Case 1; (b) Case 2; (c) Case 3.
Table 1. Hardware specifications.

Parts    | Model        | Features
---------|--------------|----------------------------------------------------
Marker   | User-defined | Four white circles on a black background; horizontal interval: 150 mm; vertical interval: 100 mm; radius of the circles: 10 mm
Camera   | CNB-A1263NL  | NTSC output interface 1; ×22 optical zoom
Computer | LG-A510      | 1.73 GHz Intel Core i7 CPU; 4 GB DDR3 RAM
LDV      | RSV-150      | Displacement resolution: 0.3 μm

1 640 × 480 resolution at 29.97 fps. CPU: central processing unit; RAM: random access memory; NTSC: National Television System Committee.
Table 2. Measurement errors.

Cases  | Feature Detection    | Emax      | Esd
-------|----------------------|-----------|-------
Case 1 | without adaptive ROI | 0.0549 mm | 0.0185
Case 1 | with adaptive ROI    | 0.0433 mm | 0.0126
Case 2 | without adaptive ROI | 0.2518 mm | 0.1736
Case 2 | with adaptive ROI    | 0.0314 mm | 0.0127
Case 3 | without adaptive ROI | 0.7110 mm | 0.4081
Case 3 | with adaptive ROI    | 0.0565 mm | 0.0439
