Article

Underexposed Vision-Based Sensors’ Image Enhancement for Feature Identification in Close-Range Photogrammetry and Structural Health Monitoring

by Luna Ngeljaratan 1,2 and Mohamed A. Moustafa 1,*
1 Department of Civil and Environmental Engineering, University of Nevada, Reno, NV 89557, USA
2 Research Center for Biomaterials, National Research and Innovation Agency, Cibinong 16911, Indonesia
* Author to whom correspondence should be addressed.
Appl. Sci. 2021, 11(23), 11086; https://doi.org/10.3390/app112311086
Submission received: 18 October 2021 / Revised: 17 November 2021 / Accepted: 18 November 2021 / Published: 23 November 2021
(This article belongs to the Special Issue Advances in Intelligent Control and Image Processing)

Featured Application

Close-range photogrammetry and structural health monitoring of civil infrastructures in challenging lighting environments.

Abstract

This paper describes an alternative structural health monitoring (SHM) framework for low-light settings or dark environments using underexposed images from vision-based sensors, based on the practical implementation of image enhancement algorithms. The proposed framework was validated by two experimental works monitored by two vision systems under ambient light without assistance from additional lighting. The first experiment monitored six artificial templates attached to a sliding bar that was displaced by a standard one-inch steel block. The effects of image enhancement on feature identification and on the bundle adjustment integrated into close-range photogrammetry were evaluated. The second validation used a seismic shake table test of a full-scale three-story building tested at E-Defense in Japan. Overall, this study demonstrated the efficiency and robustness of the proposed image enhancement framework in (i) modifying the original image characteristics so that the feature identification algorithm is capable of accurately detecting, locating, and registering the existing features on the object; (ii) integrating the identified features into the automatic bundle adjustment of the close-range photogrammetry process; and (iii) assessing the measurement of identified features in static and dynamic SHM, and in structural system identification, with high accuracy.

1. Introduction

In recent years, vision-based sensors have been developed significantly for structural health monitoring (SHM) of engineering structures, and they depend strongly on the acquisition of high-quality images or videos [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17]. However, monitored images or videos rarely meet the computer vision (CV) requirements for further processing when the SHM is conducted at night, in hazy atmospheres, or simply in dark settings, owing to camera design trade-offs. Data collected in these environments lack visible detail, resulting in underexposed and low-contrast images or videos that are not only dim for human vision but also challenging to interpret. They may not capture important image characteristics such as sharpness, contrast, or dynamic range, leading to difficulties in analysis using image segmentation, structure from motion, pattern recognition, detection and matching, or other CV algorithms. Without adequate lighting, more hardware or tools must be incorporated into the SHM system. Alternatively, further image processing should be conducted before employing these algorithms to enable feature identification, track structural movement, or identify structural vibration characteristics.
Only a few reported works are dedicated solely to vision-based SHM in dark or night environments using real images. Li et al. [18] conducted a dynamic test using a smartphone, and Kim et al. [19] installed a vision-based monitoring system equipped with a digital camera with a zoom lens on a three-span cable-stayed bridge. However, these two studies were conducted in low-light and completely dark settings without additional lighting, so the SHM was unable to identify the monitored object [19], and a significant portion of the time signals was missing from the data [18]. To solve these problems, a small number of studies have added components to the vision-based system. An SHM approach using a smartphone camera with a laser device was reported by Li et al. [20]. Choi et al. [21] proposed a night vision camera equipped with an IR pass filter to remove the red-eye effect in the infrared region. Digital cameras with LED lights as targets were used for night monitoring, as evaluated by Feng et al. [22]. In terms of accuracy, these studies showed good precision and promising results; however, they validated their works only with low-amplitude testing.
Post-processing underexposed images using image enhancement algorithms is also a solution, as it improves image quality. Histogram equalization [23,24] was used to enhance image gray resolution for crack detection [25] and crack monitoring from thermal imaging [26]. Wavelet transforms [27,28] were used to correct vision-based images for damage and crack detection [29,30] and fatigue crack detection [31]. Contrast enhancement was conducted on vision-based images to separate the crack from the background area [32], and an advanced deep learning method was capable of autonomously detecting concrete cracking, steel corrosion, and delamination [33]. A recent study by Zollini et al. [34] deployed UAV monitoring and applied a contrast enhancement technique in imaging photogrammetry to enable monitoring of deteriorated concrete areas. Image enhancement is also commonly integrated into other remote sensing fields such as satellite imagery [35] and aerial system imagery [36]. However, at present, almost no relevant works on vision-based image enhancement with specific implementation for vibration SHM purposes are available for reference.
Although prior studies successfully conducted SHM under low-light settings and night environments with good accuracy and by integrating image enhancement methods, several research gaps can still be identified. First, an alternative SHM framework can be proposed to improve real vision-based SHM data under the complexity of a dark environment without assistance from additional equipment or hardware. Second, a specific study of vibration-based SHM in a dark environment should be conducted, because available studies address only damage-detection SHM. Third, more experiments are necessary to identify SHM accuracy, ranging from very small displacements to higher amplitudes, as prior works validated their SHM frameworks only under very low amplitudes of dynamic excitation. To fill these gaps, this study proposes the integration of image enhancement algorithms for low-light settings and dark environments. The objective of this study is to modify the underexposed and low-contrast image characteristics to improve their quality before implementation in the automatic processing of bundle adjustment in close-range photogrammetry. The goal is to assess the accuracy of the enhanced images in measuring displacement and in identifying structural dynamic properties through system identification.

2. Methods

Remotely operated vision systems equipped with cameras, sensors, lighting, and natural or artificial features form an image through the following process. It starts with a light source whose intensity, polarization, and color spectrum travel through a medium, then hit and are scattered on the surface of an object. The rays reflected from the object surface are captured by a camera sensor and converted into electrons to form a two-dimensional pixel intensity map, i.e., an image [37]. Therefore, lighting directly affects the pixel intensity map and, if the object is illuminated adequately, significantly simplifies both classical and advanced matching procedures [18,38,39,40,41,42]. Matching principles, from the classical correlation coefficient [1,10,43,44,45,46], intensity interpolation [47,48,49,50], Newton–Raphson method [51,52,53], gradient-based method [54,55,56,57], and genetic algorithm [58,59,60] to advanced artificial and convolutional neural networks [61,62,63,64,65,66], require a high dynamic range of grayscale values, sharp edges, and high-contrast images. These approaches can be challenging to implement because automatic and robust measurement identification and matching at either the pixel or the sub-pixel level is difficult for large image data captured under low-light settings. Therefore, as shown in Figure 1, image enhancement is proposed in this study as an SHM framework specifically for monitoring in low-light and dark environments. It is implemented in close-range photogrammetry and SHM imaging that requires feature detection and computes displacement based on template matching.

2.1. Feature Detection Problems in Low-Light Setting and Dark Environment

Examples of SHM images captured by two types of cameras in laboratory environments are shown in Figure 2. The monitoring of these structures relied completely on the ambient light. Without extra lighting, it is difficult to stop fast action or to maximize the depth of field, and these factors impact the brightness of the captured images, resulting in the underexposed images shown in Figure 2a,c. When using commercial DSLR cameras, a higher ISO should be set to compensate for the dim light. Without proper lighting, the DSLR system will capture a low-contrast image, as shown in Figure 2e, especially when a fast shutter speed is required in high-speed testing.
For a vision-based sensor with a tracking system based on a specifically designed artificial feature or template [67], as shown in Figure 2, separating the black background from the white template rings is the fundamental step before applying a feature detection algorithm. The background is defined as the template region with the lowest gray level intensity (black). The object is identified as the white circle feature that is separated from the background within an area of the whole template Point Spread Function (PSF) size with a higher density. The template is then detected based on the principles of scale-space theory [68,69], such that the center of the circle is identified from the second-order partial derivatives of the Laplacian of Gaussian (LoG). When the template is illuminated sufficiently and the vision system exposure is set appropriately, the circle center and template can be registered and identified automatically, as shown in Figure 2g. Figure 2a,c clearly shows that no features on the templates can be identified, as there is no distinction between the background and the object. Even though the structure is visible when a higher ISO is set on the DSLR cameras, as shown in Figure 2e, the low dynamic range of the low-contrast image allows only a few templates to be detected and falsely identifies parts of the background as objects. Therefore, relying completely on the ambient light without any additional lighting loses image detail and makes it challenging for CV algorithms to automatically extract the important features.
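As a concrete illustration of this detection step, the sketch below locates bright circular blobs with a Laplacian of Gaussian detector after a simple enhancement pass. This is a minimal sketch using scikit-image rather than the authors' software; the file name, sigma range, and threshold are illustrative assumptions.

```python
# Minimal LoG blob detection sketch (assumed file name and parameters).
import numpy as np
from skimage import io, exposure
from skimage.feature import blob_log

image = io.imread("frame.png", as_gray=True)      # grayscale, float in [0, 1]
enhanced = exposure.equalize_adapthist(image)     # CLAHE-style pre-enhancement

# blob_log returns rows of (y, x, sigma); blob radius is approximately sigma*sqrt(2)
blobs = blob_log(enhanced, min_sigma=5, max_sigma=25, num_sigma=10, threshold=0.1)
for y, x, sigma in blobs:
    print(f"center = ({x:.1f}, {y:.1f}), radius = {sigma * np.sqrt(2):.1f} px")
```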

2.2. Image Enhancement Algorithms

Based on the modified area, image enhancement can be categorized into local and global methods; more about the application of these methods to grayscale images can be found in Pathak et al. [70]. This study focuses on improving image characteristics using the global rather than the local approach, for the following reasons. Vision-based SHM is capable of measuring multiple locations at the same time by tracking the movement of artificial templates. In monitoring large-scale structures, these templates are distributed across the entire structural component, as shown in Figure 2f, which means that all of them should be correctly identified in the image after the enhancement process. Local operations are less efficient for this purpose, as processing multiple targets is more time-consuming. These operations can also introduce noise and other spatial artifacts that affect background separation and feature detection, which require clarity in the processed images.
The mathematical foundation of global image enhancement is to find a mapping function, $\mathcal{T}$, that improves the quality of the input image, $I(x,y)$, to the optimum output image, $O(x,y)$, as shown in Equation (1):

$$O(x,y) = \mathcal{T}\big(I(x,y)\big) \tag{1}$$
In this study, five global image enhancement algorithms, as shown in Figure 1, are implemented to improve vision-based image quality. The algorithms are contrast stretching (CS) [71], contrast limited adaptive histogram equalization (CLAHE) [72], histogram equalization (HE), haze removal with an inverted operation (HRIO) [73], and haze removal with a single dark channel prior (HRDC) [74]. To visualize how each method improves image characteristics, examples of an underexposed input and the enhanced output images are given in Figure 3 with their associated gray level histograms. The processed image size is of width $X = 2560$ pixels and height $Y = 2048$ pixels. The histogram bins for monochrome images with bit depth $n = 8$ are defined as $2^n = 256$, ranging from the darkest gray value of zero to the brightest value of $N = 2^n - 1 = 255$.
The gray distribution of the input image in Figure 3a shows a very low gray level intensity with several localized peaks near the top corner of the image from the light background. The contrast stretching (CS) algorithm in Figure 3a linearly scales these underexposed image pixel values between a specified upper limit $lim_{up}$ and lower limit $lim_{low}$. The mathematical relationship of the CS operation is given in Equation (2):

$$O(x,y) = \frac{I(x,y) - lim_{low}}{lim_{up} - lim_{low}} \times N \tag{2}$$
The example in Figure 3a is the output of the CS algorithm with $lim_{low} = 0.01$ and $lim_{up} = 0.99$ of the 255 gray level intensity range. This block finds these pixels and saturates the values above and below the limits. Of all the proposed methods, histogram equalization (HE) is the most commonly selected algorithm for improving monochrome images. The HE operation on a dark image can be expressed in Equation (3) as follows:

$$O(x,y) = \left\{ \mathcal{T}\big(I(x,y)\big) \;\middle|\; I(x,y) \in I \right\} \tag{3}$$
The transform function in Equation (3) is based on the cumulative density function (CDF) that maps the input image $I(x,y)$ to the entire dynamic range $(I_0, I_N)$. The enhanced image using the HE method in Figure 3c shows that this method redistributes the probability of occurrence of the input gray levels to make it uniform in the output image over the entire intensity range $N$. A modification of the HE method that also supports its potential for image enhancement is contrast limited adaptive histogram equalization (CLAHE). The method limits the contrast amplification by clipping the histogram at a specified value before computing the CDF. Therefore, the resulting output image from this method, given in Figure 3b, is not brightened excessively, because the peaks present in the input image are still clearly visible in the output image.
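To make Equations (2) and (3) concrete, the sketch below applies the three histogram-based enhancements with OpenCV and NumPy. This is a minimal sketch, not the authors' implementation; the file name, the 1st/99th-percentile stretch limits, and the CLAHE clip limit and tile size are illustrative assumptions.

```python
# Histogram-based enhancement sketch: CS (Eq. (2)), HE (Eq. (3)), and CLAHE.
import numpy as np
import cv2

img = cv2.imread("underexposed.png", cv2.IMREAD_GRAYSCALE)  # 8-bit, N = 255

# Contrast stretching: map the 1st-99th percentile range onto [0, 255]
lo, hi = np.percentile(img, (1, 99))
cs = np.clip((img.astype(np.float64) - lo) / (hi - lo) * 255, 0, 255).astype(np.uint8)

# Histogram equalization: CDF-based remapping over the full dynamic range
he = cv2.equalizeHist(img)

# CLAHE: equalization with histogram clipping to limit contrast amplification
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(img)
```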
Images captured in a hazy environment have high-intensity background pixels in each channel, whether in monochrome or RGB images, whereas the object is mainly disturbed by shadows, streaks, etc., causing it to have low intensity. The haze imaging model used in CV is given in Equation (4), in which $I(i)$ is the image intensity, $J(i)$ is the scene radiance, $A$ is the global atmospheric light, and $t(i)$ is the portion of light that is not dispersed and reaches the sensor [74]. The direct attenuation term $J(i)\,t(i)$ decays in the air as a multiplicative distortion of the scene radiance, whereas the airlight term $A\big(1 - t(i)\big)$ is an additive component of the scene radiance that shifts the image colors.

$$I(i) = J(i)\,t(i) + A\big(1 - t(i)\big) \tag{4}$$
He et al. [74] proposed a modification of Equation (4) based on statistics of haze-free images. The concept is defined as haze removal using dark channel prior (HRDC) and is expressed in Equation (5) below. The transmission $t(i)$ is restricted by the lower bound $t_0$ so a small amount of haze is preserved in the dense haze region.

$$J(i) = \frac{I(i) - A}{\max\big(t(i),\, t_0\big)} + A \tag{5}$$
Previous studies conducted by Dong et al. [73] discovered that low-light video or image enhancement has similarities with the dehazing, or haze removal, operation. Equation (4) is modified by Dong et al. [73] following the haze removal procedure, which starts by inverting the low-light image as $R(i)$. The global atmospheric light $A$ is selected from the highest-intensity pixel of the input image $I(i)$. A multiplier $P(i)$ is introduced to adjust $t(i)$, because the brightness of the object is still low when $t(i)$ is applied directly to the low-light image. The multiplier $P(i)$ is set according to the assigned $t(i)$ value to avoid over- or under-enhancement of the input image. This procedure is expressed in Equation (6) and is defined as haze removal with an inverted operation (HRIO) in this study.

$$J(i) = \frac{R(i) - A}{P(i)\,t(i)} + A \tag{6}$$
A clear difference between the dehazing algorithms expressed in Equations (5) and (6) can be observed in Figure 3d,e. The output image from the HRDC algorithm is similar to that of CLAHE, resulting in a more natural image without oversaturation. Furthermore, the improvement in white level intensity is more visible with the HRIO method, such that the separation between the background and the object is more obvious.
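For illustration, the following sketch applies the inverted dehazing idea behind Equations (5) and (6) to a grayscale image: invert the low-light image, estimate a dark channel and transmission, dehaze, and invert back. It is a minimal sketch under stated assumptions: the patch size, the weight omega, and the floor t0 are illustrative, and the adaptive multiplier P(i) of Dong et al. is folded into the constant weight for brevity.

```python
# Grayscale HRIO sketch: dark-channel dehazing applied to the inverted image.
import numpy as np
import cv2

def dark_channel(img, patch=15):
    # Per-pixel minimum over a local patch (for grayscale, a simple min-filter).
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    return cv2.erode(img, kernel)

def hrio(img_u8, omega=0.8, t0=0.1):
    low = img_u8.astype(np.float64) / 255.0
    r = 1.0 - low                              # inverted low-light image R(i)
    a = r.max()                                # atmospheric light A (brightest pixel)
    t = 1.0 - omega * dark_channel(r / a)      # transmission estimate t(i)
    j = (r - a) / np.maximum(t, t0) + a        # dehazed inverted image (cf. Eq. (5))
    return np.clip((1.0 - j) * 255, 0, 255).astype(np.uint8)  # invert back

enhanced = hrio(cv2.imread("underexposed.png", cv2.IMREAD_GRAYSCALE))
```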

2.3. Image Quality Assessment

When a field deployment of the vision-based system is conducted in a low-light or dark environment, no input image can be used as a reference image, i.e., an image captured under normal lighting conditions that can be assumed to have good visual quality. Therefore, the assessment of output image quality from the enhancement operations in this study is conducted with no-reference quality metrics, namely, the blind/referenceless image spatial quality evaluator (BR) [75], the naturalness image quality evaluator (NQ) [76], and the perception-based image quality evaluator (PQ) [77]. Essentially, the BR, PQ, and NQ metrics use similar natural scene statistics (NSS) features, but the BR and PQ metrics use features trained on natural and distorted images, in addition to human interpretation. Therefore, BR and PQ scores are restricted to the assigned types of distortion, whereas NQ is more independent in predicting image quality.
The no-reference quality metrics described previously are used to estimate the quality of the output images from the enhancement procedures. Meanwhile, the classical quality metrics, i.e., image entropy ($E$), peak signal-to-noise ratio ($PSNR$), and structural similarity index ($SSIM$), are still used in this study to measure how each of these indices changes following the enhancement process. The difference in image characteristics before and after implementing the enhancement algorithms can be estimated using these metrics.
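As a sketch of how the classical indices can be computed, the snippet below evaluates entropy, PSNR, and SSIM for an input/output pair with scikit-image; the file names are illustrative. The no-reference metrics (BR, NQ, PQ) rely on trained NSS models and are not reproduced here; implementations exist, for example, in opencv-contrib and in MATLAB's Image Processing Toolbox.

```python
# Classical quality indices for an enhancement input/output pair (assumed files).
from skimage import io
from skimage.measure import shannon_entropy
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

inp = io.imread("underexposed.png", as_gray=True)   # float images in [0, 1]
out = io.imread("enhanced.png", as_gray=True)

print("entropy in/out:", shannon_entropy(inp), shannon_entropy(out))
print("PSNR:", peak_signal_noise_ratio(inp, out, data_range=1.0))
print("SSIM:", structural_similarity(inp, out, data_range=1.0))
```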

2.4. Automated Identification of Object Features and Significance in Close-Range Photogrammetry and SHM Procedures

In automated close-range photogrammetry, object detection is performed by testing for a homogeneous white area within a predefined search window. Template matching based on the normalized cross-correlation coefficient (NCC) computes all possible radii of the center of the white area in two directions within the search window. Overall, the adapted photogrammetry procedure in this study is computed automatically over all photogrammetry images by the self-calibrating bundle adjustment. When the photogrammetry is completed without error, the SHM is conducted, and the recorded videos or images are processed to generate the data. Sub-pixel registration using the pattern or template matching method [78] based on NCC is also used to track the object locations within the image sequences. Finally, using the relationship between the two cameras (as a full-projection matrix) and the change in object location in each image (from the template matching method), as outlined in Figure 1, images are translated into time-domain response signals, i.e., displacement, velocity, or acceleration. The SHM accuracy is computed from the difference between the vision-based measurement and reference values, as an absolute or relative error depending on the experiment.
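The sketch below illustrates NCC-based template tracking with a simple parabolic sub-pixel refinement of the correlation peak. It is a minimal sketch, not the authors' pipeline; the file names and the one-dimensional refinement are illustrative assumptions.

```python
# NCC template matching with parabolic sub-pixel peak refinement (x shown).
import cv2

frame = cv2.imread("frame_0001.png", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("template.png", cv2.IMREAD_GRAYSCALE)

# Correlation surface; the maximum gives the best integer-pixel match.
ncc = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
_, score, _, (x, y) = cv2.minMaxLoc(ncc)

# Fit a parabola through the peak and its horizontal neighbors.
if 0 < x < ncc.shape[1] - 1:
    c_l, c_0, c_r = ncc[y, x - 1], ncc[y, x], ncc[y, x + 1]
    x_sub = x + 0.5 * (c_l - c_r) / (c_l - 2.0 * c_0 + c_r)
else:
    x_sub = float(x)
print(f"match at x = {x_sub:.2f}, y = {y}, NCC = {score:.3f}")
```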

3. Implementation and Validation of the Proposed Framework through a One-Inch Block Experiment

3.1. Experimental Setup

The proposed image enhancement framework was experimentally evaluated using a one-inch steel block test in the Earthquake Engineering Laboratory at the University of Nevada, Reno. For the largest field of view, the vision-based system monitored the test at a distance of approximately 5 m and was set on top of a shake table, as shown in Figure 4. The deployed vision systems consisted of two pairs of digital cameras with the specifications listed in Table 1. Two high-speed (HS) cameras that required a host computer were triggered from the control room, while the second set consisted of two DSLR cameras that were operated manually (standalone DSLR, SD). A total of 28 templates were glued to the specimen, as shown in Figure 4, with a white circle radius of 21 mm. They were not illuminated by extra lights, so the monitoring relied completely on the ambient lighting. The HS camera exposures were also set such that the captured image was completely dark and underexposed, as shown previously in Figure 2a; the f-stop and shutter speed were set to f/14 and 1/3940 s, respectively. For the SD cameras, the general setting for an ambient light environment was selected, as given in Table 1: ISO 400, an f-stop of f/14, and a shutter speed of 1/50 s, resulting in a normal image, as shown previously in the example in Figure 2g.
The main component of the validation test model shown in Figure 4 is a sliding bar attached to a concrete column–capital–slab specimen. The sliding bar consists of a Novotechnik displacement sensor, an aluminum plate, and six circular templates. The other templates shown in Figure 4 were used for other static experiments; however, the minimum target constraints in the bundle adjustment process required them to be included in the photogrammetry images. A one-inch magnetic block was used in the static test by inserting it into the sliding bar, which displaced the six templates by exactly one inch, as read by the Novotechnik sensor. Three still images were recorded in the tests, i.e., two images without the block inserted (before and after) and one with the block placed, when the templates were moved by exactly one inch. Therefore, the accuracy measured from this test was based on an absolute single value of 25.4 mm; this value was compared with the six-point measurements shown in Figure 4.

3.2. Output Object Visualization

A total of 50 photogrammetry images, 25 captured by each camera of the HS system, were taken from different positions and orientations towards the specimen. The underexposed input images were improved before the automatic object detection, close-range photogrammetry, and SHM procedures. The global histograms for the input and associated output images for each enhancement method are shown above in Figure 3. Because the measurement accuracy was assessed from the displacement of the six templates shown in Figure 5, the detailed modification of each point after enhancement at their 2D locations is given in Figure 5. This figure displays the change in gray level, and the results clearly show that the intensity is evenly stretched for all points. A clipping effect is observed for the CS and HE methods, i.e., the pure white block is clipped at the maximum intensity of 255. The gray values in this specific region fall outside the sensor dynamic range after enhancement, so they are set to the maximum (255) and appear as clipped peaks in the histogram bins. Another observation is that the HRDC method effectively separates the white features from the black background, such that the low-level intensity of the dark background is visually clear in Figure 5e. Meanwhile, CLAHE softens the clipping effect that is evident in the HE method; it limits template brightness by setting a threshold of 0.01 pixel, thus avoiding oversaturation. The HRIO method also confines the gray level distribution within the sensor dynamic range without the clipping effect.
The radii of each point in Figure 5 identified as an object by the NCC template matching are listed in Table 2 for each enhancement algorithm. The search window for the object is set to a 5.0-pixel minimum to allow automatic detection of the center. From Table 2, the pixel length in each direction is not uniform, and the difference corresponds to a scale factor of approximately 1.2. Because the images were taken from different angles, the object did not always appear circular. Therefore, instead of detecting a circle feature, an ellipse threshold of 2.0 pixels was selected to check the similarity of the radius in each direction. When an ellipse feature is detected, the center of the search window defines the ellipse center based on the minimum threshold average length of 2.0 pixels in each direction. The results in Table 2 show ranges of 19–21 pixels and 15–17 pixels for the first and second radius, respectively. The variation within each enhancement method is largest for the HRIO method and smallest for the CS and HE methods. The radii also vary within each point due to the applied enhancement method, with slightly higher percentages for points 2 and 3.

3.3. Image Quality Assessment

The effectiveness of each algorithm in modifying image quality was measured using the classical entropy ($E$), $PSNR$, and $SSIM$, and the output image quality was estimated using the no-reference image quality metrics, i.e., $BR$, $NQ$, and $PQ$. The assessment focused only on the quality and index changes due to the image enhancement procedures applied to the underexposed images captured by the vision-based HS system. Quality assessment was conducted on all enhanced photogrammetry images, with the statistics shown in Table 3. The coefficients of variation ($CV$) were computed from the 50 output images for each enhancement method and index, and the index change ($\Delta_{input}$) was measured as the mean difference between each algorithm and the input index.
Although uniform boundary conditions of the enhancement algorithms were applied to the 50 input images, some variations based on the $CV$ percentage were observed in the output images, especially when the enhancement was conducted using the HRDC method. The no-reference metrics also measure input-image $CV$ within 1.6–9.4%, with more variation computed by the $NQ$ index, as listed in Table 3. The input images were visually dark and underexposed; however, they were taken from different positions and orientations towards the specimen. Because the monitoring depends entirely on the ambient lights and the lighting cannot be controlled to evenly illuminate the templates, changing the camera positions while taking pictures affected the images captured by the camera sensor. Overall, the observations based on the output image statistics in Table 3 show that the implementation of the image enhancement algorithms modifies the input image characteristics, and some metrics detect major changes compared to other indices. These variations cannot be identified merely from the output image perception or the gray level histogram.

3.4. Effect of Image Enhancement on the Object Identification in the Close-Range Photogrammetry

The automatic object identification procedure using the ellipse assumption and NCC matching was described previously in Section 3.2. An example of accurate identification is shown in Figure 6a, in which the object center is detected and positioned at the center with correct registration following the white rings. As a result of the predefined search window and threshold, these rings were sometimes detected as objects, so their center coordinates and residuals were computed in the preliminary object orientation, as shown in Figure 6c. The white rings were detected separately from the circular center and considered objects; therefore, these rings cannot be grouped into a single object or correctly registered as a single template. The object center can also be detected, but when the matching does not converge, the detected object cannot be registered, as in the example in Figure 6e. The ellipse shape assumption may also detect a few non-object features, such as the bolts or stair reflections in Figure 6d. These features may cause a photogrammetry image to fail in the bundle adjustment computation if they are dominant in the image plane, as shown in Figure 6f. This image was excluded from the automatic bundle adjustment, and the photogrammetry cannot be completed when most objects are incorrectly identified in each image plane.
The results of object identification in the photogrammetry based on each enhancement algorithm are listed in Table 4. If all 28 templates were visible in the 50 photogrammetry images, correct object identification would yield 1400 objects. However, these images were taken from different positions, with some pictures taken closer to the specimen; the templates at the top of the specimen are lost in those positions, so the correct identifications in Table 4 comprise fewer than 1400 objects. The totals given in Table 4 are the sum of the correct, incorrect, and non-object identifications, minus the unidentified objects. It is observed that the haze removal-based algorithms identified 80% of the objects correctly but have slightly higher percentages of non-object identification. Almost 20% of the images enhanced by the HRDC procedure failed, as these images contain incorrect object detections. Furthermore, the histogram-based enhancements have lower accuracy of approximately 60%, with 0.4% or fewer unidentified objects. Incorrect object identification also occurs at a higher percentage than in the haze removal-based algorithms; however, these errors are not concentrated in one image but rather distributed over all 50 images, so there are no failed images from the CS, CLAHE, or HE methods.
The enhanced images from each procedure were carefully analyzed, confirming that all failed images were excluded from the bundle adjustment procedure. Therefore, despite the failed images and the variations in object identification observed for each enhancement method, the bundle adjustment could still reach convergence. The photogrammetry using output images from each enhancement method was completed, with the results given in Table 5. The principal point locations $u_0$ and $v_0$ were determined by the projected positions of the light rays through the lens center that are perpendicular to the image plane. The length of this perpendicular line is the principal distance, which is equal to the focal length at infinity focus. It is related to the HS hardware system setting for the validation tests; therefore, its variation is negligible and estimated as 0.23% across all methods. Furthermore, the principal point locations are known to be correlated with other internal camera parameters such as the distortion coefficients. This strong correlation with other internal camera parameters resulted in higher variations in the principal point locations within each method, computed as −18% and −24.64%, respectively.
Overall, the global image enhancement methods implemented in this study may still have some limitations in automatically identifying and registering objects. In addition, careful selection of the images to be enhanced is required early in the bundle adjustment procedure. However, as demonstrated by the results in this section, the method remains valid when the enhanced images included in the bundle adjustment are carefully selected, so that the process completes and proper camera system parameters can be obtained (Table 5).

3.5. Effect of Image Enhancement on the Vision System Measurement Accuracy

Similar enhancement methods were applied to the three static images taken from the validation experiments. The measurement accuracy was assessed by computing the absolute error, $\Delta_{abs}$, of the displacement of the six points, $\delta$, with respect to the absolute value of 25.4 mm. The results are shown in Table 6, and the average absolute error, $\Delta_{abs,mean}$, is computed from all points. A high accuracy with less than 1% error is observed for all measurements using enhanced images; only the HRDC output images yield a slightly higher error of 1.37%. Overall, the results shown in Table 6 provide the ultimate validation and verification for implementing image enhancement using either histogram-based or haze removal-based algorithms, where displacement measurement absolute errors can be less than 1%.

4. Implementation in Seismic Monitoring of a Large-Scale Building Using Two Vision-Based Systems

4.1. Monitoring Setup and Building Description

The accuracy of image enhancement in measuring seismic vibrations and identifying structural dynamic characteristics was evaluated using a large-scale seismic shake table test of a three-story reinforced concrete (RC) building, as shown in Figure 7. The test was part of the Tokyo Metropolitan Resilience Project Subproject C and was performed in December 2019 at the National Research Institute for Earth Science and Disaster Resilience (NIED) in Kobe, Japan. The tests were dedicated to improving the resiliency of buildings and to developing SHM techniques that could rapidly assess the safety of buildings after major seismic shaking, given their post-disaster functions. More information related to the project and the building system can be found in Yeow et al. [79].
The two vision sensor systems and their configurations used in the seismic monitoring are shown in Table 7; both systems used CMOS sensors. The first system comprised two high-speed (HS) cameras, shown as Cam 1 and Cam 2 in Figure 7, similar to those used in the validation test. The second system used standalone DSLR (SD) cameras, shown as Cam A and Cam B in Figure 7, which recorded monochrome videos of the tests. The SD system test videos were later converted into continuous images with a resolution of 1920 × 1080 pixels. The sampling rate for the seismic tests was selected as 32 frames per second for the HS system, whereas the default setting of 30 frames per second was used for the SD system. Both vision systems relied completely on the ambient light sources in the test environment and the setting adjustments in each camera. Therefore, the captured images for the photogrammetry and the SHM required image processing to improve their dynamic range.
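A minimal sketch of the video-to-frames conversion step is shown below using OpenCV; the input file name and output pattern are illustrative assumptions, not the authors' actual workflow.

```python
# Convert a recorded test video into a sequence of grayscale frames.
import cv2

cap = cv2.VideoCapture("sd_test_video.mp4")
print("fps:", cap.get(cv2.CAP_PROP_FPS))   # nominally 30 fps for the SD system
i = 0
while True:
    ok, frame = cap.read()
    if not ok:                             # end of stream
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cv2.imwrite(f"sd_frame_{i:05d}.png", gray)
    i += 1
cap.release()
```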

4.2. Output of Image Enhancement

Given the promising results obtained from the simple one-inch block test, it was desired to extend the study to more realistic cases, including full-scale building vibration monitoring, which is the focus of the next section. The validation test described previously highlighted several enhancement algorithms that result in less error than the other methods. An example of the enhanced image histograms using the CLAHE method and the quality index metrics for both sensor systems is shown in Figure 8. The images initially captured were underexposed for the HS system and low in contrast for the SD system, as shown by their histograms in Figure 8c,d. As previously highlighted in Figure 2, the identification algorithms are unable to locate any features in the original HS image, whereas the low-contrast SD image yields a small number of identified features, but their total is inadequate for bundle adjustment convergence. After processing with the CLAHE method, the histogram of the HS vision system clearly shows the stretching of the pixel distribution over the gray level intensity as the effect of image enhancement. A reduction in pixel counts is also observed, especially in darker areas. For the SD system, the CLAHE algorithm relaxes the pixel counts so the separation between dark and bright areas is more evident in the output image. Similar to the static test images, the entropy of the seismic test output images also measures higher values due to the applied image enhancement. This is more noticeable in the HS output, whereas less change is computed for the low-contrast images recorded by the SD system. From the metrics shown in Figure 8, the enhancement procedure is observed to affect underexposed images more than low-contrast images, especially when the quality is estimated by the PQ metric.

4.3. Seismic Behavior and System Identification of the Three-Story Building

Several ground motion excitations ranging from low to high amplitude were applied to the RC building. White noise excitations, i.e., low-amplitude vibrations, were applied to the building between the seismic tests with a loading duration of 180 s. Samples of the displacement histories from a high-amplitude test (150% scale of a synthetic ground motion seismic excitation [79]) and a white noise test are given in Figure 9 and Figure 10 for the measured template marked in the figures. The HS system is selected as the reference sensor, and the relative difference between the two systems' measurements is presented in detail.
The seismic response of the building under high-amplitude excitation is shown in Figure 9 based on the measurements of the two vision systems, together with their relative difference. More details are presented in Table 8, which provides a summary of the peak displacement values from both monitoring systems. The maximum and minimum peaks observed from the HS system displacement measurements were 1118.3 and −787.4 mm, respectively, whereas the SD system showed peaks of 1113.3 and −779.2 mm. The maximum error of the SD system measurement relative to the HS system was computed as −28.57 mm (3.63%), which shows that the consumer-grade and high-end high-speed sensor systems are comparable.
The building response along the three principal axes under the low-amplitude white noise excitation is given in Figure 10 for the two vision systems. These data were analyzed using the SSI-COV algorithm to enable frequency and modal identification of the building system. The HS sampling rate was 32 Hz, so the SD system acceleration signal was resampled to increase the computational efficiency of the identification algorithm. The signals from both systems were filtered using a 4th-order Butterworth bandpass filter with cutoff frequencies of 3 and 13 Hz. A model order of six was selected for all signals to enable identification of the first three fundamental modes, with the fitting computed up to the 30th order to show the stability of the poles at higher orders. The frequency response function of each signal is plotted in Figure 11 together with the stability of the poles. With a model order of six, only the transverse and longitudinal response data yield three poles that are stable in frequency and damping. Higher model orders and different filtering could be selected to extract more modes in the vertical direction; for uniformity in signal processing, however, the filtering and model order selection were kept the same in this case study.
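As a sketch of this pre-processing step, the snippet below builds the 4th-order Butterworth bandpass filter (3–13 Hz) at the 32 Hz sampling rate with SciPy; the zero-phase filtfilt application and the placeholder signal are assumptions for illustration.

```python
# 4th-order Butterworth bandpass (3-13 Hz) for the 32 Hz response signals.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 32.0                                    # sampling rate (Hz)
nyq = fs / 2.0                               # Nyquist frequency
b, a = butter(4, [3.0 / nyq, 13.0 / nyq], btype="bandpass")

t = np.arange(0.0, 180.0, 1.0 / fs)          # 180 s white-noise record
signal = np.random.randn(t.size)             # placeholder response signal
filtered = filtfilt(b, a, signal)            # zero-phase filtering
```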
The comparison between the HS and SD systems in extracting the first mode of vibration, in terms of frequency, $f_1$, and damping, $\zeta_1$, is given in Table 9. The frequency difference, $\Delta f$, is lowest for the transverse direction, with slightly higher differences in the other directions. A relative difference within 3% is observed in the structural damping estimates, $\Delta\zeta$. Overall, similar to what was demonstrated for the high-amplitude displacement measurements, the structural modal properties computed from both vision systems are also very comparable.

5. Conclusions

This work presents a framework for improving underexposed images using image enhancement algorithms for feature identification, with implementation in close-range photogrammetry and structural health monitoring. An experimental validation with systematic evaluation was conducted using a one-inch steel block test, which measured the absolute difference between two vision-based systems and the one-inch block displacement. The framework was also tested in measuring the seismic response and modal properties of a three-story building under high-amplitude seismic excitation and a white noise test. Based on these laboratory experiments, the key findings and main conclusions are as follows:
  • Image enhancement efficiently improves the quality of image data collected from vision-based sensors and needs to be adopted more often in infrastructure and large-scale SHM applications. The proposed algorithms can modify the underexposed and low-contrast input images captured by high-speed or commercial DSLR cameras, thus allowing automatic feature identification. Their efficiency can be estimated through the classical image quality metrics, and their output quality can be assessed by more advanced blind image quality metrics.
  • The enhanced images show very high accuracy in measuring static displacement, as observed by the two vision systems in the one-inch block test. Comparable results from both systems were also obtained in measuring high-amplitude displacement from the large-scale seismic tests and in estimating structural modal properties through the system identification procedure.
  • Overall, it is concluded that image enhancement has a significant effect on feature identification, with implications for close-range photogrammetry and SHM accuracy. The applied enhancement algorithms were shown to be computationally effective and are recommended for vision-based SHM image enhancement applications.
  • On a specific note, automatic feature detection in enhanced images may be a limitation of this method. Thus, future users are cautioned to carefully select the search window and threshold options for enabling automatic detection of the features in the output images when global enhancement algorithms are implemented. A careful check of the number of spurious objects identified within each enhanced image plane is also recommended to allow the bundle adjustment to converge in the photogrammetry process. Measurement accuracy appears to deteriorate slightly when more failed images are identified in the bundle adjustment procedures. With due care, successful monitoring using underexposed and low-contrast images is still possible, not only for different vision system hardware, but also for a wide range of experimental works, through a proper selection of the image enhancement algorithm.

Author Contributions

Conceptualization, L.N. and M.A.M.; methodology, L.N. and M.A.M.; formal analysis, L.N.; data curation, L.N.; writing—original draft preparation, L.N.; writing—review and editing, M.A.M.; supervision, M.A.M.; project administration, M.A.M.; funding acquisition, M.A.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Science Foundation (NSF) Award # 2000560.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors thank the laboratory staff at the University of Nevada, Reno, and the hosting/collaborative team in Japan at the E-Defense facility.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ngeljaratan, L.; Moustafa, M.A. Structural Health Monitoring and Seismic Response Assessment of Bridge Structures using Target-Tracking Digital Image Correlation. In Engineering Structures; Elsevier: Amsterdam, The Netherlands, 2020; Volume 213, p. 110551. [Google Scholar]
  2. Ngeljaratan, L.; Moustafa, M.A. System Identification of Large-Scale Bridge Model using Digital Image Correlation from Monochrome and Color Cameras. In Proceedings of the 12th International Workshop on Structural Health Monitoring, Stanford, CA, USA, 10–12 September 2019; DEStech Publications: Pennsylvania, PA, USA, 2019; Volume 2019. [Google Scholar]
  3. Ngeljaratan, L.; Moustafa, M.A. System Identification of Large-Scale Bridges using Target-Tracking Digital Image Correlation. Front. Built Environ. 2019, 5, 85. [Google Scholar] [CrossRef] [Green Version]
  4. Ngeljaratan, L.; Moustafa, M.A. Novel Digital Image Correlation Instrumentation for Large-Scale Shake Table Tests. In Proceedings of the 11th NCEE, Los Angeles, CA, USA, 25–29 June 2018; pp. 25–29. [Google Scholar]
  5. Ngeljaratan, L.; Moustafa, M.A. Digital Image Correlation for Dynamic Shake Table Test Measurement. In Proceedings of the 7th AESE, Pavia, Italy, 6–8 September 2017; pp. 6–8. [Google Scholar]
  6. Feng, D.; Feng, M.Q. Computer Vision for Structural Dynamics and Health Monitoring; John Wiley & Sons: Hoboken, NJ, USA, 2020. [Google Scholar]
  7. Feng, D.; Feng, M.Q. Experimental Validation of Cost-Effective Vision-Based Structural Health Monitoring. In Mechanical Systems and Signal Processing; Elsevier: Amsterdam, The Netherlands, 2017; Volume 88, pp. 199–211. [Google Scholar]
  8. Feng, D.; Feng, M.Q. Vision-based multipoint displacement measurement for structural health monitoring. In Structural Control and Health Monitoring; Wiley Online Library: Hoboken, NJ, USA, 2016; Volume 23, pp. 876–890. [Google Scholar]
  9. Feng, D. Cable Tension Force Estimate Using Novel Noncontact Vision-Based Sensor. In Measurement; Elsevier: Amsterdam, The Netherlands, 2017; Volume 99, pp. 44–52. [Google Scholar]
  10. Brownjohn, J.M.W.; Xu, Y.; Hester, D. Vision-Based Bridge Deformation Monitoring. Front. Built Environ. 2017, 3, 23. [Google Scholar] [CrossRef] [Green Version]
  11. Dong, C.-Z.; Catbas, F.N. A Review of Computer Vision–Based Structural Health Monitoring at Local and Global Levels. In Structural Health Monitoring; SAGE Publications Sage UK: London, UK, 2020. [Google Scholar]
  12. Ribeiro, D. Non-contact measurement of the dynamic displacement of railway bridges using an advanced video-based system. Eng. Struct. 2014, 75, 164–180. [Google Scholar] [CrossRef]
  13. Ji, Y.F.; Zhang, O.W. A novel image-based approach for structural displacement measurement. In Proceedings of the Sixth International IABMAS Conference, Lake Maggiore, Italy, 8–12 July 2012; pp. 407–414. [Google Scholar]
  14. Lee, J.J. Evaluation of bridge load carrying capacity based on dynamic displacement measurement using real-time image processing techniques. Int. J. Steel Struct. 2006, 6, 377–385. [Google Scholar]
  15. Lee, J.J.; Shinozuka, M. A vision-based system for remote sensing of bridge displacement. Ndt. E Int. 2006, 39, 425–431. [Google Scholar] [CrossRef]
  16. Olaszek, P. Investigation of the dynamic characteristic of bridge structures using a computer vision method. Measurement 1999, 25, 227–236. [Google Scholar] [CrossRef]
  17. Ngeljaratan, L.; Moustafa, M.A.; Pekcan, G. A compressive sensing method for processing and improving vision-based target-tracking signals for structural health monitoring. In Computer-Aided Civil and Infrastructure Engineering; Wiley: Hoboken, NJ, USA, 2021; Volume 36, pp. 1203–1223. [Google Scholar]
  18. Li, J.; Xie, B.; Zhao, X. Measuring the interstory drift of buildings by a smartphone using a feature point matching algorithm. In Structural Control and Health Monitoring; Wiley Online Library: Hoboken, NJ, USA, 2020; Volume 27, p. 2492. [Google Scholar]
  19. Kim, S.-W. Stay cable tension estimation using a vision-based monitoring system under various weather conditions. J. Civ. Struct. Health Monit. 2017, 7, 343–357. [Google Scholar] [CrossRef]
  20. Li, J.; Xie, B.; Zhao, X. A Method of Interstory Drift Monitoring Using a Smartphone and a Laser Device. Sensors 2020, 20, 1777. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  21. Choi, I.; Kim, J.; Jang, J. Development of marker-free night-vision displacement sensor system by using image convex hull optimization. Sensors 2018, 18, 4151. [Google Scholar] [CrossRef] [Green Version]
  22. Feng, M.Q. Nontarget vision sensor for remote measurement of bridge dynamic response. J. Bridge Eng. 2015, 20, 04015023. [Google Scholar] [CrossRef]
  23. Yu, Z.; Bajaj, C. A fast and adaptive method for image contrast enhancement. In Proceedings of the 2004 International Conference on Image Processing, Singapore, 24–27 October 2004; Volume 2, pp. 1001–1004. [Google Scholar]
  24. Thomas, G.; Flores-Tapia, D.; Pistorius, S. Histogram specification: A fast and flexible method to process digital images. IEEE Trans. Instrum. Meas. 2011, 60, 1565–1578. [Google Scholar] [CrossRef]
  25. Xu, Y. Building crack monitoring based on digital image processing. Frat. Ed Integrità Strutturale 2020, 14, 1–8. [Google Scholar]
  26. Omar, T.; Nehdi, M.L. Remote sensing of concrete bridge decks using unmanned aerial vehicle infrared thermography. Autom. Constr. 2017, 83, 360–371. [Google Scholar] [CrossRef]
  27. Russo, F. An image enhancement technique combining sharpening and noise reduction. IEEE Trans. Instrum. Meas. 2002, 51, 824–828. [Google Scholar] [CrossRef]
  28. Rong, Z.; Jun, W.L. Improved wavelet transform algorithm for single image dehazing. Optik 2014, 125, 3064–3066. [Google Scholar] [CrossRef]
  29. Abdel-Qader, I.; Abudayyeh, O.; Kelly, M.E. Analysis of edge-detection techniques for crack identification in bridges. J. Comput. Civ. Eng. 2003, 17, 255–263. [Google Scholar] [CrossRef]
  30. Zhang, R. Automatic Detection of Earthquake-Damaged Buildings by Integrating UAV Oblique Photography and Infrared Thermal Imaging. Remote Sens. 2020, 12, 2621. [Google Scholar] [CrossRef]
  31. Andreaus, U. Experimental damage evaluation of open and fatigue cracks of multi-cracked beams by using wavelet transform of static response via image analysis. Struct. Control. Health Monit. 2017, 24, e1902. [Google Scholar] [CrossRef]
  32. Song, Y.-Z. Virtual visual sensors and their application in structural health monitoring. Struct. Health Monit. 2014, 13, 251–264. [Google Scholar] [CrossRef] [Green Version]
  33. Cha, Y.J. Autonomous structural visual inspection using region-based deep learning for detecting multiple damage types. Comput.-Aided Civ. Infrastruct. Eng. 2018, 33, 731–747. [Google Scholar] [CrossRef]
  34. Zollini, S. UAV Photogrammetry for Concrete Bridge Inspection Using Object-Based Image Analysis (OBIA). Remote Sens. 2020, 12, 3180. [Google Scholar] [CrossRef]
  35. Ablin, R.; Sulochana, C.H.; Prabin, G. An investigation in satellite images based on image enhancement techniques. Eur. J. Remote Sens. 2020, 53, 86–94. [Google Scholar] [CrossRef] [Green Version]
  36. Sidike, P. Adaptive trigonometric transformation function with image contrast and color enhancement: Application to unmanned aerial system imagery. IEEE Geosci. Remote Sens. Lett. 2018, 15, 404–408. [Google Scholar] [CrossRef]
  37. Brunelli, R. Template Matching Techniques in Computer Vision: Theory and Practice; John Wiley & Sons: Hoboken, NJ, USA, 2009. [Google Scholar]
  38. Cowan, C.K.; Modayur, B.; DeCurtins, J.L. Automatic Light-Source Placement for Detecting Object Features; International Society for Optics and Photonics: Bellingham, DC, USA, 1992; Volume 1826, pp. 397–408. [Google Scholar]
  39. Kopparapu, S.K. Lighting design for machine vision application. Image Vis. Comput. 2006, 24, 720–726. [Google Scholar] [CrossRef]
  40. Luo, L.; Feng, M.Q. Edge-enhanced matching for gradient-based computer vision displacement measurement. Comput.-Aided Civ. Infrastruct. Eng. 2018, 33, 1019–1040. [Google Scholar] [CrossRef]
  41. Xu, Y.; Brownjohn, J.M.W. Vision-based systems for structural deformation measurement: Case studies. In Proceedings of the Institution of Civil Engineers-Structures and Buildings; Thomas Telford Ltd.: London, UK, 2018; Volume 171, pp. 917–930. [Google Scholar]
  42. Morgenthal, G.; Hallermann, N. Quality assessment of unmanned aerial vehicle (UAV) based visual inspection of structures. Adv. Struct. Eng. 2014, 17, 289–302. [Google Scholar] [CrossRef]
  43. Zhu, C. Error estimation of 3D reconstruction in 3D digital image correlation. Meas. Sci. Technol. 2019, 30, 025204. [Google Scholar] [CrossRef]
44. Acikgoz, S.; DeJong, M.J.; Soga, K. Sensing dynamic displacements in masonry rail bridges using 2D digital image correlation. Struct. Control Health Monit. 2018, 25, e2187.
45. Poozesh, P. A Multiple Stereo-Vision Approach Using Three-Dimensional Digital Image Correlation for Utility-Scale Wind Turbine Blades. In Proceedings of the IMAC XXXVI, Orlando, FL, USA, 12–15 February 2018; SEM: Orlando, FL, USA; Volume 12.
46. Niezrecki, C.; Baqersad, J.; Sabato, A. Digital image correlation techniques for NDE and SHM. In Handbook of Advanced Non-Destructive Evaluation; Springer: Berlin/Heidelberg, Germany, 2018; pp. 1–46.
47. Sutton, M.A. Effects of subpixel image restoration on digital correlation error estimates. Opt. Eng. 1988, 27, 271070.
48. Wattrisse, B. Analysis of strain localization during tensile tests by digital image correlation. Exp. Mech. 2001, 41, 29–39.
49. Bing, P. Performance of sub-pixel registration algorithms in digital image correlation. Meas. Sci. Technol. 2006, 17, 1615.
50. Gruen, A. Development and status of image matching in photogrammetry. Photogramm. Rec. 2012, 27, 36–57.
51. Bruck, H.A. Digital image correlation using Newton-Raphson method of partial differential correction. Exp. Mech. 1989, 29, 261–267.
52. Schreier, H.W.; Braasch, J.R.; Sutton, M.A. Systematic errors in digital image correlation caused by intensity interpolation. Opt. Eng. 2000, 39, 2915–2921.
53. Lu, H.; Cary, P.D. Deformation measurements by digital image correlation: Implementation of a second-order displacement gradient. Exp. Mech. 2000, 40, 393–400.
54. Zhou, P.; Goodson, K.E. Subpixel displacement and deformation gradient measurement using digital image/speckle correlation. Opt. Eng. 2001, 40, 1613–1621.
55. Zhang, J. Application of an improved subpixel registration algorithm on digital speckle correlation measurement. Opt. Laser Technol. 2003, 35, 533–542.
56. Luo, L.; Feng, M.Q.; Wu, Z.Y. Robust vision sensor for multi-point displacement monitoring of bridges in the field. Eng. Struct. 2018, 163, 255–266.
57. Tian, Y.; Zhang, J.; Yu, S. Vision-based structural scaling factor and flexibility identification through mobile impact testing. Mech. Syst. Signal Process. 2019, 122, 387–402.
58. Pilch, A.; Mahajan, A.; Chu, T. Measurement of whole-field surface displacements and strain using a genetic algorithm based intelligent image correlation method. J. Dyn. Syst. Meas. Control 2004, 126, 479–488.
59. Jin, H.; Bruck, H.A. Pointwise digital image correlation using genetic algorithms. Exp. Tech. 2005, 29, 36–39.
60. Jin, H.; Bruck, H.A. Theoretical development for pointwise digital image correlation. Opt. Eng. 2005, 44, 067003.
61. Pitter, M.C.; See, C.W.; Somekh, M.G. Fast Subpixel Digital Image Correlation Using Artificial Neural Networks; IEEE: Piscataway, NJ, USA, 2001; Volume 2, pp. 901–904.
62. Pitter, M.C.; See, C.W.; Somekh, M.G. Subpixel microscopic deformation analysis using correlation and artificial neural networks. Opt. Express 2001, 8, 322–327.
63. Wu, R.-T.; Jahanshahi, M.R. Deep convolutional neural network for structural dynamic response estimation and system identification. J. Eng. Mech. 2019, 145, 04018125.
64. Zhang, Y. Autonomous bolt loosening detection using deep learning. Struct. Health Monit. 2020, 19, 105–122.
65. Chen, S. UAV bridge inspection through evaluated 3D reconstructions. J. Bridge Eng. 2019, 24, 05019001.
66. Yin, Z.; Wu, C.; Chen, G. Concrete crack detection through full-field displacement and curvature measurements by visual mark tracking: A proof-of-concept study. Struct. Health Monit. 2014, 13, 205–218.
67. Schneider, C.T. 3-D Vermessung von Oberflächen und Bauteilen durch Photogrammetrie und Bildverarbeitung [3-D measurement of surfaces and components using photogrammetry and image processing]. Proc. IDENT/VISION 1991, 91, 14–17.
68. Lindeberg, T. Feature detection with automatic scale selection. Int. J. Comput. Vis. 1998, 30, 79–116.
69. Lindeberg, T. Edge detection and ridge detection with automatic scale selection. Int. J. Comput. Vis. 1998, 30, 117–156.
70. Pathak, S.S.; Dahiwale, P.; Padole, G. A Combined Effect of Local and Global Method for Contrast Image Enhancement; IEEE: Piscataway, NJ, USA, 2015; pp. 1–5.
71. Jain, A.K. Fundamentals of Digital Image Processing; Prentice-Hall: Hoboken, NJ, USA, 1989.
72. Lidong, H. Combination of contrast limited adaptive histogram equalisation and discrete wavelet transform for image enhancement. IET Image Process. 2015, 9, 908–915.
73. Dong, X. Fast Efficient Algorithm for Enhancement of Low Lighting Video; IEEE: Piscataway, NJ, USA, 2011; pp. 1–6.
74. He, K.; Sun, J.; Tang, X. Single image haze removal using dark channel prior. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 33, 2341–2353.
75. Mittal, A.; Moorthy, A.K.; Bovik, A.C. Blind/Referenceless Image Spatial Quality Evaluator; IEEE: Piscataway, NJ, USA, 2011; pp. 723–727.
76. Mittal, A.; Soundararajan, R.; Bovik, A.C. Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 2012, 20, 209–212.
77. Venkatanath, N. Blind Image Quality Evaluation Using Perception Based Features; IEEE: Piscataway, NJ, USA, 2015; pp. 1–6.
78. Stephen, G.A.; Brownjohn, J.M.W.; Taylor, C.A. Measurements of static and dynamic displacement from visual monitoring of the Humber Bridge. Eng. Struct. 1993, 15, 197–208.
79. Yeow, T.Z.; Kusunoki, K.; Nakamura, I.; Hibino, Y.; Fukai, S.; Safi, W.A. E-Defense shake-table test of a building designed for post-disaster functionality. J. Earthq. Eng. 2021, 2, 1–22.
Figure 1. Proposed framework for underexposed and low-contrast image enhancement that incorporates a close-range photogrammetry procedure for application in SHM.
Figure 2. Underexposed and low-contrast images with their associated enhanced versions and identified gray-level intensity (green): (a–d) images from high-speed cameras; (e,f) images from digital cameras; (g) normal image from a digital camera with a zoomed view of detected templates and gray-level intensity.
Figure 3. Underexposed input (top left) and enhanced output images with their associated gray-level histograms (a–e).
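The enhancement operations compared in Figure 3 can be reproduced with standard image-processing tools. The sketch below is a minimal illustration using OpenCV, assuming an 8-bit monochrome frame; the file name and the CLAHE settings (clip limit and tile grid) are placeholders rather than the study's actual parameters.

```python
import cv2
import numpy as np

# Load an 8-bit monochrome frame; "frame.tiff" is a placeholder path.
img = cv2.imread("frame.tiff", cv2.IMREAD_GRAYSCALE)

# Gray-level histogram of the underexposed input (256 bins over [0, 255]).
hist_input = cv2.calcHist([img], [0], None, [256], [0, 256]).ravel()

# Global histogram equalization (HE).
img_he = cv2.equalizeHist(img)

# Contrast-limited adaptive histogram equalization (CLAHE);
# clipLimit and tileGridSize are illustrative defaults, not the study's settings.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
img_clahe = clahe.apply(img)

# Simple contrast stretching (CS): map the 1st-99th percentile range of the
# input onto the full 8-bit range.
lo, hi = np.percentile(img, (1, 99))
img_cs = np.clip((img.astype(np.float32) - lo) * 255.0 / (hi - lo), 0, 255).astype(np.uint8)
```

Histograms of the enhanced outputs, computed the same way as hist_input, correspond to the panels shown in Figure 3.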
Figure 4. Experimental setup using a standard one-inch (25.4 mm) block inserted into a sliding verification bar.
Figure 5. Gray-level distribution of measured templates before (top left) and after the enhancement procedures (a–e).
Figure 6. Examples of automatic object and non-object detection, identification, and positioning in the photogrammetry procedure.
Figure 7. Seismic shake table test monitoring setup using two vision-based sensor systems.
Figure 8. Input and output images with gray-level intensity from the CLAHE method.
Figure 9. Comparison between high-speed and commercial DSLR system measurements of the high-amplitude seismic excitation at the marked template.
Figure 10. White noise response in three principal axes, as measured from the marked template by the high-speed and commercial DSLR systems.
Figure 11. Stabilization plots from the output-only SSI-COV method used to identify the building dynamic modal properties in three directions.
Table 1. Vision-based system configuration sets for the one-inch block validation experiment.

Camera Type                      | High-Speed (HS) | Standalone DSLR (SD)
Standalone                       | No              | Yes
Sensor                           | CMOS            | CMOS
Color                            | Monochrome      | Monochrome
Depth, n                         | 8               | 8
Input image size, w × h (pixel)  | 2560 × 2048     | 5184 × 2912
f-stop number                    | f/14            | f/8
Shutter speed (s)                | 1/3940          | 1/50
File type                        | .tiff           | .jpg
Table 2. Two-directional radii, R1 and R2, for identification of object points.

           R1 (pix.)                                          R2 (pix.)
Point      1      2      3      4      5      6      CV (%)   1      2      3      4      5      6      CV (%)
CS (a)     19.09  19.18  18.75  19.30  19.32  19.38  2.41     15.95  15.91  15.19  16.24  16.41  16.31  5.56
CLAHE (b)  19.81  19.73  19.69  20.33  20.46  20.28  3.42     16.59  16.20  17.17  17.40  17.40  17.19  5.75
HE (c)     20.89  21.00  20.70  20.97  20.94  20.99  1.08     17.65  17.57  17.01  17.71  17.78  17.63  3.17
HRIO (d)   19.10  18.55  18.58  20.39  20.68  20.96  11.08    16.10  15.41  15.20  17.26  17.62  17.88  14.00
HRDC (e)   20.73  20.98  20.66  20.85  21.07  20.93  1.48     17.44  17.51  17.02  17.59  17.80  17.61  3.00
CV (%)     4.33   5.48   5.13   3.24   3.40   3.40            4.60   5.89   6.29   3.39   3.32   3.57
(a) contrast stretching, (b) contrast limited adaptive histogram equalization, (c) histogram equalization, (d) haze removal with an inverted operation, and (e) haze-removal with single dark channel prior.
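For reference, the per-point coefficients of variation in the bottom row of Table 2 are consistent with the sample (n − 1) definition CV = σ/μ × 100 taken across the five enhancement methods. A minimal check, assuming that definition:

```python
import numpy as np

def coefficient_of_variation(values):
    """CV (%) using the sample (n - 1) standard deviation."""
    values = np.asarray(values, dtype=float)
    return np.std(values, ddof=1) / np.mean(values) * 100.0

# R1 radii of point 1 across the five enhancement methods (CS, CLAHE, HE,
# HRIO, HRDC), taken from the first column of Table 2.
r1_point1 = [19.09, 19.81, 20.89, 19.10, 20.73]
print(round(coefficient_of_variation(r1_point1), 2))  # 4.33, matching the CV row
```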
Table 3. Coefficients of variation (CV) and index change with respect to the input image quality (Δinput).

CV (%)
Method | E        | PSNR      | SSIM      | BR       | NQ       | PQ
CS     | 2.74     | 8.08      | 11.01     | 1.78     | 5.04 (b) | 5.19 (a)
CLAHE  | 0.86 (b) | 2.62 (b)  | 4.43 (b)  | 1.96     | 6.02     | 4.46
HE     | 2.71     | 2.75      | 14.90     | 2.07     | 5.93     | 4.45
HRIO   | 1.61     | 5.73      | 7.44      | 1.65 (b) | 6.61     | 1.74 (b)
HRDC   | 6.10 (a) | 15.36 (a) | 15.18 (a) | 5.25 (a) | 9.41 (a) | 3.35

Δinput (%)
Method | E         | BR        | NQ        | PQ
CS     | 0.96      | 4.96      | 12.66     | 89.31
CLAHE  | 23.46     | 12.43 (a) | 8.90 (b)  | 17.04 (b)
HE     | 0.19 (b)  | 4.72 (b)  | 10.40     | 87.04
HRIO   | 37.88 (a) | 6.37      | 10.78     | 103.62 (a)
HRDC   | 22.31     | 5.75      | 14.16 (a) | 83.58

(a) max.; (b) min.
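The full-reference indexes in Table 3 (PSNR and SSIM) are available in common libraries. The sketch below, assuming scikit-image and synthetic placeholder arrays, shows how they and a histogram-entropy index E can be computed; the no-reference indexes BR, NQ, and PQ correspond to BRISQUE [75], NIQE [76], and PIQE [77] and require their own implementations, so they are not reproduced here.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# Placeholder 8-bit grayscale arrays; in practice the enhanced frame is compared
# against the corresponding input (or reference) image.
rng = np.random.default_rng(0)
reference = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)
enhanced = np.clip(reference.astype(int) + 10, 0, 255).astype(np.uint8)

psnr = peak_signal_noise_ratio(reference, enhanced, data_range=255)
ssim = structural_similarity(reference, enhanced, data_range=255)

# Gray-level histogram entropy E (one common definition; the paper's exact
# formulation may differ).
counts, _ = np.histogram(enhanced, bins=256, range=(0, 256))
p = counts / counts.sum()
entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
```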
Table 4. Object identification results from 50 photogrammetry images.

Method | Total | Correct | %     | Incorrect | %     | Non-Object | %    | Unidentified | %     | Failed Images (out of 50)
CS     | 1963  | 1233    | 62.81 | 668       | 34.03 | 68         | 3.46 | −6           | −0.30 | 0
CLAHE  | 1897  | 1215    | 64.05 | 636       | 33.53 | 49         | 2.58 | −3           | −0.16 | 0
HE     | 2040  | 1278    | 62.65 | 678       | 33.24 | 85         | 4.17 | −1           | −0.05 | 0
HRIO   | 1466  | 1174    | 80.08 | 201       | 13.71 | 91         | 6.21 | 0            | 0     | 1
HRDC   | 1178  | 965     | 81.92 | 178       | 15.11 | 35         | 2.97 | 0            | 0     | 10
Table 5. High-speed system internal parameters measured from the photogrammetry process using enhanced images.

Method | c (pix.) | u0 (pix.) | v0 (pix.)
CS     | 7135.09  | −35.01    | −7.06
CLAHE  | 7154.47  | −24.22    | −8.28
HE     | 7156.03  | −28.39    | −9.81
HRIO   | 7181.46  | −31.85    | −12.01
HRDC   | 7160.54  | −38.92    | −12.95
CV (%) | 0.23     | −18.00    | −24.64
Table 6. Measurement accuracy from one-inch steel block experiments using enhanced images.

Point          | 1     | 2     | 3     | 4     | 5     | 6     | Δabs,mean (%)
(1) δ (mm)     | 25.83 | 24.92 | 25.35 | 25.47 | 25.79 | 25.32 |
    Δabs (%)   | 1.71  | 1.89  | 0.18  | 0.27  | 1.55  | 0.31  | 0.98
(2) δ (mm)     | 24.95 | 25.02 | 25.20 | 25.56 | 25.59 | 25.41 |
    Δabs (%)   | 1.79  | 1.50  | 0.78  | 0.65  | 0.74  | 0.06  | 0.92
(3) δ (mm)     | 24.83 | 25.01 | 25.20 | 25.44 | 25.58 | 25.51 |
    Δabs (%)   | 2.26  | 1.53  | 0.78  | 0.18  | 0.71  | 0.45  | 0.98
(4) δ (mm)     | 24.70 | 24.96 | 25.18 | 25.40 | 25.48 | 25.43 |
    Δabs (%)   | 2.76  | 1.75  | 0.85  | 0.01  | 0.30  | 0.14  | 0.97
(5) δ (mm)     | 25.81 | 24.91 | 25.14 | 25.65 | 25.82 | 25.14 |
    Δabs (%)   | 1.63  | 1.94  | 1.02  | 1.00  | 1.64  | 1.01  | 1.37
(6) δ (mm)     | 25.06 | 25.03 | 25.14 | 25.22 | 25.15 | 25.51 |
    Δabs (%)   | 1.34  | 1.46  | 1.02  | 0.71  | 0.98  | 0.43  | 0.99

(1) CS method; (2) CLAHE method; (3) HE method; (4) HRIO method; (5) HRDC method; (6) SD system.
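The accuracy measures in Table 6 are consistent with an absolute percent error against the 25.4 mm (one-inch) reference block, Δabs = |δ − 25.4|/25.4 × 100, with small last-digit deviations explained by the rounding of δ to two decimals. A brief check, assuming that formula:

```python
REFERENCE_MM = 25.4  # standard one-inch block

def abs_error_pct(delta_mm):
    """Absolute percent error of a measured displacement, assuming
    delta_abs = |delta - 25.4| / 25.4 * 100 as read from Table 6."""
    return abs(delta_mm - REFERENCE_MM) / REFERENCE_MM * 100.0

# Point 1 of the CS method: 25.83 mm -> 1.69%, vs. 1.71% in Table 6
# (the tabulated delta is rounded to two decimals).
print(round(abs_error_pct(25.83), 2))

# The row means are reproduced exactly, e.g., for the CS method:
errors_cs = [1.71, 1.89, 0.18, 0.27, 1.55, 0.31]
print(round(sum(errors_cs) / len(errors_cs), 2))  # 0.98, matching the tabulated mean
```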
Table 7. Vision-based system configuration using two sets of cameras for the seismic shake table test.

Type                                   | High-Speed (HS) | Standalone DSLR (SD)
Color                                  | Monochrome      | Monochrome
Format                                 | .tiff           | .jpg
Input image size, w × h (pixel)        | 2560 × 2048     | 1920 × 1080
Sampling rate, fs (frames per second)  | 32              | 30
Seismic record duration (s)            | 120             | 120
White noise record duration (s)        | 180             | 180
Table 8. Measurement difference between the high-speed (HS) system and the standalone DSLR (SD) system in assessing high-amplitude seismic excitation.

System | δmax (mm) | δmin (mm)
HS     | 1118.3    | −787.4
SD     | 1113.3    | −779.2

Δmax: −28.57 mm (3.63%)
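The tabulated percentage appears to express the peak HS-SD difference relative to the HS peak response magnitude; a quick arithmetic check under that assumed normalization:

```python
# Hedged reading of Table 8: peak HS-SD measurement difference expressed
# relative to the HS peak (negative) displacement magnitude.
delta_max_mm = -28.57   # peak difference between HS and SD time histories
hs_peak_mm = -787.4     # HS minimum displacement
print(round(abs(delta_max_mm) / abs(hs_peak_mm) * 100.0, 2))  # 3.63, as tabulated
```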
Table 9. Building fundamental frequency, f1, and damping, ζ1, with their differences, Δf and Δζ, as measured by the two vision systems.

Mode         | HS f1 (Hz) | SD f1 (Hz) | Δf (%) | HS ζ1 (%) | SD ζ1 (%) | Δζ (%)
Transverse   | 6.47       | 6.44       | 0.46   | 4.65      | 4.51      | 3.01
Longitudinal | 6.12       | 5.91       | 3.43   | 2.53      | 2.61      | 3.16
Vertical     | 8.59       | 8.84       | 2.91   | 2.70      | 2.62      | 2.96
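The differences in Table 9 are consistent with normalizing by the high-speed system values, Δ = |HS − SD|/HS × 100. A short verification under that assumption:

```python
# Hedged check of Table 9, assuming delta = |HS - SD| / HS * 100 for both the
# fundamental frequency f1 and the damping ratio zeta1.
modes = {
    "transverse":   {"f_hs": 6.47, "f_sd": 6.44, "z_hs": 4.65, "z_sd": 4.51},
    "longitudinal": {"f_hs": 6.12, "f_sd": 5.91, "z_hs": 2.53, "z_sd": 2.61},
    "vertical":     {"f_hs": 8.59, "f_sd": 8.84, "z_hs": 2.70, "z_sd": 2.62},
}
for name, m in modes.items():
    d_f = abs(m["f_hs"] - m["f_sd"]) / m["f_hs"] * 100.0
    d_z = abs(m["z_hs"] - m["z_sd"]) / m["z_hs"] * 100.0
    print(f"{name}: df = {d_f:.2f}%, dz = {d_z:.2f}%")
# transverse: 0.46 / 3.01; longitudinal: 3.43 / 3.16; vertical: 2.91 / 2.96,
# matching the tabulated values.
```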