
This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/).

This paper presents the development and implementation of three image-based methods used to detect and measure the displacements of a vast number of points in the case of laboratory testing on construction materials. Starting from the needs of structural engineers, three different tools were developed and validated.

Deformation measurement during laboratory testing on construction materials aims at determining the intrinsic characteristics of the considered object. The examination of the deformation and the knowledge of the applied load (e.g., a mechanical or thermal load) allows the analysis of the mathematical model that describes the behaviour of a construction element.

Several instruments can be used to measure object deformations during loading tests. However, the most widely adopted tools are LVDTs (Linear Variable Differential Transformers) and strain gauges.

Image-based methods can analyse the whole deformation field of a body by tracking a vast number of points distributed on the object. Images contain all the information to derive 3D measurements from multiple 2D image coordinates with limited cost and good accuracies. In fact, image-based techniques have been used in several applications which involve the determination of the shape of a body and its changes, with satisfactory results in terms of completeness, precision and time [

The goal of image-based methods in material testing is the estimation of accurate 3D coordinates starting from 2D measurements in the images, through a perspective mathematical formulation between the object and its projection into several images. Some commercial software packages allow the analysis of the dynamic changes of several targets distributed on the object in a fully automatic way, but if markerless images are employed no automatic commercial solutions are available on the market. Moreover, the procedure becomes a full-field non-contact technique only without targets, when the natural texture of the object is directly used (generally after a preliminary enhancement with filters that modify the local contrast of the image). For instance, this kind of analysis enables detection and measurement in fluids, where LVDTs and strain gauges cannot be employed.

Basically, the precision achievable with image-based techniques depends on the size of the investigated elements [

As the technological development of commercial low-cost cameras is rapidly increasing, image-based methods and low-cost software are commonly used in several sectors (e.g., archaeology [

Some low-cost digital cameras and targets can be a convenient solution for the analysis of the whole surface of an object. The employed targets can be really inexpensive (a piece of white paper with a black mark is sufficient for many applications), while in the case of more exhaustive experiments they can be printed on metal plates or made of retro-reflective materials. The centre of a target can be measured automatically and with high precision (up to ±0.01 pixel), improving the precision of the corresponding 3D coordinates.

A group of targets permanently fixed on the object provides a regular mesh for all deformation analyses. These dense points can approximate the deformation field of the whole body. A fundamental advantage of an image-based method is the possibility of analysing more targets than those strictly necessary, without increasing the cost of the test and with a limited worsening of the processing time. However, in some applications targets cannot be employed (e.g., for fluid elements) and automatic methods based on the natural texture of the body must be developed. This kind of analysis is more complicated, especially in the case of poorly textured surfaces without distinctive details. This fact limits the use of image-based methods inside civil engineering laboratories.

This paper presents three image-based algorithms capable of analysing the deformation field of a generic object during a loading test. These methods work with targets but also with markerless images and can determine the 3D coordinates of a huge number of points in an automatic way. They are currently employed in some civil engineering laboratories, where several building materials and structural elements are tested with satisfactory results in terms of accuracy. In several applications these methods integrate or substitute traditional sensors and provide additional information, which is useful for more complete and detailed investigations.

We focus on the measurement of a finite number of points with a good distribution, while other existing approaches present the extension of the measurement problem to the whole surface of the body [

The first tool here presented allows the estimation of crack variations in fluid fibre-reinforced specimens (Section 2). This is a new non-conventional application for which there are no commercial solutions. This task required the development of an

Cracks are expected in several construction materials during their service life [

Laboratory tests on fibre-reinforced specimens allow one to study the effect of different fibres and mixture components (water, cement, sand…), in order to determine the best compromise for real applications. A traditional analysis is based on the study of the aperture, shape, location and orientation of cracks in small specimens that simulate the behaviour of the real object.

Strain gauges are generally used to monitor the aperture of a crack during standard tests. However, civil engineers needed more exhaustive and specific measurements than those achievable with these standard sensors. In fact, strain gauges allow only one-point and one-dimensional measurements [

Another issue regards the state of the body: all measurements on the specimens begin after the casting, when the specimen is liquid, and preliminary data about the number of expected cracks and their positions are not available. For these reasons, a new solution capable of analysing the deformations in these particular working conditions was necessary.

The developed tool for such measurements is composed of a mechanical arm carrying a digital camera (

The estimation of the crack aperture is carried out with an automated algorithm capable of detecting a crack in each single image by measuring its border coordinates (in pixels). Then, a procedure based on simple geometric considerations between the camera and the specimen allows the estimation of the crack aperture in metric units.

Image coordinates can be automatically measured with the methodology proposed in [ : each cross-section of the crack is described by radiometric profile functions, characterized by their minimum values (in the middle of the crack) and by the positions at which the slope of the functions changes. These values can be estimated with the analysis of some cross-sections and then used for the completion of the test. A “global level”

The creation of the filtered image is carried out by comparing the radiometric value of each pixel with the estimated global level, so that:

pixel ∈ CRACK;
pixel ∈ BORDER;
pixel ∉ {CRACK or BORDER}.
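The three-way classification above can be sketched as a simple per-pixel threshold test. The numeric values (the 0.19 global level mentioned below and the tolerance band that delimits the border) are illustrative assumptions, not the exact rule of the implemented tool:

```python
import numpy as np

def classify_pixels(gray, level=0.19, tol=0.05):
    """Classify each pixel of a normalized grayscale image in [0, 1] as
    CRACK (dark), BORDER (near the global level) or BACKGROUND.
    The tolerance band 'tol' is a hypothetical choice for illustration."""
    crack = gray < (level - tol)
    border = np.abs(gray - level) <= tol
    labels = np.full(gray.shape, 'BACKGROUND', dtype=object)
    labels[border] = 'BORDER'
    labels[crack] = 'CRACK'
    return labels

# Example: a dark crack column inside a bright specimen
img = np.array([[0.9, 0.20, 0.05, 0.21, 0.9]])
labels = classify_pixels(img)
```

The function returns a label image; the border pixels are then the input for the metric aperture estimation described below.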

This method uses the whole RGB content of an image, while other existing techniques (e.g., [

The main advantage during laboratory testing, with repetitive working conditions, is the possibility of estimating a preliminary global level, which can be considered a constant for specific applications. In fact, if illumination conditions are stable (in this case an LED is permanently employed for all images) the global level does not vary significantly during the test. In addition, small errors in this phase can be considered systematic and can be removed during the estimation of the aperture variations. After some tests we estimated an optimal level for fibre-reinforced concrete elements equal to 0.19.

To estimate the crack aperture a transformation between image and object spaces must be employed. As the analysis starts with a fluid specimen (its external surface is horizontal), the robotic arm was assembled in order to generate 2D horizontal movements. With this particular configuration, image and object planes (or camera sensor and specimen surface) are parallel and the camera maps the object through a similarity transformation in which the scale is the only unknown. This simple solution allows an easy computation of object coordinates without using more complex transformations requiring the knowledge of several parameters. A more detailed description of this procedure is given in Section 3.2, because an extension of this transformation is used in another tool; for this particular application the scale factor is the only ambiguity.

The scale factor was estimated by measuring the size of a pixel projected onto a reference object (a small metal plate) placed on the specimen. The size of this object was measured with a calliper: it is sufficient to divide the width of the plate by the number of pixels picturing the object to determine the scale factor. With the INFINITY camera a pixel covers an area of 9 μm × 9 μm, which is also the accuracy of the implemented tool (see the next section for further details). The output interface of the tool, which gives a graphical and numerical visualization of the crack aperture, is shown in
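The scale computation described above reduces to one division; a minimal sketch, in which the 20 mm plate width and its extent in pixels are hypothetical numbers chosen to reproduce the ~0.009 mm/px scale of the INFINITY camera:

```python
def scale_factor(plate_width_mm, plate_width_px):
    """Scale factor (mm per pixel): width of the reference plate measured
    with a calliper, divided by the number of pixels picturing it."""
    return plate_width_mm / plate_width_px

def aperture_mm(aperture_px, scale):
    """Crack aperture converted from pixels to metric units."""
    return aperture_px * scale

# Hypothetical example: a 20 mm plate covering 2222 px
s = scale_factor(20.0, 2222)
```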

To check the accuracy of the implemented method a comparison with other sensors is mandatory. Nowadays, no system is available that can measure aperture variations in fluid elements with an accuracy and a density better than those of the implemented tool. This means that accuracy cannot be checked with experiments on fluid specimens. To overcome this drawback we developed an alternative solution with a solid object and a special micrometric sledge (

A Nikon D80 camera equipped with a 90 mm lens was placed over the sledge in order to determine the simulated variation with the implemented tool. The mathematical relation between image and object spaces was estimated with a special calibration frame, composed of points with known coordinates (see Section 3.2). From a theoretical point of view, the precision of the object coordinates σ_XY depends on the precision of the image coordinates σ_xy and on the image scale.

However, two configurations were tested: a first camera-object distance d_1 equal to 600 mm, then a reduced distance d_2 = 220 mm. Both mechanical and image-based measurements were compared and the results showed a standard deviation of the differences of ±0.037 mm (d_1 = 600 mm) and ±0.012 mm (d_2 = 220 mm). Supposing that the precision of the filtering algorithm is equal to ±1 pixel, a theoretical precision of ±0.04 mm and ±0.014 mm can be estimated with the camera used in both configurations (pixel size is 0.0061 mm). This means that the precision of the implemented tool is equal to the GSD (Ground Sampling Distance), which represents the projection of a pixel onto the object. To improve the precision of the object coordinates the camera-object distance can be reduced or the focal length can be increased. However, in both cases the angle of view is progressively reduced and a smaller part of the object can be imaged. The best choice is a compromise between precision and imaged area. Several other comparisons validated the proposed results and confirmed the expected accuracy in the case of the INFINITY camera (a pixel projected onto the object is ±0.009 mm).
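The theoretical figures quoted above can be reproduced with the usual thin-lens approximation of the GSD, pixel size × distance / focal length; a quick check with the reported values (90 mm lens, 0.0061 mm pixel):

```python
def gsd(pixel_size_mm, distance_mm, focal_mm):
    """Ground Sampling Distance: the projection of one pixel onto the
    object, in the thin-lens approximation (distance >> focal length)."""
    return pixel_size_mm * distance_mm / focal_mm

print(round(gsd(0.0061, 600.0, 90.0), 3))   # 0.041 mm
print(round(gsd(0.0061, 220.0, 90.0), 3))   # 0.015 mm
```

Both values are close to the ±0.04 mm and ±0.014 mm precisions reported above.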

During some tests the analysis of 3D movements is not strictly necessary. In fact, if the analysed object is flat (e.g., the external surface of a beam), the estimation of a 2D motion is more than sufficient for several experiments. This fact leads to a simplification of the measurement problem, with a reduction of the degrees of freedom for a generic point of the object. Moreover, there is an advantage in terms of cost: a single image for each epoch becomes sufficient to analyse the movements of all points. Starting from the image coordinates (x_i, y_i) of a generic point, its 2D object coordinates (X_i, Y_i) can be estimated.

The equipment includes a camera placed on a tripod and an algorithm able to track all image points and to estimate real movements. The acquisition frequency depends on several factors and varies with the investigated object and the selected load. For this reason it is not possible to fix an optimal value for every experiment. This means that an

The implementation of an

Several targets distributed on the object are a valid support in image-based deformation measurements. A regular mesh of targets allows one to analyze the whole surface of the body, while the use of traditional sensors (e.g., strain gauges or LVDTs) increases the cost and needs complex connections with control units. For these reasons, photogrammetric targets are a cheap solution with a simple connection on the analyzed body. During several real surveys, all targets can be printed (e.g., a black dot with a white background), while for more advanced and extensive analysis they can be made of metal.

All targets can be automatically matched by using a 2D normalized cross-correlation technique between a target template and the image [

An automated search of the target(s) in the whole image can be carried out by comparing the template with the local content of the image (a preliminary conversion of the original RGB image to a new grayscale one must be performed). The measurement of the centre of the target in the image is carried out by moving the template f across the search image g and computing a similarity measure at each position.

This method is easy to implement and fast from a computational point of view [
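A minimal sketch of this exhaustive search with the normalized cross-correlation coefficient (pixel-level only; sub-pixel refinement is the task of LSM):

```python
import numpy as np

def ncc(f, g):
    """Normalized cross-correlation coefficient between a template f and
    an equally sized patch g (returns 0 for flat, zero-variance patches)."""
    f0, g0 = f - f.mean(), g - g.mean()
    denom = np.sqrt((f0**2).sum() * (g0**2).sum())
    return 0.0 if denom == 0 else float((f0 * g0).sum() / denom)

def match_template(image, template):
    """Exhaustive pixel-level search: slide the template over the image
    and return the top-left corner of the best-matching patch."""
    th, tw = template.shape
    best, pos = -2.0, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            rho = ncc(template, image[r:r + th, c:c + tw])
            if rho > best:
                best, pos = rho, (r, c)
    return pos, best

# A bright 2x2 target embedded in a dark image is recovered exactly
img = np.zeros((8, 8)); img[3:5, 4:6] = 1.0
tmpl = np.zeros((4, 4)); tmpl[1:3, 1:3] = 1.0
print(match_template(img, tmpl))   # ((2, 3), 1.0)
```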

Starting from a perfect similarity between the template and the patch:

To operate with the Gauss-Markov Least Squares estimation model

The parameters x_0 and y_0 (shifts) are unknown values that indicate the centre of the target, while the other coefficients can be used to model shape deformations.

Finally, the unknown parameters can be grouped into a vector:
in which the most relevant parameters are the shifts (x_0, y_0). The solution is given by:

In order to complete the linearization with a Taylor’s expansion, a set of initial approximations for the unknowns is chosen as follows:

The LSM method ensures high precision measurements (up to ±0.01 pixels) and is an optimal choice in the case of targets. However, it cannot be considered an alternative to cross-correlation: cross-correlation provides good approximate values for the target locations, while LSM refines the centre coordinates. Thus, the combined use of both techniques is mandatory in order to automate the whole analysis.
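As an illustration, a translation-only variant of LSM can be sketched with the Gauss-Markov model: the image gradients of the patch act as the design matrix and the grey-value differences as observations. This simplified version omits the shape and radiometric parameters of the full formulation and resamples the patch only at integer positions, so it is a sketch rather than the complete method:

```python
import numpy as np

def lsm_refine(template, image, x0, y0, iterations=10):
    """Translation-only Least Squares Matching: refine the (sub-pixel)
    top-left position (x0, y0) of the patch that matches 'template'."""
    th, tw = template.shape
    for _ in range(iterations):
        r0, c0 = int(round(y0)), int(round(x0))
        patch = image[r0:r0 + th, c0:c0 + tw].astype(float)
        gy, gx = np.gradient(patch)              # design matrix columns
        l = (template - patch).ravel()           # observation vector
        A = np.column_stack([gx.ravel(), gy.ravel()])
        dx, dy = np.linalg.lstsq(A, l, rcond=None)[0]
        x0, y0 = x0 + dx, y0 + dy
        if abs(dx) < 1e-3 and abs(dy) < 1e-3:    # convergence test
            break
    return x0, y0
```

Starting from the approximate position supplied by cross-correlation, a few iterations are normally sufficient on well-textured patches.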

Object coordinates can be calculated by using image coordinates and a transformation between image and object spaces. In the case of flat objects all points lie on the same plane and the mathematical transformation between image and object spaces can be described with a 2D homography.

The relation between an image point in homogeneous coordinates x_i = (x_i, y_i, 1)^T and the corresponding object point X_i = (X_i, Y_i, 1)^T is encoded by a 3 × 3 matrix H, so that x_i = H X_i.

H is defined up to scale and therefore has eight degrees of freedom (it is sufficient to fix the element h_33 equal to 1). To obtain inhomogeneous coordinates it is sufficient to divide image and object coordinates by their third coordinate.

This leads to the inhomogeneous form of the planar homography:

To estimate the eight coefficients of H at least four non-collinear reference points are needed; with more points, a least squares solution can be computed.
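With four or more correspondences, the eight coefficients of H (h_33 = 1) can be estimated as a linear least squares problem; a minimal numpy sketch, including the division by the third homogeneous coordinate:

```python
import numpy as np

def estimate_homography(obj_pts, img_pts):
    """Estimate the 3x3 matrix H (with h33 = 1) mapping object-plane
    points to image points from n >= 4 correspondences (linear LS)."""
    A, b = [], []
    for (X, Y), (x, y) in zip(obj_pts, img_pts):
        A.append([X, Y, 1, 0, 0, 0, -x*X, -x*Y]); b.append(x)
        A.append([0, 0, 0, X, Y, 1, -y*X, -y*Y]); b.append(y)
    h = np.linalg.lstsq(np.array(A, float), np.array(b, float), rcond=None)[0]
    return np.append(h, 1.0).reshape(3, 3)

def apply_homography(H, X, Y):
    """Inhomogeneous form: divide by the third coordinate."""
    p = H @ np.array([X, Y, 1.0])
    return p[0] / p[2], p[1] / p[2]
```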

The measurements of the object points needed to estimate

A better strategy to visualize the results is based on the removal of the perspective effect from the images. Here, the homography

Targets are very useful to monitor deformations emerging in loading tests: they can be easily measured with a high precision and their application onto the body is simple and cheap. However, in some cases targets cannot be permanently installed or can be lost during the test. To overcome this drawback a synthetic texture can be generated (e.g., by painting the object) but we developed a new solution capable of working with target-less images. It uses the natural texture of the object after a preliminary image enhancement. Interest operators can be used to detect a sufficient number of features in the first image of the sequence. Then, these features are tracked with the proposed methodology based on cross-correlation and LSM along the sequence.

Before the beginning of the test it is highly recommended to process some images. This operation is really useful to verify the quality of the images and the possibility of using the natural texture (i). In the case of a failure with the natural features, a procedure based on synthetic corners (ii) can be used. The application of targets onto the object remains the last choice, when the previous methods cannot be employed.

Several features can be detected in an image (e.g., corners, edges, regions…) and several operators are available (probably too many to be listed here). For a more exhaustive review the reader is referred to [ . In this work the FAST operator was adopted: a pixel p is classified as a corner by comparing the intensity I_p of the central pixel with the intensities of the pixels lying on a circle around p.

The choice of this operator is supported by the impressive number of corners that can be extracted from an image. However, corners are extracted only for the first image of the sequence, while for the next ones a tracking procedure based on cross-correlation and LSM is used.

In some cases images might present a bad texture and a limited number of corners could be extracted. In addition, the distribution of points could be inhomogeneous. To solve this problem a procedure based on a preliminary image enhancement can be used. Many methods are available today and generally work with global parameters: most software for image enhancement has automatic functions capable of modifying the contrast of the image, but the same level is used for the whole image. If a homogeneous distribution of all points is needed this can lead to a poor solution. This is the reason why we prefer to optimize the contrast locally. Wallis [

The Wallis filter has the form:

f(x, y) = g(x, y) · r_1 + r_0

where g is the original image, f the filtered image, and r_0 and r_1 the additive and multiplicative parameters:

r_1 = (c · s_t) / (c · s_o + (1 − c) · s_t)
r_0 = b · m_t + (1 − b − r_1) · m_o

with m_o and s_o the mean and standard deviation of the original image (computed on local blocks), m_t and s_t the target mean and standard deviation for the filtered image, c ∈ [0, 1] the contrast expansion constant and b ∈ [0, 1] the brightness forcing constant.

For each single block m_o and s_o are estimated and the resulting values are assigned to the central pixel of the block, while for the other pixels these values are estimated with a bilinear interpolation. The target mean and standard deviation m_t and s_t are instead fixed by the user for the whole image.
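A simplified sketch can illustrate the idea. Here r_0 and r_1 are computed per block and applied to the whole block, omitting the bilinear interpolation of the full method; the target values and the b, c constants are hypothetical defaults, and the expressions follow the common formulation r_1 = c·s_t/(c·s_o + (1 − c)·s_t), r_0 = b·m_t + (1 − b − r_1)·m_o:

```python
import numpy as np

def wallis_filter(img, block=8, m_t=127.0, s_t=60.0, b=1.0, c=0.8):
    """Block-wise Wallis filter (no interpolation between blocks).
    With c < 1 the denominator stays positive even for flat blocks."""
    out = np.empty_like(img, dtype=float)
    rows, cols = img.shape
    for r in range(0, rows, block):
        for col in range(0, cols, block):
            blk = img[r:r + block, col:col + block].astype(float)
            m_o, s_o = blk.mean(), blk.std()
            r1 = c * s_t / (c * s_o + (1 - c) * s_t)
            r0 = b * m_t + (1 - b - r1) * m_o
            out[r:r + block, col:col + block] = blk * r1 + r0
    return np.clip(out, 0, 255)
```

With b = 1, the mean of every block is mapped exactly to the target mean m_t, while the local contrast is stretched towards s_t.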

The dynamic analysis is carried out by filtering all images and tracking the original FAST corners with cross-correlation and LSM along the image sequence, which must be filtered with the same parameters. It is also recommended to use stable illumination conditions during the analysis (e.g., external light sources like lamps), a very high acquisition frequency according to the duration of the test (to limit the differences between consecutive images) and small blocks (e.g., 9 × 9 pixels) for the filtering process (to reduce the effect of local deformations during the test). Moreover, this procedure should be used when limited deformations are expected. With these experimental conditions we verified that only a limited number of points is lost during the sequence analysis.

The mathematical analysis proposed in Section 3.3 demonstrated that no information about the camera is required when the relation between image and object spaces is a planar homography. Thus, image coordinates and a few reference object points are adequate to complete the elaboration. Camera calibration is intended as the process of estimating the intrinsic parameters of the camera, including the principal distance, the principal point and the distortion coefficients. A good calibration is an essential prerequisite for precise and reliable measurements from images, and is widely adopted in several surveys where high accuracies must be achieved. Several software packages use an 8-term model derived from the original formulation for image distortion proposed by Brown [ , where (x_p, y_p) is the principal point and r^2 = x^2 + y^2 is the squared radial distance (with x, y referred to the principal point).

The coefficients K_1, K_2, K_3 model the radial distortion. In particular, the coefficient K_1 is generally sufficient for most surveys, but when a high accuracy is needed the coefficients K_2 and K_3 have to be used as well. Tangential distortion, which is due to a misalignment of the camera lenses along the optical axis, can be modelled with P_1 and P_2. The magnitude of tangential distortion is limited compared to radial distortion, especially with wide-angle lenses.
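A sketch of the correction step under this model (the sign convention, i.e., whether the correction terms are added or subtracted, varies between implementations and is an assumption here):

```python
def brown_correction(x, y, xp, yp, K, P):
    """Brown model: radial (K1, K2, K3) and tangential (P1, P2)
    corrections applied to the image coordinates (x, y);
    (xp, yp) is the principal point."""
    xb, yb = x - xp, y - yp
    r2 = xb**2 + yb**2                       # squared radial distance
    radial = K[0]*r2 + K[1]*r2**2 + K[2]*r2**3
    dx = xb*radial + P[0]*(r2 + 2*xb**2) + 2*P[1]*xb*yb
    dy = yb*radial + P[1]*(r2 + 2*yb**2) + 2*P[0]*xb*yb
    return x + dx, y + dy
```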

Digital cameras should be calibrated periodically, because issues concerning the stability of the sensor could arise. A standard camera calibration can be performed by using known points (and a few images), or without any external information by using special coded targets. The former (termed field calibration) needs external 3D information provided through a framework with several targets, whose 3D coordinates have been previously measured (e.g., with a total station). The latter (self-calibration) is based on a free-net adjustment [

When 3D measurements are necessary, at least two images for each epoch are needed. Images must be taken at the same time, thus all cameras must be synchronized. Several images can be used to improve the precision of the object coordinates; however, more expensive instrumentation becomes necessary. Fraser [

The mathematical model for image orientation is based on the collinearity equations [ , which relate the image coordinates (x_ij, y_ij) of point j measured in image i to its object coordinates (X_j, Y_j, Z_j), the exterior orientation of image i (the rotation matrix R_i and the coordinates of the perspective centre (X_0i, Y_0i, Z_0i)) and the intrinsic parameters (the principal distance c_i and the principal point (x_pi, y_pi)). The solution is estimated with a least squares adjustment, in which the a posteriori variance factor (σ_0^2) gives the final quality of the adjustment. Given the functional model of the system, σ_0^2 can be estimated as:

σ_0^2 = (v^T P v) / r

where v is the vector of residuals, P the weight matrix of the observations and r the redundancy of the system.

The precision of the estimated unknowns can be retrieved from the covariance matrix:

C_xx = σ_0^2 (A^T P A)^(−1)

where A is the design matrix of the linearized system.

The diagonal elements of the C_xx matrix contain the variances of the estimated parameters.
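These standard least squares quantities (the estimate of the unknowns, the a posteriori variance factor σ_0^2 = v^T P v / (n − u) and the covariance matrix C_xx = σ_0^2 (A^T P A)^(−1)) can be sketched as:

```python
import numpy as np

def weighted_ls(A, P, l):
    """Weighted least squares adjustment: returns the estimated unknowns,
    the a posteriori variance factor and the covariance matrix C_xx."""
    N = A.T @ P @ A                           # normal matrix
    x = np.linalg.solve(N, A.T @ P @ l)       # estimated unknowns
    v = A @ x - l                             # residuals
    n, u = A.shape                            # observations, unknowns
    sigma0_sq = float(v @ P @ v) / (n - u)    # a posteriori variance
    Cxx = sigma0_sq * np.linalg.inv(N)        # covariance of the unknowns
    return x, sigma0_sq, Cxx
```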

To invert the positive semi-definite matrix A^T P A two solutions can be employed:

the use of an orientation frame (

a free-net adjustment based on inner constraints [

After processing the images of the first epoch with the developed methodology, if cameras are placed on stable supports exterior orientation parameters can be considered constant. Then, the dynamic analysis is based on the measurement of the image coordinates by tracking the points along the image sequences with cross-correlation and LSM. The computation of object coordinates is performed by using the fixed orientation parameters.

An important point is related to occlusions. However, during this kind of analysis the deformations are limited with respect to the size of the object. Therefore, a good initial setup of the cameras around the object avoids the creation of occlusions during the test.

In the case of 2D dynamic measurements an image point must be tracked along the image sequence. When multiple views must be analysed, it is necessary to determine the same point among the images captured at the same epoch; then the point can be tracked along the sequence. The determination of the image correspondences can be carried out by using targets or with the texture of the object. Targets can be automatically detected in all images, but it is often necessary to (manually) select homologous points in the images of the first epoch. However, a suitable coding can be added to each target to automate the whole process.

However, in some analyses targets cannot be fixed, thus we implemented a solution based on the texture of the object and on projective geometry. This new method is based on detectors and descriptors capable of determining tie points among the images. In our implementation we use two operators able to extract and match these image correspondences: SIFT (Scale Invariant Feature Transform) [

At the end of the matching phase several outliers can be found, especially in the case of repetitive patterns. We remove all these wrong correspondences with the robust estimation of the fundamental matrix

Given a set of image correspondences x_i = (x_i, y_i, 1)^T and x'_i = (x'_i, y'_i, 1)^T between two images, the fundamental matrix F encapsulates the epipolar geometry of the pair and satisfies the condition x'_i^T F x_i = 0 for each correspondence.

In this work robust techniques play a fundamental role. They allow an efficient detection of all mismatches and are mandatory in the case of fully automated techniques. Normally, these procedures are based on the selection of minimal datasets and the subsequent estimation of several candidate matrices. With seven correspondences the linear system in the unknowns f_1, …, f_9 (the nine elements of F) has a two-parameter family of solutions of the form αF_1 + (1 − α)F_2, which coupled with the determinant constraint gives det|αF_1 + (1 − α)F_2| = 0. This last equation is a cubic polynomial equation in α that can be easily solved.
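The determinant constraint can be solved numerically without expanding the cubic by hand: det(αF_1 + (1 − α)F_2) is a degree-3 polynomial in α, so evaluating it at four values of α and fitting a cubic recovers it exactly:

```python
import numpy as np

def det_alpha_roots(F1, F2):
    """Roots of det(alpha*F1 + (1 - alpha)*F2) = 0.  The determinant is a
    cubic polynomial in alpha, recovered exactly by sampling it at four
    values and fitting a degree-3 polynomial."""
    alphas = np.array([0.0, 1.0, 2.0, 3.0])
    dets = [np.linalg.det(a * F1 + (1 - a) * F2) for a in alphas]
    return np.roots(np.polyfit(alphas, dets, 3))
```

Real roots in the result correspond to valid (rank-deficient) fundamental matrices.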

The LMedS technique evaluates each candidate solution with the median symmetric epipolar distance to the data [ , and the matrix with the minimum median value M_S is retained.

The method does not need a preliminary threshold to classify a point as an inlier (or outlier). A robust estimation of the standard deviation can be derived from the data with the relation:

σ̂ = 1.4826 [1 + 5/(n − p)] √M_S

where n is the number of correspondences and p the dimension of the parameter space. Then, a residual v_i is determined for each correspondence and is used to detect outliers (a correspondence is rejected when |v_i| > 2.5 σ̂).

After all these steps outliers can be removed and a final LSM refining is carried out to improve the precision of image coordinates.
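The LMedS rejection rule can be sketched as follows; the 2.5σ̂ threshold and the correction factor 1.4826·[1 + 5/(n − p)] are the common conventions assumed here (with p = 7 for the seven-point problem):

```python
import numpy as np

def lmeds_outliers(residuals, p=7):
    """Robust standard deviation from the median of the squared
    residuals, then outlier flags at the 2.5-sigma level."""
    res = np.asarray(residuals, float)
    n = len(res)
    M_S = np.median(res**2)                      # median squared residual
    sigma = 1.4826 * (1 + 5.0 / (n - p)) * np.sqrt(M_S)
    return np.abs(res) > 2.5 * sigma, sigma

# ten small residuals and one gross error
flags, s = lmeds_outliers([0.1, -0.2, 0.15, -0.1, 0.05, 0.2,
                           -0.15, 0.1, -0.05, 0.12, 9.0])
print(np.where(flags)[0])   # [10]
```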

If ^{2} −

To check the accuracy of the implemented tool a comparison with external sensors was carried out. In this section the results related to the analysis of the examples proposed in

A preliminary analysis of the markerless method was carried out for the example proposed in

In this paper three image-based tools for the measurement of deformations during laboratory testing on construction materials were presented. Because of their user-friendliness they can be used by people who are not necessarily skilled in image analysis, computer vision, photogrammetry or vision metrology. These methods provide more information than that obtainable with traditional sensors. In addition, when targets cannot be applied the natural surface of the object can be used. In the case of bodies with a bad texture a synthetic texture can be created by painting the object. However, some new techniques, based on a preliminary enhancement of the local radiometric content of the image followed by feature extraction and matching, can be applied to extract a sufficient number of points to complete the analysis.

The implemented image-based methods can provide 2D and 3D measurements for a vast number of points and allow the analysis of the whole body. Moreover, the procedure is highly automated and only a few semi-automatic measurements for the image(s) of the first epoch are needed (e.g., target localization, visual checks, removal of the scale ambiguity). The dynamic analysis can be considered a fully automatic phase and 2D or 3D measurements can be rapidly estimated after the end of the test. All these tools were implemented to work with building materials, in which the global deformation is limited with respect to the object size. During several standard tests on construction elements (e.g., pillars, beams…) the methods demonstrated good results even in the case of strong deformations of the body. These experimental tests demonstrated an accuracy similar to that achievable with traditional electrical or mechanical sensors; however, the use of digital cameras allows the elaboration of a larger number of 2D or 3D points with a better spatial distribution.

The developed system for crack aperture estimation.

Some results with the implemented software: crack borders and the estimated aperture.

The sledge used to check the accuracy of the image-based tool.

A beam can be considered a flat object.

Some rectified images of the sequence and the magnitude of the displacements.

Target displacements projected onto the initial rectified image.

Results in the case of markerless image sequences: (a) original image and (b) extracted corners, (c) filtered image and (d) extracted corners, (e) corner reduction according to a quasi regular grid.

A target-based survey with two synchronized cameras.

Matching results during a markerless 3D survey with two cameras: (a) points matched with the descriptors and (b) points after the robust estimation of the fundamental matrix.

Comparison between image-based and LVDT displacements for the target-based test.

Vertical displacements (at different epochs) measured with the image-based method.

Comparison between mechanical and image-based measurements for a target-less test.