GNSS-Assisted Low-Cost Vision-Based Observation System for Deformation Monitoring

This paper presents an approach to the structure monitoring problem using an integrated system of GNSS and non-metric cameras with QR-coded targets. The system is defined as a GNSS-assisted low-cost vision-based observation system, and its primary application is the monitoring of engineering structures, including high-rise buildings. The proposed workflow makes it possible to determine changes in the geometric parameters of a structure under the impact of external factors or loads and, subsequently, to predict the displacements at a given observation epoch. The approach is based on the principle of relative measurements, implemented to find the displacements between pairs of images from non-metric cameras organized in a system of interconnected chains. It is proposed to determine the displacement between images from different epochs using the phase correlation algorithm, which provides a high-speed solution and reliable results. An experimental test bench was prepared, and a series of measurements was performed to simulate the operation of one chain of the vision-based observation system. A program for processing the image sequence with the phase correlation algorithm was implemented in the MatLab programming environment. The analysis of the experimental results showed that the suggested approach can be implemented in compliance with the accuracy requirements for monitoring. A simulation of the vision-based observation system operation with accuracy estimation was also performed, and its results confirmed the high efficiency of the suggested system.


Introduction
State-of-the-art development of materials production and construction technologies, their automation, and the growth of land prices in large cities have led to a new way of thinking about constructing and assembling complex geometric structures, especially high-rise buildings. This fact complicates the building monitoring task due to the necessity of creating a network of monitoring sensors, the application of various types of measuring equipment, and the high frequency of observations required to acquire and study the kinematic and dynamic properties of the structure. On the other hand, a monitoring system that operates remotely, without time-consuming and laborious field work and expensive equipment, is preferable. Among the different kinds of building monitoring, the observation of geometric parameters, in other words deformation monitoring, plays an important role. A change in the building geometry reduces its functionality and causes crack emergence and propagation, which may lead to the structure's collapse. The effect of external loads such as wind, snow, ice, solar radiation, and an unstable foundation leads to oscillations and torsion of structures, their bending, and their roll. These parameters may change their values daily and seasonally [1] and can cause spatial displacements at the level of tens of centimeters. The geometric deformation parameters that need to be determined during deformation monitoring of a high-rise building are given in Figure 1. Evidently, these deformation parameters can be monitored by different geospatial methods and technologies, as long as the parameters are functions of linear and angular displacements. Such monitoring is carried out using various methods, geodetic ones in particular. Today, global navigation satellite systems (GNSS) are the most widespread element of any monitoring system. GNSS provides reliable and high-frequency information about changes in the coordinates of the monitored points. However, since it is impossible to measure coordinates along the structure axis from the ground to the top, GNSS data reveal only the total displacement ∆ of the structure. This displacement may portray the top displacement of the structure without reference to the ground floor. Thus, the reason for this displacement is unknown: it may be a simple spatial displacement, a combination of displacement and bending, or a combination of displacement and roll (see Figure 2). It is clear that GNSS observations alone cannot accurately depict the deformation process. That is why GNSS may primarily be used as a complementary data source and to help detect the vertical displacement of the structure, but additional measurements along the structure are needed to identify the reasons for the displacement and to obtain an accurate picture of the deformation process. The simplest way to overcome this GNSS restriction is the integration of GNSS with other geodetic or non-geodetic equipment and measurements.
Geodetic science has developed many methods to measure structure deformation in given directions. Today, we deal with various terrestrial geodetic measurements, satellite measurements, and photogrammetric technologies. Terrestrial geodetic measurements are the most widespread [2,3]. There is no point in discussing these methods in detail, as their description can be found in the geodetic literature. Moreover, these methods were the first ones applied for deformation monitoring and, consequently, are well studied. The primary geodetic methods are spirit and hydrostatic/dynamic leveling for vertical displacements, and total stations, including image-assisted total stations [4,5], for spatial displacement determination. Other kinds of terrestrial geodetic methods are terrestrial laser scanning [6,7], depth cameras [8], and ground-based radar interferometry [9]. The papers [10,11] further develop InSAR technology. Publication [8] focuses on comparing a depth camera and a terrestrial laser scanner for estimating structural deflection. A comprehensive analysis of ground-based radar interferometry is given in [11], where structural monitoring and damage assessment are considered as the GIS integration of differential InSAR measurements, geological investigation, historical surveys, and 3D modeling. However, those technologies are laborious, require skilled personnel, and are hard to automate. Especially critical for geodetic methods is the question of observation frequency. It is clear that leveling, terrestrial laser scanning, or ground-based radar interferometry cannot provide more than one observation epoch per day. That is why purely geodetic methods are used as an additional information source in combination with other methods. Notable success has been achieved in the joint application of GNSS with terrestrial geodetic measurements and sensors. GNSS was first applied for monitoring tasks more than twenty years ago [12][13][14][15]. Despite the high observation frequency and relatively high accuracy, the GNSS application is restricted by the number of observation points (i.e., it is impossible to install the necessary number of GNSS antennas on a structure), by the observation sites (i.e., the necessity of a relatively open sky for satellites), and by sophisticated processing algorithms. To overcome these shortcomings and detect displacements at many points, GNSS has been combined with other sensors, e.g., accelerometers, hydrostatic levels, inclinometers, etc. [16][17][18][19]. The study [17] presents a good case of GNSS integration with low-cost accelerometers, but the solution is not a monitoring system. Paper [19] takes a different approach to integrating GNSS with other devices: GNSS aids non-overlapping images from two cameras to determine the three-dimensional displacements of high-rise buildings. This case is an example of vision-based technology assistance. The main branch of such technologies is photogrammetry, where the image is the primary data source. Here and below, only close-range photogrammetry is considered.
Close-range photogrammetry can be separated into terrestrial and aerial cases. Terrestrial photogrammetry has been known since the invention of photogrammetry, whereas close-range aerial photogrammetry took its place with the deployment of effective and cheap unmanned aerial vehicles (UAVs). This is why the latter is often called UAV/UAS photogrammetry. On the one hand, the traditional concept applies close-range photogrammetry to structure deformation monitoring. On the other hand, new achievements in computer vision and digital image processing [20][21][22][23][24] have transformed classical photogrammetry into digital photogrammetry, where the opportunities for measurement automation have risen significantly. Among the applications of traditional terrestrial close-range photogrammetry for monitoring, it is worth mentioning recently published studies: [25] proposes a kind of single-image photogrammetry and its integration with geodetic alignment measurements; [26,27] explore photogrammetric deformation monitoring using low-cost cameras, particularly for bridges; [28,29] explore photogrammetric deformation monitoring using a target tracking approach (one step toward measurement automation); [30] presents a pan-tilt-zoom camera-based displacement measurement system for the detection of building destruction; and [31] is a case of monitoring using the roving camera technique. Of course, the main advantage of photogrammetric technologies is the opportunity to measure as many points on the structure as necessary. However, even the simple monitoring task of a 100 m building creates an unsolvable problem for terrestrial photogrammetry due to the impossibility of capturing the building surface from the ground. Even if it were possible, the errors due to perspective distortion and resolution would downgrade the overall accuracy to an unacceptable level.
Unlike terrestrial photogrammetry, UAV photogrammetry, thanks to its higher mobility, permits the collection and, consequently, the reconstruction of a more detailed building model. However, technically, UAV photogrammetry presents the same case as terrestrial photogrammetry but with higher data redundancy [31]. Today, UAV photogrammetry has versatile applications in structural monitoring. A wide range of publications similar to [32] have appeared, e.g., paper [33], where UAV photogrammetry was applied to vibration monitoring, and [34], a lab-scale test of a six-story building model where displacement determination was carried out using UAV image correlation. Once more, the automation of UAV data processing is not simple. Besides spatial displacement determination, crack monitoring is a top-rated application of close-range photogrammetry, primarily due to the simplicity of crack identification in images and of the corresponding measurements. A comprehensive review of crack detection is given in [35,36]. A feature of crack measurement is that a single image is sufficient. On the other hand, the data for crack detection are easy to process, and the processing is easy to automate. The papers [37,38] present monitoring solutions for automated crack detection using machine learning; they are based on computer vision principles. The photogrammetric principles for crack monitoring are deployed in [38,39]. Apart from terrestrial-based monitoring of cracks, UAV-based technologies have become very popular recently [38,40,41]. The paper [40] studies a new computationally efficient vision-based crack inspection method implemented on a low-cost UAV, with a new algorithm designed to extract useful information from the images. The disadvantage of UAV-based observations is their low accuracy, which does not satisfy current requirements. Therefore, UAV data can only serve as a supplementary data source for structure deformation monitoring.
Since we are on the way to developing a vision-based system, let us pay more attention to photogrammetric methods and approaches. Considering terrestrial close-range photogrammetry, it is necessary to mention the classic books [42,43], which provide the most comprehensive review of close-range photogrammetry. Despite the versatility of the presented photogrammetric instances, they are all based on a standard algorithm and mathematical background. This means that, regardless of the structure, any study comprises the creation of a geodetic network or the assignment of a reference base (coordinate system), target marking and coordination, and compliance with the general requirements of the photogrammetric survey [43]. The knowledge that came from computer vision treats and handles images differently than classical photogrammetry. The computer vision approach focuses chiefly on image refinement methods, digital correlation between images, etc. Mathematical models for geometric information extraction are less strict but more robust. Computer vision approaches have become very popular thanks to their high robustness and automation [44][45][46][47][48][49][50][51][52][53][54][55][56][57][58]. Let us analyze some of them with unique features relevant to the study goal. The study [44] considers computer vision methods tested on a four-story steel frame under lab conditions. These methods comprise optical flow with the Lucas-Kanade method, digital image correlation with bilinear interpolation, and phase-based motion magnification using the Riesz pyramid. The authors of [46,47] developed a novel sensor for displacement measurements using one camera; they suggested an advanced template-matching algorithm. Paper [48] compares two noncontact measurement methods: a vision-based method underpinned by image correlation and a radar interferometer; the vision-based system uses one or two cameras mounted on a tripod. The paper [51] proposes an approach based on measurements of retro-reflective targets, while [52] offers the same approach free of targets. In [53,54], the vision-based monitoring task is considered as a problem of finding the best methods and algorithms for image compression and processing under low-light conditions. It is worth noting that the computer vision approach can also be used for vibration measurements, as claimed in [55]; that study was conducted in a lab environment. By virtue of its high level of automation, the computer vision approach can be fused with other sensors, e.g., a vision-based system with accelerometers. Another example of integration is [58], where computer vision technology is integrated with terrestrial laser scanning. In all the considered cases, the measurements are based on one or two cameras, without external reference to a stable basis, so the detected displacements are relative and do not represent the total (global) structure deformation.
The discussion about monitoring would not be comprehensive without mentioning small sensors. These sensors have recently become one of the main elements integrated with GNSS; they supplement GNSS and ensure good results for monitoring high-rise buildings. Among the different measuring sensors, it is worth mentioning the following and their applications: inclinometers [59,60] to detect inclination in a particular direction, high-resolution lasers [61,62], tilt meters [63] to measure plane inclination, low-cost radars [64], and 3D inclinometers [65] to determine spatial rotation. A set of papers and reports gives an overall review of the use of various sensors [66][67][68][69][70]. Whichever sensor is used, it provides superior accuracy. Still, the main drawback is the need to organize the sensors into one system and to reference this system to some external coordinate system, because any such sensor provides only relative displacements.
Despite the importance of deformation monitoring, this task is just a small unit of a more significant problem. The technologies mentioned above have become part of a giant branch named structural health monitoring (SHM). Structural health monitoring has become a widespread problem recently. This problem is highly complex and comprises many methods and technologies for monitoring a massive range of building parameters. Therefore, any low-cost vision-based observation system should become an integrated part of an SHM system. Many papers regarding this problem have been published recently [71][72][73][74][75][76][77][78][79]. A subject of SHM can be temperature variation inside and outside a structure, air conditioning, humidity indoors or in the underlying soils, and the status of various structure elements, e.g., cracks, damage, and so on. One of the essential applications of SHM is structure deformation monitoring; the given list [71][72][73][74][75][76][77][78][79] concerns only the deformation monitoring of geometric parameters. Thanks to the development of digital technologies and computer science, it is possible to use different small sensors and combine them into one system that may operate automatically. However, modern SHM systems may also include satellite-based interferometry [80][81][82][83], UAV technologies [84], and terrestrial and/or aerial laser scanners [6]. Moreover, the SHM system is itself a part of another, more complex system named the building information model (BIM) [85][86][87][88]. BIM comprises all possible stages of the building life cycle and, consequently, the monitoring steps. The paper [88] outlines the liaison between monitoring problems and BIM. Thus, the creation and operation of any monitoring system should be considered an inseparable part of the building life cycle and must be embedded into BIM. This premise imposes the conditions of easy installation, repair, operation, relocation, and renovation of the monitoring system. Such complicated requirements can be fulfilled by a system that consists of low-cost sensors. On the other hand, the system design must be as simple as possible and reliable.
Based on the given analysis, it was suggested to deploy low-cost digital cameras operating in an automated mode and organized in a system of interconnected chains. GNSS is supposed to be used as a supplementary data source that provides external reference and control. The system is assumed to be installed inside the building and integrated into BIM. Such a system is easy to install and use, has high reliability due to the vast redundancy of measurements, and does not need professional users. This study aimed to introduce the concept of the GNSS-assisted low-cost vision-based observation system (VOS) and to demonstrate the results of its preliminary analysis. This system includes ideas and approaches from close-range photogrammetry to calibrate and orient images; from computer vision to process digital images; from geodesy to assign coordinate systems and external control (including the possible application of total stations and GNSS); and from adjustment calculus to process and analyze the measurement results. The suggested approach provides complex information about structure deformation and reduces error accumulation and the effect of external errors, increasing the resulting accuracy of the determined displacement values. The studies [89][90][91] have shown that, to carry out complex deformation research, it is recommended to locate the system chains along the principal axes of the structure.
A couple of papers demonstrate a tentative approach with a similar idea. In [92], relative displacement measurement from the inside using an in-room camera is presented; the paper considers just a particular case of observations using one camera from one point. A more detailed review is outlined in [93], with good analysis and ideas somewhat similar to our study. However, that work did not consider the case of combining a set of cameras into a network. The significant contribution of that paper is the analysis of the measuring accuracy ensured by computer vision monitoring systems and the examination of a target tracking algorithm. The study [94] demonstrates a concept that is close to the VOS idea but is mostly about camera calibration and does not consider cameras organized into a system without external control.
Therefore, the existing approaches and methods have some similarities to the VOS concept that will be presented and studied in what follows. The following stages have to be examined to achieve the primary goal of the study:

1. General idea and concept description.

2. Design of the VOS. At this stage, the effect of the distance between the VOS elements is considered based on the camera's technical capabilities and the geometric parameters of the test structure.

3. Determination of displacements between VOS elements for a single chain. In-field simulation of the displacement measurements for a single chain. A phase correlation algorithm is suggested as the primary processing strategy.

4. Preliminary analysis of the VOS accuracy for the test structure. The investigation is carried out using statistical simulation and the results from stage 3.

5. Determination of the monitoring parameters for the actual structure. Relative displacements of the VOS elements are used to build the structure frame model and compare it with the design model of the structure.

6. Prediction model. Based on the structure frame model (the values of the monitoring parameters), a prediction model is built for a given point in time.
In this article, the first four implementation stages will be described and studied in detail, and the simulation and experimental measurement results will be presented. The paper is structured as follows. Section 2 outlines the general concept of the VOS, its design, and its displacement determination approach. Section 3 describes the results of the experimental studies and the simulation of the VOS for a high-rise building. A comprehensive analysis of the simulation results and a discussion are presented in Section 4. Section 5 presents the conclusions.

Design Concept of the VOS
Regarding the idea mentioned above, it is proposed to place the sensors of the VOS along mutually perpendicular axes and planes of the structure to be monitored. Such a configuration will allow, at the data analysis stage, the separation of the effects of torsion from deflection and roll, which is a tricky issue in high-rise building monitoring [1]. The VOS design concept supposes the determination of both relative and absolute displacements. An external coordinate system has to be established to monitor the absolute displacements. This coordinate system is fixed by a system of targets placed on surrounding objects considered stable and via GNSS observations at the building top. The coordinates of these targets are determined using total station measurements or, in some cases, GNSS measurements. The design concept of the GNSS-assisted VOS is presented in Figure 3. Each particular sensor of the VOS consists of a set of elements. These elements may be organized in different manners depending on the position of the sensor. However, in the general case, the sensor contains the components depicted in Figure 4.
In addition to standard modules, the camera must contain a module for data transmission, allowing the quick transmission of the target images from all the sensors. It is proposed to make the target a QR code with embedded LEDs. This code makes it possible to include the necessary information, for example, the target ID, coordinates, etc., and increases the target's visibility. Two sensors can be combined into a chain. There are two ways the sensors may be organized in a chain (Figure 5). Therefore, the VOS can be installed inside the building along its principal axes. There are different methods of VOS installation; two of them are given in Figure 6. Both cases demonstrate the placement of the VOS sensors in a vertical plane of the structure. For a horizontal plane, the scheme is the same; the difference is only in the directions of the coordinate axes. According to the presented schemes, each sensor-target pair is considered a chain regardless of the placement of the target (on the camera or separately). In the first scheme, the observations are performed from sensor to sensor, where each one is equipped with a QR target. This scheme is valid for structures of relatively small size; otherwise, a more complicated observation scheme is suggested, in which the sensors are interconnected with each other through a system of two-sided QR targets. In any case, for the first observation epoch, both the sensors and the targets must be aligned in the horizontal and vertical directions. The installation of the VOS is possible in different ways: the VOS can be embedded inside the building communication lines or attached outside and adequately covered.
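As an illustration of the target metadata idea described above, the following minimal sketch encodes a target ID, chain number, and coordinates into a QR payload string. The field names and the JSON format are our assumptions; the paper does not define a payload format.

```python
import json

# Hypothetical QR payload for a VOS target. The field names ("id", "chain",
# "xyz") are illustrative assumptions, not a format defined in the paper.

def encode_target(target_id: str, chain: int, xyz: tuple) -> str:
    """Serialize target metadata into a string suitable for a QR code."""
    return json.dumps({"id": target_id, "chain": chain, "xyz": list(xyz)})

def decode_target(payload: str) -> dict:
    """Parse the metadata back out of a scanned QR payload."""
    return json.loads(payload)

payload = encode_target("T-07", 2, (12.500, 3.250, 41.000))
print(decode_target(payload)["id"])  # -> T-07
```

Any QR generation library could then render such a payload; the LED backlighting mentioned above is a hardware concern and is not modeled here.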
It is necessary to study the technical characteristics of the optical system to determine the effect of the distance between two sensors or between a sensor and the QR target. Since low-cost cameras are going to be used, the main issue is the size of the QR target in the image. The target must be recognizable in the image, which is why the resolution plays a significant role. The conventional scheme of image acquisition in a simple camera is given in Figure 7. In a simplified form, the visible area of size a × b is formed on an m × n matrix. The size of the visible area depends on the camera field of view γ and the focal length f, where the field of view is described by the relationship

γ = 2 arctan(d/2f),

where d is the frame size.
Using such a parameter is incorrect for an image of rectangular shape: the geometry of the initially square pixel would be distorted, or its size would change if the average value of the matrix resolution were applied. Consequently, this would affect the quality of digital image processing. Instead of the camera's field of view, the angle of the visible area along the image side, ϕ, is recommended:

ϕ_a = 2 arctan(a/2S), ϕ_b = 2 arctan(b/2S), (1)

where a is the size of the visible image area along the a axis, b is the size of the visible image area along the b axis, and S is the distance from the camera (sensor) to the target. This approach preserves the uniqueness of determining the resolution of the camera, which is denoted by the value c:

c = ϕ_a/n = ϕ_b/m, (2)

where n and m are the sizes of the camera matrix. The camera's resolution was determined using expressions (1) and (2). The necessary parameters were obtained from the typical camera specification and by surveying a calibration stand from a fixed distance. The CMOS matrix size is 4320 × 3240 px (6.17 mm × 4.55 mm); the f-number equals 3.9. The results for the camera, a General Electric G100 with f = 72 mm, k = 5.62, and a surveying distance S = 1 m, are presented in Table 1. As expected, the resolution for the angle γ along each of the axes of the image takes different values, while the average value of the camera resolution is reduced.
There is a serious flaw in the given concept: such a resolution describes the ideal case, when the lens resolution is perfect. In real life, the actual resolution combines the matrix resolution and the lens resolution, and it is obvious that the lens also downgrades the final image resolution. An approximate resolution can be obtained under the premise that a standard lens allows an object to be discerned when the object has a size of at least 3 pixels. Thus, the resolution ϕ_a,b for a distance of 1 m equals 0.058 mm. This result allows calculating the resolution error m_γ for different distances.
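The resolution reasoning above can be sketched numerically. Assuming the stated camera parameters (6.17 mm sensor width, 4320 px, f = 72 mm) and a small-angle (similar-triangles) approximation, the per-pixel linear resolution at 1 m and the 3-pixel discernibility limit come out close to the 0.058 mm quoted above; the small gap presumably stems from the exact angular expressions (1) and (2) versus this linear shortcut.

```python
# Sketch of the per-pixel resolution of the assumed test camera at distance S,
# using the parameters stated above. Small-angle approximation, not the
# paper's exact expressions (1)-(2).

def visible_area_mm(sensor_mm: float, focal_mm: float, distance_mm: float) -> float:
    """Size of the visible area along one axis, by similar triangles."""
    return distance_mm * sensor_mm / focal_mm

def linear_resolution_mm_per_px(sensor_mm: float, pixels: int,
                                focal_mm: float, distance_mm: float) -> float:
    """Ground size of one pixel at the given distance."""
    return visible_area_mm(sensor_mm, focal_mm, distance_mm) / pixels

f = 72.0                                           # focal length, mm
a = visible_area_mm(6.17, f, 1000.0)               # visible width at S = 1 m
c = linear_resolution_mm_per_px(6.17, 4320, f, 1000.0)
min_object = 3 * c                                 # 3-pixel discernibility criterion

print(round(a, 1), "mm wide;", round(c, 4), "mm/px;", round(min_object, 3), "mm min object")
```

With these numbers the minimum discernible object size evaluates to roughly 0.06 mm at 1 m, consistent with the text.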
The results for expression (4) are given in Figure 8. It is worth mentioning that these values can be decreased using sub-pixel processing algorithms. Thanks to the long focal length, the resolution error does not affect the measurement accuracy significantly.
The other factor affecting the accuracy, which should be accounted for to understand the real accuracy achievable from the camera images, is the defocusing error. The defocusing error leads to image unsharpness, which is described via the blur circle.
The defocusing error is defined through the aperture, which is expressed by the relationship N = f/D, where N is well known as the f-number; for the selected camera, N = f/D = 3.9. To calculate the defocusing error, let us use the main optics (thin lens) equation:

1/f = 1/S + 1/S′,

where the designations are clear from Figure 8. From Equation (4), we obtain the defocusing error δ, and the actual error due to defocusing is then determined accordingly. The defocusing error for the previous camera parameters was calculated and is presented in Figure 9. The resulting error was determined via expression (8); the obtained error distribution is shown in Figure 10. These errors will be used for the comparison analysis in Section 3.
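Since the paper's expressions (4)-(8) are not reproduced above, the following is only a hedged thin-lens sketch of the blur-circle idea, not the authors' exact formulas. Under the assumption that the lens is focused at infinity, a point at object distance S images at S′ = fS/(S − f), and with aperture diameter D = f/N the blur-circle diameter on the sensor algebraically reduces to f²/(N·S), a standard optics result.

```python
# Hedged thin-lens sketch (our assumption, not the paper's expressions (4)-(8)):
# blur-circle diameter on the sensor for a lens focused at infinity.

def image_distance_mm(f_mm: float, S_mm: float) -> float:
    """Thin-lens image distance from 1/f = 1/S + 1/S'."""
    return f_mm * S_mm / (S_mm - f_mm)

def blur_circle_mm(f_mm: float, N: float, S_mm: float) -> float:
    """Blur-circle diameter b = f^2 / (N * S) for focus at infinity."""
    return f_mm ** 2 / (N * S_mm)

f, N = 72.0, 3.9
for S in (15000.0, 25000.0, 35000.0):   # object distances 15 m .. 35 m
    print(S / 1000, "m ->", round(blur_circle_mm(f, N, S), 4), "mm on sensor")
```

The blur shrinks with distance, which matches the qualitative behavior one would expect from Figure 9; no claim is made that these values reproduce the paper's curves.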
Knowing the camera resolution, it is possible to calculate the ground coverage and thus find the range of displacements that a single measuring chain can measure. The relationship between the camera geometric parameters and the ground coverage is presented in Figure 11. The size of the ground coverage can be calculated by (9); with the listed designations, the half-size of the coverage along each axis is

d_x,y · L_x,y · S / ((m, n) · f), (9)

where d_x,y is the half-size of the CMOS matrix in pixels, S is the distance in mm, L_x,y is the physical size of the CMOS matrix in mm, m and n are the sizes of the CMOS matrix in pixels, and f is the focal distance. If one supposes that the VOS sensors were aligned during the installation, then every single chain has a ground coverage from ±1000 mm up to ±3000 mm (Figure 12). However, these figures determine the total size of the field of view; in fact, the range of detectable displacements is restricted by the QR target size. Expression (9) is also used for the QR target size calculation. If we want the QR target to occupy at least 200 × 200 px in the image, then the necessary size of the QR target can be retrieved from Figure 13. Therefore, a single chain needs a QR target size from 60 mm to 140 mm for distances of 15-35 m. These values determine the size of the possible detectable displacements at different distances.
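A short numerical sketch of the target-size reasoning, assuming the same camera parameters as before (6.17 mm sensor width, 4320 px, f = 72 mm), reproduces the quoted 60-140 mm range for distances of 15-35 m:

```python
# Sketch: required QR-target size so the target spans at least 200 x 200 px.
# Camera parameters are the ones assumed earlier in this section.

def mm_per_px(distance_mm: float, sensor_mm: float = 6.17,
              pixels: int = 4320, f_mm: float = 72.0) -> float:
    """Ground size of one pixel at the given distance (small-angle model)."""
    return distance_mm * sensor_mm / (f_mm * pixels)

def qr_target_size_mm(distance_mm: float, target_px: int = 200) -> float:
    """Physical target size that occupies `target_px` pixels in the image."""
    return target_px * mm_per_px(distance_mm)

for S in (15000, 25000, 35000):
    print(S // 1000, "m ->", round(qr_target_size_mm(S)), "mm")
```

At 15 m this gives about 60 mm and at 35 m about 139 mm, in line with the 60-140 mm range stated above.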

Displacement Determination between Sensors
Under the effect of the structure's bending R or torsion θ, each VOS sensor will be subject to vertical and horizontal displacement (Figure 14). The VOS chain I in the vertical plane determines two displacement components depending on the chain orientation, δx_iI (δy_iI) and δz_iI, whereas in the horizontal plane they are δx_iI and δy_iI. The final deformation relative to the ground floor is the sum of the partial ones. Therefore, the displacements that the VOS measurements can establish have the following form:

∆x_I = ∆x + ∆L_x + Σ δx_iI,
∆y_I = ∆y + ∆L_y + Σ δy_iI,
∆z_I = ∆z + Σ δz_iI. (11)

According to Figure 14, the displacement in the horizontal plane for a sensor-to-sensor chain is determined as shown in Figure 16. It is proposed to determine the VOS sensor displacement in the horizontal plane using an image comparison algorithm based on the phase correlation method [95]. Considering a pair of images, we take one of them as the reference, denoted as A, and the second as the target, denoted as B. Let f_A(x, y) and f_B(x, y) be images, one of which is shifted by (x_0, y_0) relative to the other, and let F_A(u, v) and F_B(u, v) be their Fourier transforms. Then

R(u, v) = F*_A(u, v) F_B(u, v) / |F*_A(u, v) F_B(u, v)|,

where R is the normalized cross-spectrum and F*_A is the complex conjugate of F_A.
Calculating the inverse Fourier transform of the cross-spectrum, we obtain the impulse function

r(x, y) = F⁻¹{R(u, v)} = δ(x − x_0, y − y_0).

Having found the maximum of this function, we determine the required displacement. The rotation angle θ_0, under the premise of displacement (x_0, y_0), can then be found by resampling the spectra to polar coordinates. It is possible to determine both the target and the camera displacements using this algorithm. To do this, the first image from the camera is taken as the reference (A), all subsequent ones are considered as input (B), and the image center (n_0, m_0) is searched for. The shift of each new image relative to the reference (δ_n, δ_m) describes the camera movement (Figure 17a). To find the target displacement, each new image from the camera is taken as the reference image (A), and the target template is used as the input (B). Therefore, by finding the target center (n_0, m_0) in the first and subsequent images, it is possible to determine the displacement of the target in the image coordinate system (Figure 17b). To obtain data independent of the camera displacement, the displacement has to be transformed with respect to the center of each image in the image set. In both cases, knowing the image resolution, the displacements in pixels (δ_n, δ_m) can be transformed into displacements in millimeters (δ_a, δ_b).
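The phase correlation procedure above can be sketched as follows. This is a translation-only NumPy sketch; the paper's MatLab implementation, its rotation recovery via polar coordinates, and any sub-pixel refinement are not reproduced here.

```python
import numpy as np

def phase_correlation_shift(ref: np.ndarray, tgt: np.ndarray):
    """Integer (row, col) shift of `tgt` relative to `ref` via phase correlation."""
    FA = np.fft.fft2(ref)
    FB = np.fft.fft2(tgt)
    cross = np.conj(FA) * FB                       # F*_A · F_B
    R = cross / np.maximum(np.abs(cross), 1e-12)   # normalized cross-spectrum
    r = np.fft.ifft2(R).real                       # impulse-like correlation surface
    peak = np.unravel_index(np.argmax(r), r.shape)
    # Map peak indices to signed shifts (account for DFT wrap-around).
    return tuple(int(p - s) if p > s // 2 else int(p)
                 for p, s in zip(peak, r.shape))

# Synthetic check: circularly shift a random "image" by (5, -3) and recover it.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
shifted = np.roll(img, shift=(5, -3), axis=(0, 1))
print(phase_correlation_shift(img, shifted))  # -> (5, -3)
```

The recovered pixel shifts (δ_n, δ_m) can then be scaled to millimeters with the per-distance resolution, as described in the text.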

In-Field Experimental Study of the Displacement Measurements for a Single Chain
It was decided to prepare and perform experimental measurements for a single chain of VOS sensors to verify the above theoretical considerations. A General Electric G100 digital camera was used as the VOS sensor. The images were collected under the conditions described in Section 2.1 but with an f-number of 3.9. To simulate the target movements, a solid plate carrying the QR target and a test bench were manufactured; the bench allowed moving the target in the horizontal plane by small prescribed values (δa_i, δb_i).
The operation of the VOS sensor chain in a horizontal plane was simulated for different distances between the target and the camera (Figure 18). The test bench (Figure 19a) was developed at the Kyiv National University of Construction and Architecture, Department of Applied Geodesy. It allows the precise movement of any attached device (the QR target) in space with an accuracy of ±0.1 mm. Initially, this device was intended for precise alignment measurements and setting-out but was revamped for the purpose of our study. The simulation of the single chain was performed at the Department of Applied Geodesy (Figure 19b). Since the primary element of the VOS is a camera, it must be calibrated before usage. The calibration was accomplished with Photomodeler software; however, any software that calculates the necessary parameters can be used for the calibration procedure. The calibration quality is presented in Figure 20. Three simulation tests were performed for different camera-to-target distances. The camera was focused to infinity, and the standard parameters for daylight image capturing were set up; no additional light sources were necessary. Image processing was performed using the phase correlation algorithm implemented in MatLab software.
The image resolution parameter for each distance was determined according to the previous propositions (Table 2). In the first step, the stability of the camera position during the experiment was checked. The results showed that the camera was stable during image capturing; the camera was kept in automatic mode for the whole surveying time. The images were taken automatically at a time interval of 30 s. The ISO speed values were changed from ISO-100 to ISO-800; the results are similar between the series, and the data are given in Table 3. In the course of the displacement study, the images were also taken in automatic mode, with the interval between consecutive images set to 60 s. Every minute, the QR target displacement was changed manually to the values presented in Tables 4-6. When determining the target displacement, the target image was cut from the first image and assigned as the input image, while a target template of the same size was used as the reference image (Figure 21c). The QR target displacements were obtained by processing the image series. The displacement determination accuracy was found as the difference between the measured and the manually set displacement. The results for each test are presented in Tables 4-6.
The displacement determination errors allow calculating the root mean square (RMS) errors (19) of the displacement determination for each test measurement and tentatively estimating the accuracy of the phase correlation algorithm.
The results of the accuracy estimation are summarized in Table 7. They are quite close to the values in Figure 10, which describes the total measurement error for a single chain. It was found that the larger the displacement, the larger the error of its determination. However, this dependency must not mislead: for shorter distances, we manually assigned larger displacements. In general, we may accept an RMS error of 6.5 mm and use this value for the simulation of a VOS that contains multiple chains with interrelated measurements.

Preliminary Analysis of the VOS Accuracy for the Test Structure
The results obtained in the previous subsection permit estimating the accuracy of the VOS operation for a typical building. Since this is a preliminary accuracy estimation step, it is sufficient to simulate the VOS operation for a building with simple geometry. The sensors are supposed to be placed 30 m apart from each other. The simulation was performed for two buildings of different heights, 90 m and 420 m. The first building is presented in Figure 22. The VOS scheme was implemented according to Figure 5a. As the test building height was insignificant, it was decided not to use GNSS measurements for this test analysis.
In Figure 22, the blue points mark the places of sensor installation, and the arrows indicate the measurement directions. Points 10, 11, 12, and 13 are the targets on the surrounding objects that define the external coordinate system. These points are accepted as errorless, but the accuracy of their determination was also estimated. The root mean square errors from Table 7 were used as descriptive statistics for the statistical simulation of measurement errors. The simulation was performed by the Monte Carlo method for a normal (Gaussian) distribution. The measurement errors were considered random with zero mean value, so there were no systematic or gross errors (blunders). The simulation results are presented in Figure 23 and summarized in Appendix A, Tables A1 and A2. In the figures throughout the text, the error ellipses are given for a confidence level of 95% for better presentation; the correspondence with the tables can be found using the t-coefficient equal to 2 for a probability of 95%.
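A minimal sketch of the error-generation step of such a simulation (our parameter choices, not the authors'): zero-mean Gaussian errors with the Table 7 RMS, checked against the t = 2 factor used for the 95% confidence ellipses:

```python
import numpy as np

rng = np.random.default_rng(42)
sigma = 6.5                                     # RMS error of one measurement, mm (Table 7)
errors = rng.normal(0.0, sigma, size=100_000)   # random, zero-mean, no blunders

t = 2.0                                         # t-coefficient for ~95% probability
coverage = np.mean(np.abs(errors) <= t * sigma)
```

The empirical `coverage` is close to 0.95, which is the correspondence between the tabulated RMS values and the 95% error ellipses mentioned above.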
In Figure 23a, the spatial error ellipses for the sensor-target system are given, while in Figure 23b, the relative spatial error ellipses are presented; these describe the relative accuracy between two sensors. The meaning of the spatial ellipse elements is clear from Figure 24. The numerical values for the ellipses in Figure 23 are presented in Appendix A, Tables A1 and A2.
Such an approach allows determining the whole set of monitoring parameters: the vertical displacement of the building, roll, bending, and torsion. If one needs to find the relative displacements of the building elements, it is required to exclude the external control targets and simulate the VOS output in the internal coordinate system. This simulation was carried out for the same test building in Figure 22; the results are presented in Figure 25 and summarized in Appendix A, Tables A3 and A4. The values in Tables A1-A4 allow assessing the probable accuracy of the monitoring parameter determination in the external and internal coordinate systems.
The second simulation analysis deals with a tall building (420 m). The simulation parameters were the same as in the previous case. However, this case allows us to check the effectiveness of the combined solution, namely, the integration of the VOS and GNSS. Thus, the GNSS accuracy for the top of the building was integrated with the accuracy of the simulated VOS. The simulation results are presented in Appendix B: Figure A1 gives the absolute accuracy, and Figure A2 the relative observations. The accuracy of the GNSS measurements was assigned using standard values, e.g., ±2-3 mm along the horizontal axes and ±5 mm for elevation. For the first case (Figure A1a), the GNSS measurements are considered non-fixed; in other words, they are included in the adjustment procedure. Two GNSS antennas on the building top are suggested as the most widespread configuration. The second case (Figure A1b) presents the measurements without GNSS observations; the errors of the VOS are therefore not constrained by GNSS measurements and propagate with the building height.
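The effect of including GNSS as non-fixed observations can be illustrated by the weighted combination of two independent estimates, which is a strong simplification of a full network adjustment; the numeric values below are illustrative, with the GNSS standard value quoted above:

```python
import math

def combined_sigma(sigma_vos, sigma_gnss):
    """Std. dev. of the weighted mean of two independent estimates (mm)."""
    w = 1.0 / sigma_vos**2 + 1.0 / sigma_gnss**2
    return 1.0 / math.sqrt(w)

# VOS error accumulated to the top of a tall building vs. GNSS at the top;
# 30 mm for the VOS-only error is our illustrative assumption
top = combined_sigma(30.0, 3.0)
```

The combined value is dominated by the more precise GNSS observation, which is why the errors no longer grow freely with the building height.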
The numerical results are summarized in Figures 26 and 27, since the preliminary accuracy of a large number of points is better portrayed in graphs. Figure 26 describes the RMS errors of coordinate determination along the coordinate axes for the case of absolute measurements accompanied by GNSS, i.e., measurements referenced to some external coordinate system.

Discussion
The data were analyzed using two approaches: analysis of the experimental studies and simulation analysis. So far, the obtained results have demonstrated the opportunities of the VOS. However, what about the achieved accuracy? Is it enough to monitor various engineering structures, especially high-rise buildings? First, let us analyze the results of the experimental studies accomplished in Section 3.1. To do that, it is necessary to propagate the accuracy of one chain to the multi-chain case. This question is essential when strict requirements are imposed on the accuracy of monitoring parameter determination. Whereas the requirements for the accuracy of vertical displacement determination are not so severe, the demands for roll or bending measurements are quite tight. The most widespread condition for roll and bending determination is based on the requirement to ensure an allowable deviation δ from the building's vertical axis during construction. This requirement is defined by expression (20), where H is the building height in meters and δ is obtained in millimeters.
The allowable deviation δ is turned into monitoring accuracy using the expression

m = δ / t, (21)

where t is the Laplace coefficient that depends on the probability level. Typically, t equals 2 or 2.5, corresponding to probabilities of 95% and 99%; however, in monitoring practice it is sometimes suggested to use t equal to 5 to increase the reliability of the measurement results. Let us suppose that the accuracy along the x and y axes is equal to m. The resulting accuracy will depend on the number of chains k used for the measurements. Under this premise, the final accuracy M can be determined as

M_x = M_y = m √k, M = √(M_x² + M_y²) = m √(2k). (22)

The expressions (22) permit us to compute the accuracy for a multi-chain VOS and compare these values with the allowable values from (20). Calculations for various heights have been performed (Table 8), taking the figures from Table 7. The accuracy calculations were carried out for VOS installation along the entire height of the structure with a step of 17, 25, and 33 m. The experimental results and further calculations yielded some interesting findings. The calculations by (22) show that the suggested VOS cannot be leveraged for monitoring alone. The general picture emerging from the results is that the multi-chain VOS can probably ensure the necessary accuracy for monitoring buildings higher than 500 m. Principal attention should be paid to the rising efficiency of the VOS with building height. The inclusion of GNSS measurements changes the final distribution of the RMS errors. However, our findings are not generalizable beyond the cases examined, because the calculation approach suggested above does not account for the effect of the interrelated measurements seen in Figure 22.
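Assuming that the per-axis, per-chain RMS m accumulates over k serial chains as m√k (our reading of the error propagation behind expressions (22)), the accuracy at the top of the structure for the installation steps considered can be sketched as:

```python
import math

def top_rms(m_mm, step_m, height_m):
    """Per-axis accumulated RMS at the top of a serial multi-chain VOS."""
    k = math.ceil(height_m / step_m)   # number of chains in series
    return m_mm * math.sqrt(k)

# per-chain RMS of 6.5 mm (Table 7); steps and heights as in Table 8
for step in (17, 25, 33):
    for height in (90, 420):
        print(f"step {step} m, H {height} m: M = {top_rms(6.5, step, height):.1f} mm")
```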
Thus, it is essential to simulate the VOS measurements to account for the redundancy of measurements.Therefore, the second step is the analysis of the simulation results in Section 3.2.
Let us summarize the results presented in Tables A1-A4. The accuracy within each block was averaged, and the mean accuracy value was accepted as final for the analysis. These values were compared with the allowable values (20). Moreover, the simulation results allow one to assess two measurement modes: relative and absolute.
As seen in Figure 29, the simulation results provide a more lifelike picture of the VOS accuracy. The final accuracy improved thanks to accounting for the measurement redundancy. The VOS thus provides a reliable determination of monitoring parameters starting from a height of 90 m for absolute measurements. Obviously, installing the VOS on such a small building is pointless, since conventional geodetic methods provide the necessary accuracy and are well studied. Things get much more complicated for higher buildings. To estimate the efficiency of the VOS for tall structures with the inclusion of GNSS measurements, the simulation was performed for the 420 m building. Let us analyze the results in Figures 26 and 27. Again, the analysis is better presented graphically; to do so, the RMS errors over each floor were averaged and compared in Figure 30. The results of the GNSS-assisted VOS simulation look different from the VOS-only simulation, and a couple of interesting findings can be inferred. First, thanks to the GNSS observations, the accuracy of the VOS is kept almost at the same level over the whole structure; this effect grows with the structure height, since the GNSS restricts the error propagation in the VOS. Secondly, as expected, the GNSS-assisted VOS can ensure the necessary accuracy starting from 60 m. We obtained a weighted accuracy value for the high structures thanks to the combined adjustment. Therefore, the simulation results proved the high capability of the developed GNSS-assisted low-cost vision-based observation system for deformation monitoring.
The specific structure of the VOS imposes some restrictions on the application of this system. These restrictions are defined by the geometry and construction technology of the monitored objects. Considering the geometry, one needs to pay attention to the VOS scheme: the chains require straight lines of sight between the sensors, so curvilinear structures require a modification of the single-chain construction, and the measurement processing is not straightforward. As an example, let us consider the simplest case, namely, the VOS for horizontal monitoring of a curvilinear structure (e.g., dams, tunnels, shells, etc.) between two reference points (Figure 31). The QR targets are placed perpendicularly to the sensors but with angles (α, ϕ) between each other. The measured displacements must be converted with respect to the coordinate axes, and the manufacturing and installation of the system get complicated. Therefore, the idea of the VOS has to be developed and studied further for the case of curvilinear structures. So far, the considered and examined scheme applies to high-rise buildings.
The second condition is the material of the monitored structure. This condition especially matters when we deal with temperature deformation. The temperature influence leads to structure bending. In the Introduction, it was pointed out that bending is one of the primary monitoring issues, and the VOS is a solution to this problem. The bending values will be different for different heights (Figure 32). In the simplest case, the bending due to temperature is described by

f = α ∆t H² / (2D),

where α is the linear extension coefficient of the material (α = 12.1 × 10⁻⁶ 1/°C for structures made of steel, α = 10.8 × 10⁻⁶ 1/°C for structures made of concrete), ∆t is the temperature difference between the sides of the structure, H is the height, and D is the mean structure size in plan. For a structure with D = 100 m and H = 420 m, we obtained the values given in Table 9. The VOS measurement range for displacements was taken from Figure 13. Regardless of the material, the VOS measurement range covers the possible deformation with an almost threefold margin. So, in this case, there are no special requirements or restrictions on the VOS application.

Conclusions
This study proposed a new approach for monitoring high-rise buildings using a GNSS-assisted low-cost non-metric camera system. The accuracy examination of a VOS single chain on a test bench confirmed the possibility of ensuring the necessary monitoring accuracy. The suggested method for determining the displacement between a pair of images based on the phase correlation algorithm showed stable results in a series of field experiments. The adequate distances between the sensor and the target, providing reasonable accuracy, were studied and determined based on the experimental results. The simulation of the VOS was performed for two cases: GNSS-free for low-rise buildings, and VOS with additional GNSS observations. The simulation showed the necessary accuracy for deformation monitoring in the case of the GNSS-assisted VOS. The results presented in this paper were mainly limited to a simulation study; therefore, the findings are not fully generalizable to actual VOS operation. However, they give clues to further research directions. Future studies will focus on the effect of camera calibration, changes in target illumination, and optical beam distortion due to refraction on the resulting accuracy. We must also address the determination of monitoring parameters for real structures and the deployment of prediction models. Future research will have to assess the extent to which the VOS application is feasible for real engineering structures.

Figure 2. The structure displacement caused by either roll or bending in combination with spatial displacement.

Figure 3. The general concept of the VOS and its installation.

Figure 6. VOS installation schemes: (a) the scheme with sensor-to-sensor observations; (b) the scheme with sensor-target-sensor observations.

Figure 7. Image acquisition via system camera CCD matrix.

Figure 8. The graph of the resolution error.

Figure 9. The graph of the defocusing error.

Figure 11. The relationship between camera parameters and size of ground coverage.

Figure 12. The relationship between camera parameters and size of ground coverage.

Figure 13. The QR target size at the different distances.

Figure 14. Displacements along vertical chains and monitoring parameters (top view) in the horizontal plane.
Figure 15.

Figure 16. Displacement geometry in the horizontal plane.

Figure 19. The equipment for the single-chain simulation: (a) mechanical equipment on a tripod for the precise displacement of the QR target; (b) place for the test bench setup and single-chain test; (c) the sample images from the testing site (left image: ISO-800, right image: ISO-100).

Figure 20. Calibration errors for different photos.

Figure 21. Finding the target center based on the reference and input images: (a) image acquisition; (b) input image of 100 × 100 pixels; (c) reference image.

Figure 22. The test building geometry, sensor and target placement, and measurement directions.

Figure 23. The error ellipses of coordinate determination: (a) the accuracy ellipses of the point coordinate determination; (b) the relative accuracy ellipses of the point coordinates.

Figure 25. The error ellipses of coordinate determination for relative observations: (a) the accuracy ellipses of the point coordinate determination; (b) the relative accuracy ellipses of the point coordinates.

Figure 26. Accuracy of absolute coordinate determination with additional GNSS observations.
Figure 27.

Figure A2. The relative error ellipses of coordinate determination: (a) with additional GNSS observations; (b) without additional GNSS observations.

Table 1. The results of resolution determination.

Table 2. Resolution values for different distances.

Table 3. The results of camera shift determination during image capturing.

Table 4. Displacement determination for QR target at distance S = 33 m.

Table 5. Displacement determination for QR target at distance S = 25 m.

Table 6. Displacement determination for QR target at distance S = 17 m.

Table 7. Accuracy of displacement determination.

Table 8. Accuracy dependency between the chain's length and building height. Figure 28 is a graphic summary of the results from Table 8: the horizontal axis describes a building's height, while the vertical axis highlights the accuracy propagation.

Table 9. Bending values depending on structure material and height.

Table A2. The relative accuracy of the point coordinates (95% confidence level).

Table A3. The accuracy of the point coordinate determination.

Table A4. The relative accuracy of the point coordinates (95% confidence level).