New Quality Control Algorithm Based on GNSS Sensing Data for a Bridge Health Monitoring System

This research introduces a plan for improving the reliability of Global Navigation Satellite System (GNSS) positioning solutions, and for identifying the most suitable methodology for GNSS adjustment and positioning so as to maximize the utility of GNSS applications. Although various studies have been conducted on Bridge Health Monitoring Systems (BHMS) based on GNSS, outliers that depend on the signal reception environment have not previously been considered. Because such outliers may enter the GNSS data collected from major bridge members and reduce the reliability of the whole monitoring system by delivering false information, they should be detected and eliminated at the preceding adjustment stage. In this investigation, the Detection, Identification, Adaptation (DIA) technique was applied and implemented as an algorithm. It can be applied directly to GNSS data collected from long-span cable-stayed bridges, and most outliers were efficiently detected and eliminated. Through these effects, the reliability of GNSS is greatly improved. Improved GNSS positioning accuracy is directly linked to the safety of the bridge itself and, at the same time, increases the reliability of the monitoring system in terms of system operation.


Introduction
Despite significant achievements by numerous researchers in the field, the establishment of Bridge Health Monitoring Systems (BHMS) based on Global Navigation Satellite Systems (GNSS) has been limited by the reliability of GNSS positioning solutions. The information provided by GNSS depends on both the received data and their processing, and both are critical for obtaining proper information. If any apparent deflection is detected in the GNSS position solutions, it is necessary to verify whether it is a real deflection caused by external loading or an outlier originating in the data itself. This distinction is important from the perspective of monitoring system operation. In the former case, potential structural defects may exist that pose a great threat to bridge users and to the safety of the bridge itself. In the latter case, the transfer of false information due to outliers cannot be avoided, which adversely affects the reliability of the whole monitoring system. The latter case, in particular, is a potential risk factor for a GNSS-based BHMS.
In line with this trend, the importance of the GNSS-based BHMS has been emphasized more than ever, and it is essential to consider the accuracy and reliability of GNSS positioning solutions when constructing GNSS-based monitoring systems. This investigation addresses these issues by detecting and eliminating outliers in the GNSS data before the final adjustment, thereby improving the reliability of the positioning solution.

Quality Control Algorithm and Quality Measures
The quality of information from a monitoring scheme is typically characterized by its precision, reliability, sensitivity, and economy [1]. In this investigation, the monitoring analysis is implemented through a GNSS positioning solution, so the precision and reliability of this solution, which directly affect the monitoring system's performance, must be discussed. The question to be addressed is whether all GNSS measurements converge to a normal distribution when least squares estimation is applied to them. If not, there is a significant possibility that outliers are hiding within the apparently normal range. To assess the precision and reliability of the final GNSS positioning solution, gross errors should be screened out and the a priori variance calculated from the remaining measurements; this variance information is then assigned to the adjustment procedure. To estimate the GNSS positioning solution, the two mathematical models mentioned previously, the functional and stochastic models, are considered. However, outliers cannot be handled through the functional and stochastic models alone, because the positioning accuracy must be assessed with regard to the true value or most probable value (MPV). In processing the GNSS data, systematic errors not captured by the functional model, together with outliers, should be isolated before the final least squares estimation, to maximize the positioning accuracy and secure the reliability of the positioning solution. Such questions are typically analysed, and answered, through statistical testing. This process is required because of the high sensitivity of least squares estimation to outliers, which can severely degrade the quality of the positioning solution. At this stage, the monitoring analysis quality is closely associated with the concepts of precision and accuracy of the GNSS positioning solutions.
Therefore, by optimal estimation, an accurate positioning solution can be achieved when proper quality control exists, thereby ensuring that the outliers are excluded.

Least Squares Estimation and Outliers
For detecting and eliminating outliers, a quality control algorithm, specifically the Detection, Identification and Adaptation (DIA) technique, was deployed in this investigation. The DIA procedure was systematized by Teunissen at Delft University [2,3]. The definitions and concepts of the statistical terms used in this study follow the theory of Baarda, who first proposed this approach [4]. Before describing the DIA technique, the relationship between least squares estimation and outliers is briefly examined.
The linearized Gauss-Markov model using dual-frequency data can be expressed as

l = Ax + ε, E[l] = Ax

where A denotes the design matrix, x the vector of unknown parameters, l the measurement vector, and ε the vector of true errors. The adjusted unknown parameters x̂ are obtained through least squares estimation with weight matrix P as

x̂ = (AᵀPA)⁻¹AᵀPl

and their covariance matrix is

C_x̂ = σ₀²(AᵀPA)⁻¹

where σ₀² is the a priori variance factor. From this, the residuals are estimated as

v = l − Ax̂

The covariance matrix of the residuals C_v is singular:

C_v = C_l − A C_x̂ Aᵀ

or, in cofactor form, Q_v = P⁻¹ − A(AᵀPA)⁻¹Aᵀ. The a posteriori variance is then

σ̂₀² = vᵀPv / (n − m)

where v is the vector of n residuals and m is the number of unknown parameters. At this point an explanation is warranted. The matrix C_l has only diagonal elements, which means the measurements are uncorrelated; C_v, however, is in general fully populated, so the residuals are correlated, and a variation in one measurement spreads over several residuals through the adjustment. The relationship between residuals and outliers must therefore be considered. The residuals satisfy

v = Q_v P ε

where it is assumed that each measurement's true error ε_i consists of one random error ε_i^r and one outlier ∇l_i, so that

v = v_r + ∇v, with v_r = Q_v P ε^r and ∇v = Q_v P ∇l

Here v_r is influenced only by the random errors, while ∇v is affected only by the outliers.
Through this expression, it can be seen that the residuals reflect both the random errors and the outliers, and that least squares estimation cannot distinguish between them. Consequently, after the adjustment, outlier detection is very difficult to perform by simply inspecting the residuals. The stochastic model used in the least squares process is simple; only the variation of the estimated coordinates is obtained through it [5]. If the covariance matrix of the residuals could be inverted, the observational errors could be calculated directly; unfortunately, the matrix J = Q_v P is singular, so the most realistic way to detect the presence of outliers is to test derived quantities that are functions of them. Details are introduced in the following sub-sections. The main point is that the residuals do not reflect the quality of the measurements; they are not consistently good indicators of outliers. To address this problem, the commonly used DIA technique is applied. As mentioned previously, when systematic errors have been eliminated from the measurements and no outliers exist, the residuals estimated through least squares converge to a normal distribution; if outliers exist, they inflate the variance estimated from the measurements. The variance estimated from the measurements, including any outliers, therefore plays an important role in detecting them. For this process, an F-distribution is used.
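The properties just described can be checked numerically. The following is a minimal sketch (not the authors' code) of a simulated least squares adjustment, showing that J = Q_v P is idempotent and rank-deficient, with trace equal to the degrees of freedom, which is why residuals alone cannot isolate an outlier:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 3                       # n measurements, m unknown parameters
A = rng.standard_normal((n, m))   # design matrix
P = np.eye(n)                     # weight matrix (uncorrelated measurements)
l = A @ np.array([1.0, -2.0, 0.5]) + 0.01 * rng.standard_normal(n)

N = A.T @ P @ A                   # normal equations matrix
x_hat = np.linalg.solve(N, A.T @ P @ l)
v = l - A @ x_hat                 # least squares residuals

Q_v = np.linalg.inv(P) - A @ np.linalg.inv(N) @ A.T   # residual cofactor matrix
J = Q_v @ P
idempotent = np.allclose(J @ J, J)            # J is idempotent
dof_match = np.isclose(np.trace(J), n - m)    # trace(J) = n - m
singular = np.linalg.matrix_rank(J) < n       # J is rank-deficient (singular)
```

Because J is singular, Q_v cannot be inverted to recover the observational errors directly, which motivates the statistical tests that follow.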
The global test on the variance factor (VF) uses as its statistic the ratio of the a posteriori variance factor to the a priori variance factor, σ̂₀²/σ₀². It is applicable only when a priori information on the precision of the measurements is available, and the hypothesis test is applied to the positioning solution after the adjustment. The hypotheses are:

Null hypothesis H₀: σ̂₀² = σ₀²
Alternative hypothesis Hₐ: σ̂₀² ≠ σ₀²

where σ₀² is the a priori variance factor and σ̂₀² is the a posteriori variance factor. When this test is used for the detection of outliers, it is usually expected that σ̂₀² > σ₀² for most outliers induced in GNSS positioning; therefore, a one-tailed test is recommended. Using the relation between the F-distribution and the chi-square distribution, F_{α; r, ∞} = χ²_{α; r}/r, the null hypothesis is accepted when

σ̂₀²/σ₀² ≤ χ²_{α; r}/r = F_{α; r, ∞}

If the variance factor exceeds this limit (rejection criterion), the adjustment model is considered invalid. The variance factor test may fail because of input or programming errors, the presence of outliers in the measurements, model errors, or a poor estimate of the a priori covariance matrix. In this study, because the variances of the measurements are known, it is assumed that the variance factor test fails only in the presence of outliers.
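As a sketch of this global test (function and variable names are illustrative, and the a priori variance factor is assumed known), the statistic is compared with the one-tailed critical value χ²_{α;r}/r = F_{α; r, ∞}:

```python
import numpy as np
from scipy import stats

def global_test(v, P, r, sigma0_sq=1.0, alpha=0.05):
    """Return (statistic, critical value, passed) for the variance factor test."""
    sigma0_hat_sq = (v @ P @ v) / r            # a posteriori variance factor
    statistic = sigma0_hat_sq / sigma0_sq
    critical = stats.chi2.ppf(1 - alpha, r) / r   # chi2_{alpha;r}/r = F_{alpha;r,inf}
    return statistic, critical, statistic <= critical

# Example: small residuals consistent with sigma0_sq = 1 pass the test.
v = np.array([0.3, -0.5, 0.2, 0.4])
stat, crit, ok = global_test(v, np.eye(4), r=4)
```

If the test fails, the identification stage described below is entered to locate the offending measurement.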
Assuming that an outlier ∇l_i exists, an alternative hypothesis can be written and the linearized adjustment model defined through the probability distribution of the observation vector l:

Null hypothesis H₀: E[l|H₀] = Ax
Alternative hypothesis Hₐ: E[l|Hₐ] = Ax + e_i∇l_i = E[l|H₀] + e_i∇l_i

where A is the n×m design matrix, x is the vector of m least squares parameter estimates, l is the vector of n measurements, v is the vector of n least squares residual estimates, and e_i is the unit vector with a 1 in the ith position. Under the null hypothesis, the expectation of the residuals is zero:

E[v|H₀] = 0

Therefore, under the alternative hypothesis, the relationship between ∇l and ∇v is

E[v|Hₐ] = ∇v = Q_v P e_i ∇l_i

Under the alternative hypothesis, for measurements containing an outlier, the expectation of σ̂₀² is inflated, and the variance ratio containing the outlier is redeployed as a non-central statistic with non-centrality parameter

λ = ∇l_i² (e_iᵀ P Q_v P e_i) / σ₀²
Under the alternative hypothesis, the statistic σ̂₀²/σ₀² has a non-central F_{r, ∞, λ} distribution with the above non-centrality parameter λ. This can be converted into a non-centrality boundary value λ₀, which is determined by choosing a confidence level and power of test appropriate to the application. According to Cross et al. [6], the level of confidence may differ between applications, while the power of test is normally set to 80%. If the least squares solution does not conform to the values determined by α and β, the measurements may contain outliers. Because Hₐ specifies the outlier vector ∇l explicitly, boundary values can be estimated for ∇l at a given probability level β, and Hₐ leads to one-dimensional tests on the residuals.

Identification
An outlier, detected through the previously described process, is identified using the w-test data snooping technique [4]. To identify which satellite measurements are responsible, this stage is implemented with a new alternative hypothesis test whose statistic has a standard normal distribution under the null hypothesis; under the alternative hypothesis, the distribution of the statistic is non-central. In other words, the w-test for outlier ∇l_i relies on the null hypothesis that the measurements are outlier-free, and on the alternative hypothesis which, when true, indicates the existence of an outlier of magnitude ∇l_i:

Null hypothesis H₀: E[∇l_i] = 0
Alternative hypothesis Hₐ: E[∇l_i] = e_i∇l_i ≠ 0

With the critical value λ₀ determined by the chosen α and β, it is then straightforward to derive a statistic that tests the alternative hypothesis Hₐ. On this basis, the one-dimensional (one-tailed) test statistic is the w-test, defined as the ratio between the size of the outlier and its standard deviation.
The test statistic is calculated as

w_i = e_iᵀ P v / (σ₀ √(e_iᵀ P Q_v P e_i))

Fixing the significance level α and the power of test 1 − β, the non-centrality parameter can then be estimated.
Under the null hypothesis, w_i follows the standard normal distribution, with critical value N_{1−α/2}(0, 1). The test is carried out with respect to each measurement in turn, the unit vector e_i having a 1 in the ith position and 0 elsewhere; this screens every individual measurement for a potential outlier. The greatest absolute value that exceeds the critical value is deemed an outlier, and the corresponding measurement is removed from the model. The w-test is then repeated to determine whether any further outliers exist. If another outlier is found, it is removed; moreover, the measurement first deemed an outlier is reinstated and the model is retested.
From Figures 1 and 2 it is evident that the non-centrality parameter √λ₀ can be estimated from the significance level and the power of test. This procedure can then be repeated until no further outliers are detected. This type of procedure, screening each individual measurement for the presence of an outlier, is known as data snooping.
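The iterative screening described above can be sketched as follows. This is a simplified illustration (not the authors' implementation, and without the reinstatement step): compute w_i for every retained measurement, remove the one with the largest |w_i| above the critical value, re-adjust, and repeat:

```python
import numpy as np
from scipy import stats

def data_snooping(A, l, P, sigma0=1.0, alpha=0.001):
    keep = np.arange(len(l))                   # indices of retained measurements
    crit = stats.norm.ppf(1 - alpha / 2)       # critical value N_{1-alpha/2}(0,1)
    while True:
        Ak, lk, Pk = A[keep], l[keep], P[np.ix_(keep, keep)]
        N = Ak.T @ Pk @ Ak
        v = lk - Ak @ np.linalg.solve(N, Ak.T @ Pk @ lk)   # residuals
        Qv = np.linalg.inv(Pk) - Ak @ np.linalg.inv(N) @ Ak.T
        # w_i = e_i^T P v / (sigma0 * sqrt(e_i^T P Qv P e_i))
        w = (Pk @ v) / (sigma0 * np.sqrt(np.diag(Pk @ Qv @ Pk)))
        worst = int(np.argmax(np.abs(w)))
        if abs(w[worst]) <= crit:
            return keep                        # no further outliers detected
        keep = np.delete(keep, worst)          # remove identified outlier

# Example: a single large bias injected into measurement 4 is flagged and removed.
rng = np.random.default_rng(1)
A = rng.standard_normal((10, 2))
l = A @ np.array([1.0, 2.0]) + 0.01 * rng.standard_normal(10)
l[4] += 10.0                                   # simulated outlier
kept = data_snooping(A, l, np.eye(10))
```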

Adaptation
To facilitate adaptation of the null hypothesis through estimation using the least squares method, the adaptation step is required to remove measurements, including outliers, which are identified through the data snooping technique. Figure 3 depicts the whole of the quality control procedure.


Quality Measures
The quality control outcome is decided on the basis of the number of redundant measurements and the geometry of the satellites. At this stage, the impact of the two factors on the quality control procedure should be checked, as well as the quality of the measurements. The indications of confidence for quality control are redundancy, correlation, and quality dilution of precision (QDOP).

Redundancy Number
The residual decomposition given earlier showed that both random errors and outliers affect the residuals of measurements containing outliers. Unfortunately, these two error types cannot be differentiated by the least squares method, which accounts only for random error; it is therefore difficult to detect outliers using the residuals alone. However, J = Q_v P is a singular and idempotent matrix, and the sum of its diagonal elements (the trace of the matrix) equals the degrees of freedom:

r = trace(Q_v P) = n − m

where r is the total redundancy. If r_i denotes the diagonal elements of J = Q_v P, the redundancy can be redefined as

r = Σ r_i, with 0 ≤ r_i ≤ 1

the bounds following from the idempotent structure of the matrix. The closer a redundancy number is to 1, the closer the variance of the residual is to the variance of the measurement; the residual then reflects the random error of the measurement, and estimated coordinates (the GNSS positioning solution) with a high level of precision can be obtained. If, however, the redundancy number is close to 0, the variance of the residual also approaches 0, and no meaningful comparison with the measurement is possible. In other words, a low redundancy number is insufficient for separating outliers, and such measurements have a higher possibility of containing undetected ones, whereas a high redundancy number means outliers can be detected easily because they leave a clear imprint on the residual. If the redundancy number becomes exactly 0, outliers cannot be detected at all, and the estimated coordinates may be inaccurate. In general, a measurement with a redundancy number above 0.5 can be considered well measured [6].
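As a numerical sketch of the redundancy numbers (the function name is illustrative), r_i is the diagonal of Q_v P; each element lies in [0, 1] and their sum equals the degrees of freedom n − m:

```python
import numpy as np

def redundancy_numbers(A, P):
    """Redundancy numbers r_i = diag(Q_v P) of a least squares adjustment."""
    N = A.T @ P @ A
    Q_v = np.linalg.inv(P) - A @ np.linalg.inv(N) @ A.T
    return np.diag(Q_v @ P)

# Example: 6 measurements, 2 parameters -> redundancy numbers sum to 4.
rng = np.random.default_rng(2)
A = rng.standard_normal((6, 2))
r = redundancy_numbers(A, np.eye(6))
```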
If ∇v is reformed accordingly, it can be expressed as ∇v = Q_v P ∇l, and the individual component can be defined as

∇v_i = r_i ∇l_i

This can be seen as the contribution of a single measurement l_i to the whole redundancy r. Relative redundancy can then be defined as the average of the diagonal elements, r̄ = r/n. The complementary matrix U = I − Q_v P is also idempotent. If an outlier ∇l_i occurs in only one measurement l_i, it is reflected in the corresponding residual v_i as ∇v_i = r_i ∇l_i. Hence, it is clear that an outlier ∇l_i coupled with a large redundancy number r_i will more strongly affect the corresponding residual v_i.

Quality Dilution of Precision (QDOP)
QDOP is a quality measure that quantifies how much the geometric constellation of the satellites influences internal reliability and redundancy. Using the redundancy numbers, the coefficient (design) matrix representing the geometry of the tracked satellites allows the precision of a measurement to be expressed as a QDOP value. QDOP lies in the range 0 to 1. If the QDOP of a measurement estimated from a satellite constellation with good geometry is close to 0, the ability to detect an outlier is enhanced, because the redundancy increases and the minimal detectable bias (MDB) decreases. If, however, QDOP is close to 1, the redundancy approaches 0; the denominator of the internal reliability measure then also approaches 0, and the MDB tends to infinity. In other words, to achieve higher redundancy, QDOP should be minimized [7].
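The text does not reproduce the QDOP formula explicitly here, so the following hedged sketch uses a definition consistent with the stated behaviour, QDOP_i = √(1 − r_i): it lies in [0, 1] and moves opposite to the redundancy number, so minimizing QDOP maximizes redundancy:

```python
import numpy as np

def qdop(A, P):
    """QDOP-style measure per measurement: sqrt(diag(A N^-1 A^T P)) = sqrt(1 - r_i)."""
    N = A.T @ P @ A
    hat = A @ np.linalg.inv(N) @ A.T @ P    # diagonal elements equal 1 - r_i
    return np.sqrt(np.diag(hat))

# Example: QDOP values and redundancy numbers are complementary.
rng = np.random.default_rng(3)
A = rng.standard_normal((7, 3))
q = qdop(A, np.eye(7))
r = 1.0 - q**2                              # recovered redundancy numbers
```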

Reliability
Deformation analysis involves two types of errors: those of the measurements and those of the deformation model [1]. To avoid misinterpreting systematic errors or outliers in the measurements as deformation phenomena, the measurements should be screened for them before the deformation parameters are estimated. The concept of reliability originates from Baarda [4]. Generally, reliability assesses the capability of detecting outliers, and in this respect internal reliability and external reliability are distinguished. The internal reliability of a GNSS positioning solution is its ability to detect outliers by testing a hypothesis at a given confidence level (1 − α) and power of test (1 − β). In other words, higher reliability means that even small errors can be found, whereas lower reliability means that even large errors may go undetected. External reliability, in turn, describes the impact of undetected errors on the estimated unknown vector, i.e., the estimated coordinates. Hence, high external reliability means that errors left undetected by the quality control procedure have minimal impact on the coordinates. The number of redundant measurements, the geometry of the tracked satellites, and error propagation all affect reliability; accordingly, with numerous redundant measurements or many tracked satellites, the reliability should improve.

Internal reliability
Internal reliability refers to the largest undetectable error and is represented by the MDB. It is assessed through the smallest value ∇₀l_i of an outlier ∇l_i that can be detected by the data snooping test; any remaining error below it may heavily affect the final estimation of the coordinates. The MDB is the magnitude of the smallest detectable bias for a given confidence level and power of test. For a diagonal weight matrix P, the critical value mentioned earlier can be simplified using the diagonal elements, giving

∇₀l_i = σ_{l_i} √λ₀ / √r_i

According to this formula, the detectable outlier depends directly on the precision of the measurement, i.e., its standard deviation σ_{l_i}, on the non-centrality boundary λ₀ fixed by the significance level α (type 1 error) and β (type 2 error), and on the redundancy number r_i. Cross [6] suggests a type 2 error β of typically 20%, corresponding to a power of test of 80%. Thus the MDB depends simultaneously on the redundancy number and the covariance of the residuals, not on the measurement and residual themselves.
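A sketch of this MDB computation for uncorrelated measurements (the function name is illustrative), using √λ₀ = z_{1−α/2} + z_{1−β} for the non-centrality boundary:

```python
import numpy as np
from scipy import stats

def mdb(sigma_i, r_i, alpha=0.001, beta=0.20):
    """Minimal detectable bias: sigma_i * sqrt(lambda0) / sqrt(r_i)."""
    # Non-centrality boundary from significance level and power of test (1 - beta).
    sqrt_lambda0 = stats.norm.ppf(1 - alpha / 2) + stats.norm.ppf(1 - beta)
    return sigma_i * sqrt_lambda0 / np.sqrt(r_i)

# Example: a 1 cm measurement with redundancy number 0.5 versus 0.1 --
# the poorly controlled measurement hides a much larger bias.
mdb_good = mdb(0.01, 0.5)
mdb_poor = mdb(0.01, 0.1)
```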

External reliability
External reliability relates to the maximum effect of a possibly undetected observational outlier on the estimates of the unknown parameters. It indicates what impact an undetected error would have on the estimated coordinates [6]. Thus, external reliability can show the impact of the MDB on the estimated coordinates:

∇x̂ = Q_x̂ AᵀP e_i ∇₀l_i

where Q_x̂ is the posterior variance-covariance matrix of the estimated parameters.

Introduction
In the application of a GNSS-based BHMS, monitoring is performed proactively and depends entirely on the GNSS positioning solution. The developed quality control algorithm should therefore be applied to improve the reliability of the monitoring system. To test it, the authors applied the algorithm to GNSS data collected from an in-service bridge, the SeoHae cable-stayed bridge in South Korea, a test-bed on which the application of GNSS has already been demonstrated [8,9]. Although previous investigations have addressed improvement of the GNSS solution, case studies on removing outliers to improve the accuracy of a GNSS solution have rarely been performed.

Evidence of Existing Outliers
A long-span bridge, built to expand trade, transport, and logistics and used by many different vehicles, often suffers GNSS signal interference caused by the volume of passing vehicles, the bridge structure itself, and environmental factors. Figures 4 and 5 graphically depict the vertical deflection estimated from two GNSS receivers installed on both sides of the mid-span, such that the two receivers were at the same position in the lateral direction. As expected, no significant difference occurred in the amount of vertical deflection or its trend, as shown in the figures.

Test Description
To conduct this research, GNSS data were collected through an array of six receivers for 24 h from 09:00 on 25 March 2010, at a sampling rate of 1 Hz (1 s). The antenna used at the reference station and the rovers was a Zephyr Geodetic Model 2 produced by Trimble Navigation, Ltd. (Sunnyvale, CA, USA; Figure 6, left). The receivers installed on the bridge as rovers were all of the same model, a Trimble NetR5, also produced by Trimble Navigation, Ltd. (Sunnyvale, CA, USA; Figure 6, right). These data were initially received at the GNSS receiver before being transmitted to the SeoHae Bridge management office through a wireless modem. The positioning methodology employed for processing the GPS data from the bridge is single-baseline processing, which is commonly used for a GNSS-based BHMS; details are provided in Roberts et al. [10].


Test-Bed
The SeoHae cable-stayed bridge (Figure 7), selected as the test-bed of this research, is part of the 7.31-km SeoHae Bridge. It is a continuous bridge composed of a steel box girder with five spans totaling 990 m (60 + 200 + 470 + 200 + 60). In terms of main-span length, it is the second longest cable-stayed bridge in service in Korea, after the Incheon cable-stayed bridge.
Figure 6. Antenna of the reference station (left) and rover (right).


Pre-Analysis on the Status of GPS Signal
To apply the developed quality control algorithm, we used a 5% significance level and an 80% power of test. During the observation session, some previously defined outliers occurred for approximately 7 min (05:52 to 05:59 p.m.). To detect and eliminate them, we extracted approximately 2 h of data centred on the time when the outliers occurred. Figure 8 describes some of the DOP values and the number of tracking satellites used during the observation session. It is well known that the number of tracking satellites is integrally related to the DOP value; moreover, any change in the number of tracking satellites likewise affects the subsequent analysis of the various quality measures.

Quality Measures
The GNSS data obtained from the in-service bridge contained a number of errors considered to be outliers. Through the developed quality control algorithm, a series of DIA procedures was applied to these outliers, which might have minimized coordinate errors in the final GNSS positioning solution. In this process, the validity of the algorithm and the effect of outlier removal can be explained by the quality measures described previously, confirming that an enhanced GNSS positioning solution was obtained.

QDOP and Redundancy
The first quality measures considered were QDOP and redundancy, as shown in Figure 9. Changes in the number of tracking satellites affected both QDOP and redundancy: in the section with a reduced number of satellites, QDOP values increased and redundancy decreased. The number of tracking satellites and redundancy thus showed a similar trend, whereas QDOP showed the opposite trend.
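For readers unfamiliar with these measures, the toy example below computes QDOP and the redundancy numbers from a small design matrix. The straight-line model and unit weights are illustrative assumptions, not the double-differenced GNSS model used in the study:

```python
import math

# Toy design matrix: 4 observations, 2 parameters (straight-line model).
A = [[1.0, t] for t in (0.0, 1.0, 2.0, 3.0)]
n, u = len(A), len(A[0])

# Normal matrix N = A^T A (unit weights assumed for simplicity)
N = [[sum(A[k][i] * A[k][j] for k in range(n)) for j in range(u)]
     for i in range(u)]

# Cofactor matrix Qx = N^{-1} (2x2 closed form)
det = N[0][0] * N[1][1] - N[0][1] * N[1][0]
Qx = [[N[1][1] / det, -N[0][1] / det],
      [-N[1][0] / det, N[0][0] / det]]

# QDOP: square root of the trace of the cofactor matrix
qdop = math.sqrt(Qx[0][0] + Qx[1][1])

# Redundancy numbers r_i = 1 - a_i Qx a_i^T; they always sum to n - u
r = [1.0 - sum(A[k][i] * Qx[i][j] * A[k][j]
               for i in range(u) for j in range(u))
     for k in range(n)]

print(round(qdop, 4))    # 0.9487
print(round(sum(r), 6))  # 2.0  (= n - u)
```

Dropping rows from A (fewer satellites) shrinks the redundancy numbers and inflates QDOP, which is exactly the opposing trend visible in Figure 9.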

Analysis on Posterior Variance
As shown in Figure 10, posterior variance was significantly stabilized or decreased by applying the developed quality control algorithm.
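The posterior variance factor is computed as sigma0^2 = v'Pv / (n - u). The toy example below shows how a single outlier inflates it; the observation values and the simple mean model (u = 1, unit weights) are invented for demonstration:

```python
from statistics import mean

obs = [10.01, 9.99, 10.02, 9.98, 10.60]   # last value is a planted outlier

def sigma0_sq(y, u=1):
    """Posterior variance factor v'v / (n - u) for a repeated-measurement model."""
    x_hat = mean(y)
    v = [x_hat - yi for yi in y]
    return sum(vi * vi for vi in v) / (len(y) - u)

before = sigma0_sq(obs)        # inflated by the outlier
after = sigma0_sq(obs[:-1])    # outlier eliminated
print(before > 100 * after)    # True: removing one outlier stabilizes sigma0^2
```

This is the mechanism behind Figure 10: eliminating outliers through the DIA procedure drives the posterior variance back toward its expected level.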

Reliability
After applying the developed algorithm for quality control, the reliability analysis was divided into internal and external reliability analyses, whose respective results are shown in Figures 11 and 12. Despite the elimination of the outliers, it is evident that some of their influence remained.
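Internal reliability is commonly quantified by the Minimal Detectable Bias (MDB) of each observation, MDB_i = sigma_i * delta0 / sqrt(r_i). A sketch under the study's test parameters, where the observation standard deviation and redundancy numbers are assumed values, not the paper's:

```python
from statistics import NormalDist

# delta0 from the study's test parameters: alpha = 5%, power = 80%
nd = NormalDist()
alpha, power = 0.05, 0.80
delta0 = nd.inv_cdf(1 - alpha / 2) + nd.inv_cdf(power)   # ~2.80

sigma = 0.005                      # assumed observation std dev (5 mm)
redundancy = [0.3, 0.7, 0.7, 0.3]  # assumed redundancy numbers

# MDB: the smallest bias in each observation the w-test can still detect
mdb = [sigma * delta0 / r ** 0.5 for r in redundancy]

# Lower redundancy -> larger MDB -> weaker internal reliability
print(all(m > sigma * delta0 for m in mdb))   # True, since all r_i < 1
```

External reliability then follows by propagating each MDB through the adjustment to see its worst-case effect on the estimated coordinates.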

Improved Accuracy of GNSS Positioning Solution
The aim of this sub-section is to demonstrate that the application of the developed algorithm for quality control increased the accuracy of the horizontal and vertical positioning.

Enhanced Vertical Positioning
With regard to height, a GNSS receiver installed on a bridge can assess the deflection of the deck. This deflection analysis provides the time series of vertical deflection to the BHMS. However, if the positioning solution contains outliers, the delivery of false information cannot be avoided. Therefore, in implementing a BHMS, accurate and reliable delivery of information on vertical deflection is critical. Figures 13 and 14 show the comparison before and after applying the developed algorithm for quality control. As shown in the figures, a more typical deflection pattern was monitored, with a significantly reduced RMS owing to the elimination of the outliers. Table 1 summarizes the results.
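The RMS reduction reported in Table 1 can be reproduced in principle as follows; the height series and the simple threshold-based flags below are synthetic stand-ins for the actual GNSS solution and the DIA output:

```python
import math

# Synthetic vertical-deflection residuals (m): small noise plus two
# injected outliers standing in for multipath/cycle-slip artifacts.
series = [0.002, -0.001, 0.003, -0.002, 0.150, 0.001, -0.003, 0.180]
outlier_flags = [abs(x) > 0.05 for x in series]   # toy detection rule

def rms(xs):
    """Root-mean-square of a residual series."""
    return math.sqrt(sum(x * x for x in xs) / len(xs))

cleaned = [x for x, bad in zip(series, outlier_flags) if not bad]
print(rms(series) > 10 * rms(cleaned))   # True: RMS drops sharply
```

Even two outliers dominate the RMS of an otherwise quiet series, which is why their elimination improves the vertical solution so visibly in Figures 13 and 14.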

Horizontal Direction Positioning Solution
Following the increase in accuracy for vertical deflection, the application of the developed algorithm for quality control was considered in terms of horizontal positioning, i.e., the lateral and longitudinal directions. These are also important in the construction of a BHMS, because deflection in the lateral direction is normally induced by wind loading, while deflection in the longitudinal direction is used along with pier inclination monitoring. Figures 15 and 16 show the comparison of the horizontal solutions before and after applying the developed algorithm for quality control.

As shown in Table 2, the elimination of outliers was achieved through the developed algorithm for quality control. Nevertheless, seven outliers remained, accounting for 6.3% of the total 110 outliers; this residual percentage is attributable to the 5% significance level adopted in the statistical testing.

Error Ellipse Comparison
In general, analysis of the error ellipse is additionally used as a statistical indicator. In this case, the major axis of the error ellipse indicates the direction of the larger errors (i.e., the largest standard deviation), while the minor axis indicates the direction of the smaller errors. The accuracy of the horizontal position in the error ellipse is then represented by considering the correlation between the north-south and east-west directions. Figures 17 and 18 present a comparison of the error ellipses before and after applying the developed algorithm for quality control. Consequently, it was confirmed that the length of the major axis of the error ellipse was reduced by approximately 47.52 mm through the application of quality control. In addition, the azimuth of the error ellipse changed by 34°; accordingly, it is considered that this deviation in direction was caused by the outliers.
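The semi-axes and azimuth of the error ellipse follow from the eigen-decomposition of the 2 × 2 horizontal covariance matrix. A sketch with illustrative covariance values (not those behind Figures 17 and 18):

```python
import math

# Horizontal covariance matrix (north-south / east-west), values in m^2.
# These numbers are illustrative only.
s_nn, s_ee, s_ne = 4.0e-4, 1.0e-4, 1.2e-4

# Semi-axes: square roots of the eigenvalues of the covariance matrix
mid = (s_nn + s_ee) / 2.0
half = math.hypot((s_nn - s_ee) / 2.0, s_ne)
semi_major = math.sqrt(mid + half)   # direction of the largest std dev
semi_minor = math.sqrt(mid - half)   # direction of the smallest std dev

# Azimuth of the major axis (from north, toward east)
azimuth = 0.5 * math.atan2(2.0 * s_ne, s_nn - s_ee)

print(semi_major > semi_minor)               # True
print(0.0 <= math.degrees(azimuth) <= 45.0)  # True for this covariance
```

Removing outliers shrinks the off-diagonal term and the dominant variance, which shortens the major axis and rotates the azimuth, as observed in the study.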

Concluding Remarks
This investigation presented an approach to improve the accuracy of the GNSS positioning solution for use in a GNSS-based BHMS. The applicability of GNSS for BHMS has already been validated through prior research; however, specific problems remained in the delivery of a reliable GNSS solution. These problems were due to the lack of consideration of outliers, which can provide false information to the BHMS when the positioning solution is used in deflection and dynamic characteristic analyses, such as for natural frequencies and damping ratios. An experiment was performed on a long-span cable-stayed bridge. The results confirmed that approximately 110 outliers were present in a sample of real data, with a clear influence on the GNSS positioning solution. When the proposed algorithm for quality control was applied as a DIA procedure to minimize the effect of the outliers, 93.7% (103) of all outliers were eliminated, and the results were shown to be much more reliable.