A Three-Dimensional Visualization Framework for Underground Geohazard Recognition on Urban Road-Facing GPR Data

Abstract: The identification of underground geohazards is always a difficult issue in the field of underground public safety. This study proposes an interactive visualization framework for underground geohazard recognition on urban roads, which constructs a whole recognition workflow by incorporating data collection, preprocessing, modeling, rendering and analyzing. In this framework, two proposed sampling point selection methods have been adopted to enhance the interpolation accuracy of the Kriging algorithm based on ground penetrating radar (GPR) technology. An improved Kriging algorithm is put forward, which applies a particle swarm optimization (PSO) algorithm to optimize the Kriging parameters and adopts the Compute Unified Device Architecture (CUDA) to run the PSO algorithm in parallel on the GPU side in order to raise the interpolation efficiency. Furthermore, a layer-constrained triangulated irregular network algorithm is proposed to construct the 3D geohazard bodies, and the space geometry method is used to compute their volume information. The study also presents an implementation system to demonstrate the application of the framework and its related algorithms. This system makes a significant contribution to the demonstration and understanding of underground geohazard recognition in a three-dimensional environment.


Introduction
In the last decade, visualization platforms for above-ground geographical information have been constructed in various fields [1][2][3][4][5]. Owing to the concealment of the underground space, detecting underground geohazards on urban roads is a difficult problem across the world. The difficulty lies in establishing a thorough workflow and integrated platform of data acquisition, data preprocessing, data interpretation, three-dimensional (3D) modeling and visualization of the interpreted results. Every year it is reported that extensive and expensive property damage is caused by road collapses mainly derived from various underground geohazards [6]. Collapse accidents in urban areas can not only threaten human lives and properties but can also damage adjacent infrastructures, such as communication or drainage facilities, in the subsurface space surrounding the collapse area [7]. Although ground penetrating radar (GPR) technology enables geologists to explore the evolution mechanism of underground geohazards on urban roads and to adopt countermeasures, the quality of the preprocessed GPR data directly affects the accuracy of geohazard recognition; the following sections review this preprocessing problem and outline our proposed methods. For 3D geological modeling, the mainstream geological models and our proposed algorithm are discussed in Section 2.3.

GPR Data Acquisition
Due to the high-speed construction of urban roads during the last few decades, road maintenance has become an extremely important issue. Road collapse is one of the main problems and can demonstrate many different characteristics, such as concealment, paroxysm, recurrence and group-occurring [6]. Collapses are mainly caused by various geohazards under the road subgrade, which can be hard to detect. Effective road maintenance needs urgent and cost-effective inspection methods to reduce road deterioration. GPR is the main technology for road subsurface inspection [9], because it can acquire general survey data in a relatively short period of time. Different GPR devices have been designed by different companies and research institutions, for example, SIR series GPR (Geophysical Survey Systems Inc., GSSI), Pulse EKKO series GPR (Canada Sensor & Software Inc., SSI), Explore GPR (Italy INGEGNERIA DEI SISTEMI (IDS) Company), LTD GPR (China Research Institute of Radio Wave Propagation, CRIRP), and GR-Radar series GPR (our research team).
As depicted in Figure 1, our GPR sends high-frequency electromagnetic pulses into the underground. The two-way travel time, 50 nanoseconds for our GPR, is the time a pulse takes between leaving the sender and returning to the receiver. When the pulse meets a dielectric interface with a different relative permittivity during propagation, part of the pulse is reflected back to the surface and the reflected wave is received by the receiver. During the 50 nanoseconds, the reflected waves are received 512 (or 1024) times at equal time intervals, generating 512 (or 1024) sample points. After all 512 (or 1024) sample points have been collected at a horizontal position, the GPR collects new samples at the next horizontal position until all positions within a pre-defined area are reached. Our GPR has eight channels [25] and its detection speed can reach 60 km per hour, which is comparable to other GPR devices. The acquired data and its format will be discussed in Section 3.1.3.
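The numbers above imply a simple depth budget. As a sketch (assuming a propagation speed of roughly 0.1 m/ns, a typical value for moist soil; the actual wave speed is recorded with the survey data, see Section 3.1.3), the 50 ns window and 512 samples translate into a depth range and vertical sample spacing as follows:

```python
# Convert a GPR two-way travel-time window into depth range and
# per-sample vertical spacing. The wave speed used here is an
# assumption (roughly 0.1 m/ns for moist soil); the real value is
# recorded in the survey line index (.txt) file.

def depth_profile(window_ns: float, n_samples: int, speed_m_per_ns: float):
    """Return (max depth, depth step) for a two-way travel-time window."""
    max_depth = speed_m_per_ns * window_ns / 2.0   # divide by 2: down and back
    depth_step = max_depth / n_samples             # spacing between samples
    return max_depth, depth_step

max_d, step = depth_profile(window_ns=50.0, n_samples=512, speed_m_per_ns=0.1)
print(round(max_d, 2), round(step, 4))  # 2.5 0.0049
```

Under this assumed speed, the 50 ns window probes about 2.5 m of subsurface at roughly 5 mm vertical spacing.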

GPR Data Preprocessing
Generally speaking, a GPR only generates survey line data, which should be preprocessed in order to obtain more comprehensive data covering the whole detected area. Of all the preprocessing methods used to reveal the theory of spatial structure correlation and randomness [26], the Kriging algorithm is one of the most widely used. Current research on this algorithm focuses on the combination of different types of Kriging methods [27][28][29][30], or on the hybridization of Kriging with other algorithms, such as bionic optimization algorithms [31][32][33], the support vector machine method [34], the least-squares method [35], and gradient information [36]. To our knowledge, only a few articles have discussed how to select data points for interpolation, which is the prerequisite that determines whether the Kriging interpolation results are accurate. We proposed two novel sampling point selection algorithms (named D3DGC and DRD) in a previous work [37] that are integrated in this study. In addition, we also propose a GPU-PSO-Kriging algorithm that uses the Compute Unified Device Architecture (CUDA) to run a particle swarm optimization (PSO) algorithm in parallel, where the PSO optimizes the parameters of the Kriging algorithm in order to interpolate data more accurately. All of these activities are discussed in Section 3.2.

3D Geological Modeling
Three-dimensional geological modeling is an abstract process involving the reconstruction and reproduction of geological bodies through 3D visualization technologies [38]. The complexity of geological spatial relationships increases the complexity of the data structure, topological relationships and corresponding algorithms, which makes geological modeling very difficult [39]. There are many 3D geological models [40,41], such as the triangulated irregular network (TIN), triangular prism (TP), analogical triangular prism (ATP), generalized triangular prism (GTP), analogical right triangular prism (ARTP), tetrahedral network (TEN), and so on. Compared with other models, TIN can not only change the size and number of triangular patches according to the complexity of the geological surface, but can also eliminate data redundancy in the visualization step and maintain a high fitting accuracy [37]. Current research on TIN construction [42][43][44][45][46][47][48] assumes that the point set of the discrete data is static and rarely discusses how to manage dynamic data points at different levels, which means that all the data points are handled together rather than creating connections layer by layer. A layer-constrained TIN (LC-TIN) algorithm was proposed in a previous work [49] to construct 3D bodies from layer to layer. The details of the LC-TIN are discussed in Section 3.3.

The Principle of 3DVF4UDR
In this section, we begin by giving an overall description of 3DVF4UDR, including the architecture, the corresponding workflow and the definition of file formats. The proposed algorithms in 3DVF4UDR are then discussed in-depth in the following workflow steps.

Overall Description of 3DVF4UDR
The layered architecture of 3DVF4UDR makes it flexible so that it can configure the workflow steps. The self-defined file formats ensure 3DVF4UDR can conveniently save or load the interpreted results.
3.1.1. The Architecture
As described in Figure 2, the architecture of 3DVF4UDR adopts the specification of Model-View-Controller (MVC), including a resources layer, a services access layer, a business logic layer and a presentation layer. The layer division encapsulates different program functions into different containers, which gives 3DVF4UDR high flexibility and maintainability.
The presentation layer encapsulates the framework's interactive interface to users. The GPR's original data and intermediate or final results are displayed in 2D/3D views, and the framework functions are organized as different types of windows. The business logic layer is responsible for system logic dispatch. For example, the algorithm scheduling component is in charge of dispatching a series of algorithms, including GPU-PSO-Kriging, LC-TIN, D3DGC, DRD, and so on. The rendering engine is implemented with OpenGL, which executes the 2D/3D rendering and CUDA computing. The self-defined tools component has an ArcBall sub-component that controls the moving direction and zoom of 3D objects, a MathUtils sub-component that realizes common mathematical functions, and the self-developed RichTree and RichList control sub-components. The data, model, process and rendering management components are derived and dispatched in the management container. In the services access layer, there is an input/output interface for the different original, intermediate, and final data. Various basic and complex data types are defined in the data type container. Finally, the raw, processed and management data are stored in a database or file system in the resources layer.

Workflow Description
The 3DVF4UDR integrates the whole process of underground geohazard recognition. The workflow is described in Figure 3, which contains all of the operations from data acquisition to result generation, with each process acting as the prerequisite for the next. The data in each step can be visualized by the rendering engine, including raw GPR data (saved as .txt and .dat files), interpolated data (.G3D files), slice data (.slc and .slm files), and the final result images. The different file formats make it convenient to analyze the original, intermediate and final results. They organize the data in the services access layer and ensure that the system can start from any step of the operation.

File Format Descriptions
In the 3DVF4UDR, some self-defined file formats are used to describe the GPR source and intermediate data, including .txt, .dat, .G3D, .slc, and .slm. As described in Figure 4a, a survey line index file (.txt file) is in charge of organizing a survey line data file (.dat file) to enhance the transfer efficiency. The .dat file is saved as binary data that only contains the amplitude values from the dielectric interfaces collected by the electromagnetic wave. It is defined by its start (Xs, Ys) and end (Xe, Ye) coordinates in Figure 4a. In addition, the uniform information of all of the survey line data is defined in the .txt files, such as the number of horizontal positions and sample points, the electromagnetic wave speed and the two-way travel time. As described in Figure 4d, survey line data files can be interpolated into a .G3D file that contains all of the information covered by the detection area. The file head of .G3D includes the minimum and maximum values in the X, Y, Z directions and the number of interpolated points in the three directions, defined as IPNX, IPNY and IPNZ. The body of the .G3D file stores all of the data interpolated along the survey lines (Figure 1) according to the obtaining sequence of the sample points. The geohazard information is saved in a slice file (.slc file). This data is selected by broken lines that are mapped to slice data; the slice data is then extracted from the .G3D data. As shown in Figure 4c, the proposed framework supports identifying different geohazard bodies, which are defined by the headings "Body Type No.", "Body No." and "Body attributes". "Body attributes" refers to the body types of the different geohazards, for example, underground cavities, water, or cracks. "Body No." is the geohazard body serial number. Together, "Body No." and "Body Type No." uniquely determine a certain type of geohazard body.
Each body has its own attribute information, such as body color, transparency, font, and so on. All of the slice files are managed by the .slm file that is defined in Figure 4b. "Slice direction" indicates that the 3DVF4UDR can divide .G3D data into various slices according to the different coordinate planes, such as the XOY, YOZ, and XOZ planes. Other statistical information (the number of slices and the number of 3D geohazard bodies) is also described in the slice management file.
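The .G3D header fields named above can be handled with a fixed binary record. The excerpt specifies which fields the header stores but not their byte layout, so the sketch below assumes a hypothetical layout of six float64 extents followed by three int32 counts, little-endian:

```python
# Hypothetical binary layout for a .G3D-style header. The paper says the
# header stores min/max extents in X, Y, Z plus the interpolated point
# counts IPNX, IPNY, IPNZ, but not the exact byte layout; this sketch
# assumes six float64 extents followed by three int32 counts, little-endian.
import struct

HEADER_FMT = "<6d3i"  # xmin, xmax, ymin, ymax, zmin, zmax, IPNX, IPNY, IPNZ

def pack_g3d_header(extents, ipn):
    """Serialize six extents and three point counts into header bytes."""
    return struct.pack(HEADER_FMT, *extents, *ipn)

def unpack_g3d_header(blob):
    """Split header bytes back into (extents, (IPNX, IPNY, IPNZ))."""
    vals = struct.unpack(HEADER_FMT, blob[:struct.calcsize(HEADER_FMT)])
    return vals[:6], vals[6:]

blob = pack_g3d_header((0.0, 10.0, 0.0, 5.0, 0.0, 2.5), (200, 100, 512))
extents, ipn = unpack_g3d_header(blob)
print(ipn)  # (200, 100, 512)
```

A fixed-size header like this lets the file body (the interpolated values in sampling order) be memory-mapped or streamed directly after `struct.calcsize(HEADER_FMT)` bytes.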
According to the aforementioned discussion, besides the raw and interpolated data files (the .dat and .G3D files), the intermediate data files (for example, the .slc and .slm files) allow the 3DVF4UDR to load and generate geohazard bodies at any time, which enhances operational efficiency.
The following sections discuss some of the algorithms related to the workflow in more detail. For the data preprocessing step, 3DVF4UDR adopts the D3DGC and DRD algorithms to choose the sampling data for the Kriging algorithm more reasonably, and uses a particle swarm optimization (PSO) algorithm to optimize the Kriging parameters in a GPU parallel mode (named the GPU-PSO-Kriging algorithm). In the data modeling process, the LC-TIN algorithm was designed to model the geohazard bodies from layer to layer as arbitrary shapes. A space geometric algorithm was applied to compute the volume information of the geohazard bodies in the data analysis step.

Data Preprocessing
Generally, survey line data generated by a GPR instrument should be preprocessed to generate adequate data. The Kriging interpolation algorithm is widely applied to perform this task. A key problem is then how to select the sampling data for the Kriging method. Our study adopted a GPU-PSO-Kriging interpolation algorithm to handle the survey line data. Two sampling point selection algorithms were used in the framework to enhance the interpolation accuracy, namely a dynamic three-dimensional globe cover algorithm (D3DGC) and a double reverse distance algorithm (DRD) [37].

Sampling Point Selection Algorithms
The rational selection of sampling data is a prerequisite for ensuring the accuracy of the Kriging interpolation results. Liu et al. [50] suggested that several factors could affect interpolation results, including the distribution characteristics of the sampled data, the nugget effect, the range and type of the variation model, as well as the anisotropy and searching range of the space configuration. Deng et al. [51] further pointed out that sampled data close to the estimation point has a great influence on the estimation structure and that a spherical or exponential model should be chosen in most cases. However, they only offered a general rule of sampled data selection and did not discuss in depth how to select the sampling data. Based on the minimum variance criterion, the work in [26] used multi-distance measures to describe space structures, including the Manhattan, Euclidean and Chebyshev distance measures. However, that research only extended the distance measure and did not discuss the sampled data selection problem.
According to the theory of geological data correlation, the points that are close to the interpolated point should be selected as the sampling points. The D3DGC algorithm selects sampling points by growing a spherical area that takes the interpolated point as its center and whose radius increases dynamically until the quantity requirement of sampling points is satisfied. When the values of the survey line data are similar and the data volume is very large, interpolation based only on the space correlation theory may not give the best results. By introducing randomness, the proposed DRD method allocates points to the different survey lines and further arranges the point distribution on each survey line according to the proposed correlation and randomness rules.
It has been shown that the D3DGC algorithm can determine the best number of sample points, e.g., 210 for a specific dataset [37]. It reduces the error rate from 8% to 1.3% compared with the traditional method that only uses horizontal neighbours for interpolation. When the number is taken between 180 and 270 rather than 210, the error rate of D3DGC only increases up to 10%. In contrast, traditional methods reach 10% or lower only for a small fraction of selection numbers and can produce errors as high as 31%. The reason is that D3DGC considers the correlations between the sample points and searches the sample space in a spherical area by gradually increasing the radius. The complexity of D3DGC is O(m * n * p * u), where m, n, p, u are the numbers of sample points, total points, survey lines and horizontal positions, respectively. Another sampling point selection algorithm is DRD, which also gives a low error rate. DRD has a complexity of O(n * p), which makes it suitable for sampling from large data.
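The growing-sphere idea behind D3DGC can be sketched as follows. This is a simplified illustration only: the growth step, stopping rule and tie-breaking of the published D3DGC [37] are not reproduced, and the initial radius and growth factor here are arbitrary assumptions.

```python
# Simplified sketch of the growing-sphere selection idea behind D3DGC:
# keep enlarging a sphere centered at the interpolated point until it
# encloses at least the required number of sampling points, then keep
# the nearest ones. r0 and growth are arbitrary illustration values.
import math

def select_by_growing_sphere(center, points, required, r0=1.0, growth=1.5):
    """Return `required` points nearest the center, found by radius growth."""
    def dist(p):
        return math.dist(center, p)
    r = r0
    while True:
        inside = [p for p in points if dist(p) <= r]
        if len(inside) >= required:
            # sort so the most correlated (closest) points come first
            return sorted(inside, key=dist)[:required]
        r *= growth  # grow the sphere and search again

pts = [(float(i), 0.0, 0.0) for i in range(100)]
chosen = select_by_growing_sphere((0.0, 0.0, 0.0), pts, required=5)
print(chosen[0])  # (0.0, 0.0, 0.0) -- the interpolated point's closest neighbour
```

The repeated radius growth over all candidate points is what drives the O(m * n * p * u) cost noted above; DRD avoids it by distributing picks across survey lines instead of searching spheres.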

GPU-PSO-Kriging Algorithm
The Kriging interpolation algorithm is the best unbiased linear space prediction method [52], which is based on the variation distribution of space attributes over space positions. Suppose that there are N sampling points pt_i, 1 ≤ i ≤ N; Equation (1) defines the ordinary Kriging prediction:

$$Z^{*}(pt_0) = \sum_{i=1}^{N} \lambda_i Z(pt_i), \quad (1)$$

where $Z^{*}(pt_0)$ is the estimation value of the interpolated point $pt_0$, $Z(pt_i)$ is the value of the ith sampling point $pt_i$, and $\lambda_i$ is the weight coefficient of $pt_i$, indicating the contribution of $Z(pt_i)$ to $Z^{*}(pt_0)$. The Kriging algorithm uses a semi-variogram to compute the weights of the sampled points on the interpolated point in order to realize its attribute estimation. Among all the semi-variograms, the spherical model is commonly used in geology [51], and it can be described as follows:

$$\gamma(h) = \begin{cases} 0, & h = 0 \\ C_0 + C\left(\dfrac{3h}{2a} - \dfrac{h^3}{2a^3}\right), & 0 < h \le a \\ C_0 + C, & h > a \end{cases} \quad (2)$$

where $C_0$ is the nugget, $C_0 + C$ is the sill value, $a$ is the range and $h$ is the increment of the regionalized variables. We denote by $h_{i,j}$ the distance between $pt_i$ and $pt_j$. According to the unbiased estimation condition, Equation (3) formulates the relationship between $\lambda_i$ and $\gamma(\cdot)$:

$$\sum_{j=1}^{N} \lambda_j \gamma(h_{i,j}) + \mu = \gamma(h_{i,0}), \; 1 \le i \le N, \qquad \sum_{i=1}^{N} \lambda_i = 1, \quad (3)$$

where $\mu$ is the Lagrange multiplier and $\gamma(\cdot)$ is the variogram. Thus, Equation (3) can be described in the following matrix style:

$$A\,x = b, \quad A = \begin{pmatrix} \gamma(h_{1,1}) & \cdots & \gamma(h_{1,N}) & 1 \\ \vdots & \ddots & \vdots & \vdots \\ \gamma(h_{N,1}) & \cdots & \gamma(h_{N,N}) & 1 \\ 1 & \cdots & 1 & 0 \end{pmatrix}, \; x = \begin{pmatrix} \lambda_1 \\ \vdots \\ \lambda_N \\ \mu \end{pmatrix}, \; b = \begin{pmatrix} \gamma(h_{1,0}) \\ \vdots \\ \gamma(h_{N,0}) \\ 1 \end{pmatrix}. \quad (4)$$

According to Equations (3) and (4), the solved parameters of the Kriging algorithm are the variogram values in A and the weights λ. For the spherical model, the parameters $C_0$, $C$ and $a$ are generally fixed values that depend on expert experience and have a certain subjectivity. Our study adopts a particle swarm optimization (PSO) algorithm to optimize the model parameters, which can reflect the best parameter estimation. PSO was suggested in a previous work [53], simulating the swarming behavior of birds in an iterative fashion:

$$v_{i,d}^{k+1} = \omega v_{i,d}^{k} + c_1\, rand_1 \left(p_{i,d} - x_{i,d}^{k}\right) + c_2\, rand_2 \left(p_{g,d} - x_{i,d}^{k}\right), \quad (6)$$
$$x_{i,d}^{k+1} = x_{i,d}^{k} + v_{i,d}^{k+1}, \quad (7)$$

where $\omega$ is the inertia weight, $c_1$ and $c_2$ are learning factors, and the $rand_i$ are random numbers uniformly distributed in [0, 1]. This study adopts an ordinary Kriging algorithm and the spherical model to interpolate the regionalized points.
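With the variogram parameters given, ordinary Kriging reduces to solving one small linear system per interpolated point. A minimal NumPy sketch (with C_0, C and a assumed given here; in the paper they are optimized by PSO):

```python
# Sketch of ordinary Kriging with a spherical variogram: build the
# Kriging system A x = b and solve for the weights and Lagrange
# multiplier. C0 (nugget), C (partial sill) and a (range) are assumed
# given; the paper optimizes them with PSO.
import numpy as np

def spherical(h, c0, c, a):
    """Spherical semi-variogram; gamma(0) = 0 by definition."""
    h = np.asarray(h, dtype=float)
    g = np.where(h <= a, c0 + c * (1.5 * h / a - 0.5 * (h / a) ** 3), c0 + c)
    return np.where(h == 0, 0.0, g)

def ordinary_kriging(pts, vals, target, c0=0.0, c=1.0, a=10.0):
    """Estimate the value at `target` from sampled points and values."""
    n = len(pts)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = spherical(d, c0, c, a)
    A[-1, -1] = 0.0                       # Lagrange-multiplier row/column
    b = np.ones(n + 1)
    b[:n] = spherical(np.linalg.norm(pts - target, axis=1), c0, c, a)
    x = np.linalg.solve(A, b)             # x = [lambda_1..lambda_n, mu]
    return float(x[:n] @ vals)

pts = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0], [0.0, 2.0, 0.0]])
vals = np.array([1.0, 3.0, 5.0])
est = ordinary_kriging(pts, vals, np.array([1.0, 1.0, 0.0]))
print(round(est, 3))
```

Because the weights sum to one, the estimator is unbiased, and with a zero nugget it is exact at the sampling points themselves.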
The PSO algorithm optimizes the parameters c, $C_0$ and $a$, which are designed as the particle position and velocity vector $\langle c, C_0, a \rangle$, with the maximum iteration number $iter_{max} = 1000$ and the threshold value $Value_{threshold} = 10^{-6}$. The fitness function is defined as follows:

$$fitness = \frac{1}{W} \sum_{j=1}^{W} \left(Z^{*}(pt_j) - Z(pt_j)\right)^{2}, \quad (8)$$

where $Z^{*}(pt_j)$ is the estimation value, $Z(pt_j)$ is the actual value at the jth check point, and W is the number of check points. For large amounts of data, the PSO computation time is too long. Thus, our work adopted the GPU to accelerate the convergence speed of the whole swarm, based on the Compute Unified Device Architecture (CUDA). The parallel computing model is described in Figure 5.
CUDA is a single instruction multiple data (SIMD) parallel computing model. As a coprocessor, the GPU can assist the CPU in finishing highly parallelized computation tasks by generating a large number of threads. As described in Figure 5, CUDA adopts a multilayer memory architecture including threads, blocks and grids. All of the blocks in one grid carry out the same function and do not need to communicate with each other. All of the threads in one block can share data through shared memory, and each thread has a private register and local memory. The steps of the GPU-PSO-Kriging algorithm are defined in Algorithm 1.
Figure 5. CUDA parallel computing model.



Algorithm 1 The Steps of the GPU-PSO-Kriging Algorithm
Input: ω, c1, c2, iter_max, Value_threshold
Steps:
1. Initialize the velocities and positions of all the particles randomly on the CPU side; the CPU transmits the particle information to the GPU;
2. Each particle computes its velocity and position by using Equations (6) and (7) on the GPU in parallel;
3. Compute the fitness function value by Equation (8) to update p_i,d on the GPU in parallel;
4. On the CPU side, update p_g,d; if iter_max or Value_threshold is reached, output the optimized c, C_0, a and the algorithm finishes; otherwise return to step 2.

Data Modeling
Data modeling is an important process in 3DVF4UDR. As shown in Figure 6, the interpolated geological data can be sliced at any 3D plane, and geological information can be recognized by pickup lines, which can represent closed or non-closed areas. Afterwards, the pickup lines are constructed into different 3D geohazard bodies by using the proposed layer-constrained TIN (LC-TIN) algorithm. The basic conceptions are defined in Table 1, and the LC-TIN algorithm is described in Algorithm 2, which gives the construction process of the geohazard bodies. We assume that the data points are evenly distributed on each layer. Given a fixed number of total data points, the computing time of LC-TIN decreases as the number of layers, M, increases, because the number of data points on each layer, v, decreases. Thus, the complexity is O(Mv^2), and M < v is the most common case. For unevenly distributed data points, the running time has been tested to be consistent with O(Mv^2) [49].
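The layer-to-layer connection idea can be illustrated with a classic shortest-diagonal stitching rule between two adjacent contour layers. This is only an illustration of the principle: the constraints and tie-breaking of the published LC-TIN [49] are not reproduced here.

```python
# Minimal sketch of stitching the band between two layer contours with
# triangles: always advance on the contour whose next vertex yields the
# shorter diagonal (classic shortest-diagonal rule). The published
# LC-TIN [49] adds layer constraints not reproduced here.
import math

def stitch_rings(lower, upper):
    """Return triangles (vertex triples) joining two open contour lines."""
    tris = []
    i = j = 0
    while i < len(lower) - 1 or j < len(upper) - 1:
        advance_lower = j == len(upper) - 1 or (
            i < len(lower) - 1
            and math.dist(lower[i + 1], upper[j]) <= math.dist(lower[i], upper[j + 1])
        )
        if advance_lower:
            tris.append((lower[i], lower[i + 1], upper[j]))
            i += 1
        else:
            tris.append((lower[i], upper[j + 1], upper[j]))
            j += 1
    return tris

lower = [(0, 0, 0), (1, 0, 0), (2, 0, 0)]
upper = [(0, 0, 1), (1, 0, 1), (2, 0, 1)]
tris = stitch_rings(lower, upper)
print(len(tris))  # 4 triangles stitch a band of 3 + 3 vertices
```

Each stitching pass over one pair of layers touches every vertex of both contours at most once; repeating it for all M layer pairs is what produces the O(Mv^2)-bounded behavior discussed above.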

Data Analysis
For the 3D geohazard bodies, 3DVF4UDR can analyze their attribute information. This includes the space position computation of geohazard objects, projections to any 2D coordinate plane, and the volume computation of 3D bodies. Data analysis can provide the necessary information for disaster treatment. We adopt the space geometric method to estimate the volumes of the geohazard bodies, where the whole volume is accumulated from the sub-volumes of η. The steps for computing the volume of the bth geohazard body are described in Algorithm 3.

Table 1. Basic conceptions in the LC-TIN algorithm.

Notations and Descriptions:
δ: Geological data point: δ = <ζ, γ>, where ζ = (x, y, z) gives the 3D coordinate values and γ is the attribute value of δ, for example the reflection value of an electromagnetic wave.
ϑ: Base line: δ_i is the ith δ on ϑ_bc, the base line of η_bc, i ∈ {1, 2, ..., N_ηbc}.
σ: Base edge: σ_i is the line formed by the two adjacent points δ_i and δ_(i+1) on ϑ_bc.
Φ: Pickup triangle set: the set of triangles T generated during the construction; Φ_b = {Φ_bc | c = 1, 2, ..., N_b}.
Ω: Pickup 3D body: the 3D body constructed by Φ_b is named Ω_b, and Ω = {Ω_b | b = 1, 2, ..., t}.
For road maintenance personnel, the estimation of geohazard volumes and positions can assist them in taking the necessary countermeasures. When W is large enough, the estimation can reflect the real situation. In Section 3, we have discussed the architecture of 3DVF4UDR, which adopts the MVC specification and integrates all the processes into one workflow. GPR data is organized as various self-defined files in the different workflow steps. Some novel algorithms, which construct the whole chain of underground geohazard recognition, have been described in detail: the GPU-PSO-Kriging algorithm (Section 3.2), the LC-TIN algorithm (Section 3.3) and the space geometric method (Section 3.4).
} // end for (1)
10. Accumulate all the sub-volumes V_ηbc to get the volume of the bth geohazard body.
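The exact decomposition into sub-volumes V_ηbc is defined by Algorithm 3; as an illustrative C++ sketch of one common space-geometry approach to the same task, the volume of a closed, consistently oriented triangle mesh can be accumulated as a sum of signed tetrahedra against the origin:

```cpp
#include <vector>
#include <array>
#include <cmath>
#include <cassert>

struct Vec3 { double x, y, z; };

// Signed volume of the tetrahedron (origin, a, b, c): dot(a, cross(b, c)) / 6.
static double signedTetraVolume(const Vec3& a, const Vec3& b, const Vec3& c) {
    return (a.x * (b.y * c.z - b.z * c.y)
          - a.y * (b.x * c.z - b.z * c.x)
          + a.z * (b.x * c.y - b.y * c.x)) / 6.0;
}

// Volume of a closed, consistently oriented triangle mesh:
// the signed contributions of all surface triangles cancel outside the body
// and sum to (plus or minus) the enclosed volume.
double meshVolume(const std::vector<std::array<Vec3, 3>>& triangles) {
    double v = 0.0;
    for (const auto& t : triangles)
        v += signedTetraVolume(t[0], t[1], t[2]);
    return std::fabs(v);
}
```

This is a sketch of the general technique, not necessarily the paper's exact sub-volume decomposition; it applies to any geohazard body whose boundary TIN is closed.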

Experimental Environment and Data
The proposed framework was implemented with C++, OpenGL, CUDA and the MFC architecture. The operating system was Windows 10 Professional (64 bit), the software development environment was VS.NET 2019, and the hardware environment was as follows: the CPU was an Intel(R) Core(TM) i7-4800MQ @ 2.70 GHz with 16 GB memory and the video card was an NVIDIA Quadro K2100M. The experimental data was acquired by our self-developed GPR instrument and the detailed information is given in Table 2, where twelve sets of samples were drawn. The unit for the coordinates is the meter.

Data Preprocessing
As shown in Figure 7, when picking up the data from each survey line and displaying it in the left "DAT AND G3D" window, the corresponding data will be visualized in a "2D view" window. All of the survey line data needs to be preprocessed before further modeling and analyzing.

In Figure 8, all of the partial survey line data (file1.dat~file12.dat) has been adopted to execute the interpolation task. The coordinate ranges of the detection area can be set as required; in this example, a 5.5 × 5.5 m square area has been set with 26 layers in the X, Y and Z directions. The platform configures different Kriging algorithms, such as ordinary Kriging, GPU-PSO-Simple Kriging and GPU-PSO-Ordinary Kriging, and Figure 9 shows how their parameters can be set.

Within the platform it is possible to choose different sampling point selection methods and configure the corresponding parameters. Three selection methods are provided: "Min Distance Sort Algorithm", "DRD Algorithm" and "D3DGC Algorithm". All of them have a "Point Number" parameter option, while the "DRD Algorithm" also has "P" and "Q" parameter options. For the Kriging algorithm itself, some common variogram models have been integrated into the platform, including the spherical, exponential, Gaussian and linear models. The platform also sets the initial values of the Kriging parameters, such as nugget, sill, range and rise, but these will be optimized by PSO if a GPU-PSO-Kriging algorithm is chosen (as shown in Figure 9). After generating the .G3D data (named K_26_26_26 in Figure 10), the data can be arranged in different layers at any coordinate plane, such as XOY, YOZ and XOZ. It can also be visualized, wholly or partially, and displayed at any angle by dragging the mouse, as shown in Figure 10.
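For reference, the four variogram model families named above can be sketched in C++ as follows. The parameterizations shown are the textbook forms with nugget c0, sill c and range a, and may differ in detail from the platform's implementation:

```cpp
#include <cmath>
#include <cassert>

// Spherical model: rises as 1.5(h/a) - 0.5(h/a)^3 and plateaus at c0 + c for h >= a.
double sphericalVariogram(double h, double c0, double c, double a) {
    if (h >= a) return c0 + c;
    double r = h / a;
    return c0 + c * (1.5 * r - 0.5 * r * r * r);
}

// Exponential model: approaches the sill asymptotically.
double exponentialVariogram(double h, double c0, double c, double a) {
    return c0 + c * (1.0 - std::exp(-h / a));
}

// Gaussian model: parabolic near the origin, asymptotic sill.
double gaussianVariogram(double h, double c0, double c, double a) {
    return c0 + c * (1.0 - std::exp(-(h * h) / (a * a)));
}

// Linear model: no sill; the slope corresponds to the "rise" parameter.
double linearVariogram(double h, double c0, double slope) {
    return c0 + slope * h;
}
```

These are the quantities whose parameters (nugget, sill, range, rise) the GPU-PSO-Kriging algorithm optimizes instead of relying on the initial values.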

Data Modeling
As shown in Figure 11, users can pick geohazard information from the 26 layers of .G3D data, either in part or as a whole. The layers containing geohazard information can be chosen as slices, and interactive operations can pick up this information as closed or unclosed shapes. After the geohazard information has been selected for all of the slices, the attributes of the different 3D geohazard bodies, which are organized by the pickup lines in the different slices, can be configured, including body name, color and texture (Figure 12).

In the proposed platform, different Delaunay criteria of the LC-TIN algorithm can be selected in order to implement the construction of the 3D geohazard bodies, where the bodies are assembled from the pickup lines layer by layer. As shown in Figure 13, when the common criterion Min Area of Circumcircle is specified, a picture of the constructed 3D geohazard bodies is displayed in the main window.
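The Min Area of Circumcircle criterion ranks candidate triangles by the area of their circumscribed circle and picks the smallest. A 2D C++ sketch of the ranking quantity, assuming the pickup-line points are projected to a plane before the test:

```cpp
#include <cmath>
#include <cassert>

const double kPi = 3.14159265358979323846;

struct Pt { double x, y; };

// Circumcircle area of a triangle: circumradius R = abc / (4K), area = pi * R^2.
// The LC-TIN construction would evaluate this for each candidate triangle
// between two adjacent layers and keep the one with the minimum area.
double circumcircleArea(const Pt& p1, const Pt& p2, const Pt& p3) {
    double a = std::hypot(p2.x - p1.x, p2.y - p1.y);
    double b = std::hypot(p3.x - p2.x, p3.y - p2.y);
    double c = std::hypot(p1.x - p3.x, p1.y - p3.y);
    double s = 0.5 * (a + b + c);
    double K = std::sqrt(s * (s - a) * (s - b) * (s - c));  // Heron's formula
    if (K <= 0.0) return INFINITY;  // degenerate (collinear) triangle: reject
    double R = a * b * c / (4.0 * K);
    return kPi * R * R;
}
```

Long, thin triangles have large circumcircles, so minimizing this quantity favors the well-shaped triangles that make the body surfaces display smoothly.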

Data Analysis
Geohazard bodies can be visualized and projected to different planes. For example, four geohazard bodies are projected onto the XOY plane in Figure 14, which can help users roughly estimate the area to be treated.
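As a minimal C++ sketch of this projection step, assuming a body is represented by its vertex list, the XOY projection can be bounded by an axis-aligned rectangle to give such a rough estimate of the affected road area:

```cpp
#include <vector>
#include <algorithm>
#include <cassert>

struct P3 { double x, y, z; };

// Axis-aligned bounds of a body's vertices projected onto the XOY plane.
struct Rect { double xmin, xmax, ymin, ymax; };

// Projection to XOY simply drops the z coordinate; the bounding rectangle
// of the projected vertices gives a conservative footprint estimate.
// Assumes a non-empty vertex list.
Rect projectToXOY(const std::vector<P3>& verts) {
    Rect r{verts[0].x, verts[0].x, verts[0].y, verts[0].y};
    for (const auto& v : verts) {
        r.xmin = std::min(r.xmin, v.x); r.xmax = std::max(r.xmax, v.x);
        r.ymin = std::min(r.ymin, v.y); r.ymax = std::max(r.ymax, v.y);
    }
    return r;
}
```

The platform's projection (Figure 14) renders the full outline rather than just a bounding box; the sketch only illustrates the coordinate-dropping idea.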


In Figure 15, the volume or surface area of the 3D bodies can be calculated by the space geometric method mentioned in Section 3.4, so users can confirm the geohazard positions and scopes in the detection areas. Moreover, the geohazard bodies can be displayed wholly or partially in 3D scenes through the visualization of those parts between the different slices, as shown in Figure 16.

Conclusions and Future Work
In this work, we have presented an interactive 3D visualization platform for underground geohazard recognition on urban roads. A self-developed GPR instrument has been used to acquire the underground road data, and 3DVF4UDR provides a workflow that integrates the whole recognition process. In the preprocessing step, two sampling point selection algorithms were adopted in the platform to choose the sampled data more accurately. In order to obtain more detailed underground data, an improved Kriging algorithm was used to interpolate the sampling data, which applies the PSO algorithm to optimize the Kriging parameters and employs the GPU to enhance the convergence speed. In the modeling step, the LC-TIN algorithm has been used so that the 3D geological bodies are displayed more smoothly and precisely. The space geometry method has been applied to estimate geohazard volumes.
For future work, there are four aspects that can be further explored. (1) The proposed 3DVF4UDR adopts the mode of human-computer interaction to recognize geohazard information based on static GPR data, because it is very difficult to differentiate geohazard characteristics automatically. For example, both pipelines and empty cavities appear as holes in the ground but only one of them is a geohazard, and it is still a challenge to distinguish between them. Future work in this area should look to detect the temporal and spatial changes of empty holes through dynamic GPR data and computer vision technologies, on the assumption that geohazards change form but pipelines do not.
(2) In order to detect the change information, a cloud platform could be established to store the large amounts of GPR data that would be necessary to cover the temporal and spatial changes of the same road section. (3) Algorithms charting geological change information could be studied more deeply. Innovative data preprocessing methods, along with the corresponding modeling and analysis technologies, could be studied further so that they continue to improve the recognition of underground geohazards. (4) More practical comparisons should be made for further validation and improvement.