Article

A Novel Method of Hyperbola Recognition in Ground Penetrating Radar (GPR) B-Scan Image for Tree Roots Detection

1 School of Technology, Beijing Forestry University, Beijing 100083, China
2 Joint International Research Institute of Wood Nondestructive Testing and Evaluation, Beijing Forestry University, Beijing 100083, China
3 Beijing Summer Palace Management Office, Beijing 100091, China
4 China State Key Laboratory of Environmental Criteria and Risk Assessment, Chinese Research Academy of Environmental Sciences, Beijing 100012, China
* Author to whom correspondence should be addressed.
Forests 2021, 12(8), 1019; https://doi.org/10.3390/f12081019
Submission received: 9 June 2021 / Revised: 14 July 2021 / Accepted: 26 July 2021 / Published: 30 July 2021

Abstract: Ground penetrating radar (GPR), a novel nondestructive testing (NDT) technology, has been adopted to explore the spatial position and structure of tree roots. Owing to the complexity of soil distribution and the randomness of root position in the natural environment, it is difficult to locate roots in the GPR B-Scan image. In this study, a novel method for root detection in the B-Scan image that considers both the multidirectional features and the symmetry of the hyperbola was proposed. Firstly, a mixed dataset of B-Scan images was employed to train Faster RCNN (Regions with CNN features) to obtain potential hyperbola regions. Then, the peak area and its connected region were filtered from the four directional gradient graphs within each proposed region, and a symmetry test was applied to segment intersecting hyperbolas. Finally, two rounds of coordinate transformation and line detection based on the Hough transform were employed for hyperbola recognition and for estimating root radius and position. To validate the effectiveness of this approach for tree root detection, a mixed dataset was constructed, including synthetic data from gprMax as well as field data collected from the roots of 35 ancient trees and from controlled experiments with fresh grapevine. The results of hyperbola recognition and of root radius and position estimation show that our method is highly effective for root detection.

1. Introduction

As an important part of the Earth's ecosystem, trees play a positive role in controlling soil erosion and regulating climate. Most roots, which are the main organ for absorbing nutrients and for physical support, are underground [1,2]. Because of the opacity of the soil and the complexity of root spatial structure, research on roots lags far behind that on other organs of the tree. In recent years, ground penetrating radar (GPR), a nondestructive geophysical exploration technology, has been widely used in material detection, such as in minerals and civil engineering applications [3,4,5]. Compared to traditional nondestructive root detection methods such as Electrical Resistance Tomography (ERT) [6,7] and Ultrasonic Pulse Velocity analysis (UPV) [8,9,10,11], GPR is employed to explore the spatial structure of roots [12,13] because it is efficient, accurate, simple to operate, and offers high resolution. It not only promotes research on roots but also provides data for tree protection and engineering construction.
Owing to the difference in relative permittivity between tree roots and soil, roots usually appear as a hyperbola pattern [14,15] in the radar B-Scan images obtained from GPR. Root detection is therefore transformed into recognition of the hyperbola pattern in the B-Scan image. Traditional interpretation of the B-Scan image requires experts with relevant expertise, which is inefficient and difficult to generalize. In the past decade, researchers have proposed processing methods for hyperbola detection in B-Scan images based on signal processing, image processing, and machine learning techniques. These methods can roughly be summarized in three steps [16,17,18,19]: (1) pre-process the B-Scan image to reduce the influences of noise, system effects, and other factors; (2) segment the image to search for potential hyperbola regions; and (3) distinguish hyperbolas from the candidate areas.
In the actual environment, B-Scan images often contain considerable noise caused by the randomness of the soil distribution, the radar hardware, and wave interactions. These factors complicate hyperbola recognition. Thus, signal processing and image processing methods such as direct wave removal, filtering, and gain adjustment are applied to preprocess the B-Scan image. Wen et al. [20] presented the shearlet transform to denoise the B-Scan image and achieved higher scores in several image evaluation criteria, including the peak signal-to-noise ratio (PSNR), the signal-to-noise ratio (SNR), and the edge preservation index (EPI), than the wavelet, contourlet, and curvelet [21] transforms. To obtain regions that might contain a hyperbola, researchers [22,23,24] adopted manual thresholds, boundary thresholds, and edge detection to convert the preprocessed image [25] after signal processing [26,27] into a binary image. To detect the hyperbola region with machine learning, Mass et al. [28] adopted Viola-Jones (VJ) [29] to extract hyperbola regions, and Pasolli et al. [16,30] designed a method based on the Genetic Algorithm (GA) and adopted a Support Vector Machine (SVM) to classify the binary regions. With growing data volumes, traditional learning methods based on manual features show their shortcomings, while deep learning, which requires no predefined features, has developed rapidly. Several studies [31,32,33,34,35,36,37] proposed hyperbola detection methods with Convolutional Neural Networks (CNN) [38]. Pham et al. [39] and Lei et al. [40] employed Faster RCNN (Regions with CNN features) [41], a two-stage object detection framework based on CNN. To separate and recognize hyperbolas from the extracted regions, Dou et al. [42] proposed the column connection clustering (C3) algorithm to segment hyperbolas from the binary image and classified the segments with a neural network. Zhou et al. [23] considered the downward-opening characteristic of the hyperbola and proposed an open-scan clustering algorithm (OSCA) to obtain hyperbola point sets, classified the sets by a hyperbola feature, and finally used the restricted algebraic-distance-based fitting (RADF) algorithm to fit a hyperbola by calculating algebraic distance. Lei et al. [40] proposed the double cluster seeking estimate (DCSE) and column-based transverse filter points (CTFP) methods to separate the hyperbola area and fitted hyperbolas within the potential regions proposed by Faster RCNN. In other studies [22,24,26,43,44,45,46], Hough transform and Least Squares (LS) methods redesigned around the hyperbola characteristics were used to detect the hyperbola. Liang et al. [47] estimated root diameter on gprMax data.
Because of changes in relative permittivity underground, there are many visible response areas in the radar B-Scan image. In on-site B-Scan images, areas such as the air–soil and soil–root boundaries show obvious responses. Previous methods made use of the change of the A-Scan signal at a single detection point in the vertical direction but ignored the differences between adjacent detection points in the horizontal direction. As for separating intersecting hyperbolas, these methods focused on connectivity and opening direction without considering the symmetry of the hyperbola. In addition, the results of this framework mainly depend on binary images: the general threshold calculation can be influenced by outliers, which causes the binary images to ignore some local changes and lose the corresponding information.
In this study, we paid attention to the characteristics in both the detection direction and the scanning direction and introduced symmetry to separate multiple crossed hyperbolas. A novel method of hyperbola recognition for root detection in the GPR B-Scan image was proposed. This method can be divided into three parts: hyperbola region detection, connected region extraction, and hyperbola recognition. In the first part, a mixed dataset comprising synthetic data from the gprMax toolbox [48] and field data from GPR was used to train the Faster RCNN model with three different backbone networks; the hyperbola regions were then obtained from the trained models. Secondly, image gradients and connected component analysis were employed to obtain the peak area and the tail areas of each hyperbola, and after matching peak and tail areas, the connected region for each peak area was segmented by symmetry. Finally, line detection with the Hough transform was applied after coordinate transformations based on the key point of the hyperbola, and the radius and position of the root were estimated from the parameters of the simplified equation.

2. Theoretical Basis of the Root Detection Method

The radar B-Scan image is composed of multiple A-Scan signals obtained by continuously scanning the tangent plane at a certain position of the tree root system using GPR. From a regional distribution perspective, the B-Scan image reflects the distribution of media on the scanning plane, such as the upper region of the air–soil boundary and the hyperbola region of the root system at the corresponding position. Due to the different relative permittivity values and positions of these media, the corresponding areas in the radar data not only appear at different time-depths but also show varying degrees of change. In the B-Scan image, these regions have different changing characteristics in the scanning direction at the same time. Figure 1 shows an example of a synthetic model generated by gprMax, its B-Scan image, and the corresponding amplitude graph. In this example, the domain is 0.6 m in depth and 2.5 m in length, the spatial discretization in the x, y, and z directions is 0.002 m, and the radius of the roots is 0.01 m. The relative permittivity values of the domain and root are 6 and 12, respectively [49,50,51]. As for the GPR set-up, the antenna frequency is 900 MHz and the sampling number is 512. In Figure 1a, there are five roots with the same radius in the geometric model: two of the roots are farther apart and the other three are closer together. As shown in Figure 1b, the air–soil interface appears as a direct wave and the roots show hyperbola patterns; there are two relatively independent hyperbolas and three intersecting hyperbolas. In Figure 1c, the B-Scan image is converted into a 3D amplitude graph to observe the characteristics from the perspective of response change. By comparing the three graphs, two interesting conclusions can be drawn. The first is that the air–soil and soil–root boundaries show obvious amplitude changes at the corresponding positions in the detection direction. The second is that, in the same area, the amplitude change in the scanning direction is insignificant compared with that in the detection direction. In detail, the electromagnetic wave emitted by the GPR travels from the air into the soil and from the soil into the root in the vertical direction; because it passes through different media, an obvious response appears at the boundary of the two media. At the same time, because the distance between the radar and the boundary is the same in the scanning direction for each A-Scan, the amplitude changes appear at about the same detection time.
To capture these changes, the image gradient, which describes the change of an image in a certain direction, was adopted. A novel method for root detection in the GPR B-Scan image was proposed based on the difference between the changes in the horizontal and vertical image gradients. Figure 2 shows the flow chart of our method, comprising three stages: hyperbola region detection, connected region extraction, and hyperbola recognition. In the first stage, Faster RCNN, an object detection framework, was employed to detect potential hyperbola regions. Based on each proposed region, image gradient calculations and symmetry tests were performed in the next stage. In the last stage, coordinate transformations were performed twice for each symmetric connected region using the key point at its peak, the downward opening was checked, and the Hough transform was adopted to fit the hyperbola and estimate the radius and position of the root. In this section, all three phases are described in detail.

2.1. Hyperbola Region Proposition via Faster RCNN

Every scanning tangent plane of GPR often contains several roots at random positions owing to the complexity of the root distribution. There are multiple hyperbolas in the B-Scan image, but they occupy only a small part of the whole image. Some approaches for hyperbola recognition operate on the whole B-Scan image directly, so much computing time is spent on areas without hyperbolas. If the calculation could focus on the hyperbola regions, the effectiveness of recognition would improve significantly. However, it is hard to fix the window size and stride parameters of the traditional sliding-window approach because of differences in the hyperbola pattern, and the template matching method is often constrained by the hyperbola template library. In recent years, with the development of deep learning, neural-network-based algorithms for object detection such as RCNN, YOLO, SSD, and their variants [41,52,53,54,55,56,57,58] have made breakthroughs. Although one-stage approaches have impressive detection speed, they are slightly less accurate than two-stage methods because they adjust the proposed regions only once instead of twice. With the introduction of the region proposal network (RPN), the generation of proposal regions was accelerated by the GPU, which means the two-stage approach can achieve nearly real-time detection.
For hyperbola detection in B-Scan images, it is necessary to reduce detection time as much as possible while ensuring high accuracy. Thus, we adopted Faster RCNN, a two-stage framework, with three different backbone networks to detect the hyperbola region. Figure 3 shows the framework for obtaining hyperbola regions with Faster RCNN. For feature extraction, we adopted three backbone networks: VGG16, ResNet50, and ResNet101. The feature map was fed into the RPN and the regression network, and finally the proposal boxes for hyperbolas with location and confidence were generated.

2.2. Peak and Tail Extraction of the Hyperbola

In the B-Scan image, when the radar wave passes through the interface of two media with different relative permittivity, there is an obvious amplitude fluctuation. The soil surface shows a transverse zonal region, and the root in the scanning tangent plane usually shows a hyperbola shape. Figure 1 shows a root model from simulated data, with its radar B-Scan image and amplitude diagram obtained from the gprMax toolbox. From the vertical perspective, the B-Scan image is composed of many A-Scan signals; in each A-Scan signal, there is obvious amplitude fluctuation at the detection times corresponding to the air–soil and soil–root interfaces. From the horizontal perspective, the B-Scan image is viewed at different moments: there is little change at the moments corresponding to the air–soil interface and the small area at the top of the soil–root boundary, but obvious changes at the times corresponding to other parts of the soil–root boundary.

2.2.1. Longitudinal and Transverse Gradient Graphs Calculation

The image gradient describes the change in one direction of an image by convolving the image with a pre-defined descriptor. Although large descriptors can use multiple pixels in adjacent positions and reduce the influence of noise points, too large a template fails to capture the difference between two adjacent pixels. Figure 4 shows the pre-defined simple difference descriptors and an example of calculating the gradient graph in four directions. In Figure 4a, four different descriptors are shown. The example image A is shown by pixel values in Figure 4b. Figure 4c–f show the upward, downward, left, and right gradient graphs after the convolution operation with the corresponding descriptor.
In this study, the four simple small descriptors in Figure 4 were pre-defined to calculate the differences in the up, down, left, and right directions of the B-Scan image. To make use of these features, the gradient images in the four directions, acquired by convolution, were combined into longitudinal and transverse gradient graphs after binarization by gradient value, as shown in Figure 5. In the longitudinal gradient graph, areas corresponding to the air–soil and soil–root interfaces exhibit non-zero gradients; there are also some discrete regions caused by noise. In the transverse graph, the gradients of the air–soil boundary and the small area at the top of the soil–root boundary approach zero, while the other regions of the soil–root interface still have non-zero gradients. Through morphological analysis, the air–soil interface approximates a straight zonal region in the scanning direction: in the vertical direction, amplitude fluctuation is caused by the difference between air and soil, while in the horizontal direction, the amplitudes are approximately equal at the same time. Thus, the air–soil interface appears in the vertical gradient graph but not in the transverse one. The scanning cross-section of the root is approximately circular, and the amplitude fluctuations at different times in several adjacent A-Scan signals combine into a hyperbola shape in the B-Scan image. However, viewed in the horizontal direction, the small area at the top of the root is approximately a horizontal straight line, similar to the air–soil interface; hence, the gradient is close to zero in that area and non-zero in the other regions of the hyperbola. Consequently, the whole hyperbola shape is found in the vertical gradient graph, but only the two tails of each hyperbola appear in the horizontal gradient graph, which lacks the small top area.
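The construction of the two binary gradient graphs can be sketched as follows. The exact kernel values are our assumption of the paper's small difference descriptors; only the overall scheme (four directional differences, combined pairwise and thresholded) follows the text.

```python
# Sketch: four directional difference descriptors (cf. Figure 4), combined
# into binary longitudinal and transverse gradient graphs. Kernel values are
# illustrative simple differences, assumed from the description above.
import numpy as np
from scipy.signal import convolve2d

KERNELS = {
    "up":    np.array([[-1], [1]]),   # vertical difference (detection direction)
    "down":  np.array([[1], [-1]]),
    "left":  np.array([[-1, 1]]),     # horizontal difference (scanning direction)
    "right": np.array([[1, -1]]),
}

def gradient_graphs(bscan, threshold):
    """Return binary longitudinal (vertical) and transverse (horizontal) graphs."""
    g = {d: convolve2d(bscan, k, mode="same", boundary="symm")
         for d, k in KERNELS.items()}
    longitudinal = (np.abs(g["up"]) > threshold) | (np.abs(g["down"]) > threshold)
    transverse = (np.abs(g["left"]) > threshold) | (np.abs(g["right"]) > threshold)
    return longitudinal, transverse
```

For a synthetic B-Scan with a horizontal boundary, only the longitudinal graph responds, mirroring the air–soil behavior described above.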

2.2.2. Connected Component Analysis

Some noise appears in both gradient graphs as discrete small areas. Some noisy regions cannot be eliminated by dilation and erosion with a small kernel, and although morphological operations with a large kernel could remove these regions, they would affect the distinct features of the hyperbola top in the longitudinal and transverse gradient graphs. Hence, connected component analysis, a morphological image processing method, was adopted to eliminate the discrete noisy regions. This method marks every region in an image that cannot be absorbed by other regions after dilation and erosion. A connected component describes a region composed of foreground pixels with the same pixel value in adjacent positions, defined by a connectivity structure descriptor. Figure 6 shows a simple example of connected component analysis with the four-connected and eight-connected structures. In Figure 6a, the two types of connected structures are shown. The example image (the yellow area is the foreground and the rest is the background) is shown in Figure 6b. Figure 6c,d show the results of connected component analysis with the four-connected and eight-connected structures, respectively; the same color indicates the same region. There are five regions in Figure 6c and only one region in Figure 6d, since the eight-connected structure also includes diagonally adjacent points. In this study, the eight-connected structure was used for connected component analysis of the vertical and horizontal gradient graphs, and the threshold for eliminating noisy regions was calculated as the average area of all connected components by Equation (1). The result after denoising by connected component analysis is shown in Figure 7.
Area_avg = (1/n) Σ_{i=1}^{n} area_i (1)
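The denoising step can be sketched with scipy's labeling routine: label eight-connected components, compute the average area of Equation (1), and keep only the components at or above it. The "at or above average" rule is our reading of the threshold.

```python
# Sketch: discrete noise removal via eight-connected component analysis,
# keeping components whose area is at least the average area of Equation (1).
import numpy as np
from scipy import ndimage

def remove_small_components(binary):
    structure = np.ones((3, 3), dtype=int)          # eight-connected structure
    labels, n = ndimage.label(binary, structure=structure)
    if n == 0:
        return binary.copy()
    areas = ndimage.sum(binary, labels, index=np.arange(1, n + 1))
    keep = np.flatnonzero(areas >= areas.mean()) + 1  # labels to retain
    return np.isin(labels, keep)
```

A large blob survives while isolated noise pixels, whose area falls below the average, are discarded.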

2.2.3. Symmetry Test

In the radar B-Scan image, the root usually shows a hyperbolic shape with symmetry, owing to the principle of GPR detection: for each root, the corresponding hyperbola is symmetric about the A-Scan through its highest point. Comparing the longitudinal and transverse gradient graphs, the top area of each hyperbola appears differently: in the vertical direction the hyperbolas keep their complete shape, while in the horizontal direction only the two tail regions are preserved without the top area. The top area was obtained by an image linear operation and matched with the connected components by position. The symmetry test was then applied to separate each corresponding connected component into independent symmetric regions, one for each matched top area. Figure 8 shows the results of this test on a simulated image: two independent hyperbolas were preserved, and the three intersecting hyperbolas were separated into three independent hyperbolas according to their three top areas. Connected component analysis was also employed in the intermediate process to retain the symmetric region determined by the corresponding top area.
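One way to realize the symmetry test is to keep, for a given vertex column, only the pixels of a connected region that have a mirror pixel on the other side of that column. This is our reading of the test, not the authors' exact criterion.

```python
# Sketch of a symmetry test: given a binary connected region and the column
# of a matched top (vertex) area, retain only pixels whose mirror image about
# the vertex column is also part of the region. Assumed interpretation.
import numpy as np

def symmetric_part(region, vertex_col):
    rows, cols = np.nonzero(region)
    mirror = 2 * vertex_col - cols             # reflect columns about the vertex
    h, w = region.shape
    ok = (mirror >= 0) & (mirror < w)          # mirror must stay inside the image
    ok[ok] &= region[rows[ok], mirror[ok]]     # and must itself be foreground
    out = np.zeros_like(region)
    out[rows[ok], cols[ok]] = True
    return out
```

Applied once per matched top area, this splits a component formed by intersecting hyperbolas into one symmetric region per vertex.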

2.3. Key Point Coordinate Transformation for Hyperbola Recognition

After separating the symmetric regions, Least Squares or the Hough transform can be employed to fit the hyperbola from the regional points. The former is usually used for equation fitting with a known point set of a single hyperbola, with several restrictions: (1) the point set must follow a hyperbola pattern; (2) the point set can contain only one hyperbola; and (3) the result is easily affected by noise points. The Hough transform determines the equation by letting points vote and cluster in a parameter space. It does not require the morphological characteristics of the point set to be determined in advance, and the set may be composed of multiple hyperbolas; a threshold on the number of votes in parameter space is used to determine whether hyperbolas exist and how many. Previous hyperbola detection methods using the Hough transform simplified the hyperbola equation based on the GPR imaging principle and voted in a three-dimensional Hough space, which entails a large amount of calculation and high computational complexity because of the high parameter dimension; moreover, discretizing the infinite continuous parameter domain introduces deviation into the hyperbola recognition. In this study, the separated symmetric connected region and its top area were obtained by exploiting the difference of the hyperbola between the longitudinal and transverse gradient graphs. Therefore, the location of the vertex was used as prior knowledge to further simplify the hyperbola equation and detect the hyperbola in the separated symmetric connected region. The simplified hyperbola equation was transformed into a linear equation by a coordinate transformation based on the vertex location. Hyperbola detection on the original point set was thus transformed into line detection in the new coordinates, which reduces the dimension of the parameter space and converts the infinite continuous parameter domain into a finite one via the polar coordinate transformation.
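The voting idea behind Hough line detection can be sketched in a few lines: each point votes for all (θ, ρ) cells consistent with the normal form ρ = x·cosθ + y·sinθ, whose parameter domain is finite. The grid sizes below are arbitrary illustrative choices.

```python
# Minimal Hough line detector sketch: points vote in a discretized
# (theta, rho) space using the normal form rho = x*cos(theta) + y*sin(theta),
# which keeps the parameter domain finite. Grid resolutions are illustrative.
import numpy as np

def hough_peak(points, n_theta=180, n_rho=200):
    """Return the (theta, rho) cell that receives the most votes."""
    pts = np.asarray(points, dtype=float)
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rho_max = np.abs(pts).sum(axis=1).max() + 1e-9   # bound on |rho|
    acc = np.zeros((n_theta, n_rho), dtype=int)      # vote accumulator
    for x, y in pts:
        rhos = x * np.cos(thetas) + y * np.sin(thetas)
        idx = np.round((rhos + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
        acc[np.arange(n_theta), idx] += 1
    ti, ri = np.unravel_index(np.argmax(acc), acc.shape)
    return thetas[ti], ri / (n_rho - 1) * 2 * rho_max - rho_max
```

For collinear points the accumulator peaks at the parameters of their common line, up to the discretization error noted above.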

2.3.1. Down Opening Check with Key Point

The root usually appears as a circle in the scanning plane and as a hyperbola pattern in the corresponding GPR B-Scan image. After extracting the symmetric connected regions, the feature points of the hyperbola are selected by a downward-opening check for each region and used for hyperbola fitting. Near its vertex, a hyperbola is similar to a parabola and shares some points with it. Therefore, parabola detection was adopted for the downward-opening check to obtain key points where the hyperbola and parabola coincide. Identical transformations were applied to the parabola Equation (2), and a linear Equation (4) was obtained based on the axis of symmetry with Equation (3).
y = a_0 + a_1 (x − a_2)^2 (2)
X = (x − a_2)^2 (3)
y = a_0 + a_1 X (4)
Figure 9 shows some examples of parabola detection by line detection with the Hough transform after one coordinate transformation. In the first column, the red curve is the parabola y = x^2, the green and yellow curves are the red curve moved right by 5 and 10 units, respectively, and the blue curve is the green curve moved up by 5 units. The second column shows the result after coordinate transformation with a_2 = 0, a_2 = 5, and a_2 = 10. The third column shows the result of line detection by the Hough transform on the point sets in the second column; there, the blue curve is the original curve and the red curve is the detected curve. The results show that parabolas with different symmetry axes were fitted after coordinate transformation with the corresponding symmetry axis, and parabolas with the same symmetry axis were fitted after transformation with the common axis. In the downward-opening check on the B-Scan image, the parameters a_0 and a_1 should be positive.
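The check of Equations (2)–(4) can be sketched numerically. For brevity the sketch uses a least-squares line fit in place of the Hough vote; the substitution X = (x − a_2)^2 and the positivity test on a_0 and a_1 follow the text.

```python
# Sketch: downward-opening check via the substitution X = (x - a2)^2,
# which turns the parabola of Equation (2) into the line of Equation (4).
# Least squares stands in for the Hough vote here, purely for brevity.
import numpy as np

def downward_opening_check(x, y, a2):
    """Fit y = a0 + a1*X and report whether both coefficients are positive
    (downward opening in image coordinates, where y grows with depth)."""
    X = (x - a2) ** 2
    a1, a0 = np.polyfit(X, y, 1)
    return a0, a1, bool(a0 > 0 and a1 > 0)

x = np.linspace(-3.0, 3.0, 50)
y = 1.0 + 2.0 * (x - 0.5) ** 2       # parabola with symmetry axis x = 0.5
a0, a1, ok = downward_opening_check(x, y, 0.5)
```

With the correct axis a_2 the transformed points are exactly collinear, so the recovered a_0 and a_1 match the generating parameters.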

2.3.2. Hyperbola Recognition with Key Point and Root Parameters Estimation

The root producing the hyperbolic signature in the B-Scan image was formulated as a geometric model [19,59,60], as shown in Figure 10. Equation (5) expresses the relation among the scanning time t, the horizontal position x, the propagation velocity v, and the root radius R.
((t + 2R/v) / (t_0 + 2R/v))^2 − ((x − x_0) / ((v/2) t_0 + R))^2 = 1 (5)
where (x_0, t_0) are the coordinates of the target and (x, t) are the coordinates of points on the hyperbola. Equation (6) is the standard equation of the hyperbola. After some simple derivation from Equation (5), the relations in Equations (7) and (8) are obtained.
((y − y_0) / a)^2 − ((x − x_0) / b)^2 = 1 (6)
a = t_0 + 2R/v (7)
b = (v/2)(t_0 + 2R/v) (8)
Once the hyperbola parameters a and b are found, the root radius, the depth, and the propagation velocity of the electromagnetic wave can be calculated simultaneously by Equations (9)–(11).
R = b (a − t_0) / a (9)
depth = v t_0 / 2 = b t_0 / a (10)
v = 2b / a (11)
Substituting a and b into Equation (5) yields a new Equation (12) without the parameters v and R.
((t + a − t_0) / a)^2 − ((x − x_0) / b)^2 = 1 (12)
In this equation, there are still four parameters: a, b, t_0, and x_0. Former methods treated this as an optimization problem and solved for the optimal solution. The parameters t_0 and x_0 are the coordinates of the hyperbola vertex, and in this study the top area of the hyperbola obtained from the vertical and horizontal gradient graphs provides them as prior knowledge. So, let
X = (x − x_0)^2, y = t − t_0, i.e., t = y + t_0 (13)
and apply some identity transformations to obtain Equations (14) and (15).
(y + a)^2 = a^2 + (a^2/b^2) X (14)
y^2 = (a^2/b^2) X − 2a y (15)
If y > 0, dividing Equation (15) by y yields Equation (16), which can be regarded as the linear Equation (17).
y = (a^2/b^2)(X/y) − 2a (16)
y = a_0 + a_1 Z, where Z = X/y, a_1 = a^2/b^2, a_0 = −2a (17)
After the transformation with the key point, an equation with four parameters was simplified into a linear equation with only two. The line detection method with the Hough transform was adopted to estimate the two key parameters of Equation (17). A simple example of hyperbola fitting with the Hough transform is shown in Figure 11. In Figure 11a, the red curve is the simple hyperbola y^2/a^2 − x^2/b^2 = 1 with a = 2 and b = 1, and the blue curve is the red curve moved down by 1.2 units. Figure 11b,c show the intermediate results of the downward-opening check and the hyperbola fitting for the blue curve in Figure 11a; in these two figures, the red curve is the one to be fitted and the blue curve is the fitting result. The results show that a fine hyperbola fit and accurate parameter estimation were obtained with our method.
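The recovery of the root parameters from the fitted line can be sketched as a round trip through Equations (7)–(11) and (17). The numeric values are illustrative units chosen for the check, not real radar data.

```python
# Sketch: recovering (R, depth, v) from the fitted line y = a0 + a1*Z of
# Equation (17), where a0 = -2a and a1 = a^2/b^2, then applying
# Equations (9)-(11). Units below are illustrative, not measured data.
import numpy as np

def root_parameters(a0, a1, t0):
    a = -a0 / 2.0              # from a0 = -2a
    b = a / np.sqrt(a1)        # from a1 = a^2 / b^2
    v = 2.0 * b / a            # Equation (11)
    R = b * (a - t0) / a       # Equation (9)
    depth = b * t0 / a         # Equation (10): v * t0 / 2
    return R, depth, v

# round trip with known ground truth
v_true, R_true, t0 = 0.1, 0.02, 5.0
a = t0 + 2.0 * R_true / v_true       # Equation (7)
b = (v_true / 2.0) * a               # Equation (8)
R, depth, v = root_parameters(-2.0 * a, (a / b) ** 2, t0)
```

The recovered radius and velocity equal the ground-truth values, confirming that the linearized form preserves the physical parameters.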
This section has presented the novel method for root detection in the GPR B-Scan image, which considers the horizontal and vertical characteristics and the symmetry of the hyperbola. The results of theoretical analysis and numerical validation show that this method can not only recognize a single hyperbola but also separate multiple intersecting hyperbolas. To test and verify the effectiveness of our method for tree root detection, data were collected from standing tree root systems, a grapevine controlled experiment, and numerical simulation.

3. Materials and Methods

3.1. Materials

Our study site was located in the Summer Palace, a landscape with a temperate monsoon climate in the western suburbs of Beijing. It contains hundreds of ancient trees with huge and complex roots, and tree root detection by GPR can provide important data support for the protection of these ancient trees. In our experiments, 35 trees located in flat meadows and open environments were selected, including willows, pines, and cypresses. Grapevine, which has a relative permittivity similar to that of roots, was chosen for the controlled experiment.

3.2. Field Detection of Standing Trees and Embedded Roots

According to the characteristics of the root system and previous experience in tree detection, the TRU tree radar (SIR3000T, GSSI, USA) was employed. The tree radar detection system combined a medium-coupled single-polarized antenna with a data collector. The trace interval and the number of samples were 5 mm and 512, respectively. In the field experiment, the antenna frequency was 900 MHz and the detecting depth was 0.6 m. Because lateral roots radiate outward from the taproot, the root orientation is usually unknown before digging. To ensure a quasi-perpendicular intersection with most of the roots, a loop detection method was adopted [12]. An example of tree root detection is shown in Figure 12. In our detection, the tree trunk was regarded as the center of the circles in the detection design. The detection started from due north and ran counterclockwise. The tracks were circles around the trunk, with radii increasing by 0.4 m each time from 0.6 m up to the longest extent of the crown. To date, we have detected the root systems of 35 ancient trees and collected 409 DZT files.
Furthermore, to verify the validity of both the field experimental data and the simulation data, a controlled experiment was designed and conducted at the Practice Base (40°29″ N, 116°20′27″ E) of Beijing Forestry University. This region has a temperate continental monsoon climate with four distinct seasons, and its covering is mainly fine sand. There is no significant difference in permittivity between the fine sand and soil, and the physical properties are uniform. The experimental field is 2 m long, 1 m wide, and 0.6 m deep, as shown in Figure 13, which depicts an experiment with roots of different radii. The depth and spacing of these roots are 0.3 m and 0.5 m, respectively. The detection direction was perpendicular to the embedded roots, and the track was a straight line. Since the aim of this paper is to detect the roots of living standing ancient trees, freshly excavated grapevine was chosen for the controlled experiment to approximate real detection as closely as possible.

3.3. Numerical Simulation of Root with Random Position

To enumerate multiple positional relationships of roots, the gprMax 3.0 toolbox, an open-source package that simulates electromagnetic wave propagation using the finite-difference time-domain (FDTD) method, was employed to generate the synthetic images. The relative permittivities of sand and of soil with 20 percent water content are about 4 and 10, respectively, and the relative permittivity of the root is about 12 [49,50,61]. GPR B-Scan imaging performs best in dry sandy soils but degrades seriously in soils with high water and clay contents [62]. Soil in the natural environment (not after rain) is relatively homogeneous and biased towards dry soil, and root detection using GPR mainly exploits the difference in relative permittivity between the soil and the root system. The detection depth and GPR resolution, including the minimum detectable size as well as the capacity to discriminate between two closely spaced targets [62], are influenced by the antenna [12]. A high-frequency antenna has high resolution but shallow detection depth, whereas a low-frequency antenna has deep penetration but low resolution. This paper used a 900 MHz antenna, for which the maximum detection depth is about 1 m [63], the diameter of the smallest detectable root is about 2 cm [64], and the smallest detectable interval between closely located roots is between 10 and 20 cm [65].
In order to simulate the real environment as closely as possible, the base model shown in Figure 14 includes air and soil regions with the following settings: the detection domain was 0.6 m in vertical depth, 6 m in lateral length, and 0.002 m in section thickness; the air layer was 0.04 m; the antenna frequency was 900 MHz; the relative permittivities of the root and soil were 11–13 and 6, respectively; and the start positions of the transmitting and receiving antennas were 0.004 m and 0.008 m in the lateral direction. Several cylinders served as root models, with radii drawn randomly from 0.01 m to 0.035 m and locations drawn randomly within the soil box such that the distance between any two was greater than five times the radius. After simulation, salt-and-pepper noise and Gaussian blur were applied to make the synthetic images resemble the real ones.
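As an illustration, the base model above can be written out as a gprMax input file. The command syntax follows the gprMax documentation, but the helper names and exact values here are a sketch of, not a copy of, the authors' setup (a single root permittivity of 12 stands in for the 11–13 range, and the minimum-spacing rule uses the larger of the two radii).

```python
import random

def place_roots(n, width=6.0, depth=0.6, rmin=0.01, rmax=0.035, seed=0):
    """Randomly place root cylinders; any two centers must be farther
    apart than five times the larger radius (the paper's spacing rule)."""
    rng = random.Random(seed)
    roots = []
    while len(roots) < n:
        r = rng.uniform(rmin, rmax)
        x = rng.uniform(0.2, width - 0.2)
        y = rng.uniform(0.1, depth - 0.1)
        if all(((x - x2) ** 2 + (y - y2) ** 2) ** 0.5 > 5 * max(r, r2)
               for x2, y2, r2 in roots):
            roots.append((x, y, r))
    return roots

def write_gprmax_model(path, roots):
    """Write a minimal gprMax .in file for the base model described above
    (6 m x 0.6 m soil box, 0.04 m air layer, 900 MHz Ricker source)."""
    lines = [
        "#domain: 6.0 0.64 0.002",
        "#dx_dy_dz: 0.002 0.002 0.002",
        "#time_window: 15e-9",
        "#material: 6 0 1 0 soil",         # relative permittivity 6
        "#material: 12 0 1 0 root",        # representative root permittivity
        "#box: 0 0 0 6.0 0.6 0.002 soil",  # soil below a 0.04 m air layer
        "#waveform: ricker 1 900e6 src",
        "#hertzian_dipole: z 0.004 0.62 0 src",
        "#rx: 0.008 0.62 0",
        "#src_steps: 0.005 0 0",           # 5 mm trace interval
        "#rx_steps: 0.005 0 0",
    ]
    lines += [f"#cylinder: {x:.3f} {y:.3f} 0 {x:.3f} {y:.3f} 0.002 {r:.3f} root"
              for x, y, r in roots]
    with open(path, "w") as f:
        f.write("\n".join(lines) + "\n")

write_gprmax_model("root_model.in", place_roots(5))
```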
In this section, a mixed dataset was prepared to verify the effectiveness and accuracy of the method for root detection in B-Scan images. Image-processing operations such as mean filtering, median filtering, Gaussian filtering, gain adjustment, dilation, and erosion were adopted to preprocess the B-Scan images.
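A few of these preprocessing steps can be sketched in NumPy; this is a simplified stand-in for the actual pipeline, with illustrative kernel sizes and gain profile.

```python
import numpy as np

def mean_filter(img, k=3):
    """Simple k x k mean filter via padded sliding windows."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def gain_adjust(img, g=2.0):
    """Depth-dependent gain: amplify later (deeper) rows to offset attenuation."""
    gain = np.linspace(1.0, g, img.shape[0]).reshape(-1, 1)
    return img * gain

def dilate(mask, k=3):
    """Binary dilation with a k x k square structuring element."""
    pad = k // 2
    padded = np.pad(mask, pad, mode="constant")
    out = np.zeros_like(mask)
    for dy in range(k):
        for dx in range(k):
            out |= padded[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out
```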

4. Results and Discussion

4.1. Analysis of Hyperbola Region Proposition

The framework of Faster RCNN was split into three parts: the backbone, the RPN, and the tail networks. VGG16, ResNet50, and ResNet101 were employed as backbone networks for feature extraction. In order to train the whole network effectively, the weights of these networks trained on the ImageNet dataset were loaded to initialize the backbone. Pascal VOC2007, a classical object detection dataset with 20 classes, was adopted to pre-train the whole model. After that, our mixed B-Scan dataset, after random data augmentation, was used to fine-tune the weights trained on Pascal VOC2007. The data augmentation techniques include image scaling, stretching, flipping, clipping, and gain adjustment. The whole program was implemented with PyTorch, an open-source deep learning framework. The training phase was run on a server equipped with an Nvidia Titan XP GPU, and the test phase on a personal laptop with an Intel i7-9750H and an Nvidia GTX1660Ti. For model evaluation, mean average precision (mAP) and frames per second (FPS) were employed to measure precision and time consumption. The mAP is commonly used to evaluate the precision of an object detection algorithm by considering both the confidence and the intersection over union (IoU) of the detections, while the FPS indicates the detection speed, which depends not only on algorithm efficiency but also on the hardware platform.
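The IoU that underlies the mAP metric is simple to state; a minimal sketch with boxes given as (x1, y1, x2, y2):

```python
def iou(box_a, box_b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Under the Pascal VOC protocol, a detection counts as a true positive
# when its IoU with a ground-truth box is at least 0.5.
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # → 0.3333...
```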
The whole mixed dataset includes 1442 images (1160 synthetic and 282 on-site). The dataset was divided into 80% for training, 10% for validation, and the remaining 10% for testing. The three models were each trained for 50 epochs. Figure 15 shows one on-site and one synthetic image after detection. Most hyperbolas are detected, but several remain undetected due to complex noise, weak responses, and hyperbola interaction. After training, the mAP, time per image (TPI), and FPS of the three models for hyperbola detection were calculated and compared, as shown in Table 1. As the number of network layers increases, the mAP rises but the TPI lengthens and the FPS falls: with more layers, the backbone can extract more effective features but requires more computation, so a deeper network gains precision at the cost of time. In this study, the model with ResNet101 was adopted, balancing detection accuracy against time consumption.

4.2. Analysis of Hyperbola Extraction

In this part, the simulation data was first used to verify the connected-region extraction method for segmenting intersecting hyperbolas in the B-Scan image. Since the root positions in the simulation model were generated randomly, the data produced by gprMax include many spatial relationships between hyperbolas, such as the lambda shape and the x shape. The gradient graphs in four directions and the symmetry test were adopted to segment hyperbolas in various forms, where a connected region may contain a single hyperbola or multiple intersecting hyperbolas.
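The two ingredients of this step can be sketched with NumPy. This is an illustrative simplification of the paper's four-directional gradient graphs and symmetry test, not the exact implementation; the function names and the scoring rule are assumptions.

```python
import numpy as np

def directional_gradients(img):
    """Gradient maps in four directions (up, down, left, right); the peak
    area of a hyperbola shows a strong down-then-up gradient in depth."""
    gy, gx = np.gradient(img.astype(float))
    return {"down": np.clip(gy, 0, None), "up": np.clip(-gy, 0, None),
            "right": np.clip(gx, 0, None), "left": np.clip(-gx, 0, None)}

def symmetry_score(mask, apex_col):
    """Fraction of foreground pixels whose mirror about apex_col is also
    foreground -- a simple stand-in for the paper's symmetry test."""
    rows, cols = np.nonzero(mask)
    if rows.size == 0:
        return 0.0
    mirror = 2 * apex_col - cols
    ok = (mirror >= 0) & (mirror < mask.shape[1])
    hits = mask[rows[ok], mirror[ok]]
    return hits.sum() / rows.size
```

A region whose score is close to 1 about some candidate apex column behaves like a single hyperbola; pixels that fail the mirror test are candidates for a second, intersecting hyperbola.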
Figure 16 compares the results of B-Scan image segmentation with C3, OSCA, and our method. Figure 16a shows an example image with three intersecting hyperbolas. Figure 16d shows the C3 clustering result for the connected region containing the three hyperbolas; there are eight clusters in total. The segmentation process of C3 clusters by two key definitions: the column segment and connecting elements. The output of this approach includes not only hyperbola shapes but also other shapes arising from the interaction of multiple hyperbolas, and the number of clusters depends on the number of consecutively crossed hyperbolas. Hence, a hyperbola-judging step must follow the segmentation phase to filter the clustering results. Figure 16b shows the output of the OSCA algorithm for the same connected region. OSCA considers three overlapping patterns (lambda, x, and v shapes) and segments them using two definitions, the point segment and the downward/upward opening. If there is a hole in the connected region, it is treated as an x shape, which distorts the clustering result; in this experiment, a simple trick that compares the gap between two point segments with the size of the hole was adopted to correct the shape judgment. Both C3 and OSCA are affected by the rough edge of the connected region, noise, and internal holes. Figure 16c shows the output of our symmetry test. Here, the hyperbolas on both sides keep their whole shape, but the middle one is affected by the intersections. Because the roots on both sides lie at approximately the same depth and horizontal offset relative to the middle one, some regions of the side hyperbolas are symmetric about the middle root, so the segmentation result for the middle hyperbola includes some regions from the sides. Figure 17 shows the result of hyperbola recognition on this segmentation.
All three hyperbolas were distinguished. Although there were some interference areas in the middle hyperbola, our method could still recognize it robustly.
As for the on-site data, much noise often appears in the B-Scan image, which not only causes redundant computation but also disturbs the area extraction. Therefore, for on-site B-Scan images, the segmentation stage was applied after the hyperbola region detection. Figure 18 shows the extraction of the top area and its connected symmetrical area based on the proposal regions obtained from Faster RCNN. The hyperbola region was extracted in each proposal box. At the same time, there are considerable interference areas, caused by the complexity of the underground environment, that show no downward-opening shape; thus, the downward-opening check in the hyperbola fitting phase is necessary to filter these areas out.

4.3. Analysis of Hyperbola Fitting and Information Estimation

After the hyperbola extraction, the top area and its connected symmetrical area were obtained and matched. In each top area, a key point $(x_0, t_0)$ was located; as important prior knowledge, it helps simplify the geometric model equation and transform the coordinate system. For each connected symmetrical area, the point set was too large for direct hyperbola fitting, so the percentage point method was adopted to filter the set and obtain a reduced fitting point set. Coordinate transformation and line detection with the Hough transform were then performed twice on the fitting point set, anchored at the key point. Thus, the radius and position of the root could be estimated from the corresponding hyperbola equation. The relative error (RE), calculated by Equation (18), was adopted to evaluate the accuracy of the estimation, where $V_{real}$ is the true value and $V_{pro}$ is the estimated value.
$$RE = \frac{\left| V_{real} - V_{pro} \right|}{V_{real}}$$
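Equation (18) in code form (a trivial sketch; it assumes the true value is nonzero):

```python
def relative_error(v_real, v_pro):
    """Relative error of Equation (18): |V_real - V_pro| / V_real."""
    return abs(v_real - v_pro) / v_real

# E.g. a true depth of 0.30 m estimated as 0.27 m gives RE = 10%.
print(f"{relative_error(0.30, 0.27):.1%}")  # → 10.0%
```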
In this stage, several B-Scan images from the simulation data and the on-site data were used to verify the accuracy of the hyperbola fitting. Figure 19 shows the results of the hyperbola fitting after two rounds of coordinate transformation and line detection. Figure 19a shows a simulated B-Scan image with five cylinders at random locations. In this experiment, the two relatively independent hyperbolas on the left were fitted well, and the three overlapping hyperbolas on the right could also be separated and fitted individually. The fit quality varies among the right three hyperbolas, owing to the use of hyperbola symmetry for segmentation. In Figure 19b, five hyperbolas were found in the on-site B-Scan image from the field experimental data.
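To illustrate how line detection recovers hyperbola parameters after coordinate transformation, here is a hedged sketch of the linearization for a point target: squaring the travel-time equation turns the hyperbola into a straight line in transformed coordinates. Ordinary least squares (`np.polyfit`) stands in for the paper's Hough-transform voting, and the finite root radius is omitted, so this recovers only the wave velocity and apex depth.

```python
import numpy as np

def fit_hyperbola(x, t, x0):
    """Fit t(x)^2 = t0^2 + 4 (x - x0)^2 / v^2 by linearizing to a line
    w = b + a*u with w = t^2 and u = (x - x0)^2 (least squares stands in
    for the paper's Hough-transform line detection)."""
    u = (np.asarray(x) - x0) ** 2
    w = np.asarray(t) ** 2
    a, b = np.polyfit(u, w, 1)   # slope a = 4/v^2, intercept b = t0^2
    v = 2.0 / np.sqrt(a)         # wave velocity in the soil
    t0 = np.sqrt(b)              # two-way travel time at the apex
    depth = v * t0 / 2.0         # distance from antenna to target top
    return v, t0, depth

# Synthetic check: v = 1.2e8 m/s, apex at x0 = 1.0 m, target depth 0.3 m.
v_true, x0, d0 = 1.2e8, 1.0, 0.3
xs = np.linspace(0.5, 1.5, 21)
ts = 2.0 * np.sqrt(d0**2 + (xs - x0)**2) / v_true
v_est, t0_est, d_est = fit_hyperbola(xs, ts, x0)
```

With noise-free synthetic data the fit recovers the velocity and depth essentially exactly; estimating the root radius as well requires the extended model of Shihab and Al-Nuaimy [60].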
As for estimating the radius and location of the root from the B-Scan image, two simulated experiments and one controlled experiment were designed to verify the accuracy of our estimation method. Table 2 reports a simulated experiment with six geometric models, each containing a single root. In this test, the cylinders have the same center position but different radii. In the results, the bias of the horizontal-offset estimate is less than 0.01 m and the RE is less than 1.6%. For the depth estimate, the RE of five tests is less than 10% and of only one test is 11%; the bias of these results is less than 0.03 m. In the radius estimation, there is a gap between the first three tests and the last three: the average RE is about 18% for the first three, on centimeter-scale roots, and 6% for the rest, at the decimeter level. The other simulated experiment used one B-Scan image containing three cylinders of the same radius at different depths, as shown in Table 3. The average RE of the horizontal offset is about 10% and that of the depth is 8%, with a depth bias of less than 0.03 m. For the radius estimation, the RE of the last two tests is about 15% but that of the first is 60%; the bias of these estimates is less than 0.01 m.
In the controlled experiment, three roots with different radii were buried at the same depth, as shown in Figure 13. Table 4 shows the results of the radius and position estimation. The average RE of the horizontal offset is about 12% and that of the depth is 12%. In the radius calculation, the RE of the first root is about 3%. Although the RE of the second is 45%, the bias is less than 0.013 m. For the third root, the estimated radius was negative.
From the estimated results of the radius and position, the relative error differs markedly with root radius: in the simulated experiments, the error is lower for coarse roots than for fine roots. In this study, the size of the B-Scan image was $(H, W)$, where the first element is the number of sampling points at each detection point and the second is the number of detection points. Each trace contained 512 sampling points, and in the scanning direction the distance between two adjacent detection points was 5 mm. In the detecting direction, the depth difference between two adjacent sampling points was calculated from Equation (19).
$$d = \frac{T}{512} \times \frac{c}{\sqrt{\varepsilon}}$$
Here, $T$ is the time window, $c$ is the velocity of light, and $\varepsilon$ is the relative permittivity of the soil. In our simulated experiment, the time window was 15 ns and the relative permittivity was 6, so the sampling interval in the detecting direction was about 9 mm. For a fixed number of sampling points, a longer time window yields a larger sampling interval. Theoretically, as the number of samples increases, the image becomes finer and the signal distortion smaller, but the memory occupied by the data grows. It has been shown that the number of sampling points greatly influences the radar image: because of the many small underground anomalies, too many sampling points introduce excessive noise, while too few lose useful information in the B-Scan image [66]. This effect is larger for fine roots than for coarse roots. In the field experiment, owing to soil anisotropy and natural stratification, moisture content and material content usually differ at different depths and in different directions. This underlies the complexity of the soil distribution and the resulting variation of relative permittivity within the soil, which badly affected the radius and position estimates of the roots in the controlled experiment.
In summary, our method has some limitations. The hyperbola region proposition missed some detections in hyperbola-dense areas, and for some on-site data the proposal regions had low confidence because of the complex underground environment. At the same time, the symmetry test, coordinate transformation, and hyperbola detection depend mainly on the key point obtained from the multi-directional gradient graphs, which can be disturbed by noise. In addition, the resolution of the B-Scan image can also affect the accuracy of the estimates of related information such as the radius and depth of the root.

5. Conclusions

In this study, we focused on the characteristics of the detecting and scanning directions as well as the symmetry of the hyperbola, and proposed a novel method of hyperbola detection, based on the vertex of the hyperbola, for tree root detection in GPR B-Scan images. The conclusions are as follows:
  • The proposed hyperbola recognition method, which considers the characteristics of the two directions and the symmetry of the hyperbola, showed a significant effect in tree root detection, both in hyperbola recognition and in the estimation of the radius and position of the root.
  • The characteristics in both the detecting and scanning directions were captured by the image gradient; the peak and tail of the hyperbola were obtained from the longitudinal and transverse gradient graphs.
  • Multiple intersecting hyperbolas were separated using the symmetry of the hyperbola, which reduces the influence of rough boundaries and internal holes.
  • The parabola and hyperbola equations were simplified for the downward-opening check and the hyperbola fitting by a coordinate transformation based on the peak area. The error caused by discretization of the parameter domain was reduced by replacing Cartesian coordinates with polar coordinates.
In the natural environment, roots are often similar to one another and their orientation is usually unknown before digging, so the detection method adopted in this paper cannot guarantee perpendicular intersection with all roots. Guo et al. [67], Liu et al. [68], and Wang et al. [69] verified the influence of root orientation, defined by a horizontal angle and a vertical angle, on both the A-Scan signal and the B-Scan image; this influence makes the geometric model more complex. Guo et al. [12] summarized the characteristics of root detection using GPR with antennas of different frequencies in terms of detection depth and GPR resolution. In our study, the antenna frequency is 900 MHz; accordingly, the detection depth is about 1 m and the GPR resolution is about 2 cm in sand. Noise-reduction processing, including filtering, dilation, erosion, and threshold segmentation, can affect the symmetry of the hyperbola. The factors influencing the symmetry of hyperbolic curves will be studied in the future.
Roots can be detected with GPR because their relative permittivity differs from that of the soil, and water content is one of the important factors affecting the relative permittivity of the medium [12]. In the controlled experiments of [70], pipes containing water, a 1:1 mixture of water and air, air, and salt water (22 mg cm−3 iodized sea salt) were detected; a small difference in relative permittivity between moist soil and root affected root detection. In our actual tests, the effect of the root environment on the hyperbola was much smaller than that of root orientation. In the future, building on the root detection method in this paper, further research will be conducted on detecting roots with different orientations and different moisture contents.
In real detection, root detection results can be influenced by the complexity of the soil distribution and the accuracy of the GPR. In the future, we will focus on object detection algorithms with stronger anti-interference capability and on more accurate identification of hyperbolas and estimation of the related information. High-precision root detection could provide data support for the reconstruction and study of three-dimensional root structure.

Author Contributions

Conceptualization, X.Z., F.X., Z.W. and J.W.; methodology, X.Z. and J.W.; software, X.Z., F.X., Z.W. and J.W.; validation, X.Z. and J.W.; formal analysis, X.Z.; investigation, X.Z., F.X., Z.W. and J.W.; resources, J.W., F.W. and L.H.; data curation, X.Z., F.X., Z.W., J.W., F.W. and L.H.; writing—original draft preparation, X.Z.; writing—review and editing, J.W., C.G. and N.Y.; visualization, J.W.; supervision, J.W.; project administration, J.W.; funding acquisition, J.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Fundamental Research Funds for the Central Universities (Grant No. 2021ZY73), the Beijing Municipal Natural Science Foundation (Grant No. 6202023) and the National Natural Science Foundation of China (Grant No. 32071679).

Data Availability Statement

The GPR data and the source codes of GPR data processing, clustering algorithm, and data visualization are available from the authors upon request.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Dixon, R.K.; Solomon, A.M.; Brown, S.; Houghton, R.A.; Trexier, M.C.; Wisniewski, J. Carbon Pools and Flux of Global Forest Ecosystems. Science 1994, 263, 185–190. [Google Scholar] [CrossRef] [PubMed]
  2. Gill, R.A.; Jackson, R.B. Global Patterns of Root Turnover for Terrestrial Ecosystems: RESEARCH Root Turnover in Terrestrial Ecosystems. New Phytol. 2000, 147, 13–31. [Google Scholar] [CrossRef]
  3. Diamanti, N.; Redman, D. Field Observations and Numerical Models of GPR Response from Vertical Pavement Cracks. J. Appl. Geophys. 2012, 81, 106–116. [Google Scholar] [CrossRef]
  4. Ježová, J.; Lambot, S. A Dielectric Horn Antenna and Lightweight Radar System for Material Inspection. J. Appl. Geophys. 2019, 170, 103822. [Google Scholar] [CrossRef]
  5. Rasol, M.A.; Pérez-Gracia, V.; Fernandes, F.M.; Pais, J.C.; Santos-Assunçao, S.; Santos, C.; Sossa, V. GPR Laboratory Tests and Numerical Models to Characterize Cracks in Cement Concrete Specimens, Exemplifying Damage in Rigid Pavement. Measurement 2020, 158, 107662. [Google Scholar] [CrossRef]
  6. LaBrecque, D.J.; Yang, X. Difference Inversion of ERT Data: A Fast Inversion Method for 3-D In Situ Monitoring. J. Environ. Eng. Geophys. 2001, 6, 83–89. [Google Scholar] [CrossRef]
  7. Kemna, A.; Vanderborght, J.; Kulessa, B.; Vereecken, H. Imaging and Characterisation of Subsurface Solute Transport Using Electrical Resistivity Tomography (ERT) and Equivalent Transport Models. J. Hydrol. 2002, 267, 125–146. [Google Scholar] [CrossRef]
  8. Wang, Y.; Li, X. Experimental Study on Cracking Damage Characteristics of a Soil and Rock Mixture by UPV Testing. Bull. Eng. Geol. Environ. 2015, 74, 775–788. [Google Scholar] [CrossRef]
  9. Sarro, W.S.; Assis, G.M.; Ferreira, G.C.S. Experimental Investigation of the UPV Wavelength in Compacted Soil. Constr. Build. Mater. 2021, 272, 121834. [Google Scholar] [CrossRef]
  10. Lorenzi, A.; Tisbierek, F.T.; da Silva Filho, L.C.P. Ultrasonic Pulse Velocity Analysis in Concrete Specimens. In Proceedings of the 4th Pan American Conference for NDT (PANNDT 2007), Buenos Aires, Argentina, 22–26 October 2007. [Google Scholar]
  11. Lorenzi, A.; da Silva Filho, L.C.P.; Somensi Lorenzi, L.; Shimomukay, R.; Argenta, C.J. Monitoring Concrete Structures Through UPV Results and Image Analysis. In Proceedings of the 5th Pan American Conference for NDT (PANNDT 2011), Cancun, Mexico, 2–6 October 2011. [Google Scholar]
  12. Guo, L.; Chen, J.; Cui, X.; Fan, B.; Lin, H. Application of Ground Penetrating Radar for Coarse Root Detection and Quantification: A Review. Plant Soil 2013, 362, 1–23. [Google Scholar] [CrossRef] [Green Version]
  13. Wu, Y.; Guo, L.; Cui, X.; Chen, J.; Cao, X.; Lin, H. Ground-Penetrating Radar-Based Automatic Reconstruction of Three-Dimensional Coarse Root System Architecture. Plant Soil 2014, 383, 155–172. [Google Scholar] [CrossRef]
  14. Tanoli, W.A.; Sharafat, A.; Park, J.; Seo, J.W. Damage Prevention for Underground Utilities Using Machine Guidance. Autom. Constr. 2019, 107, 102893. [Google Scholar] [CrossRef]
  15. Yalçiner, C.Ç.; Bano, M.; Kadioglu, M.; Karabacak, V.; Meghraoui, M.; Altunel, E. New Temple Discovery at the Archaeological Site of Nysa (Western Turkey) Using GPR Method. J. Archaeol. Sci. 2009, 36, 1680–1689. [Google Scholar] [CrossRef]
  16. Pasolli, E.; Melgani, F.; Donelli, M. Automatic Analysis of GPR Images: A Pattern-Recognition Approach. IEEE Trans. Geosci. Remote Sens. 2009, 47, 2206–2217. [Google Scholar] [CrossRef]
  17. Mertens, L.; Persico, R.; Matera, L.; Lambot, S. Automated Detection of Reflection Hyperbolas in Complex GPR Images With No A Priori Knowledge on the Medium. IEEE Trans. Geosci. Remote Sens. 2016, 54, 580–596. [Google Scholar] [CrossRef]
  18. Al-Nuaimy, W.; Huang, Y.; Nakhkash, M.; Fang, M.T.C.; Nguyen, V.T.; Eriksen, A. Automatic Detection of Buried Utilities and Solid Objects with GPR Using Neural Networks and Pattern Recognition. J. Appl. Geophys. 2000, 43, 157–165. [Google Scholar] [CrossRef]
  19. Al-Nuaimy, W.; Huang, Y.; Eriksen, A.; Nguyen, V.T. Automatic Detection of Hyperbolic Signatures in Ground-Penetrating Radar Data. In Proceedings of the International Symposium on Optical Science and Technology, San Diego, CA, USA, 29 July–3 August 2001; pp. 327–335. [Google Scholar]
  20. Wen, J.; Li, Z.; Xiao, J. Noise removal in tree radar b-scan images based on shearlet. Wood Res. 2020, 65, 001–012. [Google Scholar] [CrossRef]
  21. Xiao, Z.; Wen, J.; Gao, L.; Xiao, X.; Li, W.; Li, C. Method of Tree Radar Signal Processing Based on Curvelet Transform. Tech. J. Fac. Eng. 2016, 39. [Google Scholar] [CrossRef]
  22. Capineri, L.; Grande, P.; Temple, J.A.G. Advanced Image-Processing Technique for Real-Time Interpretation of Ground-Penetrating Radar Images. Int. J. Imaging Syst. Technol. 1998, 9, 51–59. [Google Scholar] [CrossRef]
  23. Zhou, X.; Chen, H.; Li, J. An Automatic GPR B-Scan Image Interpreting Model. IEEE Trans. Geosci. Remote Sens. 2018, 56, 3398–3412. [Google Scholar] [CrossRef]
  24. Li, W.; Cui, X.; Guo, L.; Chen, J.; Chen, X.; Cao, X. Tree Root Automatic Recognition in Ground Penetrating Radar Profiles Based on Randomized Hough Transform. Remote Sens. 2016, 8, 430. [Google Scholar] [CrossRef] [Green Version]
  25. Lu, G.; Long, Z. The Real-Time Gpr Signals Preprocessing Algorithm Based on LWT in High Scan Rate. In Proceedings of the IEEE International Conference on Wavelet Analysis and Pattern Recognition, Qingdao, China, 11–14 July 2010; pp. 255–259. [Google Scholar]
  26. Borgioli, G.; Capineri, L.; Falorni, P.; Matucci, S.; Windsor, C.G. The Detection of Buried Pipes From Time-of-Flight Radar Data. IEEE Trans. Geosci. Remote Sens. 2008, 46, 2254–2266. [Google Scholar] [CrossRef]
  27. Wu, X.; Senalik, C.A.; Wacker, J.; Wang, X.; Li, G. Object Detection of Ground-Penetrating Radar Signals Using Empirical Mode Decomposition and Dynamic Time Warping. Forests 2020, 11, 230. [Google Scholar] [CrossRef] [Green Version]
  28. Maas, C.; Schmalzl, J. Using Pattern Recognition to Automatically Localize Reflection Hyperbolas in Data from Ground Penetrating Radar. Comput. Geosci. 2013, 58, 116–125. [Google Scholar] [CrossRef]
  29. Viola, P.; Jones, M.J. Robust Real-Time Face Detection. Int. J. Comput. Vis. 2004, 57, 137–154. [Google Scholar] [CrossRef]
  30. Pasolli, E.; Melgani, F.; Donelli, M.; Attoui, R.; de Vos, M. Automatic Detection and Classification of Buried Objects in GPR Images Using Genetic Algorithms and Support Vector Machines. In Proceedings of the 2008 IEEE International Geoscience and Remote Sensing Symposium (IGARSS 2008), Boston, MA, USA, 7–11 July 2008; pp. II-525–II–528. [Google Scholar]
  31. Xiang, Z.; Rashidi, A.; Ou, G. An Improved Convolutional Neural Network System for Automatically Detecting Rebar in GPR Data. In Proceedings of the Computing in Civil Engineering 2019, American Society of Civil Engineers, Atlanta, GA, USA, 17–19 June 2019; pp. 422–429. [Google Scholar]
  32. Besaw, L.E.; Stimac, P.J. Deep Convolutional Neural Networks for Classifying GPR B-Scans. In Proceedings of the SPIE 9454, Baltimore, MD, USA, 21 May 2015; Bishop, S.S., Isaacs, J.C., Eds.; International Society for Optics and Photonics: Bellingham, WA, USA, 2015; p. 945413. [Google Scholar]
  33. Besaw, L.E. Detecting Buried Explosive Hazards with Handheld GPR and Deep Learning. In Proceedings of the SPIE 9823, Baltimore, MD, USA, 3 May 2016; Bishop, S.S., Isaacs, J.C., Eds.; International Society for Optics and Photonics: Bellingham, WA, USA, 2016; p. 98230N. [Google Scholar]
  34. Bralich, J.; Reichman, D.; Collins, L.M.; Malof, J.M. Improving Convolutional Neural Networks for Buried Target Detection in Ground Penetrating Radar Using Transfer Learning via Pretraining. In Proceedings of the SPIE 9454, Anaheim, CA, USA, 3 May 2017; Bishop, S.S., Isaacs, J.C., Eds.; International Society for Optics and Photonics: Bellingham, WA, USA, 2017; p. 101820X. [Google Scholar]
  35. Lameri, S.; Lombardi, F.; Bestagini, P.; Lualdi, M.; Tubaro, S. Landmine Detection from GPR Data Using Convolutional Neural Networks. In Proceedings of the 2017 IEEE 25th European Signal Processing Conference (EUSIPCO 2017), Kos, Greece, 28 August–2 September 2017; pp. 508–512. [Google Scholar]
  36. Witten, T.R. Present State of the Art in Ground-Penetrating Radars for Mine Detection. In Proceedings of the SPIE 3392, Orlando, FL, USA, 4 September 1998; Dubey, A.C., Harvey, J.F., Broach, J.T., Eds.; International Society for Optics and Photonics: Bellingham, WA, USA, 1998; p. 576. [Google Scholar]
  37. Reichman, D.; Collins, L.M.; Malof, J.M. Some Good Practices for Applying Convolutional Neural Networks to Buried Threat Detection in Ground Penetrating Radar. In Proceedings of the 2017 IEEE 9th International Workshop on Advanced Ground Penetrating Radar (IWAGPR 2017), Edinburgh, UK, 28–30 June 2017; pp. 1–5. [Google Scholar]
  38. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet Classification with Deep Convolutional Neural Networks. Commun. ACM 2017, 60, 84–90. [Google Scholar] [CrossRef]
  39. Pham, M.-T.; Lefèvre, S. Buried Object Detection from B-Scan Ground Penetrating Radar Data Using Faster-RCNN. arXiv 2018, arXiv:1803.08414. [Google Scholar]
  40. Lei, W.; Hou, F.; Xi, J.; Tan, Q.; Xu, M.; Jiang, X.; Liu, G.; Gu, Q. Automatic Hyperbola Detection and Fitting in GPR B-Scan Image. Autom. Constr. 2019, 106, 102839. [Google Scholar] [CrossRef]
  41. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. arXiv 2016, arXiv:1506.01497. [Google Scholar] [CrossRef] [Green Version]
  42. Dou, Q.; Wei, L.; Magee, D.R.; Cohn, A.G. Real-Time Hyperbola Recognition and Fitting in GPR Data. IEEE Trans. Geosci. Remote Sens. 2017, 55, 51–62. [Google Scholar] [CrossRef] [Green Version]
  43. Windsor, C.G.; Capineri, L.; Falorni, P. A Data Pair-Labeled Generalized Hough Transform for Radar Location of Buried Objects. IEEE Geosci. Remote Sens. Lett. 2014, 11, 124–127. [Google Scholar] [CrossRef]
  44. Bookstein, F.L. Fitting Conic Sections to Scattered Data. Comput. Graph. Image Process. 1979, 9, 56–71. [Google Scholar] [CrossRef]
  45. Akima, H. A Method of Bivariate Interpolation and Smooth Surface Fitting for Irregularly Distributed Data Points. ACM Trans. Math. Softw. 1978, 4, 148–159.
  46. Porrill, J. Fitting Ellipses and Predicting Confidence Envelopes Using a Bias Corrected Kalman Filter. Image Vis. Comput. 1990, 8, 37–41.
  47. Liang, J.; Homayounfar, N.; Ma, W.-C.; Xiong, Y.; Hu, R.; Urtasun, R. PolyTransform: Deep Polygon Transformer for Instance Segmentation. arXiv 2019, arXiv:1912.02801.
  48. Warren, C.; Giannopoulos, A.; Giannakis, I. gprMax: Open Source Software to Simulate Electromagnetic Wave Propagation for Ground Penetrating Radar. Comput. Phys. Commun. 2016, 209, 163–170.
  49. Wu, Y.; Cui, F.; Wang, L.; Chen, J.; Li, Y. Detecting Moisture Content of Soil by Transmission-Type Ground Penetrating Radar. Trans. Chin. Soc. Agric. Eng. 2014, 30, 125–131.
  50. Attia al Hagrey, S. Geophysical Imaging of Root-Zone, Trunk, and Moisture Heterogeneity. J. Exp. Bot. 2007, 58, 839–854.
  51. Liang, H.; Fan, G.; Li, Y.; Zhao, Y. Theoretical Development of Plant Root Diameter Estimation Based on GprMax Data and Neural Network Modelling. Forests 2021, 12, 615.
  52. Girshick, R.; Donahue, J.; Darrell, T.; Malik, J. Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation. arXiv 2014, arXiv:1311.2524.
  53. Girshick, R. Fast R-CNN. arXiv 2015, arXiv:1504.08083.
  54. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. arXiv 2016, arXiv:1506.02640.
  55. Redmon, J.; Farhadi, A. YOLO9000: Better, Faster, Stronger. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017), Honolulu, HI, USA, 21–26 July 2017; pp. 6517–6525.
  56. Redmon, J.; Farhadi, A. YOLOv3: An Incremental Improvement. arXiv 2018, arXiv:1804.02767.
  57. Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.-Y.; Berg, A.C. SSD: Single Shot MultiBox Detector. In Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 8–16 October 2016; pp. 21–37.
  58. Jiao, L.; Zhang, F.; Liu, F.; Yang, S.; Li, L.; Feng, Z.; Qu, R. A Survey of Deep Learning-Based Object Detection. IEEE Access 2019, 7, 128837–128868.
  59. Chen, H.; Cohn, A.G. Probabilistic Robust Hyperbola Mixture Model for Interpreting Ground Penetrating Radar Data. In Proceedings of the IEEE International Joint Conference on Neural Networks (IJCNN 2010), Barcelona, Spain, 18–23 July 2010; pp. 1–8.
  60. Shihab, S.; Al-Nuaimy, W. Radius Estimation for Cylindrical Objects Detected by Ground Penetrating Radar. Subsurf. Sens. Technol. Appl. 2005, 6, 151–166.
  61. Conyers, L.B. Ground-Penetrating Radar for Archaeology; AltaMira Press, 2004.
  62. Butnor, J.R.; Doolittle, J.A.; Kress, L.; Cohen, S.; Johnsen, K.H. Use of Ground-Penetrating Radar to Study Tree Roots in the Southeastern United States. Tree Physiol. 2001, 21, 1269–1278.
  63. Cox, K.D.; Scherm, H.; Serman, N. Ground-Penetrating Radar to Detect and Quantify Residual Root Fragments Following Peach Orchard Clearing. HortTechnology 2005, 15, 600–607.
  64. Dannoura, M.; Hirano, Y.; Igarashi, T.; Ishii, M.; Aono, K.; Yamase, K.; Kanazawa, Y. Detection of Cryptomeria Japonica Roots with Ground Penetrating Radar. Plant Biosyst. 2008, 142, 375–380.
  65. Hirano, Y.; Dannoura, M.; Aono, K.; Igarashi, T.; Ishii, M.; Yamase, K.; Makita, N.; Kanazawa, Y. Limiting Factors in the Detection of Tree Roots Using Ground-Penetrating Radar. Plant Soil 2009, 319, 15–24.
  66. Catapano, I.; Gennarelli, G.; Ludeno, G.; Soldovieri, F.; Persico, R. Ground-Penetrating Radar: Operation Principle and Data Processing. In Wiley Encyclopedia of Electrical and Electronics Engineering; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 2019; pp. 1–23. ISBN 978-0-471-34608-1.
  67. Guo, L.; Wu, Y.; Chen, J.; Hirano, Y.; Tanikawa, T.; Li, W.; Cui, X. Calibrating the Impact of Root Orientation on Root Quantification Using Ground-Penetrating Radar. Plant Soil 2015, 395, 289–305.
  68. Liu, Q.; Cui, X.; Liu, X.; Chen, J.; Chen, X.; Cao, X. Detection of Root Orientation Using Ground-Penetrating Radar. IEEE Trans. Geosci. Remote Sens. 2018, 56, 93–104.
  69. Wang, M.; Wen, J.; Li, W. Qualitative Research: The Impact of Root Orientation on Coarse Roots Detection Using Ground-Penetrating Radar (GPR). BioResources 2020, 15, 21.
  70. Gormally, K.H.; McIntosh, M.S.; Mucciardi, A.N. Ground-Penetrating Radar Detection and Three-Dimensional Mapping of Lateral Macropores: I. Calibration. Soil Sci. Soc. Am. J. 2011, 75, 1226–1235.
Figure 1. An example of the simulated data. (a) The geometric model of root detection. (b) The corresponding B-Scan image of (a). (c) The corresponding amplitude map of (b).
Figure 2. Flow chart of the method for tree root detection in a GPR B-Scan image.
Figure 3. The framework of hyperbola region detection with Faster RCNN (Regions with CNN features). RPN: Region proposal network.
Figure 4. Four simple difference descriptors and an example of image gradient calculation. (a) The four pre-defined descriptors. (b) An example image A shown by its pixel values. (c) Upward gradient of image A. (d) Downward gradient of image A. (e) Left gradient of image A. (f) Right gradient of image A.
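The four one-sided difference descriptors of Figure 4 can be sketched as below. The paper only illustrates the descriptors pictorially, so the padding and sign conventions here (replicated borders; negative responses clipped so each map highlights one edge orientation) are assumptions, not the authors' exact definitions.

```python
import numpy as np

def directional_gradients(img):
    """One-sided difference gradients of a 2-D image in four directions.

    Sketch of the four simple descriptors in Figure 4 (assumed
    conventions): each gradient is the neighbor in one direction minus
    the center pixel, with replicated borders, and negative responses
    clipped to zero so each map responds to edges of one polarity only.
    """
    img = np.asarray(img, dtype=float)
    pad = np.pad(img, 1, mode="edge")      # replicate border pixels
    up    = pad[:-2, 1:-1] - img           # neighbor above minus center
    down  = pad[2:,  1:-1] - img           # neighbor below minus center
    left  = pad[1:-1, :-2] - img           # neighbor to the left minus center
    right = pad[1:-1, 2:]  - img           # neighbor to the right minus center
    return tuple(np.clip(g, 0, None) for g in (up, down, left, right))
```

For a bright horizontal stripe, only the rows just below it respond in the upward map and only the rows just above it in the downward map, which is the behavior the longitudinal/transverse combination in Figure 5 relies on.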
Figure 5. Longitudinal and transverse gradient graphs after binarization. (a) Longitudinal gradient graph, combining the upward and downward gradient graphs. (b) Transverse gradient graph, combining the left and right gradient graphs.
Figure 6. A simple example of connected component analysis. (a) The four-connected and eight-connected structures. (b) An example image for connected component analysis. (c) The result of analysis with the four-connected structure. (d) The result of analysis with the eight-connected structure.
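The two neighborhood structures of Figure 6a map directly onto `scipy.ndimage`, which can stand in for the paper's connected component analysis. The example array below is illustrative, not the figure's data.

```python
import numpy as np
from scipy import ndimage

# Four- vs. eight-connected labeling of a small binary image.
img = np.array([[1, 1, 0, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 1],
                [1, 0, 0, 0]])

four  = ndimage.generate_binary_structure(2, 1)  # edge neighbors only
eight = ndimage.generate_binary_structure(2, 2)  # edges + diagonals

labels4, n4 = ndimage.label(img, structure=four)
labels8, n8 = ndimage.label(img, structure=eight)
# Diagonal touches merge regions under 8-connectivity, so n8 <= n4.
```

Here the pixel at (1, 1) touches (2, 2) only diagonally, so the two upper regions are separate under 4-connectivity (three components in total) but merge under 8-connectivity (two components).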
Figure 7. Longitudinal and transverse gradient graphs after denoising by connected component analysis. (a) The longitudinal gradient graph. (b) The transverse gradient graph.
Figure 8. Separating overlapping hyperbolas with the symmetry test on a simulated image. (a) The original image before segmentation. (b) The result after separation. (c–g) The intermediate results of the symmetry test for the hyperbolas in (a), from left to right.
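The core idea of the symmetry test — a hyperbola's signature is mirror-symmetric about a vertical axis through its apex — can be illustrated with a toy axis search. The `symmetry_axis` helper below is hypothetical: it scores every candidate column by the overlap between a binary region and its left-right mirror, which is only a simplified stand-in for the paper's procedure.

```python
import numpy as np

def symmetry_axis(mask):
    """Column about which a nonempty binary region is most mirror-symmetric.

    Toy version of the symmetry test: for each candidate axis column c,
    mirror the region left-right about c and score the fraction of
    pixels that land back on the region. A true hyperbola signature
    scores near 1 at its apex column; merged, asymmetric blobs do not.
    """
    mask = np.asarray(mask, dtype=bool)
    h, w = mask.shape
    best_c, best_score = 0, -1.0
    for c in range(w):
        flipped = np.zeros_like(mask)
        for x in range(w):
            xm = 2 * c - x                 # mirror of column x about axis c
            if 0 <= xm < w:
                flipped[:, xm] |= mask[:, x]
        score = np.logical_and(mask, flipped).sum() / mask.sum()
        if score > best_score:
            best_c, best_score = c, score
    return best_c, best_score
```

A region made of two intersecting hyperbolas has no single axis with a high score, which is the cue for splitting it as in Figure 8c–g.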
Figure 9. Parabola recognition with the Hough transform after coordinate transformation. (a) Result of parabola recognition when a₂ = 0. (b) Result of parabola recognition when a₂ = 5. (c) Result of parabola recognition when a₂ = 10.
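After the coordinate transformation, curve recognition reduces to detecting a straight line, for which the paper uses the Hough transform. The accumulator below is a minimal illustrative sketch (its resolution parameters are assumptions, not the paper's settings), assuming nonnegative pixel coordinates.

```python
import numpy as np

def hough_lines(points, n_theta=180, rho_res=1.0):
    """Minimal Hough accumulator for straight lines.

    Each point (x, y) votes, for every sampled angle theta, for the
    bin of rho = x*cos(theta) + y*sin(theta); the fullest bin gives the
    dominant line. Coordinates are assumed nonnegative (image pixels).
    """
    pts = np.asarray(points, dtype=float)
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    max_rho = np.hypot(pts[:, 0].max(), pts[:, 1].max()) + 1.0
    rhos = np.arange(-max_rho, max_rho, rho_res)
    acc = np.zeros((len(rhos), n_theta), dtype=int)
    for x, y in pts:
        r = x * np.cos(thetas) + y * np.sin(thetas)
        idx = np.searchsorted(rhos, r)
        acc[np.clip(idx, 0, len(rhos) - 1), np.arange(n_theta)] += 1
    i, j = np.unravel_index(acc.argmax(), acc.shape)
    return thetas[j], rhos[i]              # dominant line parameters
```

For collinear input points, the returned (theta, rho) describes a line passing within one rho bin of every point.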
Figure 10. The geometric model for root detection by GPR.
Figure 11. An example of hyperbola fitting with the Hough transform. (a) The original and shifted hyperbolas. (b) The first coordinate transformation for the downward-opening check. (c) The second coordinate transformation for hyperbola fitting. (d) The result after hyperbola fitting.
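The principle behind the coordinate transformations in Figure 11 is that squaring both coordinates turns an apex-aligned hyperbola into a straight line. The sketch below illustrates that linearization with an ordinary least-squares fit standing in for the paper's Hough-based line detection; the apex abscissa x0 is assumed known (e.g., from the symmetry test), and the function name is ours, not the authors'.

```python
import numpy as np

def fit_hyperbola(x, y, x0):
    """Fit y^2 = b^2 + (b^2 / a^2) * (x - x0)^2 to apex-aligned points.

    With u = (x - x0)^2 and v = y^2, the hyperbola
    y^2/b^2 - (x - x0)^2/a^2 = 1 becomes the line v = b^2 + (b^2/a^2)*u,
    so a line fit in (u, v) recovers both semi-axes.
    """
    u = (np.asarray(x, dtype=float) - x0) ** 2
    v = np.asarray(y, dtype=float) ** 2
    slope, intercept = np.polyfit(u, v, 1)   # v = slope * u + intercept
    b = np.sqrt(intercept)                   # semi-axis along y
    a = np.sqrt(intercept / slope)           # semi-axis along x
    return a, b

# Synthetic check: points on y^2/4 - (x - 1)^2/9 = 1, i.e. a = 3, b = 2.
xs = np.linspace(-5.0, 7.0, 50)
ys = 2.0 * np.sqrt(1.0 + (xs - 1.0) ** 2 / 9.0)
a, b = fit_hyperbola(xs, ys, x0=1.0)
```

In the GPR setting the fitted parameters feed the geometric model of Figure 10 to estimate the root's radius and position.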
Figure 12. Detection tracks for each tree. (a) An example of a scanned tree (B00796). (b) The designed scanning tracks.
Figure 13. The site of the controlled experiments. (a) The designed experiment for roots with different radii. (b) The geometric model of the corresponding experiment.
Figure 14. Root simulation model. (a) An example of the geometric model in the simulated experiment. (b) The corresponding B-Scan image from gprMax.
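A scene like Figure 14a is described to gprMax through a plain-text input file of hash commands. The fragment below is a minimal sketch of such a file for a single buried cylindrical root; all numeric values (domain size, permittivities, antenna frequency, root radius) are illustrative assumptions, not the paper's simulation settings.

```
#domain: 1.0 1.0 0.002
#dx_dy_dz: 0.002 0.002 0.002
#time_window: 12e-9

#material: 6 0.01 1 0 soil
#material: 10 0.005 1 0 root

#waveform: ricker 1 1.5e9 my_ricker
#hertzian_dipole: z 0.05 0.95 0 my_ricker
#rx: 0.09 0.95 0
#src_steps: 0.005 0 0
#rx_steps: 0.005 0 0

#box: 0 0 0 1.0 0.9 0.002 soil
#cylinder: 0.5 0.6 0 0.5 0.6 0.002 0.02 root
```

Running the model for N traces (`gprmax model.in -n N`) and merging the output A-scans yields a B-Scan such as Figure 14b.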
Figure 15. Results of hyperbola detection by Faster RCNN. (a) Results of hyperbola detection on an on-site B-Scan image. (b) Results of hyperbola detection on a simulated B-Scan image.
Figure 16. Intersecting hyperbola segmentation, comparing C3, OSCA, and our method. (a) An example image with three intersecting hyperbolas. (b) The output of OSCA, labeled with different colors. (c) The output of our method, labeled with different colors. (d) The output of C3.
Figure 17. Results of our segmentation method. (a) The example in Figure 16. (b) The output of the symmetry test, labeled with different colors. (c) The result of hyperbola fitting on (b). (d) The result of hyperbola fitting shown on (a).
Figure 18. Results of segmentation on the proposed regions in field data. (a) Hyperbola region detection on an on-site B-Scan image. (b) Hyperbola connected-area extraction in each proposed hyperbola region.
Figure 19. The fitted hyperbolas on the synthetic and field data. (a) Hyperbola fitting result on a simulated B-Scan image. (b) Hyperbola fitting result on an on-site B-Scan image.
Table 1. Comparison of the mAP, TPI (time per image), and FPS (frames per second) of three backbone networks.

Backbone Network   VGG16     ResNet50   ResNet101
mAP                84.94%    86.67%     89.71%
TPI                0.987 s   1.273 s    1.662 s
FPS                1.02      0.79       0.62
Table 2. Radius and location estimation of the single root (Pro: estimated value; RE: relative error).

       Radius (m)              Depth (m)               Offsets (m)
No.    Real     Pro     RE     Real    Pro     RE      Real   Pro     RE
1      0.015    0.019   26%    0.265   0.266   0.4%    0.60   0.595   0.8%
2      0.025    0.027   12%    0.255   0.225   11.0%   0.60   0.60    0%
3      0.035    0.029   17%    0.245   0.232   5.3%    0.60   0.590   1.6%
4      0.10     0.103   3%     0.38    0.377   0.8%    0.60   0.60    0%
5      0.15     0.134   10%    0.33    0.33    1%      0.60   0.595   0.8%
6      0.20     0.189   5.5%   0.28    0.29    3.6%    0.60   0.595   0.8%
Table 3. Radius and location estimation of the multiple roots (Pro: estimated value; RE: relative error).

       Radius (m)               Depth (m)               Offsets (m)
No.    Real     Pro      RE     Real    Pro     RE      Real   Pro     RE
1      0.015    0.024    60%    0.38    0.353   7.1%    0.6    0.595   0.8%
2      0.015    0.0125   16%    0.28    0.305   6.6%    1.2    1.345   12%
3      0.015    0.0131   13%    0.18    0.20    11.1%   1.8    2.095   14%
Table 4. Radius and location estimation of the roots in the controlled experiment (Pro: estimated value; RE: relative error).

       Radius (m)                Depth (m)                Offsets (m)
No.    Real      Pro      RE     Real    Pro      RE      Real   Pro     RE
1      0.00844   0.0082   3%     0.30    0.276    8%      0.60   0.675   12.5%
2      0.0276    0.04     45%    0.30    0.269    10.3%   1.10   1.245   13.1%
3      0.03875   −0.26    –      0.30    0.352    17.3%   1.60   1.78    11.3%
Zhang, X.; Xue, F.; Wang, Z.; Wen, J.; Guan, C.; Wang, F.; Han, L.; Ying, N. A Novel Method of Hyperbola Recognition in Ground Penetrating Radar (GPR) B-Scan Image for Tree Roots Detection. Forests 2021, 12, 1019. https://doi.org/10.3390/f12081019