Abstract
Periodontal diagnosis requires discovering the relations among teeth, gingiva (i.e., gums), and alveolar bones, but alveolar bones lie under the gingiva and are not visible for inspection. Traditional probe examination causes pain, and X-ray based examination is not suited for frequent inspection. This work develops an automatic non-invasive periodontal inspection framework based on gum penetrative Optical Coherence Tomography (OCT), which can be applied frequently without high radiation. We sum the interference responses of all penetration depths along each shooting direction to form the shooting amplitude projection. Because the reaching interference strength decays exponentially with penetration depth into tissue, this projection mainly reveals the responses of the topmost gingiva or teeth. Since gingiva and teeth have different air-tissue responses, the gumline reveals itself as an obvious boundary between teeth and gingiva in this projection and serves as the basis line for periodontal inspection. Our system can also automatically identify regions of gingiva, teeth, and alveolar bones from slices of the cross-sectional volume. Although deep networks can segment noisy maps successfully, reducing the number of manually labeled maps required for training is critical for our framework. To enhance the effectiveness and efficiency of training and classification, we adjust Snake segmentation to consider neighboring slices in order to locate those regions possibly containing gingiva-tooth and gingiva-alveolar boundaries. Additionally, we adapt a truncated direct logarithm based on the Snake-segmented region for intensity quantization to emphasize these boundaries for easier identification. The alveolar-gingiva boundary point directly under the gumline is the desired alveolar sample, and we can measure the distance between the gumline and the alveolar line for visualization and direct periodontal inspection. Finally, we experimentally verify our choices in intensity quantization and boundary identification against several other algorithms while applying the framework to locate the gumline and alveolar line in in vivo data successfully.
Key Contribution:
We design an automatic non-invasive periodontal inspection framework based on Optical Coherence Tomography (OCT) that can be applied frequently without high radiation or pain. The system comprises data acquisition with optical rectification, fast Fourier transform, and intensity quantization; gumline and alveolar line identification with deep networks; and periodontal analysis and visualization.
1. Introduction
Periodontal disease occurs frequently in young and middle-aged people [1], and it generally begins with gingivitis. Gingivitis results from bacteria or acidic substances eroding gums and alveolar bones, leading to gum shrinkage, loss of alveolar bone, and tooth root exposure; finally, it progresses to periodontitis and tooth loss. Periodontal disease is hard to diagnose because gums occlude the roots and alveolar bones from visual inspection. Therefore, this work aims at developing an automatic non-invasive periodontal inspection framework that can be applied frequently without high radiation and pain.
Traditionally, there are two commonly used periodontal inspection mechanisms: probe based [2,3,4,5] and X-ray based [6,7,8]. First, dentists use a periodontal probe to poke between the gums and the teeth and slip below the gumline to reach the junctional epithelium, i.e., the bottom of the periodontal pocket, for diagnosis [2,3,4,5]. This is the most commonly used approach because it is quick, immediate for diagnosis, and harmless to the human body. However, dentists need to examine six locations per tooth in total. When the patient's teeth are red, swollen, inflamed, and bleeding, the puncture can cause extreme tingling and discomfort. Second, dentists can also examine the distance between the cemento-enamel junction and the alveolar bones using X-ray imaging [6,7,8]. This cannot be applied frequently due to the ionizing radiation. Therefore, this work adapts non-invasive, gum penetrative, painless, and harmless Optical Coherence Tomography (OCT) for periodontal inspection due to the following benefits: (1) it provides real-time sub-surface imaging at near-microscopic resolution; (2) it requires no preparation of the imaged subjects and can image the region of interest without contact or through a transparent window or membrane; (3) it does not emit ionizing radiation. In the past, research applied OCT for manually inspecting under-gum dental structures [9,10] and periodontal states [11,12,13]. Mota and Fernandes et al. [14,15,16] applied OCT to characterize the tooth-gingival interface of porcine jaws, teeth of healthy patients, and teeth of patients with periodontal disease by manually processing and labeling the desired periodontal structures. This work automates the identification of the gumline and alveolar line from OCT imaging to provide useful periodontal information for diagnosis.
Finally, we examine the performance of our selected algorithm at each stage against other algorithms. Additionally, we test our periodontal inspector on an in vivo dataset collected from two subjects for precise detection of the gumline and alveolar line against manually labeled ground truths. Accordingly, we make the following contributions. We design an automatic OCT based periodontal inspection framework, which is non-invasive, harmless, and can be applied frequently. Our system detects the gumline on the amplitude projection, the accumulation of the interference responses of all penetration depths along each shooting direction, because injected signals decay exponentially with penetration depth, so the projection mostly emphasizes the characteristics of the topmost tissues. Additionally, we locate the alveolar line in each slice of the cross-sectional volume. Although deep networks could be directly applied to identify the gumline and alveolar line, this would require a very large set of scanned data along with a huge amount of GPU training time. Therefore, we apply Snake segmentation with the extra consideration of neighboring slices to locate regions possibly containing these boundaries in order to have effective training data and reduce the amount of examination. Additionally, we adapt the truncated direct logarithm over the Snake focused region to transform the scanning volume data to emphasize the regions of interest and their boundaries for easier classification. As demonstrated in the results, our OCT based inspector can properly and efficiently provide useful periodontal conditions to dentists for periodontal disease diagnosis.
2. Related Work
This work aims at developing an automatic and frequently applicable non-invasive periodontal inspection framework based on OCT. It involves several fields, but due to length limitations, we restrict our attention to medical practices in periodontal diagnosis and applications of deep networks to tomography.
Periodontal diagnosis: Xiang et al. [5] indicated that clinically, dentists mainly use three indicators for the diagnosis of periodontal disease. The first is bleeding on probing, i.e., whether the gum bleeds while a dentist pokes it with a periodontal probe. The second is pocket depth, which is the distance from the attached gingiva to the junctional epithelium measured with a periodontal probe. However, dentists have a hard time controlling the applied power and angle for precise measurement [2,3,4]. Additionally, it requires six examination locations per tooth. When the patient's teeth are red, swollen, inflamed, and bleeding, the puncture can cause extreme tingling and discomfort. The final one depends on X-ray imaging to locate and examine the hard alveolar bones [6,7,8]. Although it is non-invasive, X-ray involves high radiation and cannot be applied frequently. Therefore, we develop a gum penetrative OCT scanner with various stages including optical rectification, intensity quantization, tissue identification, and state estimation for periodontal inspection.
Other advanced technologies evaluate whether periodontal treatment is successful based on microbiological testing, including fluorescence microscopy [17], flow cytometry [18], the enzyme linked immunosorbent assay [19], and polymerase chain reaction [20]. However, these are very expensive and cannot be routinely reused in the clinic, while our system can be frequently applied at any site of any patient. Additionally, the scanned information is directly digitized for further analysis. Genetic polymorphism [21] uses gene analysis to find potential patients with gingivitis. Other gene analyzing methods use the count of immunoglobulin [22] and interleukin-1 [23] to determine patients susceptible to periodontal disease. However, these results vary with the etiology, growth environment, and other conditions of the constituent bacteria, and thus, they cannot be directly used for gingivitis diagnosis. Our OCT based inspector can directly identify the gumline and alveolar line to give dentists a direct and helpful indication.
There are research efforts focusing on applying OCT imaging to understand the under-gum dental structures [9,10]. Moreover, some groups also take advantage of the gum penetrative abilities of OCT for periodontal inspection [11,12,13]. However, all these methods require manual inspection and examination. Mota et al. [14] examined the periodontal structures of porcine jaws with OCT, while Fernandes et al. [15,16] applied OCT to examine the teeth of patients without and with periodontal disease. However, their analysis required manual image processing and tissue labeling. Although Lai et al. [24] applied OCT to reconstruct the dental surface, they did not aim at periodontal inspection. Our work takes advantage of OCT's gum penetrative abilities for painless periodontal inspection that can be applied frequently.
Deep tomographic networks on medical images: There are various tomography methods, including Magnetic Resonance Imaging (MRI), Computed Tomography (CT), and Optical Coherence Tomography (OCT), targeting various organs. Pereira et al. [25] applied N4ITK [26] to overcome bias field distortion along with imaging statistics for better MRI imaging results and a simple Convolutional Neural Network (CNN) to locate possible tumors. Poudel et al. [27] applied a Recurrent Fully Convolutional Neural Network (RFCNN) to identify various heart components from MRI imaging. Suzuki et al. [28] applied massive training artificial neural networks with voting, and Van Ginneken et al. [29] combined a neural network and Support Vector Machine (SVM) to identify lung nodules. Fundamentally, these techniques target human torsos, offer very limited resolution for teeth, and involve ionizing radiation. The imaging processes are entirely different and induce different noise, so the image processing and segmentation techniques should also differ for good results. There is research [30,31,32] applying OCT to examine retinas. Additionally, Avanaki et al. [33] used networks to estimate the Rayleigh distribution of the scanned data for denoising, and Röhlig et al. [34] used the Multi-scale Convolutional Mixture of Experts (MCME) to locate the regions of interest. However, their targets were different from ours. As shown, we develop different rectification, quantization, and segmentation techniques for better results.
3. Swept Source Optical Coherence Tomography
Optical Coherence Tomography (OCT) is an interferometric and non-invasive 3D volumetric imaging technique [10,30,31,32]. Because it provides real-time sub-surface imaging without subject preparation or ionizing radiation, it is well suited for studying biological structures. For imaging, Swept Source Optical Coherence Tomography (SSOCT) [35] emits light of various frequencies onto the subject, and the interfered light is collected by the measurement sensor. The ratio of the emitted to received light at various frequencies determines the structural profile of the subject, i.e., a cross-sectional tomograph, by applying the inverse Fourier transform. This provides a better depth profile while using less scanning time. Traditionally, this technology is applied for eye examination [32], while Ortman et al. [6] introduced it for alveolar inspection, and Lai et al. [24] used it for tooth scanning and reconstruction. This work uses the hardware described by Lai et al. [24] for gum penetrative inspection of periodontal states, as shown in the left of Figure 1.
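As an illustration of this reconstruction step, the following is a minimal Python sketch that recovers one axial depth profile from a swept-source interference spectrum via the inverse FFT; the uniform-in-frequency sampling and the function name are simplifying assumptions, not the processing pipeline of [24].

```python
import numpy as np

def depth_profile(spectrum: np.ndarray) -> np.ndarray:
    """Reconstruct one axial scan (A-scan) from a swept source
    interference spectrum sampled uniformly in optical frequency.

    The inverse FFT maps the frequency-domain interference ratio to
    interference magnitude as a function of penetration depth.
    """
    return np.abs(np.fft.ifft(spectrum))
```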
Figure 1.
(a) The top-down ray based interference accumulation for various frequencies; (b) the cross-sectional interference for the slice of the scanned volume marked in red. We can determine the alveolar point for each slice, marked in cyan, from each cross-sectional map, but it is hard to locate the gumline from the same view. However, we find that we can identify the gumline, marked in yellow, from the amplitude projection.
4. Overview
While shooting, we assumed that the applier targeted the probe at the tooth-gingiva boundary and leveled the probe to be perpendicular to gravity. Therefore, as shown in Figure 2, we could define the capturing coordinate based on the scan to have X be the direction pointing to the ground, Y be the direction along which all slices align, and Z be the direction in which the OCT shoots, i.e., the shooting direction. Each gum penetrative cross-sectional slice can spatially provide the corresponding response from various depths, and our system intends to segment the responses for identification of gingiva, teeth, and alveolar bones. However, from the dentists' perspective, they generally would like to know the distance in the Z direction between the gumline and the alveolar line, as shown in the right of Figure 1. These 3D boundaries are hard to measure. We observed that while the OCT emitted at a specific frequency in a specific direction, only tissues at a specific depth could return interference based on the reaching strength, which decays exponentially with the penetrated tissues. Therefore, while accumulating the cross-sectional interferences along each shooting direction to form an amplitude projection, the topmost gingiva or teeth had the strongest interference and showed the dominant effect. As shown in the left of Figure 1, because gingiva and teeth have different hardness and hence obviously different air-gingiva and air-teeth responses, there exists an obvious boundary between teeth and gingiva, i.e., the gumline. As a result, this work intends to locate the gumline from the amplitude projection and use it as the basis line for periodontal inspection while using the nine-axis sensor to locate the slicing direction in order to locate the corresponding alveolar line for better periodontal inspection.
Figure 2.
After applying the OCT scanner and signal processing [24], we can gain 3D cross-sectional volumetric interference data. Our system first applies optical rectification and intensity quantization to process the volumetric data. Then, we compute the shooting amplitude projection and apply the OCT net to locate the gumline. Our system uses 2.5D Snake segmentation to locate the Region Of Interest (ROI) of each slice, quantizes it based on the properties of its ROI, and detects the alveolar line using our OCT net. Finally, we analyze the gumline and alveolar line for visualization and diagnosis.
Figure 2 illustrates our entire inspection process. After applying the OCT scanner [24] to gain the spatial interference patterns of various frequencies, we could use the fast Fourier transform to reconstruct a 3D cross-sectional volume for the target region. Our system applied the hybrid optical rectification [24] of traditional camera calibration and Thin-Plate Spline (TPS) to correct the lens distortions. Since newly available deep networks [36,37,38] proved their ability to identify various regions from noisy images, our system adapted a deep network, the OCT net, to automatically locate the gumline, i.e., the gingiva-tooth boundary, and the alveolar line, i.e., the gingiva-alveolar boundary, from the amplitude projection and the slices of the scanning volume, respectively. With enough training data, deep networks should be able to take the interference variations of personal differences, noises, scanning distances, and other factors into consideration. However, it is hard to collect a very large number of scanning volumes because of the huge amount of man power required for labeling and the need to fulfill the laws of clinical trials. Therefore, we applied the truncated direct logarithm [39] to map the interference values into a fixed visible range for identification of the gumline and the Region Of Interest (ROI). Then, our system accumulated the interference responses along the capturing Z direction, i.e., over various depths, to form the amplitude projection. Our OCT image network identified gingiva from the projection for the gumline. Generally, the air lying on the top of each slice provides very small interference, and the tooth roots located in the bottom half also provide little response because of the exponential decay. Both provide little information to the deep network, and thus, we adapted Snake segmentation [40] to locate informative regions for better quantization, sample collection, and effective classification. Our system applied the truncated direct logarithm [39] according to the properties of the Snake focused region in order to emphasize boundaries. We sliced the scanning volume along the X direction of the capturing coordinate and used another OCT image network to identify gingiva, teeth, and alveolar bones for the alveolar line. Finally, our system aligned the detected gumline and alveolar line for analysis and visualization along the X direction of the capturing coordinate for diagnosis.
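To make the projection step concrete, the following is a minimal sketch of the shooting amplitude projection, assuming the reconstructed interference volume is stored as an (X, Y, Z) floating-point array with Z as the shooting direction; the function name and array layout are illustrative, not the authors' implementation.

```python
import numpy as np

def amplitude_projection(volume: np.ndarray) -> np.ndarray:
    """Accumulate interference responses over all penetration depths
    (the Z axis) for every shooting position, yielding a 2D top-down
    amplitude projection.

    volume: array of shape (X, Y, Z) holding the cross-sectional
            interference magnitudes after FFT reconstruction.
    """
    # The reaching strength decays exponentially with depth, so the
    # accumulated value is dominated by the topmost gingiva or teeth.
    return np.abs(volume).sum(axis=2)
```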
5. Algorithmic Details
Although OCT imaging can penetrate gums, its captured data are generally noisy. In order to estimate periodontal states precisely, we must optically rectify and quantize the captured data while applying deep networks for precise boundary identification. Finally, we analyze and visualize the detected lines for periodontal diagnosis. The following details these stages.
5.1. Optical Rectification
Infrared rays were emitted and received through the lens, possibly inducing optical distortions. Therefore, we followed the same hybrid calibration process of traditional camera calibration and Thin-Plate Spline (TPS) [24] for the OCT-to-world transformation. We first collected a set of $N$ sampling location pairs $\{(\mathbf{p}_i, \mathbf{q}_i)\}_{i=1}^{N}$, where $\mathbf{p}_i$ denotes the OCT captured coordinate and $\mathbf{q}_i$ denotes the stage coordinate. We first determined the radial and tangential distortion coefficients by solving the standard camera calibration model over these pairs. Then, we formed two as-harmonic-as-possible functions, $f_X$ and $f_Y$, based on the $N$ sampling locations, where $\hat{\mathbf{p}}_i$ denotes the corrected coordinate and $\mathbf{q}_i$ denotes the stage coordinate. Our system minimized the bending energy of the Thin-Plate Spline (TPS) as
$E(f) = \sum_{i=1}^{N} \left( f(\hat{\mathbf{p}}_i) - q_{i,f} \right)^2 + \lambda \iint \left( f_{xx}^2 + 2 f_{xy}^2 + f_{yy}^2 \right) dx \, dy,$
where $f$ stands for $f_X$ and $f_Y$, respectively, and $q_{i,f}$ is the corresponding X or Y component of $\mathbf{q}_i$. We could then utilize the two functions to estimate the true world coordinate of any scanned point.
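The TPS stage can be sketched with SciPy's RBFInterpolator, which offers a thin-plate-spline kernel; this is an illustrative stand-in under the notation above (p are lens-corrected OCT coordinates, q stage coordinates), not the calibration code of [24].

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def fit_tps_map(p: np.ndarray, q: np.ndarray, smoothing: float = 0.0):
    """Fit a thin-plate-spline map from lens-corrected OCT coordinates
    p (N, 2) to stage coordinates q (N, 2).

    smoothing > 0 relaxes exact interpolation in favor of a lower
    bending energy, playing the role of the regularization weight.
    """
    return RBFInterpolator(p, q, kernel="thin_plate_spline",
                           smoothing=smoothing)

# Usage: world_xy = fit_tps_map(p, q)(scan_xy)  # scan_xy: (M, 2)
```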
5.2. Locate Effective Regions with 2.5D Snake
Putting full-resolution slices directly into training and testing raised the following three issues. First, the distance of the scanner to the target region varied, inducing variations in the slice; this in turn generally required more training data for more precise prediction. Second, the magnitude of the cross-sectional responses varied depending on the cross-sectional information along the Z direction of the capturing coordinate. Although deep networks can automatically find the best relationship among various pixels and cross-sections, they would require a large amount of marked data, which is generally time consuming and hard to achieve. Third, although the entire slice had more examples, the portions of air and the bottom tooth generally had very little response, i.e., these parts induced too many background examples and biased the training. Therefore, we first calibrated and quantized the slice to emphasize the boundaries for easy recognition. Additionally, we adapted Snake segmentation [40] to locate those regions of interest whose interferences were far from zero in order to remove excessive background training examples. This section gives the details of our adapted Snake segmentation, and the next section details the intensity calibration and quantization.

Snake segmentation [40] can locally find the cut to separate two materials, while GrabCut [41] must globally solve the optimal graph, which is more time consuming and hard to parallelize. Thus, we used a flexible 2D curve, $\mathbf{v}(s)$, moved inside a slice to minimize the designed energy for the depth response image in order to locate the boundary points. The energy is
$E_{snake}(\mathbf{v}) = E_{int}(\mathbf{v}) + E_{data}(\mathbf{v}) + E_{neigh}(\mathbf{v}),$
where $E_{int}$ is the internal energy based on the continuity and curvature of the Snake curve, $E_{data}$ is the data energy directly using the depth response for the indication of another material, and $E_{neigh}$ is the neighboring energy to take the boundary of the previous slice into consideration. We express the data term as
$E_{data} = w_I E_I + w_E E_E + w_D E_D,$
where $w_I$, $w_E$, and $w_D$ are weights for each term, set empirically in our experiments; $E_I$ is the intensity energy term based on the average value of a box kernel; $E_E$ is the edge energy term based on the gradient of a Gaussian kernel; and $E_D$ is the direction energy indicating the deviation between the gradient direction $\mathbf{g}$ and the boundary normal $\mathbf{n}$, because the gradient should align with the boundary normal, i.e., be perpendicular to the boundary, when converging.

While training and segmenting, we used slices of the 3D volumetric interference map. However, the data were actually a 3D volume, and neighboring slices should have spatial coherence. If we did not take this into consideration, the system could easily get stuck at local minima containing too much undesired background. Since the boundary surfaces should be locally smooth, the boundaries of two neighboring slices should be similar. In other words, the distance of the current boundary point to the boundary of the previous slice should be minimized. Thus, we have the neighbor energy as
$E_{neigh}(\mathbf{v}) = d(\mathbf{v}, B_{prev}),$
where $d(\mathbf{v}, B_{prev})$ is the distance of the curve points to the boundary $B_{prev}$ of the neighboring slice. In each slice, we used the maximal ratio boundary proposed by Lai et al. [24] as the initial curve and advanced it sequentially until converging.
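A compact sketch of how the energy of one candidate contour point could be evaluated is given below; the direction term $E_D$ is omitted for brevity, and the weights, signs, and precomputed maps are illustrative assumptions rather than our exact implementation.

```python
import numpy as np

def snake_energy(pt, prev_pt, next_pt, intensity_map, edge_map,
                 prev_slice_boundary, w=(1.0, 1.0, 1.0)):
    """Energy of one candidate contour point pt = (row, col).

    intensity_map, edge_map: 2D maps precomputed from the slice (box
        kernel average and Gaussian gradient magnitude, respectively).
    prev_slice_boundary: (K, 2) boundary points of the previous slice;
        the minimum distance to pt is the 2.5D neighbor energy.
    """
    w_int, w_data, w_nb = w
    r, c = pt
    pt = np.asarray(pt, dtype=float)
    # Internal energy: continuity + curvature along the curve.
    e_int = (np.linalg.norm(np.asarray(next_pt) - pt) +
             np.linalg.norm(np.asarray(prev_pt) + np.asarray(next_pt)
                            - 2.0 * pt))
    # Data energy: favor strong responses and edges (signs illustrative;
    # the direction term E_D of the text is omitted here).
    e_data = -intensity_map[r, c] - edge_map[r, c]
    # Neighbor energy: stay close to the previous slice's boundary.
    e_nb = np.min(np.linalg.norm(prev_slice_boundary - pt, axis=1))
    return w_int * e_int + w_data * e_data + w_nb * e_nb
```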
5.3. Inference Intensity Calibration and Quantization
The injected energy of our probe varied spatially with the injection directions, but it did not vary temporally. Therefore, we first used the OCT scanner to capture a planar platform, computed the amplitude projection, and used the projection to calibrate the injected energy to ensure response consistency across pixels. Based on interviews with OCT imaging analysts, quantized slices that meet the following criteria make it easier to locate teeth, gingiva, and alveolar bones. First, the left and right of each slice consist of air and teeth, respectively, and their interference responses should be very small. Second, it is important to identify the gumline and alveolar line, and thus, the gradient across these boundaries should be high for easy identification. Finally, as the penetration depth increases, the interference response decays, i.e., responses inside a homogeneous material should be similar.

Here, the goal of quantization, mapping real values to a series of fixed gray levels, is data visualization. Generally, there are four commonly used methods: equal interval (linear mapping), equal probability (histogram equalization), minimum variance, and histogram hyperbolization [39]. We adapted truncation logarithm quantization, which takes both dynamic range determination and noise reduction into consideration, to select a proper section of the responses and transform it to the visible range for later deep network training and identification. Additionally, while quantizing the volume, we had three different choices based on the quantized extent: pixel based, slice based, and volume based. Pixel based quantization, considering only a single pixel, may lose spatial coherence, and volume based quantization, taking the entire data into consideration, may miss local details. Additionally, our system applied our OCT net on 2D slices, and thus, it was more important to make the characteristics of each slice distinct.

We quantized the data based on slice information by first computing the logarithm of all pixels in the scanning volume. Next, we applied the adapted 2.5D Snake to locate the ROI of each slice. For each slice, we established the histogram of its ROI, found the low mode of the distribution, fit the mode with a normal distribution for the mean and standard deviation, used the mean as the truncation threshold, $t_l$, and set the maximal logarithmic intensity, $t_h$, as the maximum value of the ROI. For values smaller than $t_l$, we set the output to zero; for values larger than $t_h$, we set the output to 255; otherwise, we linearly mapped the values into $[0, 255]$.
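The per-slice quantization can be sketched as follows; approximating the normal-distribution fit of the low mode by the mean of samples near the histogram peak is a simplification, and the bin count and peak-search range are assumptions.

```python
import numpy as np

def quantize_slice(slice_log: np.ndarray, roi_mask: np.ndarray) -> np.ndarray:
    """Truncated-logarithm quantization of one slice.

    slice_log: log of the raw interference magnitudes of the slice.
    roi_mask:  boolean mask of the Snake-located region of interest.
    """
    roi = slice_log[roi_mask]
    hist, edges = np.histogram(roi, bins=256)
    # Low mode of the ROI distribution, assumed in the lower half of
    # the range; its local mean stands in for the fitted normal mean.
    peak = int(np.argmax(hist[: len(hist) // 2]))
    width = 8.0 * (edges[1] - edges[0])
    near = roi[(roi >= edges[peak] - width) & (roi <= edges[peak + 1] + width)]
    t_low = float(near.mean())        # truncation threshold t_l
    t_high = float(roi.max())         # maximal log intensity t_h in ROI
    scale = max(t_high - t_low, 1e-9)
    out = (slice_log - t_low) / scale * 255.0
    return np.clip(out, 0.0, 255.0).astype(np.uint8)
```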
5.4. Top Down Gingival Boundary Identification
As discussed in Section 4, the gumline, an important periodontal evaluation criterion, reveals itself as an obvious boundary between the gingiva and teeth in the amplitude projection. While using traditional segmentation techniques including Canny [42], LevelSet [43], and Snake [40], as shown in Figure 3, the results were unsatisfactory due to the noisy nature of the data. Therefore, we decided to apply newly available deep learning methods for its identification. Generally, SegNet [38] should be able to accomplish this task, but it requires a large amount of data for training, where manually labeling data is time consuming and in vivo data collection on patients requires a strenuous and cumbersome official application to the government administration. Therefore, this work first collected the data from our teammates, and a professional analyst manually labeled the gingiva and teeth. Additionally, instead of using the entire map for training, we applied a sliding window mechanism with a window size of 101 × 101 to slide through the collected maps for a reduction of the parameter number and an increase of the amount of data, where 101 was chosen based on our test of various sizes on our collected data.

As shown in Figure 2, our network first adapted the encoder structure of SegNet [38] with three stages for extracting important features and added three fully connected stages for classification. The encoder retained higher resolution features while reducing the number of parameters for a smaller training set, and the extracted features were fed into the fully connected decision network for integrated classification. Each encoder stage convolved the data with a filter bank to produce feature maps and batch normalized them. Then, it applied rectified linear non-linearity (ReLU) on each element and max pooling with a stride of 2 for sub-sampling by a factor of two. These two steps aimed at translation invariance over small spatial shifts and encoding larger image context. Additionally, we added a random drop-out step for better efficiency and accuracy. We used three such stages for robust classification. The output of the encoder was linearized for classification with ReLU, max pooling, and random drop-out. At the end, the decision stage output the probability of the classification. This work used the Mean Squared Error (MSE) as the loss function and the Adam gradient optimizer [44] for training optimization. While directly plugging the data into training, the learning bias toward background became too large. Therefore, we first separated the data into two categories, gingiva and background. For each iteration, our system randomly and evenly selected 128 examples from both categories for training due to the limitation of the GPU memory. The process repeated until it converged.
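A sketch of this patch classifier in PyTorch is shown below; the three-stage encoder with batch normalization, ReLU, stride-2 max pooling, and drop-out followed by fully connected decision layers follows the description above, while the channel widths, drop-out rate, and hidden size are assumptions.

```python
import torch
import torch.nn as nn

class OCTNet(nn.Module):
    """Patch classifier sketch: a three-stage SegNet-style encoder
    followed by fully connected decision layers (sizes assumed)."""

    def __init__(self, n_classes: int = 2, k: int = 7):
        super().__init__()

        def stage(cin: int, cout: int) -> nn.Sequential:
            return nn.Sequential(
                nn.Conv2d(cin, cout, k, padding=k // 2),  # filter bank
                nn.BatchNorm2d(cout),                     # batch norm
                nn.ReLU(inplace=True),                    # ReLU
                nn.MaxPool2d(2, stride=2),                # sub-sample x2
                nn.Dropout(0.25))                         # random drop-out

        self.encoder = nn.Sequential(stage(1, 16), stage(16, 32),
                                     stage(32, 64))
        self.head = nn.Sequential(
            nn.Flatten(), nn.LazyLinear(128), nn.ReLU(inplace=True),
            nn.Dropout(0.25), nn.Linear(128, n_classes))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, 1, 101, 101) sliding window patches.
        return torch.softmax(self.head(self.encoder(x)), dim=1)
```

Per the text, training would minimize the MSE between the softmax output and one-hot labels with the Adam optimizer over balanced 128-example minibatches.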
Figure 3.
This shows the boundary between gingiva and teeth, i.e., the gumline, identified by Canny [42], LevelSet [43], and Snake [40].
Thinning for the Gingival Boundary
After applying our OCT network on the amplitude projection, we had a probability distribution of the gingiva. Directly using a threshold easily results in disconnected, thick boundaries. Therefore, we applied the thinning method proposed by Zhang et al. [45] in the following steps. First, we binarized the probability map $P$ with a threshold $t$, set empirically in our experiment, to get the binary map $B$. Second, we went through all pixels and set the value of a pixel $P_1$ to zero if the following conditions were satisfied:
(a) $a \le B(P_1) \le b$; (b) $A(P_1) = 1$; (c) $P_2 \cdot P_4 \cdot P_6 = 0$; and (d) $P_4 \cdot P_6 \cdot P_8 = 0$,
where $a$ and $b$ are two user specified parameters set to two and six; $P_2, \ldots, P_9$ are the neighbors of the given pixel $P_1$, starting from the top neighbor and ordered clockwise; $B(P_1)$ is the number of nonzero neighbors; and $A(P_1)$ is the number of $0 \to 1$ transitions in the ordered sequence $P_2, P_3, \ldots, P_9, P_2$. In the second sub-iteration, conditions (c) and (d) become $P_2 \cdot P_4 \cdot P_8 = 0$ and $P_2 \cdot P_6 \cdot P_8 = 0$. Finally, we repeated the second step until the result remained unchanged within one iteration.
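For reference, a direct (unoptimized) Python rendering of this thinning procedure with the standard bounds $a = 2$ and $b = 6$ is:

```python
import numpy as np

def zhang_suen_thinning(binary: np.ndarray) -> np.ndarray:
    """Thin a binary map (nonzero = boundary) until no pixel changes."""
    img = (binary > 0).astype(np.uint8).copy()
    changed = True
    while changed:
        changed = False
        for step in (0, 1):                      # the two sub-iterations
            to_zero = []
            for r in range(1, img.shape[0] - 1):
                for c in range(1, img.shape[1] - 1):
                    if not img[r, c]:
                        continue
                    # P2..P9: the 8 neighbors, clockwise from the top.
                    p = [img[r-1, c], img[r-1, c+1], img[r, c+1],
                         img[r+1, c+1], img[r+1, c], img[r+1, c-1],
                         img[r, c-1], img[r-1, c-1]]
                    b = sum(p)                   # B(P1): nonzero neighbors
                    a = sum(p[i] == 0 and p[(i + 1) % 8] == 1
                            for i in range(8))   # A(P1): 0->1 transitions
                    if (2 <= b <= 6 and a == 1 and
                            p[0] * p[2] * p[4 if step == 0 else 6] == 0 and
                            p[2 if step == 0 else 0] * p[4] * p[6] == 0):
                        to_zero.append((r, c))
            for rc in to_zero:                   # delete after marking
                img[rc] = 0
            changed = changed or bool(to_zero)
    return img
```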
5.5. Volumetric Alveolar Bone Boundary Detection
Our system identified the alveolar line by locating the alveolar bones, which reveal themselves as brighter spots in the slices, using the same OCT net. The slice data contained a large portion of background, air, and teeth, and when directly plugged into the state-of-the-art SegNet [38], the net tended to label every pixel as background to attain a low loss. This required repeatedly adjusting the parameters for better results, and each iteration was time consuming. Thus, we reduced the training bias by using the 2.5D Snake to locate the ROI, as discussed in Section 5.2. Then, our system applied the sliding window mechanism for segmentation by finding the bounding box of the ROI and zero padding its boundaries so that all pixels became training examples, creating a set of images each labeled as background, teeth, gingiva, or alveolar bones. In order to have an even number of examples for each category, we first determined the number based on the allowed memory. Then, for each iteration, we randomly and evenly selected 64 training examples from the four categories to avoid bias. The process repeated until converging. While classifying, we zero padded the bounding box of the ROI to ensure that every interesting pixel could be classified.
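The balanced sampling step can be sketched as follows; the integer class encoding and the 4 × 16 split of the 64 examples are assumptions.

```python
import numpy as np

def balanced_batch(patches, labels, classes=(0, 1, 2, 3), per_class=16,
                   rng=None):
    """Draw an evenly stratified minibatch (here 4 x 16 = 64 examples)
    so that the dominant background class does not bias the training.
    Assumes every class in `classes` has at least one example."""
    rng = rng or np.random.default_rng()
    idx = np.concatenate([
        rng.choice(np.flatnonzero(labels == c), per_class, replace=True)
        for c in classes])
    rng.shuffle(idx)
    return patches[idx], labels[idx]
```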
6. Results
Our OCT based periodontal inspector can non-invasively examine periodontal conditions. We first used the OCT scanner to collect a set of tooth scans targeting the gumlines of subjects. Then, we designed ablation studies to evaluate our chosen stages. Finally, we analyzed the prediction precision of the detected gumlines and alveolar lines against ground truths. All the results in this section were run on a computer with an Intel Xeon E5-2698 v4 2.2 GHz CPU (20 cores), 256 GB DDR4 memory, and four NVIDIA Tesla V100 GPUs.
6.1. Periodontal Dataset
We collected 18 OCT in vivo scans targeting the tooth-gingiva boundaries from two subjects, whose ages were 23 and 40, respectively, and whose gums were healthy. We selected nine random sites from each subject. An analyst manually went through the 18 amplitude projections to label the gumlines and the regions of gingiva and teeth. Generally, it took about 15 s for an analyst to quantize a scan, 30 s to label the gumline on the projection, and 184 s to label the gingiva and tooth regions on a slice; in other words, it took about 46,000 s to label a whole scan. Then, for each slice, the analyst also labeled the regions of the background, gingiva, alveolar bones, and teeth, as shown in Figure 4. Later, our system adapted the sliding window mechanism to obtain a larger dataset by zero padding the boundaries to fully use all pixels of the 2D amplitude projections and 3D slices. To train, test, and validate the deep network, we randomly chose 12 scans for training, 3 for testing, and 3 for validation. When actually performing identification of the gumline and alveolar line, we zero padded the map to extend its width and height so that the output had the same dimensions as the input.
Figure 4.
This shows the volumetric segmentation results. From left to right are the inputs, the ground truths (marked manually), and the results of SegNet [38], ResNet [37], ours with a kernel size of five, and ours with a kernel size of seven. From top to bottom are the central slices from Validating Data 7 and 8.
6.2. Ablation Study in Locating Regions of Interest
Our framework proposed 2.5D Snake segmentation to locate regions of interest for the removal of redundant background regions. In order to evaluate its effectiveness, we conducted a comparison against the commonly used 2D Snake [40], LevelSet [43], and GrabCut [41]. We adapted 2D Snake [40], LevelSet [43], and GrabCut [41] from the OpenCV library to our framework and used their default settings to locate effective regions on each slice, as shown in Figure 5, and we measured the average per-slice running time of traditional Snake [40], our 2.5D Snake, LevelSet [43], and GrabCut [41]. We took the analyst labeled data and computed the Intersection over Union (IoU) between the ground truth and the various location methods, as shown in Table 1, where $IoU = |A_d \cap A_g| / |A_d \cup A_g|$, $A_d$ is the area of detection, and $A_g$ is the area of the ground truth. Generally, 2D Snake [40], LevelSet [43], and GrabCut [41] all start from the same initial condition and consider the properties of each slice independently. This makes them easily get stuck in noisy regions and take longer to converge. Our algorithm uses the detected boundary of the neighboring slice as an extra energy term. This can help the Snake walk over those disturbances for better results compared to traditional Snake segmentation. Compared to LevelSet, our algorithm is simpler, faster, and more stable. The adapted Snake is also simpler and has better locating rates than GrabCut.
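For completeness, the slice-based IoU of Table 1 reduces to a few lines over boolean masks:

```python
import numpy as np

def iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection over Union between a detected region and the
    ground-truth region (both boolean masks of the same shape)."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(inter) / float(union) if union else 0.0
```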
Figure 5.
From left to right are manual labeling (red), traditional Snake [40] (yellow), ours (blue), LevelSet [43] (green), and GrabCut [41] (pink).
Table 1.
The left half of the table shows the slice based IoU for 2.5D Snake, 2D Snake [40], LevelSet [43], and GrabCut [41]. The right half shows the average penalty score for our adapted Truncated Direct Logarithm (TDL), c-means Minimum Distortion [46] (MD), Information Expansion [39] (IE), and Maximum Entropy [39] (ME).
6.3. Ablation Study in Intensity Quantization
In order to reduce the required training data size, we normalized the interference slices for easier classification. We would like to evaluate its effectiveness, and thus, we first designed an evaluation metric based on the criteria described in Section 5.3 as follows.
We define the penalty score as
$S = w_b E_b + w_e E_e + w_t E_t,$
where $E_b$, $E_e$, and $E_t$ are the evaluation terms for the background, boundary, and target criteria, and $w_b$, $w_e$, and $w_t$ are their corresponding weights. This work set the boundary weight $w_e$ to the largest value in our experiment because the contrast across the boundary had a major influence on identification. First, Lai et al. [24] provided a boundary detection mechanism by locally connecting the first local gradient maxima in the Z direction. After penetrating any tissue, the signal decays exponentially, and therefore, we used a threshold, $t$, to locate the other side of the region of interest boundary. Different quantization algorithms may result in different brightness distributions, and thus, we computed the histogram of the preprocessed slice and set $t$ to be the third quartile of the first mode. We related the IoU between the located background and the ground-truth background to the background term $E_b$, where a lower IoU yields a higher penalty. Our system used the sum of all Z-direction distances to the boundary band as the boundary term $E_e$, where each distance is the Z-direction deviation between the detected boundary and the ground truth. The target term $E_t$ basically indicated how well the quantized result approximated the exponential decay inside the tissues relative to the ground truth. This can be approximated by the brightness distribution, and thus, we computed the cumulative histograms inside the detected and labeled target regions, computed their correlation, and set $E_t$ as the deviation of this correlation from one. We compared our adapted truncated direct logarithm against three commonly used quantization algorithms, including c-means minimum distortion [46] to minimize the quantization distortion, information expansion [39] to equalize the histogram, and maximum entropy [39] to minimize the information loss, and measured the average running time of each. The results are shown in the right of Table 1. Generally, our selected algorithm was simpler and more efficient, while its performance was generally more stable and robust in preserving the major boundaries and important tissue regions.
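An illustrative computation of the penalty score follows; the weight values, the 1 - IoU form of the background term, and the histogram-correlation form of the target term are assumptions consistent with the description above, not our exact formulation.

```python
import numpy as np

def penalty_score(det_bg, gt_bg, z_dist, det_chist, gt_chist,
                  w=(0.2, 0.6, 0.2)):
    """Composite penalty S = w_b*E_b + w_e*E_e + w_t*E_t (weights are
    placeholders; the boundary weight dominates, as in the text).

    det_bg, gt_bg: boolean background masks       -> E_b = 1 - IoU
    z_dist: per-column Z distances between the detected boundary and
            the ground truth                      -> E_e = sum |d_z|
    det_chist, gt_chist: cumulative histograms inside the detected and
            labeled target regions                -> E_t = 1 - corr
    """
    w_b, w_e, w_t = w
    inter = np.logical_and(det_bg, gt_bg).sum()
    union = np.logical_or(det_bg, gt_bg).sum()
    e_b = 1.0 - inter / union if union else 1.0
    e_e = float(np.sum(np.abs(z_dist)))
    e_t = 1.0 - float(np.corrcoef(det_chist, gt_chist)[0, 1])
    return w_b * e_b + w_e * e_e + w_t * e_t
```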
6.4. Periodontal Inspection
Our OCT network had several important parameters, including the learning rate and the kernel size. Surveying past deep learning research, we found that the learning rate was generally selected from a small set of values and the kernel size was generally selected among 5, 7, and 9. Therefore, we tested various combinations of these parameters for our network, as shown in Figure 6. Generally, the network converged roughly between 1000 and 2000 iterations, and it performed better with one particular learning rate and kernel sizes of five and seven. Therefore, we used these parameters for identification in both the 2D projections and the 3D slices.
Figure 6.
This shows the loss curves of the learning process for our OCT image network while using combinations of two learning rates and three kernel sizes, 5, 7, and 9.
In order to understand the effectiveness of our OCT network, we conducted a comparison against two commonly used networks, SegNet [38] and ResNet [37]. Using only 12 accumulation maps to train, 3 to test, and 3 to validate, SegNet [38] did not have enough information for good segmentation. Therefore, we used the same sliding window set. According to the window resolution, we reduced the number of layers in the encoder and decoder of SegNet, as shown in the left of Figure 7. While labeling gingiva, we zero padded the amplitude projection, cut the padded result into tiles, applied the trained SegNet on each of them, and stitched the outputs into the final result. We also reduced the number of layers and adjusted the structure of ResNet [37], as shown in the right of Figure 7, according to the window resolution, and trained the net with the same set. Then, our system zero padded the projection and applied the trained ResNet to each valid sliding region for gingival classification.
Figure 7.
This shows the adapted structures of SegNet [38] and ResNet [37] in this study.
Similarly, we used the same training set of 3D volumetric slices for SegNet [38] and ResNet [37]. While using SegNet to label teeth, gingiva, alveolar bones, and background, we zero padded the axis aligned bounding box of the region of interest detected by the adapted Snake to form tiles, applied the trained SegNet on each of them, and stitched the outputs into the final result. Similarly, our system zero padded the bounding box to ensure the classification of all interesting pixels and applied the trained ResNet to each valid sliding region for classification. Figure 4 shows the segmentation results of SegNet [38], ResNet [37], ours with a kernel size of five (Ours-5), and ours with a kernel size of seven (Ours-7). We also computed the average IoU of SegNet [38], ResNet [37], and ours with kernel sizes of five and seven, as shown in Table 2. After training, SegNet could perform well on the testing datasets, but when applied to the validating datasets, its performance deteriorated quickly. This may be due to the noisy nature of the OCT data. ResNet [37] performed comparably to our OCT net on the 2D amplitude projections, but our net outperformed ResNet on the 3D slices. Generally, our selected window resolution was not large enough for ResNet to demonstrate its strength. In contrast, our simplified OCT network could perform better by selecting important features from each patch and determining its labeling by integrating these correlated features.
Table 2.
This shows the average IoU of Data 1 and 3 for testing and Data 7 and 8 for validating while using SegNet [38], ResNet [37], Ours with a kernel size of 5 (Ours-5) where we only tested it on 3D slices, and Ours with a kernel size of 7 (Ours-7).
Since we intended to have a non-invasive inspection of the periodontal states, our system could directly draw the detected gumline and alveolar line on the top-down accumulation maps, as shown in Figure 8. At the same time, we also show the detection results of SegNet [38] and ResNet [37] along with the manual marking. We measured the average time for scanning, rectification, normalization and quantization, ROI location, gumline detection, and the segmentation of each slice. From scanning to visualization, the process would take about 20 min on a general computer. While using the NVIDIA DGX station, the process could be accelerated to 2 min. Clinically, dentists care more about the measurement in the gravity direction, and therefore, we computed the distance in the gravity direction between the boundaries detected by SegNet [38], ResNet [37], and ours and the manually marked ones for precise analysis. Table 3 shows the mean and maximal deviations of SegNet [38], ResNet [37], and ours.
Figure 8.
The left shows the gumline in solid lines and the alveolar line in dotted lines of Data 7 (the top) and 8 (the bottom) detected by an analyst in red, SegNet [38] in yellow, ResNet [37] in green, and ours in blue. The middle and right show the deviation analysis against the manually labeled ones for the gumline and the alveolar line, respectively.
Table 3.
This shows the MSE of the detected gumline and alveolar line of Data 1 and 3 for testing and Data 7 and 8 for validating while using SegNet [38], ResNet [37], and ours with a kernel size of 7 against the ground truths in units of mm, where Gin. denotes the gumline and Alv. denotes the alveolar line.
7. Conclusions
This work proposed a non-invasive framework for frequent periodontal inspection by estimating the gumline and alveolar line of the target region using optical coherence tomography. Our system optically rectified the scanning results for precise measurement. Furthermore, our system introduced newly available deep networks for boundary identification while using Snake segmentation and intensity calibration and quantization to locate possible boundary regions and signal ranges in order to reduce the required amount of training data and enhance the training efficiency. The results showed that our system could provide reliable estimations of both lines when compared to manually labeled results. However, the proposed system was not without limitations, and there are a few future research directions. First, our deep networks currently work on 2D images for both the amplitude projections and the 3D interference slices. However, the scanning volumes are actually 3D data, and we would like to apply 3D deep networks in order to take neighboring slices into consideration for possibly better segmentation accuracy. Second, the junctional epithelium is the bottom of the periodontal pocket, and dentists locate it by splitting suspended gingiva with a probe. However, the suspended gingiva are generally attached to the teeth, and this currently cannot be identified by OCT. In other words, our system currently cannot automatically identify the bottom of the periodontal pocket, i.e., the junctional epithelium, to estimate the pocket depth, because dentists cannot provide a proper indication for detection. Thus, we would like to follow the protocols used in manual inspections [14,15,16] to robustly locate the bottom in the OCT scans using the examining probe. Later, we can use these marked OCT scans to gain a better understanding in order to find good criteria for its identification. Third, using the nearest alveolar point from the gumline point in each slice is suboptimal; we should be able to improve the precision by reconstructing the alveolar bones and searching for the optimal alveolar line on the surface according to the gumline. Fourth, we have currently only collected samples from two healthy individuals. In order to evaluate the effectiveness, we would like to obtain governmental approval to apply the framework to patients and collect various samples from various individuals. Fifth, because we designed our inspection framework in separate stages, we only need to modify the interference quantization stage when acquiring data from a commercial OCT system, which should provide the rectification and interference calibration. After quantization, theoretically, the following stages should have similar performance. We would like to find a commercial probe to examine the effectiveness of our system. However, if its characteristics do not match the requirements of our net, appliers would be required to collect enough scans and label the projections and all slices of these scans.
Author Contributions
Conceptualization, Y.-C.L., C.-Y.Y., and S.Y.-L.; methodology, Y.-C.L., C.-Y.Y., and S.Y.-L.; software, C.-H.C., Z.-Q.C., J.-Y.L., D.-Y.L., K.-W.C., and I.-Y.C.; validation, C.-H.C., Z.-Q.C., J.-Y.L., and D.-Y.L.; formal analysis, Y.-C.L., J.-Y.L., and K.-W.C.; investigation, C.-H.C., Z.-Q.C., J.-Y.L., and D.-Y.L.; resources, Y.-C.L., C.-Y.Y., and S.Y.-L.; data curation, J.-Y.L. and D.-Y.L.; writing, original draft preparation, Y.-C.L., Z.-Q.C., and J.-Y.L.; writing, review and editing, Y.-Y.L., C.-H.C., Z.-Q.C., J.-Y.L., C.-Y.Y., and S.Y.-L.; visualization, Y.-C.L., C.-H.C., Z.-Q.C., J.-Y.L., K.-W.C., and I.-Y.C.; supervision, Y.-C.L., C.-Y.Y., and S.Y.-L.; project administration, Y.-C.L., C.-Y.Y., and S.Y.-L.; funding acquisition, Y.-C.L., C.-Y.Y., and S.Y.-L.
Funding
This work was financed by the Ministry of Science and Technology of Taiwan under Grants MOST108-2218-E-011-024, MOST107-2218-E-011-015, MOST106-3114-E-011-005, MOST107-2221-E-011-112-MY2, and MOST107-2221-E-011-114-MY2.
Acknowledgments
We thank those helping us develop the hardware and those doing the usability test.
Conflicts of Interest
The authors declare no conflict of interest.
Abbreviations
The following abbreviations are used in this manuscript:
| OCT | Optical Coherence Tomography |
| GPU | Graphics Processing Unit |
| 4PCS | Super Four Point Congruent Set |
| FFT | Fast Fourier Transform |
| CT | Computed Tomography |
| MRI | Magnetic Resonance Imaging |
| SSOCT | Swept Source Optical Coherence Tomography |
| AFG | Arbitrary Function Generator |
| ADC | Analog-to-Digital Converter |
| TPS | Thin-Plate Spline |
| RANSAC | RANdom SAmple Consensus |
| DT | Transfer of a slice of scanning data |
| Re | Optical Rectification |
| BD | Boundary Detection |
References
- Brown, J.; Löe, H. Prevalence, extent, severity and progression of periodontal disease. Periodontology 2000 1993, 2, 57–71. [Google Scholar] [CrossRef] [PubMed]
- Goodson, J.; Tanner, A.; Haffajee, A.; Sornberger, G.; Socransky, S. Patterns of progression and regression of advanced destructive periodontal disease. J. Clin. Periodontol. 1982, 9, 472–481. [Google Scholar] [CrossRef] [PubMed]
- Cercek, J.; Kiger, R.; Garrett, S.; Egelberg, J. Relative effects of plaque control and instrumentation on the clinical parameters of human periodontal disease. J. Clin. Periodontol. 1983, 10, 46–56. [Google Scholar] [CrossRef] [PubMed]
- Aeppli, D.; Boen, J.; Bandt, C. Measuring and interpreting increases in probing depth and attachment loss. J. Periodontol. 1985, 56, 262–264. [Google Scholar] [CrossRef]
- Xiang, X.; Sowa, M.; Iacopino, A.; Maev, R.; Hewko, M.; Man, A.; Liu, K.Z. An update on novel non-invasive approaches for periodontal diagnosis. J. Periodontol. 2010, 81, 186–198. [Google Scholar] [CrossRef]
- Ortman, L.; McHenry, K.; Hausmann, E. Relationship between alveolar bone measured by 125I absorptiometry with analysis of standardized radiographs: 2. Bjorn technique. J. Periodontol. 1982, 53, 311–314. [Google Scholar] [CrossRef]
- Jeffcoat, M. Assessment of periodontal disease progression: Application of new technology to conventional tools. Periodontal Case Rep. Publ. Northeast. Soc. Periodontists 1989, 11, 8. [Google Scholar]
- Jeffcoat, M.; Page, R.; Reddy, M.; Wannawisute, A.; Waite, P.; Palcanis, K.; Cogen, R.; Williams, R.; Basch, C. Use of digital radiography to demonstrate the potential of naproxen as an adjunct in the treatment of rapidly progressive periodontitis. J. Periodontal Res. 1991, 26, 415–421. [Google Scholar] [CrossRef]
- Colston, B.; Sathyam, U.; Dasilva, L.; Everett, M.J.; Stroeve, P.; Otis, L. Dental OCT. Opt. Express 1998, 3, 230–238. [Google Scholar] [CrossRef]
- Baumgartner, A.; Dichtl, S.; Hitzenberger, C.; Sattmann, H.; Robl, B.; Moritz, A.; Fercher, A.; Sperr, W. Polarization-Sensitive optical coherence tomography of dental structures. Caries Res. 2000, 34, 59–69. [Google Scholar] [CrossRef]
- Colston, B.; Everett, M.; Silva, L.; Otis, L.; Nathel, H. Optical Coherence Tomography for Diagnosing Periodontal Disease. Proc. SPIE 1997, 2973, 216–220. [Google Scholar]
- Baek, J.H.; Na, J.; Lee, B.H.; Choi, E.; Son, W.S. Optical approach to the periodontal ligament under orthodontic tooth movement: A preliminary study with optical coherence tomography. Am. J. Orthod. Dentofacial Orthop. 2009, 135, 252–259. [Google Scholar] [CrossRef] [PubMed]
- Wilder-Smith, P.; Holtzman, J.; Epstein, J.; Le, A. Optical diagnostics in the oral cavity: An overview. Oral Dis. 2010, 16, 717–728. [Google Scholar] [CrossRef] [PubMed]
- Mota, C.C.; Fernandes, L.O.; Cimões, R.; Gomes, A.S. Non-Invasive Periodontal Probing Through Fourier-Domain Optical Coherence Tomography. J. Periodontol. 2015, 86, 1087–1094. [Google Scholar] [CrossRef] [PubMed]
- Fernandes, L.O.; Mota, C.C.; de Melo, L.S.A.; da Costa Soares, M.U.S.; da Silva Feitosa, D.; Gomes, A.S.L. In vivo assessment of periodontal structures and measurement of gingival sulcus with Optical Coherence Tomography: A pilot study. J. Biophotonics 2017, 10, 862–869. [Google Scholar] [CrossRef] [PubMed]
- Fernandes, L.O.; Mota, C.C.; Oliveira, H.O.; Neves, J.K.; Santiago, L.M.; Gomes, A.L. Optical coherence tomography follow-up of patients treated from periodontal disease. J. Biophotonics 2019, 12, e201800209. [Google Scholar] [CrossRef]
- Teles, R.; Haffajee, A.; Socransky, S. Microbiological goals of periodontal therapy. Periodontology 2000 2006, 42, 180–218. [Google Scholar] [CrossRef]
- Greenstein, G. Microbiologic assessments to enhance periodontal diagnosis. J. Periodontol. 1988, 59, 508–515. [Google Scholar] [CrossRef]
- Lamster, I.; Celenti, R.; Jans, H.; Fine, J.; Grbic, J. Current status of tests for periodontal disease. Adv. Dent. Res. 1993, 7, 182–190. [Google Scholar] [CrossRef]
- Henegariu, O.; Heerema, N.; Dlouhy, S.; Vance, G.; Vogt, P. Multiplex PCR: Critical parameters and step-by-step protocol. Biotechniques 1997, 23, 504–511. [Google Scholar] [CrossRef]
- Hodge, P.; Michalowicz, B. Genetic predisposition to periodontitis in children and young adults. Periodontology 2000 2001, 26, 113–134. [Google Scholar] [CrossRef] [PubMed]
- Yoshie, H.; Kobayashi, T.; Tai, H.; Galicia, J. The role of genetic polymorphisms in periodontitis. Periodontology 2000 2007, 43, 102–132. [Google Scholar] [CrossRef] [PubMed]
- Huynh-Ba, G.; Lang, N.; Tonetti, M.; Salvi, G. The association of the composite IL-1 genotype with periodontitis progression and/or treatment outcomes: A systematic review. J. Clin. Periodontol. 2007, 34, 305–317. [Google Scholar] [CrossRef] [PubMed]
- Lai, Y.C.; Lin, J.Y.; Yao, C.Y.; Lyu, D.Y.; Lee, S.Y.; Chen, K.W.; Chen, I.Y. Interactive OCT-Based Tooth Scan and Reconstruction. Sensors 2019, 19, 4234. [Google Scholar] [CrossRef] [PubMed]
- Pereira, S.; Pinto, A.; Alves, V.; Silva, C. Brain tumor segmentation using convolutional neural networks in MRI images. IEEE Trans. Med. Imaging 2016, 35, 1240–1251. [Google Scholar] [CrossRef]
- Tustison, N.; Avants, B.; Cook, P.; Zheng, Y.; Egan, A.; Yushkevich, P.; Gee, J. N4ITK: Improved N3 bias correction. IEEE Trans. Med. Imaging 2010, 29, 1310. [Google Scholar] [CrossRef]
- Poudel, R.; Lamata, P.; Montana, G. Recurrent fully convolutional neural networks for multi-slice MRI cardiac segmentation. In Reconstruction, Segmentation, and Analysis of Medical Images; Springer: Cham, Switzerland, 2016; pp. 83–94. [Google Scholar]
- Suzuki, K.; Armato, S., III; Li, F.; Sone, S.; Doi, K. Massive training artificial neural network (MTANN) for reduction of false positives in computerized detection of lung nodules in low-dose computed tomography. Med. Phys. 2003, 30, 1602–1617. [Google Scholar] [CrossRef]
- Van Ginneken, B.; Setio, A.; Jacobs, C.; Ciompi, F. Off-the-shelf convolutional neural network features for pulmonary nodule detection in computed tomography scans. In Proceedings of the 2015 IEEE 12th International Symposium on Biomedical Imaging (ISBI), New York, NY, USA, 16–19 April 2015; pp. 286–289. [Google Scholar]
- Huang, D.; Swanson, E.; Lin, C.; Schuman, J.; Stinson, W.; Chang, W.; Hee, M.; Flotte, T.; Gregory, K.; Puliafito, C. Optical coherence tomography. Science 1991, 254, 1178–1181. [Google Scholar] [CrossRef]
- Schmitt, J. Optical coherence tomography (OCT): A review. IEEE J. Sel. Top. Quantum Electron. 1999, 5, 1205–1215. [Google Scholar] [CrossRef]
- Wollstein, G.; Schuman, J.; Price, L.; Aydin, A.; Beaton, S.; Stark, P.; Fujimoto, J.; Ishikawa, H. Optical coherence tomography (OCT) macular and peripapillary retinal nerve fiber layer measurements and automated visual fields. Am. J. Ophthalmol. 2004, 138, 218–225. [Google Scholar] [CrossRef]
- Avanaki, M.; Laissue, P.; Podoleanu, A.; Hojjat, A. Denoising based on noise parameter estimation in speckled OCT images using neural network. In Proceedings of the 1st Canterbury Workshop on Optical Coherence Tomography and Adaptive Optics, Canterbury, UK, 6–12 September 2008; Volume 7139, p. 71390E. [Google Scholar]
- Röhlig, M.; Rosenthal, P.; Schmidt, C.; Schumann, H.; Stachs, O. Visual Analysis of Optical Coherence Tomography Data in Ophthalmology. In Proceedings of the EuroVA@EuroVis, Barcelona, Spain, 12–13 June 2017; pp. 37–41. [Google Scholar]
- Potsaid, B.; Baumann, B.; Huang, D.; Barry, S.; Cable, A.E.; Schuman, J.S.; Duker, J.S.; Fujimoto, J.G. Ultrahigh speed 1050 nm swept source/Fourier domain OCT retinal and anterior segment imaging at 100,000 to 400,000 axial scans per second. Opt. Express 2010, 18, 20029–20048. [Google Scholar] [CrossRef] [PubMed]
- LeCun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient based learning applied to document recognition. Proc. IEEE 1998, 86, 2278–2324. [Google Scholar] [CrossRef]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
- Badrinarayanan, V.; Kendall, A.; Cipolla, R. SegNet: A deep convolutional encoder-decoder architecture for image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 2481–2495. [Google Scholar] [CrossRef] [PubMed]
- Yu, K.; Ji, L.; Wang, L.; Xue, P. How to optimize OCT image. Opt. Express 2001, 9, 24–35. [Google Scholar] [CrossRef] [PubMed]
- Kass, M.; Witkin, A.; Terzopoulos, D. Snakes: Active contour models. Int. J. Comput. Vis. 1988, 1, 321–331. [Google Scholar] [CrossRef]
- Li, Y.; Sun, J.; Tang, C.K.; Shum, H.Y. Lazy snapping. ACM Trans. Graph. 2004, 23, 303–308. [Google Scholar] [CrossRef]
- Canny, J. A computational approach to edge detection. In Readings in Computer Vision; Elsevier: Amsterdam, The Netherlands, 1987; pp. 184–203. [Google Scholar]
- Vese, L.; Chan, T. A multiphase level set framework for image segmentation using the Mumford and Shah model. Int. J. Comput. Vis. 2002, 50, 271–293. [Google Scholar] [CrossRef]
- Kingma, D.; Ba, J. Adam: A method for stochastic optimization. In Proceedings of the International Conference on Learning Representations (ICLR), San Diego, CA, USA, 7–9 May 2015; pp. 1–13. [Google Scholar]
- Zhang, T.; Suen, C. A Fast Parallel Algorithm for Thinning Digital Patterns. Commun. ACM 1984, 27, 236–239. [Google Scholar] [CrossRef]
- Friedman, M.; Abraham, K. Introduction to Pattern Recognition: Statistical, Structural, Neural, and Fuzzy Logic Approaches, 2nd ed.; World Scientific: London, UK, 1999. [Google Scholar]
Β© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).