Article

Development of Three-Dimensional Dental Scanning Apparatus Using Structured Illumination

Jae Sung Ahn, Anjin Park, Ju Wan Kim, Byeong Ha Lee and Joo Beom Eom
1 Medical Photonics Research Center, Korea Photonics Technology Institute (KOPTI), Gwangju 61007, Korea
2 Department of Biomedical Science and Engineering, Gwangju Institute of Science and Technology (GIST), Gwangju 61005, Korea
3 School of Information and Communications, Gwangju Institute of Science and Technology (GIST), Gwangju 61005, Korea
* Author to whom correspondence should be addressed.
Sensors 2017, 17(7), 1634; https://doi.org/10.3390/s17071634
Submission received: 19 June 2017 / Revised: 10 July 2017 / Accepted: 12 July 2017 / Published: 15 July 2017
(This article belongs to the Special Issue Imaging Depth Sensors—Sensors, Algorithms and Applications)

Abstract

We demonstrated a three-dimensional (3D) dental scanning apparatus based on structured illumination. A liquid lens was used to tune the focus, and a piezomotor stage was used to shift the structured light. A simple algorithm, which detects intensity modulation, was used to perform optical sectioning with structured illumination. We reconstructed a 3D point cloud, which represents the 3D coordinates of the digitized surface of a dental gypsum cast, by piling up the sectioned images. We then performed 3D registration of the individual 3D point clouds, which includes aligning and merging them, to produce a 3D model of the dental cast.

1. Introduction

Due to its non-destructiveness and speed, non-contact optical three-dimensional (3D) shape measurement has proven to be useful for a myriad of applications, such as obstacle detection for vehicle guidance, dimensional measurement for quality inspection in automated manufacturing systems, and human body scanning [1,2,3,4,5,6,7,8,9,10,11,12,13,14]. Various optical techniques, including the time-of-flight method, triangulation, stereography, confocal microscopy, interferometry, and fringe projection have been developed for measuring 3D shapes [15,16,17,18,19,20].
Among these applications, the medical applications of non-contact optical 3D shape measurement [8,9,10,11,12,13] have attracted much attention because of their advantages over conventional shape measurement methods using custom moulds [21,22,23]. In particular, there has been enormous interest in developing 3D shape measurement apparatuses for dental applications. The development of a 3D intraoral scanning apparatus, which takes a digital impression of the teeth rather than a conventional impression, has been a central part of research aimed at dental applications [24]. Since the introduction of digital impressions in the late 1980s, there have been various commercial attempts to use an intraoral scanning apparatus for obtaining digital impressions of the teeth [25,26,27,28]. However, there are still technical problems regarding the accuracy of the digital impressions and the scanning speed of the apparatus. Recently, it was reported that a 3D intraoral scanning apparatus using optical sectioning with structured illumination provides wide-field, high-resolution images compared with other 3D intraoral scanning apparatuses [29,30]. Owing to these advantages, a 3D intraoral scanning apparatus using optical sectioning with structured illumination appears to enable faster and more accurate acquisition of digital impressions. For depth scanning, however, 3D intraoral scanning apparatuses based on structured illumination require mechanical movement of a focusing lens, which calls for additional motor-driven actuators [31]. Recently, electrically tunable liquid lenses have been used for 3D imaging, 3D microscopy, and 3D orbital tracking because of their fast response and compactness [32,33,34].
In this paper, we demonstrate a 3D intraoral scanning apparatus based on structured illumination. We reduced the mechanical movement of optics inside the 3D dental scanning apparatus by using a piezo stage and a liquid lens, which replaced the motor-driven actuators of conventional scanning apparatuses; replacing the actuators also made the apparatus more compact. We recorded 2D images of a dental cast (gypsum teeth model) while varying the focus along the focal axis. At each focus, three consecutive images were captured while laterally shifting the structured illumination. We performed optical sectioning with structured illumination and reconstructed a 3D point cloud, which represents the 3D coordinates of the digitized surface of each tooth, by stacking the sectioned images along the focal axis. In addition, we performed 3D registration (3D model aligning and stitching) of the 3D point clouds of each tooth to build a 3D model of the dental cast.

2. Methods

Figure 1a shows a schematic of the 3D scanning apparatus. With the exception of the mirror tip (M), the experimental setup was sealed with a housing to block external noise (Figure 1b). The dimensions of the 3D scanning apparatus were 275 mm × 176 mm × 72 mm. We used collimated white light from a light-emitting diode (LED) (MCEP-CW8-070-3, Moritex, Saitama, Japan) as the illumination source. A linear polarizer was used to select the vertically polarized component of the illumination. The collimated beam illuminated a Ronchi ruling (1″ × 1″, 20 lp/mm; Ronchi Ruling #58-777, Edmund Optics, Barrington, NJ, USA) with a 50 μm period. The distance between the imaging lens and the Ronchi ruling was the same as the distance between the imaging lens and the camera. The structured light was reflected off a polarizing beam splitter (PBS) and passed through an imaging lens (L) (f = 50 mm), a tunable lens (TL) (EL-10-30-C, Optotune AG), and a long working distance objective lens (OBJ, working distance: 55 mm). The structured light was projected onto the sample surface, and the overlapped images of the structured light and the sample were recorded by a CMOS (complementary metal–oxide–semiconductor) camera (frame rate: 170 fps, MQ022MG-CM, Ximea, Münster, Germany). Note that cross-polarization detection was used to remove internal reflections from the beam splitter and to enhance the contrast ratio of the recorded images. We used a focus-tunable lens, instead of a combination of a solid lens and a mechanical actuator, to shift the focus of the objective lens along the optical axis. The tunable lens could tune the working distance of the objective lens from 30 to 130 mm by applying a current of 0 to 250 mA in increments of 2.5 mA. We verified the linearity and the axial step of the working-distance tuning by imaging a depth-of-field target (Depth of Field Target 5-15 #54-440, Edmund Optics); the axial step of the working-distance tuning was 100 μm. An advantage of the tunable lens is that its focus can be tuned within a few milliseconds. For each focal length, we took three consecutive images of the sample, translating the Ronchi ruling along the direction perpendicular to the optical axis. The Ronchi ruling was translated by a piezo stage (Q-522, Physik Instrumente, Karlsruhe, Germany) in steps of (50/3) μm across (100/3) μm. We recorded 300 images for the reconstruction of the 3D model at a single position. We used a gypsum dental cast (gypsum teeth model) for the reconstruction of the 3D point cloud models and for the 3D registration of the individual point cloud models. For better quantification of the overlap between adjacent scanning areas, the gypsum dental cast was realigned along a straight line. At present, the scanning time (~1.9 s per vertical scan) of our 3D scanning apparatus is insufficient for hand-held operation, because it is restricted by the sensitivity and speed of the CMOS camera. The sample was mounted on a linear motorized translation stage (M-ILS200CC, Newport, Irvine, CA, USA) and was translated in steps of 0.5 mm for each scan across a distance of 50 mm.
Figure 2 shows a picture of the gypsum dental cast and the scanning process used to reconstruct a 3D model of the entire cast. We took top-view images of the dental cast while varying the focal length of the objective lens from the top to the bottom of the cast (vertical scan). After each vertical scan was complete, the dental cast was translated laterally by 0.5 mm (lateral scan) and the next vertical scan was performed. A single scan along a straight line across the entire dental cast took approximately 190 s (100 lateral positions at roughly 1.9 s per vertical scan).
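To make the scanning sequence concrete, the following Python sketch outlines one possible control loop for the acquisition described above (100 focal steps × 3 fringe shifts = 300 frames per position, repeated over 100 lateral positions). It is a minimal sketch, not the authors' control software: the hardware functions set_lens_current, move_piezo, move_sample_stage, and capture_frame are hypothetical placeholders.

```python
import numpy as np

# Hypothetical hardware wrappers; placeholders for the actual device drivers,
# which are not described in the paper.
def set_lens_current(milliamps):      # focus-tunable lens (EL-10-30-C)
    pass

def move_piezo(position_um):          # piezo stage carrying the Ronchi ruling
    pass

def move_sample_stage(position_mm):   # motorized stage carrying the dental cast
    pass

def capture_frame():                  # CMOS camera; dummy frame for this sketch
    return np.zeros((512, 512))

CURRENT_STEP_MA = 2.5                                # lens current increment per focal step
N_FOCAL_STEPS = 100                                  # 100 slices over a 10 mm sectioning depth
PHASE_SHIFTS_UM = (0.0, 50.0 / 3.0, 100.0 / 3.0)     # T/3 shifts of the 50 um ruling
LATERAL_STEP_MM = 0.5
N_LATERAL_POSITIONS = 100                            # 50 mm of travel in 0.5 mm steps

def vertical_scan():
    """Acquire three phase-shifted frames at each focal step (300 frames in total)."""
    frames = []
    for k in range(N_FOCAL_STEPS):
        set_lens_current(k * CURRENT_STEP_MA)        # next focal plane (100 um axial step)
        for shift in PHASE_SHIFTS_UM:
            move_piezo(shift)                        # laterally shift the fringe pattern
            frames.append(capture_frame())
    return np.stack(frames).reshape((N_FOCAL_STEPS, len(PHASE_SHIFTS_UM)) + frames[0].shape)

def full_scan():
    """Repeat the vertical scan at each lateral position of the dental cast."""
    scans = []
    for i in range(N_LATERAL_POSITIONS):
        move_sample_stage(i * LATERAL_STEP_MM)
        scans.append(vertical_scan())
    return scans
```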

3. Results

We performed optical sectioning using the recorded raw images of the dental cast. Figure 3a,b show the images and the magnified fragments of the dental cast from the same focal plane. Sinusoidal fringe patterns generated by the Ronchi ruling were overlaid onto the raw images of the dental cast. The sinusoidal fringe patterns were only visible within the in-focus area because the Ronchi ruling and the CMOS camera were located at the same distance from the imaging lens. The patterns were shifted by T/3 between each image, where T is the periodicity of the patterns (Figure 3c).
For the optical sectioning, we only used the intensity of each pixel of the raw images rather than the phase unwrapping method [35,36,37]. Sectioned images were obtained from the root-mean-square (RMS) of the sum of the squared differences between the raw images from the same focal plane. The modulation contrast of each pixel of the sectioned image, $I_{mn,\mathrm{sectioned}}$, was defined as:

$$I_{mn,\mathrm{sectioned}} = \frac{1}{3}\left\{ \left(I_{mn,1}-I_{mn,2}\right)^{2} + \left(I_{mn,2}-I_{mn,3}\right)^{2} + \left(I_{mn,3}-I_{mn,1}\right)^{2} \right\}^{\frac{1}{2}} \quad (1)$$

where $I_{mn,N}$ (N = 1, 2, 3) is the intensity of the pixel located at coordinate (m, n) of the Nth raw image. The raw images of the dental cast and the corresponding sectioned images are depicted in Figure 4. In the sectioned images, white represents the maximum intensity of the optical sectioning and indicates a perfectly in-focus state. The out-of-focus background in the sectioned images was removed by setting a threshold on the intensity modulation and decoding the in-focus information. The lateral resolution of the sectioned images, which was determined by the periodicity of the sinusoidal fringe pattern, is 50 μm. We calculated the axial distance between each sectioned plane and the exit pupil of the objective lens, treating the tunable lens and the objective lens as a single optical element. We reconstructed 3D point clouds of the teeth based on the 3D coordinates and intensity data from the optical sectioning.
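As a concrete illustration of Equation (1) and the threshold-based background suppression, a minimal NumPy sketch of the sectioning step could look like the following; the threshold handling is an assumption, since the paper does not state the exact value or scheme used.

```python
import numpy as np

def section_image(i1, i2, i3, threshold=None):
    """Modulation contrast (Equation (1)) from three raw images of the same focal
    plane, taken with the fringe pattern shifted by T/3 between exposures."""
    i1, i2, i3 = (np.asarray(i, dtype=float) for i in (i1, i2, i3))
    sectioned = np.sqrt((i1 - i2) ** 2 + (i2 - i3) ** 2 + (i3 - i1) ** 2) / 3.0
    if threshold is not None:
        # Suppress the out-of-focus background: keep only pixels whose modulation
        # exceeds the threshold (assumed scheme; the paper gives no exact recipe).
        sectioned = np.where(sectioned > threshold, sectioned, 0.0)
    return sectioned
```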
Figure 5 shows a picture of the dental cast and the 3D point cloud models (polygonised surfaces) obtained from different scanning positions. We combined the sectioned images in a single volume to show different areas of the dental cast (see the videos in the online Supplementary Materials for rotating views of the reconstructed 3D models of the dental cast). The lateral field of view (FOV) of the 3D scanning apparatus was 11.27 mm × 6 mm. During the background removal process, some in-focus information was sacrificed, which reduced the lateral area of the reconstructed 3D models. The optical sectioning depth was 10 mm, sliced into 100 layers at an axial stepping interval of 100 μm. Since the axial stepping interval was much larger than the focal depth of the objective lens, the axial resolution of the 3D point cloud model was equal to the axial stepping interval.
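One straightforward way to realize the stacking step described above is to keep, for each pixel, the focal slice with the strongest modulation and convert its slice index into a depth. The sketch below is a simplified illustration rather than the authors' implementation: the lateral pixel pitch (assumed here to equal the 50 μm lateral resolution) and the flat slice-index-to-depth mapping (100 μm per slice) stand in for the actual calibration against the exit pupil of the objective lens.

```python
import numpy as np

def stack_to_point_cloud(sectioned_stack, axial_step_mm=0.1,
                         pixel_pitch_mm=0.05, threshold=0.0):
    """Convert a stack of sectioned images (n_slices x H x W) into an N x 3 array
    of (x, y, z) surface points by keeping the best-focus slice for each pixel."""
    stack = np.asarray(sectioned_stack, dtype=float)
    best_slice = np.argmax(stack, axis=0)        # focal-slice index of maximum modulation
    best_value = np.max(stack, axis=0)
    valid = best_value > threshold               # drop pixels without usable in-focus signal
    rows, cols = np.nonzero(valid)
    x = cols * pixel_pitch_mm                    # assumed uniform lateral sampling
    y = rows * pixel_pitch_mm
    z = best_slice[rows, cols] * axial_step_mm   # depth from the slice index (100 um step)
    return np.column_stack((x, y, z))
```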
We performed 3D registration (3D model stitching) to reconstruct a 3D model of the dental cast, whose dimensions exceed the FOV of the scanning apparatus. Multiple 3D point clouds from adjacent scanning positions were stitched together based on distinctive features of the point clouds. To reconstruct a 3D model of the dental cast from a sequence of 3D point clouds, the iterative closest point (ICP) algorithm [38] was used. The ICP algorithm performs pairwise registration, in which adjacent 3D point clouds are aligned. The implementation iteratively finds corresponding points between two 3D point clouds and estimates the rigid transformation that minimizes the distance between the corresponding point pairs, thereby aligning the latter point cloud with the former. To compose the dental cast, the ICP algorithm was applied repeatedly to the successively scanned 3D point clouds. The first point cloud was used to establish the reference coordinate system; each subsequent point cloud was then transformed into the reference coordinate system, the transformation being the product of the successive pairwise transformations. Figure 6 shows the 3D point cloud model of the dental cast reconstructed from the 3D registration based on the Point Cloud Library [39].
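For illustration, the pairwise alignment step can be sketched as a minimal point-to-point ICP loop in NumPy/SciPy. This is a didactic sketch of the algorithm outlined above, not the Point Cloud Library implementation actually used; practical details such as downsampling, outlier rejection, and convergence tests are omitted.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src points onto dst points."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    h = (src - src_c).T @ (dst - dst_c)
    u, _, vt = np.linalg.svd(h)
    r = vt.T @ u.T
    if np.linalg.det(r) < 0:                     # guard against reflections
        vt[-1, :] *= -1
        r = vt.T @ u.T
    t = dst_c - r @ src_c
    return r, t

def icp(source, target, n_iterations=50):
    """Point-to-point ICP: iterate nearest-neighbour correspondence search and
    rigid-transform estimation to align `source` with `target`."""
    src = np.asarray(source, dtype=float).copy()
    tgt = np.asarray(target, dtype=float)
    tree = cKDTree(tgt)
    transform = np.eye(4)                        # accumulated 4x4 rigid transform
    for _ in range(n_iterations):
        _, idx = tree.query(src)                 # closest target point for each source point
        r, t = best_rigid_transform(src, tgt[idx])
        src = src @ r.T + t
        step = np.eye(4)
        step[:3, :3], step[:3, 3] = r, t
        transform = step @ transform
    return transform, src
```

Chaining successive scans into the reference frame of the first point cloud then amounts to multiplying the pairwise 4 × 4 transforms, e.g. T_k = T_(k-1) · P_k, where P_k is the transform that aligns scan k with scan k-1.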

4. Discussion

We demonstrated a 3D dental scanning apparatus based on structured illumination and suggested a simple algorithm for 3D reconstruction. In addition, we performed 3D registration of the 3D point cloud models from each scanning position to build a 3D model of the dental gypsum cast. We constructed a fast and compact 3D scanning apparatus with a liquid tunable lens and a piezomotor stage. The elapsed time for a full scan was approximately 190 s, and the physical dimensions of the apparatus were 275 mm × 176 mm × 72 mm. With this 3D scanning apparatus, we reconstructed 3D point cloud models of a dental gypsum cast. The axial resolution of the 3D point cloud models was 100 μm, which coincides with the axial step of the working-distance tuning of the 3D scanning apparatus. The accuracy of a commercial 3D intraoral scanner based on the stereovision technique (Cerec Omnicam, Dentsply Sirona, York, PA, USA) was reported to be 149 μm [40]. The accuracy of a commercial intraoral scanner based on structured illumination (Trios 3, 3shape) has not yet been officially reported. At present, the scanning speed of our 3D scanning apparatus is restricted by the sensitivity and speed of the CMOS camera; scanning could be made faster with more sensitive and faster custom-built CMOS sensors. Regarding the scanning speed of commercial 3D intraoral scanners, scanning a full arch took 4 min 18 s with the Cerec Omnicam and 30 s with the Trios 3. However, the Trios 3 uses a CMOS sensor operating at 3000 frames per second and is currently the most expensive intraoral scanner. We expect our results will contribute to the development of faster and more precise 3D intraoral scanning apparatuses. Moreover, our research will pave the way for further investigation of non-contact 3D shape measurements.

Supplementary Materials

The following are available online at https://www.mdpi.com/1424-8220/17/7/1634/s1, Videos S1–S3: rotating views of the reconstructed 3D models of the dental cast.

Acknowledgments

This work was supported by the Industrial Strategic Technology Development Program (10048888) funded by the Ministry of Trade, Industry & Energy (MOTIE, Korea).

Author Contributions

J.S.A. and J.B.E. conceived and designed the experiments; J.S.A. and J.W.K. performed the experiments; A.P. and B.H.L. analyzed the data; J.S.A. wrote the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Nedevschi, S.; Danescu, R.; Frentiu, D.; Marita, T.; Oniga, F.; Pocol, C.; Schmidt, R.; Graf, T. High accuracy stereo vision system for far distance obstacle detection. In Proceedings of the 2004 IEEE Intelligent Vehicles Symposium, Parma, Italy, 14–17 June 2004; pp. 292–297. [Google Scholar]
  2. Broggi, A.; Caraffi, C.; Porta, P.P.; Zani, P. The single frame stereo vision system for reliable obstacle detection used during the 2005 DARPA grand challenge on TerraMax. In Proceedings of the IEEE Intelligent Transportation Systems Conference, Rio de Janeiro, Brazil, 1–4 November 2006; pp. 745–752. [Google Scholar]
  3. Huang, W.; Kovacevic, R. A Laser-Based Vision System for Weld Quality Inspection. Sensors 2011, 11, 506–521. [Google Scholar] [CrossRef] [PubMed]
  4. Matsubara, A.; Yamazaki, T.; Ikenaga, S. Non-contact measurement of spindle stiffness by using magnetic loading device. Int. J. Mach. Tools Manuf. 2013, 71, 20–25. [Google Scholar] [CrossRef]
  5. Schwenke, H.; Neuschaefer-Rube, U.; Pfeifer, T.; Kunzmann, H. Optical methods for dimensional metrology in production engineering. CIRP Ann. Manuf. Technol. 2002, 51, 685–699. [Google Scholar]
  6. Istook, C.L.; Hwang, S.-J. 3D body scanning systems with application to the apparel industry. Fash. Mark. Manag. 2001, 5, 120–132. [Google Scholar] [CrossRef]
  7. Ashdown, S.P.; Loker, S.; Schoenfelder, K.; Lyman-Clarke, L. Using 3D scans for fit analysis. J. Text. Appar. Technol. Manag. 2004, 4, 1–12. [Google Scholar]
  8. Hajeer, M.; Millett, D.; Ayoub, A.; Siebert, J. Current Products and Practices: Applications of 3D imaging in orthodontics: Part I. J. Orthod. 2004, 31, 62–70. [Google Scholar] [CrossRef] [PubMed]
  9. Zhou, H.; Hu, H. Human motion tracking for rehabilitation—A survey. Biomed. Signal Process. Control 2008, 3, 1–18. [Google Scholar]
  10. Kusnoto, B.; Evans, C.A. Reliability of a 3D surface laser scanner for orthodontic applications. Am. J. Orthod. Dentofac. Orthop. 2002, 122, 342–348. [Google Scholar]
  11. Kovacs, L.; Zimmermann, A.; Brockmann, G.; Gühring, M.; Baurecht, H.; Papadopulos, N.; Schwenzer-Zimmerer, K.; Sader, R.; Biemer, E.; Zeilhofer, H. Three-dimensional recording of the human face with a 3D laser scanner. J. Plast. Reconstr. Aesthet. Surg. 2006, 59, 1193–1202. [Google Scholar] [CrossRef] [PubMed]
  12. Maier-Hein, L.; Mountney, P.; Bartoli, A.; Elhawary, H.; Elson, D.; Groch, A.; Kolb, A.; Rodrigues, M.; Sorger, J.; Speidel, S.; et al. Optical techniques for 3D surface reconstruction in computer-assisted laparoscopic surgery. Med. Image Anal. 2013, 17, 974–996. [Google Scholar] [PubMed]
  13. Yatabe, K.; Ishikawa, K.; Oikawa, Y. Compensation of fringe distortion for phase-shifting three-dimensional shape measurement by inverse map estimation. Appl. Opt. 2016, 55, 6017–6024. [Google Scholar] [CrossRef] [PubMed]
  14. Zhang, Z. Microsoft kinect sensor and its effect. IEEE Multimed. 2012, 19, 4–10. [Google Scholar] [CrossRef]
  15. Cui, Y.; Schuon, S.; Chan, D.; Thrun, S.; Theobalt, C. 3D shape scanning with a time-of-flight camera. In Proceedings of the 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Francisco, CA, USA, 13–18 June 2010; pp. 1173–1180. [Google Scholar]
  16. Fu, G.; Menciassi, A.; Dario, P. Development of a low-cost active 3D triangulation laser scanner for indoor navigation of miniature mobile robots. Robot. Auton. Syst. 2012, 60, 1317–1326. [Google Scholar] [CrossRef]
  17. Kieu, H.; Pan, T.; Wang, Z.; Le, M.; Nguyen, H.; Vo, M. Accurate 3D shape measurement of multiple separate objects with stereo vision. Meas. Sci. Technol. 2014, 25, 035401. [Google Scholar] [CrossRef]
  18. Ghim, Y.-S.; Rhee, H.-G.; Davies, A.; Yang, H.-S.; Lee, Y.-W. 3D surface mapping of freeform optics using wavelength scanning lateral shearing interferometry. Opt. Exp. 2014, 22, 5098–5105. [Google Scholar] [CrossRef] [PubMed]
  19. Zhang, S. Recent progresses on real-time 3D shape measurement using digital fringe projection techniques. Opt. Lasers Eng. 2010, 48, 149–158. [Google Scholar] [CrossRef]
  20. Chen, L.-C.; Huang, C.-C. Miniaturized 3D surface profilometer using digital fringe projection. Meas. Sci. Technol. 2005, 16, 1601. [Google Scholar] [CrossRef]
  21. Birkholz, U.; Haertl, C.; Nassler, P. Method and apparatus for producing an ear impression. U.S. Patent 4,834,927, 30 May 1989. [Google Scholar]
  22. Pantino, D.A. Method of making a face mask from a facial impression and of gas delivery. U.S. Patent 5,832,918, 10 November 1998. [Google Scholar]
  23. Von Nostitz, F.H. Dental impression tray and process for the use thereof. U.S. Patent 4,569,342, 11 February 1986. [Google Scholar]
  24. Christensen, G.J. Impressions are changing: Deciding on conventional, digital or digital plus in-office milling. J. Am. Dent. Assoc. 2009, 140, 1301–1304. [Google Scholar] [CrossRef] [PubMed]
  25. Duret, F.; Termoz, C. Method of and apparatus for making a prosthesis, especially a dental prosthesis. U.S. Patent 4,663,720, 5 May 1987. [Google Scholar]
  26. Brandestini, M.; Moermann, W.H. Method and apparatus for the three-dimensional registration and display of prepared teeth. U.S. Patent 4,837,732, 6 June 1989. [Google Scholar]
  27. Taneva, E.; Kusnoto, B.; Evans, C.A. 3D Scanning, Imaging, and Printing in Orthodontics. Issues in Contemporary Orthodontics 2015. Available online: https://www.intechopen.com/books/issues-in-contemporary-orthodontics/3d-scanning-imaging-and-printing-in-orthodontics (accessed on 15 July 2017).
  28. Logozzo, S.; Zanetti, E.M.; Franceschini, G.; Kilpelä, A.; Mäkynen, A. Recent advances in dental optics–Part I: 3D intraoral scanners for restorative dentistry. Opt. Lasers Eng. 2014, 54, 203–221. [Google Scholar] [CrossRef]
  29. Ender, A.; Mehl, A. Full arch scans: conventional versus digital impressions–an in-vitro study. Int. J. Comput. Dent. 2011, 14, 11–21. [Google Scholar] [PubMed]
  30. Ender, A.; Attin, T.; Mehl, A. In vivo precision of conventional and digital methods of obtaining complete-arch dental impressions. J. Prosthet. Dent. 2016, 115, 313–320. [Google Scholar] [CrossRef] [PubMed]
  31. Fisker, R.; Öjelund, H.; Kjær, R.; van der Poel, M.; Qazi, A.A.; Hollenbeck, K.-J. Focus scanning apparatus. U.S. Patent 20120092461A1, 19 April 2012. [Google Scholar]
  32. Doblas, A.; Sánchez-Ortiga, E.; Saavedra, G.; Sola-Pikabea, J.; Martínez-Corral, M.; Hsieh, P.-Y.; Huang, Y.-P. Three-dimensional microscopy through liquid-lens axial scanning. Proc. SPIE 2015, 9495. [Google Scholar] [CrossRef]
  33. Annibale, P.; Dvornikov, A.; Gratton, E. Electrically tunable lens speeds up 3D orbital tracking. Biomed. Opt. Exp. 2015, 6, 2181–2190. [Google Scholar] [CrossRef] [PubMed]
  34. Pokorny, P.; Miks, A. 3D optical two-mirror scanner with focus-tunable lens. Appl. Opt. 2015, 54, 6955–6960. [Google Scholar] [CrossRef] [PubMed]
  35. Qian, J.; Lei, M.; Dan, D.; Yao, B.; Zhou, X.; Yang, Y.; Yan, S.; Min, J.; Yu, X. Full-color structured illumination optical sectioning microscopy. Sci. Rep. 2015, 5, 14513. [Google Scholar] [CrossRef] [PubMed]
  36. Lohry, W.; Zhang, S. High-speed absolute three-dimensional shape measurement using three binary dithered patterns. Opt. Exp. 2014, 22, 26752. [Google Scholar] [CrossRef] [PubMed]
  37. Zhang, S.; van der Weide, D.; Oliver, J. Superfast phase-shifting method for 3-D shape measurement. Opt. Exp. 2010, 18, 9684. [Google Scholar] [CrossRef] [PubMed]
  38. Besl, P.J.; McKay, H.D. A method for registration of 3-D shapes. IEEE Trans. Pattern Anal. Mach. Intell. 1992, 14, 239–256. [Google Scholar] [CrossRef]
  39. Holz, D.; Ichim, A.E.; Tombari, F.; Rusu, R.B.; Behnke, S. Registration with the Point Cloud Library: A Modular Framework for Aligning in 3-D. IEEE Robot. Autom. Mag. 2015, 22, 110–124. [Google Scholar] [CrossRef]
  40. Boeddinghaus, M.; Breloer, E.S.; Rehmann, P.; Wöstmann, B. Accuracy of single-tooth restorations based on intraoral digital and conventional impressions in patients. Clin. Oral Investig. 2015, 19, 2027–2034. [Google Scholar] [CrossRef] [PubMed]
Figure 1. (a) Schematic of the experimental setup. The structured light pattern, which was generated by a Ronchi ruling, was focused onto the sample. The scattered light from the sample was collected by an objective lens and recorded by a camera. The ⊙ and arrow signs (red) represent the polarization of the illuminated and scattered light, respectively. (M: mirror, OBJ: objective lens, TL: tunable lens, L: lens, PBS: polarizing beam splitter, CAM: camera, PS: piezo stage, LP: linear polarizer). (b) Picture of the three-dimensional scanning apparatus. The dimensions of the apparatus are 275 mm × 176 mm × 72 mm. Scale bar: 100 mm.
Figure 2. Picture of the dental cast (gypsum model) and the images captured by the scanning apparatus from the top view. Scanning process: images of the dental cast were recorded while varying the objective lens focal length (vertical scan). The dental cast was moved laterally after each vertical scan was finished (lateral scan). Scale bar: 10 mm.
Figure 3. (a) Images (top row) of the dental gypsum cast and (b) magnified images (bottom row) of the in-focus area. Sinusoidal fringe patterns were visualized only within the in-focus areas. (c) The intensity profiles of the magnified images (Figure 3b) along the horizontal axis of the images. The intensity profiles were vertically translated for better discrimination. The sinusoidal fringe patterns were shifted by T/3 (T: periodicity of the patterns) between each image.
Figure 4. Optical sectioning from the raw images of the dental cast. The intensity of each pixel of the sectioned image is defined in Equation (1). The sectioned images from different focal planes were stacked to reconstruct a 3D point cloud model of the dental cast.
Figure 5. The dental cast and the 3D point cloud models of the dental cast from different scanning points. The sectioned images were combined in a single volume to show different areas of the dental cast.
Figure 6. The 3D point cloud of the dental cast (half arch) reconstructed from 3D registration.
