
Machines 2015, 3(2), 55-71; https://doi.org/10.3390/machines3020055

Article
Initial Work on the Characterization of Additive Manufacturing (3D Printing) Using Software Image Analysis
Department of Computer Science, University of North Dakota, 3950 Campus Road, Stop 9015, Grand Forks, ND 58202, USA
Academic Editor: David Mba
Received: 17 August 2014 / Accepted: 21 March 2015 / Published: 2 April 2015

Abstract
A current challenge in additive manufacturing (commonly known as 3D printing) is the detection of defects. Detection of defects (or the lack thereof) in bespoke industrial manufacturing may be safety critical and reduce or eliminate the need for testing of printed objects. In consumer and prototype printing, early defect detection may facilitate the printer taking corrective measures (or pausing printing and alerting a user), preventing the need to re-print objects after a small error compounds. This paper considers one approach to defect detection. It characterizes the efficacy of using a multi-camera system and image processing software to assess printing progress (thus detecting completion failure defects) and quality. The potential applications and extrapolations of this type of system are also discussed.
Keywords:
3D printer; 3D printing; image analysis; image software; object analysis

1. Introduction

Many additive manufacturing systems lack the capability to assess the quality of the products that they produce. Desktop 3D printers, for example, may continue printing until they have completed all steps in an object, even if the filament has run out or jammed partway through. These and other printers may fail to notice minor defects that could potentially be corrected automatically (if detected before another layer is deployed). They also cannot identify defects that require manual intervention or that may render the object unsuitable for use (necessitating a restart of printing and wasting any additional time or supplies consumed on the current print).
Thus, when quality is critical, it is necessary to test 3D printed objects post-production. However, this limits the types of objects that can be produced, as some tests may be destructive and, given the potential for irregularities in any item, testing a single unit would not, for some applications, be sufficient to certify a batch. Moreover, techniques like lean and just-in-time (JIT) manufacturing and total quality management (TQM) propose the use of process controls to eliminate acceptance and other testing costs. Given the foregoing, a mechanism for assessing the quality of 3D printed items during production would seem to offer significant benefit.
This paper considers one approach to performing quality assessment for 3D printed objects during the printing process. It utilizes digital imagery to assess the progress and quality of 3D printed objects. The efficacy of this assessment approach is considered and the potential uses for it are discussed.

2. Background

Several areas of prior work, which are relevant to the proposed characterization technique, are now reviewed. First, additive manufacturing is discussed. Next, 3D scanning technologies are reviewed. Finally, quality management is discussed.

2.1. Additive Manufacturing (3D Printing)

Three-dimensional printing, according to Berman [1], represents a “new industrial revolution”. It has the potential to disrupt manufacturing in the way that online music and electronic books impacted their industries. The technology can be utilized for prototyping. It can also be utilized for short production runs and bespoke items. Examples include custom equipment parts, artificial limbs, dental fixtures and bridge components [1].
The 3D printing concept dates back to the 1970s [2]. The fused deposition modeling (FDM) technique [1] creates an object by fusing successive layers of extruded material. Laser sintering and powder-based approaches have also been utilized [1], as has larger-scale printing using a rapidly setting masonry substance [3]. Biodegradable materials [4], imaging apertures [5], pharmaceuticals [6,7], nanocomposites [8], and microfluidic devices [9] have all been 3D printed. The technology has been used for preserving and increasing access to historical objects (via replication) [10] and for creating educational excitement [11].
Possible drawbacks of the technology also exist. Some have raised concerns about emissions from the printers [12]. Others have considered their prospective impact on society [13]. However, most relevant to this paper is the concern that 3D printers cannot detect product defects, making them unsuitable for the production of safety-critical parts (or requiring a validation plan to be created for the item) [14].

2.2. Three-Dimensional Scanning

Three-dimensional scanning represents one approach to prospectively identifying defects in objects. The technology has been used to create custom running shoes [15], evaluate the impact of cosmetic products [16], size uniforms [17], and create custom swimwear [18]. It has also been used to detect a variety of changes and defects, including changes in skeletal structure [19], to validate the quality of automotive products [20], and to assess concrete [21] and turbine blades [22]. Approaches based on lasers have been demonstrated, as have so-called “white light” techniques. Low-cost solutions using the Xbox Kinect [23] and Raspberry Pi cameras [24] have been created. Some scanners require scanner movement around an object [25,26]; others allow the object to remain stationary. The combination of projected light and camera sensing has also been proposed [27].

2.3. Quality Management

A complete discussion of quality management is far beyond the scope of this section. However, one point bears significant consideration in the context of the current work. This is the fact that some quality management systems, such as total quality management (TQM) [28], are highly reliant on suppliers being able to characterize and guarantee the quality of their parts. While this can be performed via inspection (either by the supplier, prior to shipping, or the buyer, upon receipt), process certification is preferred, as it reduces costs (by removing or reducing inspection time). Additionally, catching errors early in the production process avoids wasting additional time and supplies on a defective part. For 3D printed parts to be used under such a system, they either need more robust process control (the lack of which Aron [14] discusses) or extensive (and prospectively expensive) inspection.

2.4. Use of Imaging in Quality Assessment

Imaging and image processing have been used extensively for quality assessment. Fang, et al. [29], for example, discuss the use of imaging and a so-called “process signature” to identify defects during the creation of a ceramic object using a fused deposition technique. Cheng and Jafari [30] also use imaging and image processing to detect defects in layered manufacturing. Their approach uses control feedback and follows the so-called “road path” (the path taken when depositing material) to identify over- and under-fill conditions. This approach appears to be comparatively computationally intensive (based on applying a fuzzy model) and is limited to the identification and characterization of a single type of defect. Szkilnyk, Hughes, and Surgenor [31] also used image processing for quality assessment, in this case in a manufacturing assembly environment. They found that the imaging/image processing system could detect defects (such as jams), but only those that were known a priori.
Nandi, Tudu, and Koley [32], and Lee, Archibald, and Xiong [33] examine the efficacy of using a vision system for assessing the maturity level and other quality aspects of fruits via the assessment of their color. A sub-field of image processing, known as structured light techniques, has also been utilized in quality assessment. Structured light approaches can be used to obtain 3D measurements [34] (and the approach has been used to make 3D scanning systems [35]), to determine range-to-object [36] and to characterize surfaces (by comparing a projected and sensed pattern) [37,38]. Structured light techniques have, for example, been used to monitor the fermentation of bread dough [39], which has been shown to correlate with the bread’s internal structure [40] and bread quality.

3. Research Methods and Experimental Design

The goal of the experiment described in this article was to test the hypothesis that image processing could be utilized to characterize differences in 3D printed objects. To this end, a system for imaging objects during the printing process was required. An experimental setup was created using a MakerBot Replicator 2 3D printer and five camera units. Each camera unit consisted of a Raspberry Pi and a Raspberry Pi camera, and the units were networked, using Ethernet cable and a switch, to a central server that triggered imaging. These cameras were configured and controlled in the same manner as used for the larger-size 3D scanner described in [24].
The cameras were positioned around the 3D printer, as shown in Figure 1, and placed on stands consisting of a 3D printed base and a polyvinyl chloride (PVC) pipe. These stands were affixed to the table using double-sided tape. Three different views of a CAD model depicting the stands, the printer and their relative placement are presented in Figure 1a–c.
Figure 1. CAD renderings of system from (a) top; (b) side-angle and (c) front.
An Ethernet cable and power cable were run to each camera. The power cables were connected to a variable DC power supply (shown at the far left of Figure 2). The Ethernet cables were connected to a server via a switch. Imaging was triggered from the server’s console, avoiding the inadvertent imaging of researchers.
Figure 2. Image of experimental setup.
To facilitate comparison, it was desirable to have the images taken at a single 3D printer configuration. This would reduce the level of irrelevant data in the image from non-printed-object changes (in a system taking images continuously, and not at a rest position, position changes as well as operating vibration would introduce discrepancies that would need to be corrected for). Data was collected by stopping the printing process at numerous points and placing the printer in sleep mode, which moved the printing plate to a common position. It was believed that this was the same position that the printer returns to when a job is complete; unfortunately, this was found to be inaccurate (the final position is at a slightly lower level). For this reason, the image that was supposed to serve as the final in-process image (in which the structure is very nearly done) has been used as the target object for comparison purposes. Image data from eight positions from each of five angles (for a total of 40 images) was thus collected and analyzed using custom software, created using C# and the .NET Framework, for comparing images and generating the results presented in the subsequent section.
It is important to note that no action was taken to exclude factors that could introduce noise into the data. For example, panels could have been installed around the scanner, or another method for blocking changes outside of the scanner could have been implemented. Modifications could also have been made to aspects of the scanner itself (such as covering logos) to make it more consistent (and less likely to cause confusion with the orange colored filament); however, in both cases, this would have impaired the assessment of the operation of this approach in real-world conditions.

4. Data Collected

Data analysis involved a comparison of the in-process object to the final object. Note that, in addition to the obvious application of characterizing build progress, this comparison could prospectively detect two types of potential error: when a build has been stopped mid-progress resulting in an incomplete object and when an issue with the printer results in a failure to dispense or deposit filament. Figure 3a shows the image (for the front camera position) that was used as the complete object and Figure 3b shows the partial object from the first progress step.
The final and in-progress images are next compared on a pixel-by-pixel basis. The result of this comparison is the identification of differences between the two images. Figure 3c characterizes the level of difference in the image: brighter areas represent the greatest levels of difference. This image is created by assigning each pixel a brightness value (the same red, green and blue values) corresponding to a scaled level of difference. The scale factor is calculated via:
ScaleFactor = 255 / MaxDifference    (1)
MaxDifference is the maximum level of the summed difference of the red, green and blue values for any single pixel anywhere in the image. This is determined, for the pixel at position x = i, y = j, by calculating this summed difference:
Difference(i,j) = DifferenceR(i,j) + DifferenceG(i,j) + DifferenceB(i,j)    (2)
where MaxDifference is the largest value of Difference(i,j) recorded for any position. Using this, the brightness value for the pixel at position x = i, y = j is computed using:
Brightness(i,j) = Difference(i,j) × ScaleFactor    (3)
As is clear from Figure 3c, not all difference levels are salient. Areas outside of the pyramid area are not completely black (as they would be if there was absolutely no difference), but should not be considered. Thus, a threshold is utilized to determine salient levels of difference from presumably immaterial ones. Pixels exceeding this difference threshold are evaluated; those failing to exceed this value are ignored. Given the importance of this value, several prospective values were evaluated for this application. Figure 3d–f show the pixels included at threshold levels of 50, 75 and 100. In these images, the black areas are the significant ones and white areas are ignored.
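The per-pixel difference, brightness scaling and threshold test described above can be sketched as follows. The paper's comparison software was written in C# with the .NET Framework; this is an illustrative Python sketch only, representing images as nested lists of (R, G, B) tuples rather than any particular image type.

```python
# Illustrative sketch only: the paper's software was C#/.NET. Images are
# nested lists of (R, G, B) tuples; values are made up for demonstration.

def difference_map(final_img, partial_img):
    """Summed absolute R, G, B difference for each pixel."""
    return [[sum(abs(f - p) for f, p in zip(f_px, p_px))
             for f_px, p_px in zip(f_row, p_row)]
            for f_row, p_row in zip(final_img, partial_img)]

def brightness_map(diff):
    """Scale each difference so the largest maps to a brightness of 255."""
    max_diff = max(max(row) for row in diff)
    scale = 255 / max_diff if max_diff else 0.0
    return [[d * scale for d in row] for row in diff]

def threshold_mask(diff, thres=75):
    """True where the summed difference exceeds the salience threshold."""
    return [[d > thres for d in row] for row in diff]

# Two 2x2 test "images": one pixel differs strongly (printed material),
# another only slightly (ambient noise).
final = [[(255, 128, 0), (0, 0, 0)],
         [(10, 10, 10), (200, 200, 200)]]
partial = [[(0, 0, 0), (0, 0, 0)],
           [(0, 0, 0), (190, 195, 205)]]

diff = difference_map(final, partial)   # [[383, 0], [30, 20]]
bright = brightness_map(diff)           # brightest pixel scaled to 255
mask = threshold_mask(diff, 75)         # only the strongly differing pixel
```

With a threshold of 75, only the strongly differing pixel survives, mirroring how the threshold separates salient from immaterial differences.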
Figure 3. Examination of various threshold levels: (a) complete object; (b) partially printed object; (c) showing difference between partial and complete object; (d) threshold of 50; (e) threshold of 75; (f) threshold of 100.
In this particular example, a threshold level of 50 incorrectly selects the base of the object (which is the same as the final object) as different. The 75 threshold level correctly characterizes this base as the same, while (perhaps) detecting a slight pulling away of the object from the build plate (the indicated-significant area on the bottom left). It also (incorrectly) identifies a small area in the middle of the in-progress object and (correctly) the visible lattice from construction. A demarcation between the portion of the object that has not yet printed and the already printed area is also clear. The 100 threshold (incorrectly) ignores a small bottom area of this region. The MakerBot logo is not identified as different, given the closeness of its red color to the orange filament.
Given the results of this experiment, the 75 difference threshold level was selected for use going forward. This was applied to all of the images from all five cameras and eight progress levels. In Figure 4, the processing of the progress level 1 image for all five camera positions is shown. The leftmost column shows the finished object. The second column shows the current progress of printing of this object. The third and fourth columns characterize the areas of greatest difference (brightest white) from areas of less significant (darker) difference and the identification of pixels exceeding the difference threshold, respectively.
Figure 4. Images from all angles (at a 75 threshold level). The first column is the finished object image, the second column is the partial (stage 1) object. The third and fourth columns depict the partial-complete difference comparison and threshold-exceeding pixels identification.
The impact of excluding certain colors from consideration was next examined. Figure 5 shows the impact of excluding the blue, green and red channels: Figure 5a shows the exclusion of blue, Figure 5b the exclusion of green, and Figure 5c the exclusion of red. Neither the blue nor the green exclusion corrects the MakerBot logo issue (though the blue exclusion creates greater difference levels around two indentations to either side of it). Excluding red has a significant impact on the MakerBot logo; however, it places many different pixels below the significant pixel detection threshold.
Given that the total difference is a summed and not an averaged value, it makes sense to adjust the threshold when part of the difference level is excluded. Figure 6 shows the impact of manipulating the threshold value with red excluded: Figure 6a shows a threshold value of 75, while Figure 6b,c show the impact of threshold values of 62 and 50, respectively.
Figure 5. Depicting the impact of excluding (a) blue; (b) green; and (c) red from the difference assessment. The top row depicts the partial-complete object difference and the bottom depicts the threshold-exceeding areas (using a 75 threshold level).
Figure 6. Impact of excluding red at threshold levels of (a) 75; (b) 62; and (c) 50.
As the MakerBot logo issue could be easily corrected via applying tape or paint over the logo (or through explicit pre-processing of the images), this was not considered further (and color exclusion is not used in subsequent experimentation, herein). However, it has been included in the discussion to demonstrate the efficacy of the technique for dealing with erroneously classified pixels. Additional manipulation of the threshold level (as well as a more specific color exclusion/inclusion approach) could potentially be useful in many applications.
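The channel exclusion approach discussed above can be sketched minimally as follows (illustrative Python; the channel ordering 0 = red, 1 = green, 2 = blue is an assumption of this example, not taken from the paper):

```python
# Sketch of excluding one color channel from the per-pixel difference sum,
# as explored for the logo/filament confusion. Channel order
# (0 = red, 1 = green, 2 = blue) is an assumption of this illustration.

def pixel_difference(final_px, partial_px, exclude=None):
    """Summed per-channel difference, optionally skipping one channel."""
    return sum(abs(f - p)
               for ch, (f, p) in enumerate(zip(final_px, partial_px))
               if ch != exclude)

# A pixel whose difference lies entirely in the red channel (e.g. a red
# logo against similar filament) contributes nothing once red is excluded.
logo_px, filament_px = (220, 40, 40), (60, 40, 40)
full = pixel_difference(logo_px, filament_px)            # 160
without_red = pixel_difference(logo_px, filament_px, 0)  # 0
```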
Work now turned to detecting the level of completeness of the object (also relevant to assessing build progress). To this end, data from all eight progress levels was compared to the final image. The difference was depicted visually as well as assessed quantitatively.
Figure 7 and Figure 8 present all eight progress levels for angle 3 (the front view). The top row shows the captured image. The second row displays the characterization of the difference level and the bottom row shows the pixels that are judged, via the use of the threshold, to be significantly different.
Figure 7. Progression of object through the printing process: (a) progress point 1; (b) progress point 2; (c) progress point 3; (d) progress point 4.
Figure 8. Progression of object through the printing process: (a) progress point 5; (b) progress point 6; (c) progress point 7; (d) progress point 8 (this is also used as the complete object).
The build progress/object completeness is, thus, quite clear with a visible progression from Figure 7a to Figure 8d. It is also notable that some very minor background movement/movement relative to the background may have occurred between progress points two and three, resulting in the elimination of the limited points detected in the background in the third rows of Figure 7a,b.
The quantitative data from this collection process is presented in Table 1 (and depicted visually in Figure 9), which shows the aggregate level of difference (calculated via summation of the difference levels of each pixel, using Equation (4)) by progress level and camera position.
AggregateDifference = Σ[i = 0..m, j = 0..n] Difference(i,j)    (4)
where Difference(i,j) is the difference value at pixel (i, j), and m and n are the maximum values of x and y. A clear progression of declining difference can also be seen in this numeric data.
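The aggregate metric of Equation (4) can be sketched as follows (again in illustrative Python rather than the original C#; the difference maps are made-up values):

```python
# Sketch of Equation (4): the aggregate difference is the sum of the
# per-pixel difference values over the whole image.

def aggregate_difference(diff):
    """Sum of Difference(i, j) over all pixels."""
    return sum(sum(row) for row in diff)

# Made-up difference maps: a later progress point differs less from the
# final image, so its aggregate is lower.
early_diff = [[383, 120], [90, 200]]
late_diff = [[40, 10], [0, 25]]
early_total = aggregate_difference(early_diff)   # 793
late_total = aggregate_difference(late_diff)     # 75
```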
Table 1. Aggregate difference by level of progress and angle.
Progress      Angle 1      Angle 2      Angle 3      Angle 4      Angle 5
   1        201575263    154742364    606260772    211214209    386909779
   2        159074877    120966265    529193273    180098380    338718052
   3        128796927    100588139    284574631    135275350    300301392
   4         95224509     78451958    213765197     96921027    310651833
   5         83581787     59900209    196817866     89931596    302212892
   6         72126962     58127383    169154720     77356784    303391576
   7         43090774     47638489     48088649     49056229    289977798
Figure 9. Visual depiction of data from Table 1.
Table 2 presents the maximum difference, which is calculated:
MaxDifference = Max[i = 0..m, j = 0..n](Difference(i,j))    (5)
where Max() is a function that selects the maximum value within the set of values from the prescribed range. While the maximum difference generally declines as the progress levels advance, the correlation is not absolute: there are instances where the difference increases from one progress level to the next.
Table 2. Maximum difference level by level of progress and angle.
Progress    Angle 1    Angle 2    Angle 3    Angle 4    Angle 5
   1          633        568        542        604        669
   2          631        477        539        665        663
   3          613        489        584        661        648
   4          583        476        568        656        624
   5          562        485        559        658        625
   6          555        473        561        667        606
   7          502        435        446        609        564
Additional analysis of this data is presented in Table 3 and Table 4, which present the average level of difference for each progress level and angle and the percentage of difference relative to total difference, respectively. The aggregate difference (Table 1) and average difference (Table 3) for angle 3 are higher for most levels because the object fills significantly more of the image area from this angle. Looking at the difference from a percentage perspective (in Table 4) demonstrates that the object completion values (ignoring the amount of image space covered) are much closer to those of the other angles. This is calculated via:
AvgDiff = Avg[i = 0..m, j = 0..n](Difference(i,j))    (6)
where Avg() is a function that returns the average for the set of values from the range provided. The Table 4 percentage of difference values are calculated using:
PercentDiff = AvgDiff / TotalDifference    (7)
where TotalDifference is the summation of the difference at all of the progress levels.
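These two calculations can be sketched as follows; the angle 1 aggregate values from Table 1 are used so the result can be checked against the corresponding Table 4 entry (illustrative Python, not the original implementation):

```python
# Sketch of the per-pixel average (AvgDiff) and of each progress level's
# share of an angle's total difference (PercentDiff, as in Table 4).

def average_difference(diff):
    """Mean Difference(i, j) over all pixels of a difference map."""
    pixel_count = sum(len(row) for row in diff)
    return sum(sum(row) for row in diff) / pixel_count

def percent_by_progress(aggregates):
    """Each progress level's share of the summed difference for an angle."""
    total = sum(aggregates)
    return [a / total for a in aggregates]

# Aggregate difference values for angle 1, progress levels 1-7 (Table 1).
angle1 = [201575263, 159074877, 128796927, 95224509,
          83581787, 72126962, 43090774]
shares = percent_by_progress(angle1)
# shares[0] is about 0.257, matching the 25.7% entry in Table 4.
```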
Table 3. Average level of difference per-pixel by level of progress and angle.
Progress    Angle 1    Angle 2    Angle 3    Angle 4    Angle 5
   1         40.00      30.71     120.32      41.92      76.79
   2         31.57      24.01     105.02      35.74      67.22
   3         25.56      19.96      56.48      26.85      59.60
   4         18.90      15.57      42.42      19.23      61.65
   5         16.59      11.89      39.06      17.85      59.98
   6         14.31      11.54      33.57      15.35      60.21
   7          8.55       9.45       9.54       9.74      57.55
Table 4. Percentage of difference by level of progress and angle.
Progress    Angle 1    Angle 2    Angle 3    Angle 4    Angle 5
   1         25.7%      24.9%      29.6%      25.1%      17.3%
   2         20.3%      19.5%      25.8%      21.4%      15.2%
   3         16.4%      16.2%      13.9%      16.1%      13.5%
   4         12.2%      12.6%      10.4%      11.5%      13.9%
   5         10.7%       9.7%       9.6%      10.7%      13.5%
   6          9.2%       9.4%       8.3%       9.2%      13.6%
   7          5.5%       7.7%       2.3%       5.8%      13.0%
The aggregate difference level (and derivative metrics) provide one way to assess completion; however, this is impacted by many small ambient differences, as well as by the level of difference between the final object and the background (which could be inconsistent across various areas of the object). An alternate approach is to simply count the number of pixels that have been judged, via the use of the threshold value, to be significantly different. This, particularly for cases where lighting changes occur or foreground-background differences are inconsistent, reduces the impact of non-object differences. Data for the number of pixels that are different is presented in Table 5 and visually depicted in Figure 10. This is calculated using:
NumDiffPix = Count[i = 0..m, j = 0..n](Difference(i,j) > Thres)    (8)
where Count() is a function that determines the number of instances within the range provided where a condition is true. Thres is the specified threshold value used for comparison purposes.
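This count-based metric can be sketched as (illustrative Python, with a made-up difference map):

```python
# Sketch of NumDiffPix: count the pixels whose summed difference exceeds
# the threshold. Counting is less sensitive to small ambient differences
# than summing the raw difference values.

def count_different_pixels(diff, thres=75):
    """Number of pixels with Difference(i, j) > Thres."""
    return sum(1 for row in diff for d in row if d > thres)

diff_map = [[383, 60], [90, 20]]
num_diff = count_different_pixels(diff_map, 75)   # 2 pixels exceed 75
```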
One notable issue exists in this data. A slight movement of a door that falls within the viewing area of angle 5 occurred between progress points one and two, creating a significantly higher number of difference points for angle 5 at progress point one. This impact is far more pronounced here than in the aggregate difference data (Table 1). This type of movement could be excluded through greater color filtering and/or enclosing the printer in an opaque box or wrap.
Table 5. Number of pixels with above-threshold difference, by progress level and angle.
Progress    Angle 1    Angle 2    Angle 3    Angle 4    Angle 5
   1         328775     258901    1353743     352334     503292
   2         238267     204661     755034     261407     191781
   3         166407     163139     545944     190409     161034
   4         108800     100143     321195     117178     105233
   5          94427      83715     261903      96687      75725
   6          68056      63622     189724      70255      59148
   7          15088      14094      39813      18292      16624
The Table 6 percent of pixels value is determined using the equation:
PercentPix = NumDiffPix / TotalPixelCount    (9)
where TotalPixelCount is the product of m and n (the maximum x and y values).
Figure 10. Visual depiction of data from Table 5.
Table 6. Percent of pixels with above-threshold difference, by progress level and angle.
Progress    Angle 1    Angle 2    Angle 3    Angle 4    Angle 5
   1          6.52%      5.14%     26.87%      6.99%      9.99%
   2          4.73%      4.06%     14.98%      5.19%      3.81%
   3          3.30%      3.24%     10.83%      3.78%      3.20%
   4          2.16%      1.99%      6.37%      2.33%      2.09%
   5          1.87%      1.66%      5.20%      1.92%      1.50%
   6          1.35%      1.26%      3.77%      1.39%      1.17%
   7          0.30%      0.28%      0.79%      0.36%      0.33%
Table 7 presents the percentage of difference, considering only difference values above the identified threshold. The Table 7 values are based on a thresholded aggregate, calculated via:
AggThresDiff = SumIf[i = 0..m, j = 0..n](Difference(i,j), Difference(i,j) > Thres)    (10)
where SumIf() is a function that sums the value provided in the first parameter, if the logical statement that is the second parameter evaluates to true.
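The SumIf-style thresholded aggregate can be sketched as (illustrative Python, with a made-up difference map):

```python
# Sketch of AggThresDiff: only above-threshold differences contribute to
# the sum, which excludes low-level background noise from the totals
# underlying the Table 7 percentages.

def aggregate_above_threshold(diff, thres=75):
    """Sum of Difference(i, j) restricted to pixels exceeding Thres."""
    return sum(d for row in diff for d in row if d > thres)

diff_map = [[383, 60], [90, 20]]
agg = aggregate_above_threshold(diff_map, 75)   # 383 + 90 = 473
```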
Table 7. Percentage of difference at each level of progress and angle.
Progress    Angle 1    Angle 2    Angle 3    Angle 4    Angle 5
   1         32.2%      29.1%      39.0%      31.8%      45.2%
   2         23.4%      23.0%      21.8%      23.6%      17.2%
   3         16.3%      18.4%      15.7%      17.2%      14.5%
   4         10.7%      11.3%       9.3%      10.6%       9.5%
   5          9.3%       9.4%       7.6%       8.7%       6.8%
   6          6.7%       7.2%       5.5%       6.3%       5.3%
   7          1.5%       1.6%       1.1%       1.7%       1.5%
In addition to looking at the raw number of pixels exhibiting difference, this can also be assessed as a percentage of pixels exhibiting a difference in the image (Table 6) or, more usefully, as the percentage of total difference level (Table 7). These values, again, show a consistent decline in difference from progress level to subsequent progress level.

5. Analysis of Data Collected

The data collected demonstrates a correlation between object completeness and difference level. This is present in both the aggregate difference and number-of-different-pixels (based on threshold application) data. The former is influenced by potentially irrelevant difference-magnitude information. For this application, this data was not important; however, for other applications, the color difference could be indicative of the magnitude of a defect. For example, for an object with different interior coloration (or subsurface layer coloration), a surface scratch might generate a low-magnitude difference, while a more significant defect (one that breaks through the outer layer) would have a greater difference magnitude.
The latter metric corresponds (as depicted aptly in the figures) to the surface area of the object. For defect detection or completeness/incompleteness assessment, this may be sufficient; however, for applications characterizing the amount of time taken versus progress (or projecting remaining time, etc.), a metric tied to volume may be more relevant. Notably, however, the fact that many 3D printing systems use a very limited lattice fill may make surface area (which may represent the bulk of a layer's printing) a more relevant metric (one that could be augmented with a fill-level projection based on a percentage of the surface area).
The data collected has also shown that the proposed system is very sensitive to environmental and/or camera position changes. The very small movement present in some of the early angle 3 images, as well as the impact of the door position on the first angle 5 image, demonstrates the importance of either avoiding the sensing of the surrounding environment or excluding it from consideration.

6. Conclusions and Future Work

The work presented herein has demonstrated the efficacy of utilizing imaging and image processing to detect two types of defects (“dry printing” where filament is not applied and premature job termination) in 3D printed objects, which result in incomplete objects. These defects are detected through the assessment of printing progress (which can also be characterized in its own right) and the comparison of actual progress to expected progress. This initial work has demonstrated that basic assessment (which could be incorporated into printer control systems) can be performed with limited computational resources in a non-recursive manner that has a linear time-cost relationship to the number of pixels to be assessed.
Other techniques, however, may be required for more robust assessment. For example, commercial systems, which construct CAD models from a collection of imagery (such as used in [24]), create point clouds, which allow them to exclude points outside of an area-of-interest. This type of technique would potentially allow characterization of additional types of defects, as well as solving environmental change issues.
Future work will focus on the development and characterization of techniques to identify and characterize other types of defects (particularly including those where material is present but may have a structural fault). It will also characterize different approaches that do not require imagery of a final object as a baseline for comparison purposes.

Acknowledgments

Materials and resources utilized for this work were provided by North Dakota EPSCoR (NSF # EPS-814442) and the University of North Dakota (UND) Department of Computer Science. Scanning hardware was procured with the support of the UND Summer Programs and Events Council. Thanks are given to Benjamin Kading for developing the stands and Raspberry Pi/camera mounts for the camera units and for creating Figure 1a–c.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Berman, B. 3-D printing: The new industrial revolution. Bus. Horiz. 2012, 55, 155–162.
  2. Bowyer, A. 3D Printing and Humanity’s First Imperfect Replicator. 3D Print. Addit. Manuf. 2014, 1, 4–5.
  3. Goldin, M. Chinese Company Builds Houses Quickly with 3D Printing. Available online: http://mashable.com/2014/04/28/3d-printing-houses-china/ (accessed on 1 April 2015).
  4. Serra, T.; Planell, J.A.; Navarro, M. High-resolution PLA-based composite scaffolds via 3-D printing technology. Acta Biomater. 2013, 9, 5521–5530.
  5. Miller, B.W.; Moore, J.W.; Barrett, H.H.; Fryé, T.; Adler, S.; Sery, J.; Furenlid, L.R. 3D printing in X-ray and Gamma-Ray Imaging: A novel method for fabricating high-density imaging apertures. Nucl. Instrum. Methods Phys. Res. Sect. A 2011, 659, 262–268.
  6. Khaled, S.A.; Burley, J.C.; Alexander, M.R.; Roberts, C.J. Desktop 3D printing of controlled release pharmaceutical bilayer tablets. Int. J. Pharm. 2014, 461, 105–111.
  7. Sanderson, K. Download a drug, then press print. New Sci. 2012, 214, 8–9.
  8. Campbell, T.A.; Ivanova, O.S. 3D printing of multifunctional nanocomposites. Nano Today 2013, 8, 119–120.
  9. Bonyár, A.; Sántha, H.; Ring, B.; Varga, M.; Gábor Kovács, J.; Harsányi, G. 3D Rapid Prototyping Technology (RPT) as a powerful tool in microfluidic development. Proc. Eng. 2010, 5, 291–294.
  10. Abate, D.; Ciavarella, R.; Furini, G.; Guarnieri, G.; Migliori, S.; Pierattini, S. 3D modeling and remote rendering technique of a high definition cultural heritage artefact. Proc. Comput. Sci. 2011, 3, 848–852.
  11. Eisenberg, M. 3D printing for children: What to build next? Int. J. Child-Comput. Interact. 2013, 1, 7–13.
  12. Stephens, B.; Azimi, P.; El Orch, Z.; Ramos, T. Ultrafine particle emissions from desktop 3D printers. Atmos. Environ. 2013, 79, 334–339.
  13. Birtchnell, T.; Urry, J. 3D, SF and the future. Futures 2013, 50, 25–34.
  14. Aron, J. Oops, it crumbled—Reality check for 3D printed models. New Sci. 2012, 215, 22.
  15. DeMatto, A. Ways Body Scanners Could Make Fitting Rooms Obsolete. Available online: http://www.popularmechanics.com/technology/gadgets/a5909/3d-body-scanning-technology-applications/ (accessed on 1 April 2015).
  16. Ares, M.; Royo, S.; Vidal, J.; Campderrós, L.; Panyella, D.; Pérez, F.; Vera, S.; Ballester, M.A.G. 3D Scanning System for In-Vivo Imaging of Human Body; Fringe 2013; Springer: Berlin/Heidelberg, Germany, 2014; pp. 899–902.
  17. TC2. US Coast Guard Uses Body Scanners in Measurement of Uniforms. Available online: http://www.fibre2fashion.com/news/world-textiles-research-news/newsdetails.aspx?news_id=57925 (accessed on 31 March 2015).
  18. King, R. 3D Imaging Spreads to Fashion and Beyond. Available online: http://www.hpcwire.com/2008/10/06/3d_imaging_spreads_to_fashion_and_beyond/ (accessed on 1 April 2015).
  19. Stephan, C.N.; Guyomarc’h, P. Quantification of Perspective-Induced Shape Change of Clavicles at Radiography and 3D Scanning to Assist Human Identification. J. Forensic Sci. 2014, 59, 447–453.
  20. Voicu, A.; Gheorghe, G.I.; Badita, L. 3D Measuring of Complex Automotive Parts Using Video-Laser Scanning. Available online: http://fsim.valahia.ro/sbmm.html/docs/2013/mechanics/18_Voicu_2013.pdf (accessed on 1 April 2015).
  21. Bindean, I.; Stoian, V. Determination of the Remaining Bearing Capacity of an Existing Slab Using 3D Scanning Technology. In Recent Advances in Civil and Mining Engineering; WSEAS Press: Athens, Greece, 2013; pp. 136–140.
  22. Brozović, M.; Avsec, A.; Tevčić, M. Dimensional Control of Complex Geometry Objects Using 3D Scanning Technology. In Proceedings of the 14th International Scientific Conference on Production Engineering, Biograd, Croatia, 19–22 June 2013.
  23. Hitomi, E.E.; da Silva, J.V.; Ruppert, G.C. 3D scanning using RGBD imaging devices: A survey. Comput. Vis. Med. Image Process. IV 2013, 2013, 197–202.
  24. Straub, J.; Kerlin, S. Development of a Large, Low-Cost, Instant 3D Scanner. Technologies 2014, 2, 76–95.
  25. Munkelt, C.; Kleiner, B.; Torhallsson, T.; Kühmstedt, P.; Notni, G. Handheld 3D Scanning with Automatic Multi-View Registration Based on Optical and Inertial Pose Estimation; Fringe 2013; Springer: Berlin/Heidelberg, Germany, 2014; pp. 809–814.
  26. Cappelletto, E.; Zanuttigh, P.; Cortelazzo, G.M. Handheld Scanning with 3D Cameras. In Proceedings of the 2013 IEEE 15th International Workshop on Multimedia Signal Processing (MMSP), Sardinia, Italy, 30 September–2 October 2013; IEEE: New York, NY, USA, 2013; pp. 367–372.
  27. Wijenayake, U.; Baek, S.; Park, S. A Fast and Dense 3D Scanning Technique Using Dual Pseudorandom Arrays and A Hole-filling Method. Available online: http://www.researchgate.net/publication/235943932_A_Fast_and_Dense_3D_Scanning_Technique_Using_Dual_Pseudorandom_Arrays_and_A_Hole-filling_Method/file/50463514977598fe12.pdf (accessed on 31 March 2015).
  28. Kaynak, H. The relationship between total quality management practices and their effects on firm performance. J. Oper. Manag. 2003, 21, 405–435.
  29. Fang, T.; Jafari, M.A.; Bakhadyrov, I.; Safari, A.; Danforth, S.; Langrana, N. Online Defect Detection in Layered Manufacturing Using Process Signature. In Proceedings of the 1998 IEEE International Conference on Systems, Man, and Cybernetics, San Diego, CA, USA, 14 October 1998; IEEE: New York, NY, USA, 1998; Volume 5, pp. 4373–4378.
  30. Cheng, Y.; Jafari, M.A. Vision-based online process control in manufacturing applications. IEEE Trans. Autom. Sci. Eng. 2008, 5, 140–153.
  31. Szkilnyk, G.; Hughes, K.; Surgenor, B. Vision Based Fault Detection of Automated Assembly Equipment. In Proceedings of the ASME 2011 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, Washington, DC, USA, 28–31 August 2011; American Society of Mechanical Engineers: New York, NY, USA, 2011; pp. 691–697.
  32. Nandi, C.S.; Tudu, B.; Koley, C. Automated Machine Vision Based System for Fruit Sorting and Grading. In Proceedings of the IEEE 2012 Sixth International Conference on Sensing Technology (ICST), Kolkata, India, 18–21 December 2012; IEEE: New York, NY, USA, 2012; pp. 195–200.
  33. Lee, D.; Archibald, J.K.; Xiong, G. Rapid color grading for fruit quality evaluation using direct color mapping. IEEE Trans. Autom. Sci. Eng. 2011, 8, 292–302.
  34. Valkenburg, R.J.; McIvor, A.M. Accurate 3D measurement using a structured light system. Image Vis. Comput. 1998, 16, 99–110.
  35. Rocchini, C.; Cignoni, P.; Montani, C.; Pingi, P.; Scopigno, R. A Low Cost 3D Scanner Based on Structured Light. Available online: http://vcg.isti.cnr.it/publications/papers/vcgscanner.pdf (accessed on 1 April 2015).
  36. Caspi, D.; Kiryati, N.; Shamir, J. Range imaging with adaptive color structured light. IEEE Trans. Pattern Anal. Mach. Intell. 1998, 20, 470–480.
  37. Salvi, J.; Fernandez, S.; Pribanic, T.; Llado, X. A state of the art in structured light patterns for surface profilometry. Pattern Recognit. 2010, 43, 2666–2680.
  38. Salvi, J.; Pages, J.; Batlle, J. Pattern codification strategies in structured light systems. Pattern Recognit. 2004, 37, 827–849.
  39. Ivorra, E.; Amat, S.V.; Sánchez, A.J.; Barat, J.M.; Grau, R. Continuous monitoring of bread dough fermentation using a 3D vision Structured Light technique. J. Food Eng. 2014, 130, 8–13.
  40. Verdú, S.; Ivorra, E.; Sánchez, A.J.; Barat, J.M.; Grau, R. Relationship between fermentation behavior, measured with a 3D vision Structured Light technique, and the internal structure of bread. J. Food Eng. 2015, 146, 227–233.