Article

In Situ Active Contour-Based Segmentation and Dimensional Analysis of Part Features in Additive Manufacturing

by
Tushar Saini
and
Panos S. Shiakolas
*,†
Mechanical and Aerospace Engineering Department, The University of Texas at Arlington, Arlington, TX 76019, USA
*
Author to whom correspondence should be addressed.
These authors contributed equally to this work.
J. Manuf. Mater. Process. 2025, 9(3), 102; https://doi.org/10.3390/jmmp9030102
Submission received: 26 February 2025 / Revised: 15 March 2025 / Accepted: 17 March 2025 / Published: 19 March 2025

Abstract

The evaluation of the geometric conformity of in-layer features in Additive Manufacturing (AM) remains a challenge due to low contrast between the features and the background, textural variations, imaging artifacts, and lighting conditions. This research presents a novel in situ vision-based framework for AM to identify in-layer features in real time, estimate their shape and printed dimensions, and compare them with the as-processed layer features to evaluate geometric differences. The framework employs a composite approach to segment features by combining simple thresholding for external features with the Chan–Vese (C–V) active contour model to identify low-contrast internal features. The effect of varying C–V parameters on the segmentation output is also evaluated. The framework was evaluated on a 20.000 mm × 20.000 mm multilayer part with internal features (two circles and a rectangle) printed using Fused Deposition Modeling (FDM). The segmentation performance of the composite method was compared with traditional methods, with the results showing the composite method scoring higher in most metrics, including a maximum Jaccard index of 78.34%, effectively segmenting high- and low-contrast features. The improved segmentation enabled the identification of feature geometric differences ranging from 1 to 10 pixels (0.025 mm to 0.250 mm) after printing each layer in situ and in real time. This performance verifies the ability of the framework to detect differences at the pixel level on the evaluation platform. The results demonstrate the potential of the framework to segment features under different contrast and texture conditions, ensure geometric conformity, and support decisions on any differences in feature geometry and shape.

1. Introduction

Additive Manufacturing (AM), commonly known as 3D printing, has transformed the manufacturing industry by enabling the production of complex and customized components for various applications. It is finding applications in the medical field to create customized implants and prosthetics, advance bioprinting for tissue engineering, and manufacture patient-specific surgical tools and models [1,2,3,4]. Among the various AM technologies, Fused Deposition Modeling (FDM) has become particularly popular due to its cost-effectiveness and versatility, especially for rapid prototyping and small-scale production. However, maintaining the quality and reliability of AM parts continues to pose significant challenges. Defects such as warping, delamination, and voids can occur during printing, leading to part failure and material waste. Additionally, differences in the geometry of internal and external features can affect the structural integrity, and more importantly, the functional performance of the part, especially in applications where features are printed in situ such as microfluidics, robotics, and other multimaterial applications [5,6,7,8,9].
Quality control in AM is important to ensure that the final product meets specified standards and performance criteria. Despite its importance, achieving high-quality manufactured parts continues to be a challenge which highlights the importance of developing advanced process monitoring methods. To address these challenges and reduce manufacturing failure rates, researchers and industries have developed in situ monitoring technologies. These technologies utilize machine vision and other sensor-based systems to enhance the quality of AM components.
Traditional post-processing inspection methods, including X-ray Computed Tomography and ultrasound imaging, are often insufficient as they do not detect in-layer defects and require significant time and resources [10,11,12,13,14,15,16]. Recent advances in in situ monitoring technologies have shown promise for real-time quality control in AM. Closed-loop control systems employing machine vision and sensor-based frameworks have been explored to improve defect detection accuracy and efficiency [17,18,19]. Machine vision-based methods use image segmentation and edge detection techniques to identify features and defects in AM parts, which primarily include external geometry and defects, such as surface roughness and voids [20,21,22,23,24,25].
Several closed-loop control systems have been developed for AM processes to modify specific part properties based on the application and correct errors during printing. Lu et al. [17] developed a closed-loop feedback system for a robot-based AM process involving carbon fiber-reinforced polymers. This system identifies abrasion defects using deep-learning segmentation algorithms and adjusts process parameters such as layer thickness and feed rate. Garanger et al. [18] introduced an in situ feedback control system that uses stiffness measurements taken during the process to adjust the infill density and achieve the desired stiffness. Cummings et al. [19] developed a closed-loop system utilizing ultrasonic sensing to identify and correct filament bonding failures by modifying the process parameters.
Metal AM processes have become increasingly popular because of their ability to produce complex parts with high precision and Digital Twin (DT) technology has been explored to monitor and control these processes. Gaikwad et al. [26] combined physics-based predictions with machine learning algorithms to forecast defects in metal AM processes. Knapp et al. [27] emphasized the importance of monitoring process parameters such as cooling rates and temperature gradients using DT technology. Nath and Mahdevan [28] discussed the use of DT and thermal cameras to monitor the melt pool, predict part quality, and adjust laser power in real time.
Machine vision technologies using cameras [21,22,29,30] have been widely researched and applied in AM processes to detect defects and other anomalies. Baumann et al. [21] and Lyngby et al. [22] used cameras to capture images of the printed part and analyze them to identify external defects. Khan et al. [29] implemented a deep learning model based on a Convolutional Neural Network (CNN) to detect defects in 3D printing processes, focusing on identifying geometrical anomalies in infill patterns and classifying printed objects in real time as either ‘good’ or ‘bad’. Liu et al. [30] attached a camera to the print head to capture images of the printed part in real time to detect process abnormalities and surface roughness using video streams and camera feeds. Their objective was to monitor material deposition and optimize surface texture.
Another method for monitoring the printing process uses optical sensors to observe the printed layer from above. This method allows for real-time image capture and analysis of the printed layer [20,23,24,25,31]. He et al. [20] used statistical process monitoring based on image segmentation and control charts to monitor the external geometry. Yi et al. [23] developed statistical tools for analyzing segmented images to detect the presence of any in-layer defects. Semantic segmentation, where each pixel in an image is classified into a specific category, has been used to detect defects in AM parts. Delli et al. [24] and Shen et al. [25] applied machine learning algorithms to train models that use multiple thresholding-based images to distinguish between successful and failed layers. Ye et al. [31] developed a framework that integrated in situ point cloud sensing and predictive modeling to detect defects due to deviations in process parameters from defined values. Instead of analyzing the entire layer, research also focused on smaller sections of the image and classified these imaged areas as either ‘in control’ or ‘out of control’ [32,33]. Rossi et al. [32] evaluated the performance of various binary classifiers for image patches. The output of the classifiers did not contain any segmented features. Moretti et al. [33] employed Canny edge detection segmentation to identify the external boundary of a printed part in a layer-by-layer process. Castro et al. [34] used a Haar classifier to detect defects in AM parts. The classifier was trained on a dataset of images with defects such as voids. While machine learning algorithms such as CNNs and deep learning models have shown promise in detecting defects in AM parts under a variety of conditions, they require prior training on a large dataset of images, which may not always be feasible.
Photogrammetry and laser line scanning have also been investigated for their effectiveness in detecting defects in AM parts. Nuchitprasitchai et al. [35] used photogrammetry with six cameras to detect in-layer differences caused by clogged nozzles or filament run-out. A two-camera 3D digital image correlation was used by Holzmond and Li [36] to detect defects in parts produced by the Fused Filament Fabrication process, which utilized the natural speckle pattern of the material to create point clouds of the printed layer. Lyu et al. [37] used laser line scanning to detect surface defects such as under-extrusion and over-extrusion. The output of the photogrammetry and laser line scanning methods is a 3D point cloud, which is reconstructed to identify defects in the printed part. However, these methods are computationally intensive and require complex post-processing steps due to the large amount of image data.
Even though these methods have been used to detect defects and anomalies in AM parts, they have not been researched for evaluating in-layer feature geometry and their respective dimensions. In our previous research, we developed a framework that used vision-based detection of part features but was limited to segmenting and analyzing features on a single layer [38]. We found feature analysis to be complicated by the low contrast between internal features and the previous layer, making it difficult to differentiate between them. Additionally, textural variations in the background, imaging artifacts, and lighting conditions affected the accuracy of the segmentation process. We continued our research by comparing five traditional image segmentation methods on a multilayer AM part for their effectiveness in segmenting internal features [39]. The research found that these traditional methods were not suitable for detecting internal features due to low contrast between the features and the previous layer, and due to textural variations. Since internal features could not be detected, no dimensional analysis could be performed for quality control purposes. These findings led to the investigation of active contour models for the segmentation of multilayer components with internal features which is discussed in this research.
Active contour models, also known as ‘snakes’, have been developed to address challenges in segmentation, such as the difficulty in accurately capturing object boundaries within images with complex backgrounds, varying intensities, or low contrast between the object and its surroundings. These models have primarily been used in medical imaging to segment tumors and other internal body parts in Magnetic Resonance Imaging and Computed Tomography scans [40]. In AM, active contour models have been used to segment melt-pool edges in Selective Laser Melting processes [41] and to detect defects in Laser Powder Bed Fusion (LPBF), ensuring there are no peaks or valleys in the powder bed. Active contour models have been explored by Caltanissetta et al. [42] and Li et al. [43] to detect defects introduced during powder recoating in LPBF and to evaluate the external geometry of the printed part. However, none of these studies focused on detecting internal features in AM parts which are difficult to detect due to the low contrast between the internal features and the background.
This research presents a vision-based framework for AM towards detecting printed component features on a layer-by-layer basis. The performance of an active contour model is investigated towards addressing segmentation challenges typical of multilayer components, such as low contrast between the features and the layers. The framework uses a high-resolution imaging setup to acquire layer images, which are segmented by the active contour model to identify and provide real-time in situ information on the geometry and dimensions of internal features and the external geometry on a layer-by-layer basis. The identified geometric information is then compared with the as-processed layer information to evaluate any geometric differences. The magnitude of these differences is a function of the imaging setup and the resolution. The framework was evaluated with an eight-layer part printed using FDM, comparing each as-printed layer to the as-processed layer to identify geometric differences in the internal and external features and the shapes of the printed components.
The article is structured as follows. Section 2 introduces and discusses the proposed framework along with the image acquisition, processing, and feature detection and analysis methodologies. Section 3 presents the experimental setup and calibration approach to relate all subsystems. The results are presented in Section 4 and discussed in Section 5 followed by conclusions.

2. Materials and Methods

The in situ quality control framework to perform visual inspection and analysis of each individual layer immediately following its printing and compare it with the as-processed layer is presented in Figure 1. The framework is designed to detect and analyze the internal and external geometries and shape of the printed features, compare them with the as-processed layer, and provide real-time information on any geometric differences, on a layer-by-layer basis. The magnitude of these geometric differences is limited by the resolution of the imaging system. This information can then be advantageously employed to allow a monitoring system or an operator to make informed decisions and, if necessary, initiate corrective actions or abort the print, thus improving the overall efficiency and reliability of the AM process. The framework is divided into three main components: preparation of the as-processed layer (Figure 1a), image acquisition, preprocessing, and segmentation (Figure 1b), and evaluation of the differences between the as-printed and as-processed layers (Figure 1c).
The framework is designed to be integrated with an AM platform, such as FDM, to monitor the printing process in real time. The process begins with the part model designed using Dassault Systemes SOLIDWORKS 2024 (Waltham, MA, USA) Computer-Aided Design (CAD) software. The model is then exported to a Standard Tessellation Language (STL) file which is imported into a slicing software, where process- and platform-specific parameters (infill, temperature, feedrate, etc.) are used to generate a GCODE to be interpreted by the AM platform to print the component layer by layer. The sliced GCODE is post-processed using an in-house developed post-processor, written in Python 3.10 [44], to add custom markers between the toolpaths of each layer. These custom markers, MOVE_CAM_POS and START_IMG_ANLYS shown in Figure 2, are interpreted by the controller during printing to initiate the execution of macros to perform a set of predefined actions. These two macros position the camera and start the image analysis procedure for real-time image capturing and analysis of the printed layer, respectively.
The post-processed GCODE is used to print the object on the AM platform. At the end of each layer, the AM platform controller uses the custom markers to trigger the vision system and initiate the layer analysis subprocess. This subprocess includes positioning the camera over the printed layer, acquiring the image, and analyzing the layer to identify internal features and external geometries. The layer toolpath data is also processed to generate a mask for the as-processed layer, which is then compared with the as-printed layer to detect any geometric differences. The framework outputs a report at the end of each layer highlighting any geometrical differences between the as-processed and as-printed layers. This report could be used to decide whether any corrective actions need to be taken including aborting the print or if the printing process should continue and print the next layer where another assessment on part quality could be made. The following sections describe each of these components in detail.
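A minimal sketch of such a marker-insertion post-processor is shown below. It assumes the slicer emits a layer-change comment (here ";LAYER_CHANGE", which is slicer-dependent); the macro names follow Figure 2, while the function name and file handling are illustrative.

```python
# Sketch: insert camera-positioning and image-analysis markers into sliced
# GCODE at every layer boundary. The ";LAYER_CHANGE" comment is slicer-dependent
# and assumed here; MOVE_CAM_POS and START_IMG_ANLYS follow Figure 2.
from pathlib import Path

def insert_layer_markers(gcode_in: str, gcode_out: str) -> None:
    out_lines = []
    for line in Path(gcode_in).read_text().splitlines():
        if line.startswith(";LAYER_CHANGE"):
            # Trigger the camera move and analysis for the layer just completed.
            out_lines.append("MOVE_CAM_POS")
            out_lines.append("START_IMG_ANLYS")
        out_lines.append(line)
    Path(gcode_out).write_text("\n".join(out_lines) + "\n")

if __name__ == "__main__":
    insert_layer_markers("part.gcode", "part_marked.gcode")
```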

2.1. Prepare the As-Processed Layer

The as-processed layer mask is created by analyzing the toolpath data generated by the slicing software. The toolpath data is processed to extract the X- and Y-coordinates that define the path the printer head is expected to follow during the print process. The extracted coordinates are used to plot the movement of the printer head during the material deposition process, thereby creating the as-processed layer mask. Since the toolpath coordinates use a different unit to measure the distance traveled while depositing material, a scaling factor is applied to convert these coordinates to pixels for image analysis. The calculation of this scaling factor is discussed in Section 3.
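A minimal sketch of this mask-generation step is given below. It assumes the toolpath has already been parsed into lists of (X, Y) coordinates in millimeters expressed in the image frame, uses the pixels-per-millimeter conversion scale s from Section 3, and approximates the deposited track width by the nozzle diameter; names and values are illustrative.

```python
# Sketch: rasterize extracted toolpath segments (X, Y in mm) into an
# as-processed layer mask. Assumes coordinates are already expressed in the
# image/RoI frame; s is the pixels/mm conversion scale from Section 3 and the
# track width is approximated by the nozzle diameter.
import numpy as np
import cv2

def as_processed_mask(segments, s=39.0, size_px=(900, 900), nozzle_mm=0.4):
    mask = np.zeros(size_px, dtype=np.uint8)
    track_px = max(1, int(round(nozzle_mm * s)))
    for seg in segments:                       # each seg: [(x0, y0), (x1, y1), ...] in mm
        pts = np.round(np.asarray(seg, dtype=float) * s).astype(np.int32)
        cv2.polylines(mask, [pts.reshape(-1, 1, 2)], isClosed=False,
                      color=255, thickness=track_px)
    return mask
```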

2.2. Acquire and Preprocess a Layer Image

The image acquisition process begins with a calibrated imaging system, which consists of a high-resolution camera and a host system to control the camera and the motion platform (to be discussed in Section 3). Using the position offsets determined during the calibration stage, the host instructs the motion platform to move the camera head over the designated area of the printed object. Once the camera is moved over the printed part, the host sets the required resolution, brightness, and contrast. An image frame is captured for further processing. Given the inherent imperfections often present in raw images due to lighting variations, lens distortions, color inconsistencies, and multiple noise sources, a comprehensive, multi-stage image processing pipeline is developed to mitigate these issues. The pipeline corrects for lens distortions, enhances contrast, and reduces noise to optimize the image such that the contrast between the features and the background is maximized.
The initial step in the image processing pipeline applies an optical calibration matrix to the raw image to rectify lens distortions and generate an undistorted image. The calibration step is explained in Section 3. For effective image segmentation, enhancing the contrast improves edge definition between the object and its background but, as a trade-off, it also amplifies any noise. A mild Gaussian blur is applied to the image to reduce ‘salt-and-pepper’ noise while ensuring that the introduced blurring does not compromise the fidelity of the detected edges. Additionally, portions of the print bed in the background of the acquired image may introduce extraneous noise, particularly when specific substrates are used. These substrates can create uneven textures that contribute to detecting false edges during the segmentation stage. It is important to note that this issue is specific to certain printing platforms and processes (e.g., FDM), and may not need to be addressed in all scenarios.
A region of interest (RoI) is established based on the external boundary of the layer, and all subsequent image processing and analyses are confined to this defined region. This focused approach excludes any extraneous noise, limits possible false edges, and minimizes vignetting effects that might be present at the frame edges due to nonuniform lighting conditions. The RoI is re-established for every layer since it can change depending on the external geometry of each layer.
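A condensed sketch of this preprocessing sequence with OpenCV is shown below; the CLAHE contrast step, blur kernel size, and RoI handling are illustrative choices rather than the exact pipeline parameters.

```python
# Sketch of the layer-image preprocessing: undistort using the calibration
# results (Section 3), enhance contrast, apply a mild Gaussian blur, and crop
# to the RoI derived from the external boundary of the layer.
import cv2

def preprocess_layer_image(frame, K, D, roi):
    undistorted = cv2.undistort(frame, K, D)
    gray = cv2.cvtColor(undistorted, cv2.COLOR_BGR2GRAY)
    # Local contrast enhancement (an assumed choice) to sharpen feature edges.
    gray = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(gray)
    # Mild blur to suppress salt-and-pepper noise without degrading edges.
    gray = cv2.GaussianBlur(gray, (5, 5), 0)
    x, y, w, h = roi                      # RoI from the external layer boundary
    return gray[y:y + h, x:x + w]
```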

2.3. Image Segmentation

The core of the framework image analysis capabilities lies in its segmentation and edge detection algorithms. After the acquisition stage and preprocessing of the image, the framework employs a range of image segmentation methods to identify the external and internal features of the AM part. The segmentation and evaluation of features is particularly important for AM, where differences in feature size and location can have significant implications on the intended function of the printed part.

2.3.1. Segmentation Using Thresholding Methods

Thresholding methods are utilized in the segmentation process by converting a grayscale image into a binary representation, where the region of interest (RoI) is distinctly isolated from the background. These methods are designed to identify a specific range of brightness in the image, which is selected to enhance the contrast between the object and its surroundings, aiding in more precise detection of the object outline. Morphological operations such as dilation and erosion are applied to smooth the edges and connect any nearby disconnected edges. Finally, all connected contours are hierarchically sorted to differentiate between external geometry and internal features by establishing a parent–child relationship between nested components.
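A minimal sketch of this thresholding and contour-hierarchy step with OpenCV is given below; the threshold value, kernel size, and foreground polarity are illustrative and depend on the part/background contrast.

```python
# Sketch: simple thresholding, morphological cleanup, and hierarchical contour
# sorting to separate external geometry from nested internal features.
import cv2
import numpy as np

def threshold_segment(gray, thresh=100):
    # THRESH_BINARY_INV assumes a dark part on a bright background (white tape).
    _, binary = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY_INV)
    kernel = np.ones((3, 3), np.uint8)
    binary = cv2.dilate(binary, kernel, iterations=1)   # connect nearby edges
    binary = cv2.erode(binary, kernel, iterations=1)    # smooth boundaries
    contours, hierarchy = cv2.findContours(binary, cv2.RETR_CCOMP,
                                           cv2.CHAIN_APPROX_SIMPLE)
    # Top-level contours (no parent) are external geometry; their children are
    # treated as internal features.
    external = [c for i, c in enumerate(contours) if hierarchy[0][i][3] == -1]
    internal = [c for i, c in enumerate(contours) if hierarchy[0][i][3] != -1]
    return external, internal
```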
However, the thresholding methods for segmentation rely on a high contrast between the segmented regions. AM-fabricated parts often do not show a clear intensity difference between internal features and the surrounding layer because consecutive layers are typically printed using the same material. This uniformity can reduce the visibility of internal features, as there is minimal contrast to distinguish different layers or internal structures from each other.

2.3.2. Active Contour Based Segmentation Using Chan–Vese

Thresholding-based segmentation methods require high contrast between the background and foreground, which makes such methods unsuitable for parts with similar intensities between the background and foreground or if internal features are not distinct from the rest of the layer. To address these deficiencies due to low contrast, active contour or snake models have been developed and used for image segmentation [45,46,47].
Active contour segmentation improves upon edge-based segmentation methods by incorporating advanced algorithms to improve the identification of object boundaries. Instead of relying solely on edge gradients, active contour models take into account additional information, such as local image statistics and gradient variation, to refine the boundary identification. This approach is particularly useful for AM parts, which often feature complex image regions due to variations in surface texture and material properties [36].
Even with these methods, the final output is sensitive to the initial conditions of the algorithm, making it important to define suitable parameters prior to the segmentation process. Additionally, active contour models face challenges with topological changes, such as merging and splitting of the evaluated curve, which makes them less popular for segmentation tasks.
The limitations of active contour methods can be addressed by combining active contour models with level set methods. Level set methods are a class of numerical techniques used to continuously track the evolution of curves and surfaces [48]. The level set methods are based on the concept of an energy functional, a mathematical expression that quantifies the ‘energy’ of a particular contour based on factors such as boundary smoothness, alignment with image features, and conformity to prior shape knowledge [45]. This allows us to incorporate various image properties, such as intensity, gradient information, and prior knowledge, into a single framework. By minimizing the energy functional, level set methods can find a globally optimal solution that balances image data fidelity and regularization. In addition, level set methods offer greater flexibility in terms of initialization compared to active contour models.
Instead of explicitly defining the initial contour, level set methods begin by initializing the level set function. This function can be initialized with a rough approximation of the object boundary, a defined RoI, or by using a region-growing algorithm. This approach simplifies the initialization process and reduces the sensitivity to the placement of the initial contour.
The Chan–Vese (C–V) model [47,49,50] is a special case of the Mumford–Shah functional [46]. The C–V model was specifically designed to segment images into two regions (foreground and background) with approximately piecewise-constant intensities. The C–V model simplifies the Mumford–Shah functional by assuming that the image can be segmented into two regions with constant intensities. Although the Mumford–Shah functional offers a more general framework with greater flexibility to segment complex images with higher computational requirements, the C–V model is a computationally efficient approach suitable for segmenting images into two regions with uniform intensities [51]. The output of the C–V model for AM parts is a binary image with features segmented from the background. The computational efficiency of the C–V model makes it appealing for in situ real-time analysis since it reduces the dwell time between printing layers while performing image analysis. The energy functional of the C–V model is presented in Equation (1) [47].
F(C, c_1, c_2) = \mu \cdot \mathrm{length}(C) + \nu \cdot \mathrm{area}(\mathrm{inside}(C)) + \lambda_1 \int_{\mathrm{inside}(C)} |u_0(x,y) - c_1|^2 \, dx \, dy + \lambda_2 \int_{\mathrm{outside}(C)} |u_0(x,y) - c_2|^2 \, dx \, dy
where u0 is the original image, μ and ν are regularization parameters, c1 and c2 are the average values of u0 inside and outside the closed contour C, and λ1 and λ2 are intensity weighting parameters. The parameters μ, ν, λ1, and λ2 are non-negative rational numbers. As a simplification, the area term is generally ignored for grayscale images by setting ν = 0 [47].
Even though this simplification improves the computational efficiency, the performance of the algorithm is affected due to the binary nature of the piecewise solution, which splits the image into two classes with the lowest intra-class variance. The values of the remaining parameters affect the final segmentation output and are selected based on the image properties and the desired segmentation result. Understanding the significance of each of these parameters is important for achieving acceptable segmentation results, particularly in the context of AM. The significance of the initial level set, intensity weighting parameters (λ1 and λ2), contour smoothness parameter (μ), and iteration limit on the performance of the C–V model is discussed.
  • Initial level set: The initial level set function is a signed distance function that represents the initial contour. The shape and size of the initial level set can significantly affect the convergence of the model and the quality of the final segmentation result. Common choices for the shape of the initial level set include simple geometric shapes such as circles or rectangles, placed at strategic locations in the image, or more complex patterns such as a checkerboard. The initial level set function is initialized with the desired contour and is updated iteratively to minimize the energy function. AM parts often contain complex internal features and structures, and simple geometric shapes may not be suitable for the initial level set. Moreover, due to the limited prior knowledge about feature shapes, utilizing complex patterns such as a checkerboard allows for a more flexible and adaptable initialization.
  • Intensity weighting parameters λ1 and λ2: The intensity weighting parameters control the influence of the intensity term in the energy functional. A higher value of λ1 and λ2 gives more weight to the intensity term, resulting in a contour that closely follows the intensity boundaries in the image. Generally, both λ1 and λ2 are set to a similar value, usually 1, which can be effective when the object of interest and the background have similar variability in terms of intensity. Adjusting these values relative to each other, especially in images with significant noise or texture, allows for finer control over the segmentation. This is important for components manufactured through AM in general, as variations in lighting and material properties can significantly affect the quality of image analysis.
  • Contour smoothness parameter μ : The contour smoothness parameter controls the smoothness of the contour in the segmentation process. Higher values of μ penalize the length of the contour, resulting in a smoother contour without jagged edges. This is preferable if the acquired image contains a large amount of image noise or texture. However, a very high value might also oversimplify the contour, causing it to miss important features of the object. If images contain fine details, then a lower value of μ is preferable, as it allows the contour to fit more closely to small features of the object. However, a very low μ can cause the contour to be too irregular, leading to false edges or inaccuracies in the segmentation. For AM applications, a low μ value is desirable; however, it is important to preprocess the image to reduce excessive noise. According to the literature, for most applications, a good starting value is μ = 0.100 [50].
  • Iteration limit n_max: The iteration limit specifies the maximum number of iterations the algorithm will execute to minimize the energy function. During each iteration, the algorithm updates the level set function to progressively refine the segmentation until the iteration limit is reached. The choice of the iteration limit is based on the convergence behavior of the algorithm and the desired accuracy of the segmentation results. A higher iteration limit allows the algorithm to refine the segmentation, which can lead to more accurate results. However, it also increases the computation time, an important consideration for AM where the dwell time between layers to process the acquired image affects the overall print time.
While the C–V model is suitable for segmenting images into two regions with uniform intensities, it is limited when segmenting images with multiple regions or regions with varying intensities. AM parts often contain multiple regions with varying intensities due to internal features, external geometry, and the background. The intensity variation issue is addressed by proposing a two-stage solution where thresholding methods are used to obtain the external geometry and the C–V model is used to segment the internal features.
Simple thresholding is used to segment the external geometry, which is analyzed for external features, and used to establish a local RoI for segmenting the internal features. The RoI is offset from the external contour to limit the effects of the external contour on the segmentation of the internal features. When determining the offset amount, it is important to consider the resolution of the image acquisition system, the size of the features being analyzed, and the potential for boundary artifacts to affect the segmentation process.
The local RoI is the region where the C–V model is applied to segment the internal features. A checkerboard pattern is selected as the initial level set function, and the algorithm is iterated until the energy function is minimized. The output is a binary mask of the internal features of the part. Then, the external and internal masks are combined to generate the as-printed layer mask, which contains the part external geometry and internal features.
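A condensed sketch of this two-stage composite segmentation is given below, using OpenCV for the thresholding stage and the scikit-image chan_vese implementation for the internal features; the threshold value, RoI offset, and C–V parameters are illustrative rather than the exact values used on the evaluation platform.

```python
# Sketch of the composite segmentation: simple thresholding for the external
# geometry, then the Chan-Vese model inside an offset local RoI for the
# low-contrast internal features. Values are illustrative.
import cv2
import numpy as np
from skimage.segmentation import chan_vese

def composite_segment(gray, offset_px=20):
    # Stage 1: external geometry (dark part on bright tape assumed).
    _, ext_mask = cv2.threshold(gray, 100, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(ext_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    # Stage 2: Chan-Vese on a local RoI offset inward from the external contour.
    roi = gray[y + offset_px:y + h - offset_px, x + offset_px:x + w - offset_px]
    cv_seg = chan_vese(roi.astype(float) / 255.0, mu=0.01, lambda1=1, lambda2=1,
                       max_num_iter=500, init_level_set="checkerboard")
    int_mask = np.zeros_like(gray)
    int_mask[y + offset_px:y + h - offset_px,
             x + offset_px:x + w - offset_px] = (cv_seg * 255).astype(np.uint8)
    # Combine external and internal masks into the as-printed layer mask.
    return cv2.bitwise_or(ext_mask, int_mask)
```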

2.4. Comparison of the As-Printed with As-Processed Layers

Each detected contour is analyzed to determine its geometric characteristics, such as dimensions, shape, and location of the feature within the image. The location data are used to calculate the RoI and align the as-printed and as-processed layer masks for comparison. The shape of the contour can be determined by applying the Ramer–Douglas–Peucker algorithm [52,53], which approximates a curve as a polygon and outputs the number of sides of the polygon. Dimensions are computed using the Euclidean distance between specific points on the bounding box of the contour. The segmented contours and features are saved in a single binary mask representing the as-printed layer.
The framework then generates two lists of contours, one representing the as-processed layer from the toolpath data and another representing the as-printed layer from the image data. The centroid of each contour is determined to establish the location of the features within the layer, using image moments which calculate a weighted average of pixel intensities [54]. The tolerance for geometric differences to be reported is defined based on the process, imaging system resolution, and user-defined limits.
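A minimal sketch of this per-contour analysis with OpenCV is given below; the Ramer–Douglas–Peucker approximation is provided by cv2.approxPolyDP, and the epsilon fraction and the use of the upright bounding box are illustrative choices.

```python
# Sketch: per-contour analysis of shape (number of polygon sides), dimensions
# (bounding box, in pixels and mm), and location (centroid via image moments).
import cv2

def analyze_contour(contour, s=39.0):
    # Shape: sides of the Ramer-Douglas-Peucker approximated polygon.
    eps = 0.01 * cv2.arcLength(contour, True)        # epsilon fraction is illustrative
    sides = len(cv2.approxPolyDP(contour, eps, True))
    # Dimensions from the upright bounding box, converted with the scale s (px/mm).
    x, y, w, h = cv2.boundingRect(contour)
    # Location: centroid from image moments.
    m = cv2.moments(contour)
    cx, cy = (m["m10"] / m["m00"], m["m01"] / m["m00"]) if m["m00"] else (x, y)
    return {"sides": sides, "w_px": w, "h_px": h,
            "w_mm": w / s, "h_mm": h / s, "centroid_px": (cx, cy)}
```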

3. Experimental Setup and Configuration

The proposed framework described in Section 2 has been implemented and tested on a commercially available 3D printing platform, which has been customized with specific driving software and augmented with image acquisition software and hardware. All platform operations are managed by a single-board computer. The image acquisition hardware consists of a camera and an LED lighting system, both mounted on the print head.

3.1. Additive Manufacturing Platform

A Creality Ender 3 Pro Fused Filament Fabrication printer (Creality 3D Technology Co., Shenzhen, China) [55] was used for this research. This printer has a build area of 235 mm × 235 mm and features a printer head with a 0.4 mm diameter extruder nozzle. The design of the Ender 3 Pro allows the build area to move along the Y-axis, while the printer head is mounted on a gantry that can move along both the X- and Z-axes. The AM platform used for this research including the custom-designed mount and the imaging setup is shown in Figure 3a. The custom-designed mount is attached to the print head assembly and integrates the imaging and lighting hardware. This mount provides a stable platform for the camera and the accompanying LED lighting system, which consists of two Neopixel LED ring-shaped arrays (Adafruit, New York, NY, USA). These LEDs are individually addressable and are controlled by the host system, allowing for adjustable lighting conditions to enhance the contrast of captured images. The ring shape ensures even illumination of the surface and minimizes shadows caused by the geometry and finish of the printed parts. The segmentation process requires high contrast between the part and the background to ensure that the printed part can be segmented from the printer bed. White masking tape is used on the bed to provide a bright background and ensure that the part adheres to the bed. The part is printed using matte blue Polylactic Acid (PLA) filament. The printing parameters used for this research are shown in Table 1.
The operations of the developed environment are managed by a Raspberry Pi 2 Model B host (Raspberry Pi Foundation, Cambridge, UK) [56], which controls the camera, LED lights, and the execution of the framework algorithms. The original Marlin firmware on the printer is replaced with the Klipper firmware [57], which works in conjunction with a host process running on the Raspberry Pi. This firmware change improves printer performance by off-loading computationally demanding tasks to the Raspberry Pi, thus reducing performance issues during printing. Additionally, the Raspberry Pi host enhances the system flexibility by supporting GCODE macros and providing Application Programming Interfaces for communication with external programs and control of devices. The software for both printing and imaging (acquisition and processing) is custom-developed in Python 3.10 and runs on the host. Image processing and segmentation are implemented using OpenCV 4.6.0 [58], a widely used open source computer vision library that provides a comprehensive collection of functions for image and video processing tasks.

3.2. Image Acquisition Setup and Calibration

A Raspberry Pi HQ camera (Raspberry Pi Foundation, Cambridge, UK) [59] is used to acquire images of the printed layers. The camera is equipped with a high-resolution Sony IMX477 backside-illuminated CMOS sensor (Sony Semiconductor Solutions Corporation, Atsugi, Japan) [60], capable of capturing 12-bit raw image data. The camera is paired with an 8 mm wide-angle lens with manual aperture and zoom controls. During initial testing, apertures larger than f/1.8 were observed to result in increased chromatic aberration, creating colored fringes around the edges of features. The lens aperture was set to f/2.0 to ensure that the sensor captures details in both shadows and highlights with adequate contrast and sharpness to support the segmentation process. The zoom is adjusted to ensure that the layer being inspected is in sharp focus. The camera is mounted on the print head assembly for a top-down view of the printed layer. The camera position is calibrated to ensure the entire layer is within its field of view. A close-up view of the image acquisition setup and its components is shown in Figure 3b.
The camera is calibrated using a reference object-based calibration procedure to estimate the intrinsic and extrinsic camera parameters [61]. A planar surface with a checkerboard pattern printed on paper is used for calibration. The checkerboard size is chosen based on the camera resolution and the field of view. A diverse set of images is acquired considering various orientations, distances, and angles of the checkerboard pattern surface relative to the camera. The images are analyzed to accurately identify the intersection points of the checkerboard squares. An optimization algorithm is used to minimize the distance between the imaged and predicted corner positions. This process yields the intrinsic camera matrix, K, the distortion coefficients, D, and the rotation, R, and translation, T, vectors, as shown in Equations (2)–(5).
K = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}
D = [k_1, k_2, p_1, p_2, k_3]
R = [r_1, r_2, r_3]^T
T = [t_x, t_y, t_z]^T
These parameters are used to undistort captured images using an inverse distortion model to compute normalized undistorted image coordinates, accounting for both radial and tangential distortions [62]. The output is an undistorted image suitable for further image analysis processes. The calibration is performed before starting the printing operation and must be repeated if there are any changes in the position or orientation of the camera relative to the print bed.
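A condensed sketch of this calibration procedure using OpenCV's chessboard routines is shown below; the board corner count, image file pattern, and termination criteria are assumptions for illustration, while the 3 mm square size matches the checkerboard described later in this section.

```python
# Sketch: reference-object camera calibration with a printed checkerboard.
# Detect inner corners in a set of calibration images, then estimate the
# intrinsic matrix K, distortion coefficients D, and per-view R, T vectors.
import glob
import cv2
import numpy as np

pattern = (9, 6)          # inner corners per row/column (assumed board layout)
square_mm = 3.0           # square side length, as described in this section
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square_mm

obj_points, img_points = [], []
for path in glob.glob("calib_*.png"):            # assumed file naming
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

# Requires at least one successful detection; gray holds the last image loaded.
ret, K, D, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
undistorted = cv2.undistort(cv2.imread("layer.png"), K, D)
```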
The parameters fx and fy are obtained from the camera intrinsic matrix K and represent the focal lengths (in pixels) in the X- and Y-directions, respectively. These parameters are used to calculate the field of view (FoV) and the RoI for the as-processed layer. The horizontal and vertical FoV (in radians), FoV_x and FoV_y, can be calculated using Equations (6) and (7), respectively. The RoI width, W_RoI, and height, H_RoI, are calculated using Equations (8) and (9), respectively.
FoV_x = 2 \arctan \left( \frac{\text{image width}}{2 f_x} \right)
FoV_y = 2 \arctan \left( \frac{\text{image height}}{2 f_y} \right)
W_{RoI} = 2 \, d \, \tan \left( \frac{FoV_x}{2} \right)
H_{RoI} = 2 \, d \, \tan \left( \frac{FoV_y}{2} \right)
where d is the distance between the camera and the printed layer. The RoI is centered at the optical axis of the camera at (c_x, c_y) in the camera coordinate frame. The RoI width extends along the X-axis and its height along the Y-axis. The RoI is symmetrically positioned relative to the optical axis. The calculated dimensions and location are used to define the RoI for the as-processed layer.
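Equations (6)–(9) can be condensed into a short helper, sketched below under the assumption that the intrinsic matrix K and the camera-to-layer distance d are already known; the function name is illustrative.

```python
# Sketch: field of view and RoI size from the intrinsic matrix (Equations (6)-(9)).
import numpy as np

def roi_from_intrinsics(K, image_w_px, image_h_px, d_mm):
    fx, fy = K[0, 0], K[1, 1]
    fov_x = 2.0 * np.arctan(image_w_px / (2.0 * fx))   # horizontal FoV (rad)
    fov_y = 2.0 * np.arctan(image_h_px / (2.0 * fy))   # vertical FoV (rad)
    w_roi = 2.0 * d_mm * np.tan(fov_x / 2.0)           # RoI width (mm) along X
    h_roi = 2.0 * d_mm * np.tan(fov_y / 2.0)           # RoI height (mm) along Y
    return w_roi, h_roi
```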
The manufacturing platform consists of multiple components, with different coordinate systems and measurement units. To ensure inter-operation between the various components, it is important to calibrate each system to a standardized unit system and to define a unified coordinate system with a common origin point. The presented research uses millimeters and degrees as the units of measurement. The machine origin of the 3D printer is designated as the common origin for all components. All calibration procedures are performed and all toolpath commands are modified relative to this common origin before initiating the print operation.
A high degree of measurement accuracy is important for evaluating both the external and internal features of the printed part. A conversion scale between pixels and real-world metric units is established. The checkerboard pattern, used for camera calibration, has squares of known dimensions and is used to establish the unit conversion scale. The length of the square side in pixels, l_p, is calculated using the corner coordinates of the squares. The known length of the square side, l_k, is used to calculate the conversion scale, s, shown in Equation (10).
s = \frac{l_p}{l_k}
The checkerboard pattern used for calibration consists of squares with a 3 mm side (l_k). After imaging, the corners were measured to be 117 pixels (l_p) apart. Using Equation (10), the conversion scale is calculated to be 39 pixels/mm. As such, the image acquisition system can identify features with a 0.025 mm/pixel resolution. The conversion scale is used to calculate the physical length using the counted pixels. Recalibration is necessary if there is any change in the distance between the camera sensor and the part being measured. In this research, the distance from the camera to the most recent printed layer to be imaged is kept constant by securing the camera on the print head assembly.
The minimum printable feature size (MPFS) of the FDM-based evaluation platform is assumed to be the same as the nozzle diameter of 0.4 mm. The MPFS in pixels is calculated using Equation (11), where the ceiling function is employed since a pixel is a discrete quantity. The MPFS is calculated to be 16 pixels.
\mathrm{MPFS} = \left\lceil \text{Nozzle Diameter} \times s \right\rceil \; \text{pixel(s)}
It is important to establish a relationship between the camera’s optical axis and the printer coordinate system since they operate in different coordinate systems. The optical axis is an imaginary line that runs perpendicular to the center of the camera image frame. Offsets in the X- and Y- directions between this optical axis and the printer nozzle are computed to align the camera frame with the printer origin.
The approach for positional calibration uses a circle at a known location on the printer bed relative to the printer coordinate axes as the calibration feature due to its simplicity since it requires only the definition of its origin and diameter. The circle is printed on the bed and an image is captured by positioning the camera over the circle. Using the Hough Circle Transform [63], the circle is detected in the image, and the center of the circle is determined in pixel coordinates (x_i, y_i), which is at an offset (x_o, y_o) from the center of the camera frame. The offsets are then converted into millimeters using the conversion scale, s, calculated during the camera calibration. The positional calibration procedure was repeated 50 times to improve accuracy. The computed offsets had an average error of 4 and 6 pixels (0.100 mm and 0.150 mm) along the X- and Y-axes, respectively. The positional offsets are used to align the segmented as-processed toolpath with the camera-acquired image of the as-printed part.
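A minimal sketch of this positional calibration step is shown below, assuming OpenCV's Hough Circle Transform and the conversion scale s from Equation (10); the detector parameters are illustrative and a successful detection is assumed.

```python
# Sketch: detect the printed calibration circle with the Hough Circle
# Transform and convert its pixel offset from the camera frame center into
# millimeters using the conversion scale s (pixels/mm).
import cv2
import numpy as np

def nozzle_camera_offset(gray, s=39.0):
    blurred = cv2.GaussianBlur(gray, (9, 9), 2)
    circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1.2, minDist=100,
                               param1=100, param2=40, minRadius=50, maxRadius=300)
    xi, yi, _ = circles[0][0]            # detected circle center (pixels); assumes a detection
    cx, cy = gray.shape[1] / 2.0, gray.shape[0] / 2.0   # camera frame center (pixels)
    return (xi - cx) / s, (yi - cy) / s  # X, Y offsets in mm
```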

4. Results

The performance of the proposed framework (see Section 2) for in situ identification of geometric differences between as-processed and as-printed parts on a layer-by-layer basis is evaluated on a multilayer FDM-printed part (Figure 4) using the presented procedures (see Section 2) and experimental setup (see Section 3). The camera calibration matrix, pixel-to-real length units conversion scale, and the positional offsets between the camera and the printer nozzle are computed.
A 20.000 mm × 20.000 mm part was designed in CAD software as shown in Figure 4a. The part consists of eight layers with a layer height of 0.2 mm. The first layer is completely solid with no internal features. Layers two through eight include three internal features, two circles each with a diameter of 4.000 mm and a rectangle of 5.000 mm × 10.000 mm (Figure 4b). The as-processed contours for the first layer and layers two through eight are shown in Figure 4c,d, respectively.
After each layer is printed, the camera assembly is moved over the printed part to capture an image of the printed layer. The image is preprocessed to remove optical distortions, improve brightness and contrast, and generate the required RoI. On the first layer, segmentation is performed in the RoI using simple thresholding to detect the external geometry and internal features. For the remaining layers, the external geometry is obtained using simple thresholding to establish a local RoI within which internal features with low contrast are segmented using the C–V model.
The following subsections discuss the C–V parameters and their significance in segmenting images of AM components, followed by the results of segmenting internal features on subsequent layers using the C–V model.

4.1. Effects of Chan–Vese Parameters

When using the C–V model to segment internal features of parts manufactured using AM, it is important to understand how changing different parameters can affect the segmentation output. A software-based checkerboard pattern with squares of alternating binary values is defined and used as the initial level set, which must cover the RoI. This allows the C–V model to be initialized from multiple points. The resolution of the checkerboard pattern depends on the MPFS (see Section 3) and the resolution of the image acquisition system. For the developed evaluation setup, the size of each square is defined to be 15 pixels. While a higher resolution checkerboard pattern could improve segmentation quality by detecting finer details of the features, it would also increase the convergence time due to the added complexity in initial contour calculations.
The contour smoothness parameter, μ, and the iteration limit, n_max, are important to determine the quality of the segmentation. The value of μ is typically selected between 0 and 1 [47,49]. The intensity weighting factors, (λ1, λ2), are set equal to 1 due to the minimal variation between the two regions in the analyzed samples. The effects of varying μ ∈ [0.002, 0.250] for n_max = 500 are shown in Figure 5. The effects of varying n_max ∈ [1, 1000] with μ = 0.010 are shown in Figure 6.
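A brief sketch of how such a parameter study can be set up with scikit-image is given below; the 15-pixel checkerboard construction follows the description above, while the helper names, the list of μ values, and the use of extended_output to record the energy history are illustrative assumptions.

```python
# Sketch: build a 15-pixel checkerboard initial level set and sweep the
# contour smoothness parameter mu, recording the energy history to examine
# convergence behavior (cf. Figures 5 and 6). roi is assumed to be a float
# grayscale image scaled to [0, 1].
import numpy as np
from skimage.segmentation import chan_vese

def checkerboard_level_set(shape, square_px=15):
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    return np.where(((yy // square_px + xx // square_px) % 2) == 0, 1.0, -1.0)

def sweep_mu(roi, mus=(0.002, 0.01, 0.025, 0.05, 0.15, 0.25), n_max=500):
    init = checkerboard_level_set(roi.shape)
    results = {}
    for mu in mus:
        seg, phi, energies = chan_vese(roi, mu=mu, lambda1=1, lambda2=1,
                                       max_num_iter=n_max, init_level_set=init,
                                       extended_output=True)
        results[mu] = (seg, energies)
    return results
```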

4.2. Segmentation Results

Saini et al. [39] compared several segmentation methods for AM components and found that traditional segmentation methods such as thresholding and edge detection were not effective for segmenting internal features of AM components.

4.2.1. Comparison of Segmentation Methods for the Multilayer Part

The image of each printed layer (see Figure 4) was acquired and processed to obtain the external geometry and define the RoI using the simple thresholding method. The region within the RoI was then segmented using the C–V model. The results of the segmentation are shown in Figure 7. The as-printed geometry is shown in Figure 7a, the simple thresholding output showing the high-contrast contours is shown in Figure 7b, and the C–V method output is shown in Figure 7c. A composite image of simple thresholding and C–V output is shown in Figure 7d. Additionally, image processing metrics such as accuracy, precision, recall, and Jaccard index [39] are computed to evaluate how simple thresholding and the C–V model compare to traditional methods. The results for the performance metrics are summarized in Table 2 and the segmentation output is shown in Figure 8.

4.2.2. Feature Recognition and Dimensional Analysis

The results for feature recognition and analysis for the segmented layers are discussed, focusing on the first and eighth layers of the part. The segmentation output of the composite image of each layer is analyzed to identify the geometric features, their shapes, and dimensions, which are then compared with the as-processed layer features. As mentioned in Section 2, the dimensions are computed using the Euclidean distance between the opposite edges on the minimum bounding box of the contour. The number of sides of a segmented feature contour is determined by applying the Ramer–Douglas–Peucker algorithm which approximates the contour as a polygon.
The first layer of the part is a 20.000 mm × 20.000 mm solid layer with no internal features. The external geometry is identified using simple thresholding with the results shown in Table 3 in both pixels and millimeters. The process is repeated for the next layers which contain the internal features, two circles and a rectangle. The external geometry is segmented using simple thresholding and the internal features are segmented using the C–V model. The results for the eighth layer are shown in Table 4 in pixels and millimeters.

5. Discussion

5.1. Chan–Vese Parameters

The contour smoothness parameter, μ, is tuned according to the application needs and image characteristics. A low μ is preferred, but image preprocessing is essential to minimize noise. In the literature [47], a value between 0 and 1 is recommended. In this research, the intensity weighting factors, (λ1, λ2), are set equal to 1 due to the minimal variation between the two regions in the samples analyzed.
The effects of varying μ ∈ [0.002, 0.250] with n_max = 500 are shown in Figure 5. A low μ (= 0.002) provided high-quality segmentation, but at a high computational cost. The segmentation also included unwanted edges such as those from the infill. At μ = 0.025 and μ = 0.050, it is observed that weaker contours begin to break, and ghosting artifacts appear in empty regions. Additionally, for samples with μ = 0.050, 0.150, and 0.250, a rapid drop in energy was observed with the model prematurely converging at 316, 173, and 89 iterations, respectively. Since μ controls the smoothness of the contours, a high value results in an oversimplified contour, causing a reduction in energy [50]. For the presented research, a value of μ ∈ [0.01, 0.025] provided the best balance of detected features, minimal unwanted defects, and a low iteration number. A low iteration number is desired since it reduces the computational cost during real-time analysis.
The effects of varying the number of iterations on the segmentation output with μ = 0.01 and λ1 = λ2 = 1 are shown in Figure 6. The results show that the initial level set dominates the segmentation output for the first 75 iterations. Features are segmented after 125 iterations; however, unwanted weaker edges are also segmented. At 250 iterations, the influence of the initial level set decreases significantly, revealing both weak and strong edges with greater clarity. However, several unwanted weak edges attributed to image noise and surface textures are still visible. Most of these weak edges are effectively filtered out by 500 iterations, enhancing the segmentation quality. The improvement in segmentation becomes marginal when comparing the results between 500 and 1000 iterations.
The Python implementation of the C–V model using the Scikit-image v0.21 package [64] is limited to using a single core, and on the Raspberry Pi 2 Model B host completing 500 iterations takes approximately 18 s. This observation highlights the importance of keeping the iteration number low, especially for in situ real-time layer-by-layer analysis of AM printed parts since every iteration adds to the overall print time, considering the dwell period while each layer is analyzed and evaluated.

5.2. Segmentation of AM Components

The results of the segmentation of the composite method using simple thresholding for external geometry and C–V for internal features are compared to the traditional methods of thresholding, edge detection, and watershed transform. The results show that the composite method provides better segmentation in most cases. In layer L1, the composite method achieves the highest accuracy of 99.62% and shows significantly improved precision (81.62%), recall (94.54%), and Jaccard index (78.34%). The high recall and Jaccard index values indicate that the composite method is effective in segmenting the external geometry with a high rate of true positives. These metrics show the effectiveness of the composite method in processing parts with a variety of contrast and texture conditions.
The composite method consistently achieved high performance metric values, outperforming traditional methods across most metrics for multiple layers (L3, L5, and L8) of the printed test sample. In layer L3, the composite method achieves an accuracy of 99.12%, a precision of 84.32%, and a recall of 76.51%. On the other hand, simple thresholding has an accuracy of 97.41%, a precision of 35.98%, and a recall of 43.26%. Both adaptive thresholding and the Sobel edge detector exhibit even lower performance in precision and recall.
The Jaccard index, a measure of the intersection over the union of the segmented regions, is particularly significant as it provides a comprehensive metric for evaluating segmentation performance in terms of precision and recall. In layer L3, the composite method achieves a Jaccard index of 67.19%, whereas simple thresholding and adaptive thresholding have Jaccard indices of 24.45% and 15.68%, respectively. This performance demonstrates the ability of the composite method to segment features with high accuracy and precision while maintaining low sensitivity to noise and texture variations. Despite the high overall performance of the composite method across different layers, a downward trend is observed in the Jaccard index as we progress from layer L3 to L8; however, its values are still larger than those of the other methods. This decline in performance can be attributed to an observed increase in surface noise and texture variations as more layers are printed and to artifacts introduced during printing.
In our previous research [39], adaptive thresholding and the Sobel edge detector, despite their lower metric scores, were noted for their ability to segment internal and external features. However, the segmentation outputs of these methods were characterized by high noise, which resulted in many false edges. This was largely due to their high sensitivity to noise and texture variations. Comparing these methods to the composite one, the latter shows better performance for all measures while also maintaining a low sensitivity to noise and texture variations.

5.3. Feature Recognition and Evaluation

The feature recognition and evaluation show that the composite method effectively identifies and measures the dimensions of both external and internal features by computing the Euclidean distance between the opposite edges of the minimum bounding box. The implementation of the Ramer–Douglas–Peucker algorithm in the framework enables the identification of the shape of the feature to be evaluated. The algorithm simplifies the contour of the feature and estimates the number of sides. Irregularly shaped features such as ellipses and circles are identified as polygons with a large number of sides.
The as-processed dimensions of the first layer were 20.000 mm × 20.000 mm, corresponding to 800 pixels × 800 pixels (using the conversion factor in Equation (10)). After processing, the identified as-printed dimensions for the height and width were 806 pixels and 797 pixels, corresponding to measurements of 20.150 mm and 19.925 mm, respectively. The differences of 6 pixels in the height and −3 pixels in the width of the printed layer correspond to +0.150 mm and −0.075 mm, respectively. These measurements demonstrate the capability of the imaging environment and subsequent image processing to identify differences at the pixel level, which for this work is as small as 0.025 mm. These differences could be attributed to process factors such as filament flow variations and inconsistencies in material deposition. Additionally, the number of sides was correctly identified as four, indicating a quadrilateral feature.
Layers two through eight contained the same internal features: two circles and a rectangle. Simple thresholding was used to segment the external geometry and the C–V model to segment the low-contrast internal features. As shown in Table 4, the composite method measured the as-processed height and width of both internal circles as 160 pixels (4.000 mm). The left as-printed circle measured 153 pixels in height and 154 pixels in width, corresponding to 3.825 mm and 3.850 mm, with differences of −7 and −6 pixels, respectively. The right as-printed circle measured 150 pixels in height and 152 pixels in width, or 3.750 mm and 3.800 mm, with differences of −10 and −8 pixels, respectively. The larger difference between the as-processed and as-printed dimensions for the right circle is attributed to material over-extrusion, which affects the geometry of the feature. The Ramer–Douglas–Peucker algorithm identified the two circles as polygons with 19 and 14 sides, respectively. The as-processed dimensions of the internal rectangle were 200 pixels × 400 pixels (5.000 mm × 10.000 mm) and the as-printed dimensions were 199 pixels × 405 pixels (4.975 mm × 10.125 mm), yielding differences of −1 pixel in height and +5 pixels in width. The rectangle was correctly approximated as having four sides.
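To make the composite segmentation step concrete, the following sketch applies a global threshold to the high-contrast external geometry and the Chan–Vese model, with the parameters reported in this work (λ1 = λ2 = 1, μ = 0.01, 500 iterations, and a 15-pixel checkerboard initial level set), to the low-contrast internal features. The threshold value, the helper names, and the call to scikit-image's chan_vese are assumptions for illustration; the step that merges the two masks is framework-specific and therefore not reproduced here.

import cv2
import numpy as np
from skimage.segmentation import chan_vese

def checkerboard_level_set(shape, square=15):
    # Checkerboard initial level set with 'square'-pixel squares (+1 / -1 regions).
    yy, xx = np.mgrid[:shape[0], :shape[1]]
    return np.where(((yy // square) + (xx // square)) % 2 == 0, 1.0, -1.0)

def composite_segmentation(gray, external_thresh=100):
    # External geometry: simple (global) thresholding of the grayscale layer image;
    # the threshold value is an assumption for illustration.
    _, external_mask = cv2.threshold(gray, external_thresh, 255, cv2.THRESH_BINARY)
    # Internal features: Chan-Vese active contour with lambda1 = lambda2 = 1, mu = 0.01,
    # and 500 iterations (the keyword is max_iter in older scikit-image versions).
    normalized = gray.astype(float) / 255.0
    internal_mask = chan_vese(normalized, mu=0.01, lambda1=1.0, lambda2=1.0,
                              max_num_iter=500, dt=0.5,
                              init_level_set=checkerboard_level_set(gray.shape))
    # Both binary masks are returned; the framework combines them before feature analysis.
    return external_mask > 0, internal_mask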

6. Conclusions

It is important to ensure that AM-fabricated components and their features meet specified geometric requirements. Since AM processes build parts layer by layer, analyzing both internal and external features in real time presents several challenges due to low contrast and varying surface textures. Accurate feature identification and analysis at the layer level is essential to address these challenges and assess printed part quality.
This research introduces a novel framework for segmenting AM components using a composite method that combines simple thresholding and the C–V active contour model. The C–V model employs energy minimization to segment images based on pixel intensity without relying on prior knowledge of the shape of the internal features. The effects of the parameters that influence the C–V model performance were analyzed. The initial level set, a checkerboard pattern, allowed the model to identify contours starting from multiple points. The intensity weighting factors (λ1 and λ2) were set to 1, the contour smoothness weight (μ) to 0.01, and the number of iterations (n) to 500. The model effectively segmented the low-contrast internal features of the test part in situ and in real time, with a low dwell period while the feature analysis was performed.
An FDM-based test platform with a Raspberry Pi host was developed, and the proposed framework was implemented as a custom Python script. Calibration methodologies were developed to ensure accurate measurements of the printed components and features within the 1-pixel (0.025 mm) resolution of the imaging system. A multilayer (eight-layer) part was printed and segmented at each layer using the composite method. The segmentation results of the composite method were compared with those of traditional segmentation methods and demonstrated improved performance in most cases, achieving a Jaccard index as high as 78.34%.
The shape and dimensions of the as-printed features (two circles and a rectangle) were evaluated and compared with the as-processed ones. The developed composite segmentation method identified geometric differences in the in-layer features ranging from 1 to 10 pixels (0.025 mm to 0.250 mm) after printing each layer, in situ and in real time, requiring approximately 18 s per layer for image analysis on the current computing platform (a Raspberry Pi 2 Model B).
The results of this research validate the potential of the presented framework to segment features in AM components and to evaluate geometric differences between the as-printed and as-processed features on a layer-by-layer basis for quality control purposes. The evaluation of the current framework used simple geometries, such as rectangles and circles; in future research, the performance of the framework will be evaluated using more complex geometries. While this research did not explore printed components and features exceeding the camera FoV, future research will investigate 'stitching' or combining multiple camera FoV regions to cover larger features or layers.

Author Contributions

Conceptualization, P.S.S.; methodology, T.S. and P.S.S.; software, T.S.; validation, T.S. and P.S.S.; formal analysis, T.S. and P.S.S.; investigation, T.S.; resources, P.S.S.; data curation, T.S.; writing–original draft preparation, T.S.; writing–review and editing, T.S. and P.S.S.; visualization, T.S. and P.S.S.; supervision, P.S.S.; project administration, P.S.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in this study are included in the article.

Acknowledgments

The authors would like to thank the anonymous reviewers for their feedback that helped improve the article from its original form.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AM – Additive Manufacturing
C–V – Chan–Vese
FDM – Fused Deposition Modeling
DT – Digital Twin
CNN – Convolutional Neural Network
LPBF – Laser Powder Bed Fusion
CAD – Computer-Aided Design
STL – Standard Tessellation Language
GCODE – Geometric Code
RoI – Region of Interest
FoV – Field of View
MPFS – Minimum Printable Feature Size

References

  1. Barcena, A.J.R.; Ravi, P.; Kundu, S.; Tappa, K. Emerging Biomedical and Clinical Applications of 3D-Printed Poly(Lactic Acid)-Based Devices and Delivery Systems. Bioengineering 2024, 11, 705. [Google Scholar] [CrossRef] [PubMed]
  2. Patel, P.; Ravi, P.; Shiakolas, P.; Welch, T.; Saini, T. Additive manufacturing of heterogeneous bio-resorbable constructs for soft tissue applications. In Proceedings of the Materials Science and Technology 2018, MS and T 2018, Columbus, OH, USA, 14–18 October 2018; pp. 1496–1503. [Google Scholar] [CrossRef]
  3. Adejokun, S.A.; Kumat, S.S.; Shiakolas, P.S. A Microrobot with an Attached Microforce Sensor for Transurethral Access to the Bladder Interior Wall. J. Eng. Sci. Med. Diagn. Ther. 2023, 6, 031001. [Google Scholar] [CrossRef]
  4. Martelli, A.; Bellucci, D.; Cannillo, V. Additive Manufacturing of Polymer/Bioactive Glass Scaffolds for Regenerative Medicine: A Review. Polymers 2023, 15, 2473. [Google Scholar] [CrossRef]
  5. Wang, J.; Shao, C.; Wang, Y.; Sun, L.; Zhao, Y. Microfluidics for Medical Additive Manufacturing. Engineering 2020, 6, 1244–1257. [Google Scholar] [CrossRef]
  6. Hazra, S.; Abdul Rahaman, A.H.; Shiakolas, P.S. An Affordable Telerobotic System Architecture for Grasp Training and Object Grasping for Human–Machine Interaction. J. Eng. Sci. Med. Diagn. Ther. 2024, 7, 011011. [Google Scholar] [CrossRef]
  7. Salifu, S.; Desai, D.; Ogunbiyi, O.; Mwale, K. Recent development in the additive manufacturing of polymer-based composites for automotive structures—A review. Int. J. Adv. Manuf. Technol. 2022, 119, 6877–6891. [Google Scholar] [CrossRef]
  8. Saini, T.; Shiakolas, P.; Dhal, K. In-Situ Fabrication of Electro-Mechanical Structures Using Multi-Material and Multi-Process Additive Manufacturing. In Proceedings of the Contributed Papers from MS&T17, MS&T18, Columbus, OH, USA, 14–18 October 2018; pp. 49–55. [Google Scholar] [CrossRef]
  9. Ravi, P.; Shiakolas, P.S.; Welch, T.; Saini, T.; Guleserian, K.; Batra, A.K. On the Capabilities of a Multi-Modality 3D Bioprinter for Customized Biomedical Devices. In Proceedings of the ASME International Mechanical Engineering Congress and Exposition, Houston, TX, USA, 13–19 November 2015; Volume 2A, p. V02AT02A008. [Google Scholar] [CrossRef]
  10. Shen, C.; Hua, X.; Li, F.; Zhang, T.; Li, Y.; Zhang, Y.; Wang, L.; Ding, Y.; Zhang, P.; Lu, Q. Composition-induced microcrack defect formation in the twin-wire plasma arc additive manufacturing of binary TiAl alloy: An X-ray computed tomography-based investigation. J. Mater. Res. 2021, 36, 4974–4985. [Google Scholar] [CrossRef]
  11. Ziabari, A.; Venkatakrishnan, S.V.; Snow, Z.; Lisovich, A.; Sprayberry, M.; Brackman, P.; Frederick, C.; Bhattad, P.; Graham, S.; Bingham, P.; et al. Enabling rapid X-ray CT characterisation for additive manufacturing using CAD models and deep learning-based reconstruction. NPJ Comput. Mater. 2023, 9, 91. [Google Scholar] [CrossRef]
  12. Rieder, H.; Dillhöfer, A.; Spies, M.; Bamberg, J.; Hess, T. Online Monitoring of Additive Manufacturing Processes Using Ultrasound. Available online: https://api.semanticscholar.org/CorpusID:26787041 (accessed on 8 June 2024).
  13. Mireles, J.; Ridwan, S.; Morton, P.A.; Hinojos, A.; Wicker, R.B. Analysis and correction of defects within parts fabricated using powder bed fusion technology. Surf. Topogr. Metrol. Prop. 2015, 3, 034002. [Google Scholar] [CrossRef]
  14. Du Plessis, A.; Le Roux, S.G.; Els, J.; Booysen, G.; Blaine, D.C. Application of microCT to the non-destructive testing of an additive manufactured titanium component. Case Stud. Nondestruct. Test. Eval. 2015, 4, 1–7. [Google Scholar] [CrossRef]
  15. Du Plessis, A.; Le Roux, S.G.; Booysen, G.; Els, J. Quality Control of a Laser Additive Manufactured Medical Implant by X-ray Tomography. 3D Print. Addit. Manuf. 2016, 3, 175–182. [Google Scholar] [CrossRef]
  16. Cerniglia, D.; Scafidi, M.; Pantano, A.; Łopatka, R. Laser Ultrasonic Technique for Laser Powder Deposition Inspection. Available online: https://www.ndt.net/?id=15510 (accessed on 25 August 2024).
  17. Lu, L.; Hou, J.; Yuan, S.; Yao, X.; Li, Y.; Zhu, J. Deep learning-assisted real-time defect detection and closed-loop adjustment for additive manufacturing of continuous fiber-reinforced polymer composites. Robot. Comput.-Integr. Manuf. 2023, 79, 102431. [Google Scholar] [CrossRef]
  18. Garanger, K.; Khamvilai, T.; Feron, E. 3D Printing of a Leaf Spring: A Demonstration of Closed-Loop Control in Additive Manufacturing. In Proceedings of the 2018 IEEE Conference on Control Technology and Applications (CCTA), Copenhagen, Denmark, 21–24 August 2018; pp. 465–470. [Google Scholar] [CrossRef]
  19. Cummings, I.T.; Bax, M.E.; Fuller, I.J.; Wachtor, A.J.; Bernardin, J.D. A Framework for Additive Manufacturing Process Monitoring & Control. In Topics in Modal Analysis & Testing, Volume 10; Mains, M., Blough, J., Eds.; Conference Proceedings of the Society for Experimental Mechanics Series; Springer International Publishing: Cham, Switzerland, 2017; pp. 137–146. [Google Scholar] [CrossRef]
  20. He, K.; Zhang, Q.; Hong, Y. Profile monitoring based quality control method for fused deposition modeling process. J. Intell. Manuf. 2019, 30, 947–958. [Google Scholar] [CrossRef]
  21. Baumann, F.; Roller, D. Vision based error detection for 3D printing processes. MATEC Web Conf. 2016, 59, 06003. [Google Scholar] [CrossRef]
  22. Lyngby, R.; Wilm, J.; Eiríksson, E.; Nielsen, J.; Jensen, J.; Aanæs, H.; Pedersen, D. In-line 3D print failure detection using computer vision. In Proceedings of the Dimensional Accuracy and Surface Finish in Additive Manufacturing, Leuven, Belgium, 10–11 October 2017. [Google Scholar]
  23. Yi, W.; Ketai, H.; Xiaomin, Z.; Wenying, D. Machine vision based statistical process control in fused deposition modeling. In Proceedings of the 2017 12th IEEE Conference on Industrial Electronics and Applications (ICIEA), Siem Reap, Cambodia, 18–20 June 2017; pp. 936–941. [Google Scholar] [CrossRef]
  24. Delli, U.; Chang, S. Automated Process Monitoring in 3D Printing Using Supervised Machine Learning; Elsevier: Amsterdam, The Netherlands, 2018; Volume 26, pp. 865–870. ISSN 23519789. [Google Scholar] [CrossRef]
  25. Shen, H.; Sun, W.; Fu, J. Multi-view online vision detection based on robot fused deposit modeling 3D printing technology. Rapid Prototyp. J. 2019, 25, 343–355. [Google Scholar] [CrossRef]
  26. Gaikwad, A.; Yavari, R.; Montazeri, M.; Cole, K.; Bian, L.; Rao, P. Toward the digital twin of additive manufacturing: Integrating thermal simulations, sensing, and analytics to detect process faults. IISE Trans. 2020, 52, 1204–1217. [Google Scholar] [CrossRef]
  27. Knapp, G.L.; Mukherjee, T.; Zuback, J.S.; Wei, H.L.; Palmer, T.A.; De, A.; DebRoy, T. Building blocks for a digital twin of additive manufacturing. Acta Mater. 2017, 135, 390–399. [Google Scholar] [CrossRef]
  28. Nath, P.; Mahadevan, S. Probabilistic Digital Twin for Additive Manufacturing Process Design and Control. J. Mech. Des. 2022, 144, 091704. [Google Scholar] [CrossRef]
  29. Farhan Khan, M.; Alam, A.; Ateeb Siddiqui, M.; Saad Alam, M.; Rafat, Y.; Salik, N.; Al-Saidan, I. Real-time defect detection in 3D printing using machine learning. Mater. Today Proc. 2021, 42, 521–528. [Google Scholar] [CrossRef]
  30. Liu, C.; Law, A.C.C.; Roberson, D.; Kong, Z.J. Image analysis-based closed loop quality control for additive manufacturing with fused filament fabrication. J. Manuf. Syst. 2019, 51, 75–86. [Google Scholar] [CrossRef]
  31. Ye, Z.; Liu, C.; Tian, W.; Kan, C. In-situ point cloud fusion for layer-wise monitoring of additive manufacturing. J. Manuf. Syst. 2021, 61, 210–222. [Google Scholar] [CrossRef]
  32. Rossi, A.; Moretti, M.; Senin, N. Layer inspection via digital imaging and machine learning for in-process monitoring of fused filament fabrication. J. Manuf. Process. 2021, 70, 438–451. [Google Scholar] [CrossRef]
  33. Moretti, M.; Rossi, A.; Senin, N. In-process monitoring of part geometry in fused filament fabrication using computer vision and digital twins. Addit. Manuf. 2021, 37, 101609. [Google Scholar] [CrossRef]
  34. Castro, P.; Pathinettampadian, G.; Thanigainathan, S.; Prabakar, V.; Krishnan, R.A.; Subramaniyan, M.K. Measurement of additively manufactured part dimensions using OpenCV for process monitoring. J. Process. Mech. Eng. 2024, 09544089241227894. [Google Scholar] [CrossRef]
  35. Nuchitprasitchai, S.; Roggemann, M.; Pearce, J. Three Hundred and Sixty Degree Real-Time Monitoring of 3-D Printing Using Computer Analysis of Two Camera Views. JMMP 2017, 1, 2. [Google Scholar] [CrossRef]
  36. Holzmond, O.; Li, X. In situ real time defect detection of 3D printed parts. Addit. Manuf. 2017, 17, 135–142. [Google Scholar] [CrossRef]
  37. Lyu, J.; Manoochehri, S. Online Convolutional Neural Network-based anomaly detection and quality control for Fused Filament Fabrication process. Virtual Phys. Prototyp. 2021, 16, 160–177. [Google Scholar] [CrossRef]
  38. Saini, T.; Shiakolas, P.S. A Framework for In-Situ Vision Based Detection of Part Features and its Single Layer Verification for Additive Manufacturing. In Proceedings of the ASME International Mechanical Engineering Congress and Exposition, New Orleans, LA, USA, 29 October–2 November 2023; Volume 3, p. V003T03A083. [Google Scholar] [CrossRef]
  39. Saini, T.; Shiakolas, P.S.; McMurrough, C. Evaluation of Image Segmentation Methods for In Situ Quality Assessment in Additive Manufacturing. Metrology 2024, 4, 598–618. [Google Scholar] [CrossRef]
  40. Gunasekara, S.R.; Kaldera, H.N.T.K.; Dissanayake, M.B. A Systematic Approach for MRI Brain Tumor Localization and Segmentation Using Deep Learning and Active Contouring. J. Healthc. Eng. 2021, 2021, 6695108. [Google Scholar] [CrossRef]
  41. Fang, J.; Wang, K. Weld Pool Image Segmentation of Hump Formation Based on Fuzzy C-Means and Chan-Vese Model. J. Mater. Eng. Perform. 2019, 28, 4467–4476. [Google Scholar] [CrossRef]
  42. Caltanissetta, F.; Grasso, M.; Petrò, S.; Colosimo, B.M. Characterization of in situ measurements based on layerwise imaging in laser powder bed fusion. Addit. Manuf. 2018, 24, 183–199. [Google Scholar] [CrossRef]
  43. Li, Z.; Liu, X.; Wen, S.; He, P.; Zhong, K.; Wei, Q.; Shi, Y.; Liu, S. In Situ 3D Monitoring of Geometric Signatures in the Powder-Bed-Fusion Additive Manufacturing Process via Vision Sensing Methods. Sensors 2018, 18, 1180. [Google Scholar] [CrossRef]
  44. Python Software Foundation. What is New In Python 3.10. Available online: https://docs.python.org/3/whatsnew/3.10.html (accessed on 2 February 2024).
  45. Kass, M.; Witkin, A.; Terzopoulos, D. Snakes: Active contour models. Int. J. Comput. Vis. 1988, 1, 321–331. [Google Scholar] [CrossRef]
  46. Mumford, D.; Shah, J. Optimal approximations by piecewise smooth functions and associated variational problems. Commun. Pure Appl. Math. 1989, 42, 577–685. [Google Scholar] [CrossRef]
  47. Chan, T.; Vese, L. An Active Contour Model without Edges. In Scale-Space Theories in Computer Vision; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 1999; Volume 1682, pp. 141–151. [Google Scholar] [CrossRef]
  48. Osher, S.; Sethian, J.A. Fronts propagating with curvature-dependent speed: Algorithms based on Hamilton-Jacobi formulations. J. Comput. Physics 1988, 79, 12–49. [Google Scholar] [CrossRef]
  49. Cohen, R. The Chan-Vese Algorithm. Technical report, Israel Institute of Technology. arXiv 2011, arXiv:1107.2782. [Google Scholar] [CrossRef]
  50. Getreuer, P. Chan-Vese Segmentation. Image Process. Line 2012, 2, 214–224. [Google Scholar] [CrossRef]
  51. Wang, X.F.; Huang, D.S.; Xu, H. An efficient local Chan–Vese model for image segmentation. Pattern Recognit. 2010, 43, 603–618. [Google Scholar] [CrossRef]
  52. Ramer, U. An iterative procedure for the polygonal approximation of plane curves. Comput. Graph. Image Process. 1972, 1, 244–256. [Google Scholar] [CrossRef]
  53. Douglas, D.H.; Peucker, T.K. Algorithms for the Reduction of the Number of Points Required to Represent a Digitized Line or its Caricature. Cartogr. Int. J. Geogr. Inf. Geovisualization 1973, 10, 112–122. [Google Scholar] [CrossRef]
  54. Ming-Kuei, H. Visual pattern recognition by moment invariants. IEEE Trans. Inf. Theory 1962, 8, 179–187. [Google Scholar] [CrossRef]
  55. Shenzhen Creality 3D Technology Co., Ltd. Creality Ender 3 Pro. Available online: https://www.creality.com/products/ender-3-pro-3d-printer (accessed on 17 January 2024).
  56. Raspberry Pi Foundation. Raspberry Pi 2 Model B. Available online: https://www.raspberrypi.com/products/raspberry-pi-2-model-b/ (accessed on 17 January 2024).
  57. O’Connor, K. Klipper Firmware. Available online: https://www.klipper3d.org/Features.html (accessed on 17 January 2024).
  58. OpenCV. OpenCV 4.6.0. Available online: https://docs.opencv.org/4.6.0 (accessed on 9 April 2024).
  59. Raspberry Pi Foundation. Raspberry Pi HQ Camera. Available online: https://www.raspberrypi.com/products/raspberry-pi-high-quality-camera/ (accessed on 17 January 2024).
  60. Sony Semiconductor Solutions Corporation. Image Sensor for Consumer Cameras. Available online: https://www.sony-semicon.com/en/products/is/camera/index.html (accessed on 18 March 2025).
  61. Hartley, R.; Zisserman, A. Camera Models. In Multiple View Geometry in Computer Vision, 2nd ed.; Cambridge University Press: Cambridge, UK, 2004; pp. 153–177. ISBN 0521540518. [Google Scholar]
  62. OpenCV. Camera Calibration with OpenCV. Available online: https://docs.opencv.org/4.x/d4/d94/tutorial_camera_calibration.html (accessed on 9 April 2024).
  63. Duda, R.O.; Hart, P.E. Use of the Hough transformation to detect lines and curves in pictures. Commun. ACM 1972, 15, 11–15. [Google Scholar] [CrossRef]
  64. Scikit-Image Team. Chan-Vese Segmentation. Available online: https://scikit-image.org/docs/stable/auto_examples/segmentation/plot_chan_vese.html (accessed on 21 July 2023).
Figure 1. In situ quality control framework components: (a) Preparation of the as-processed layer, (b) image acquisition, preprocessing, and segmentation, (c) evaluation of differences between the as-printed and as-processed layers.
Figure 2. Sample GCODE with custom markers to position the camera over the printed layer and start the image analysis process.
Figure 3. Customized AM Platform [39]. (a) Integrated printing and imaging head assembly alongside the Raspberry Pi host. (b) Customized print head assembly, Raspberry Pi HQ camera, 8 mm wide-angle lens, and LED lighting assembly.
Figure 4. As-processed test sample. (a) Isometric view of the as-processed test sample. (b) Top view with dimensions of external geometry and internal features. (c) Segmented as-processed first layer (L1). (d) Segmented as-processed representation for layers two through eight (L2–L8).
Figure 5. Effects of varying μ = [0.002, 0.250] on the Chan–Vese segmentation output with λ1 = λ2 = 1, nmax = 500, and the initial level set being a checkerboard pattern with 15-pixel squares.
Figure 6. Effects of varying n = [1, 1000] on the Chan–Vese segmentation output with λ1 = λ2 = 1, μ = 0.010, and the initial level set being a checkerboard pattern with 15-pixel squares.
Figure 7. Segmentation results of the first layer (L1) of the multilayer part. (a) As-printed layer. (b) Simple thresholding output. (c) C–V output. (d) Composite output.
Figure 8. Output of different segmentation methods for layers one, three, five, and eight (L1, L3, L5, and L8). Note: all images except those for the composite (Simple thresholding and C–V) output are reproduced from our previous work [39] and shown in this figure for ease of comparison.
Table 1. Print parameters.
Parameter | Value
Layer Height (mm) | 0.2
Perimeters | 3
Extrusion Width for all features (mm) | 0.4
Fill Density | 100%
Fill Pattern | Rectilinear
Table 2. Performance metrics of different segmentation methods on a multilayer part with internal features. The metrics for the composite segmentation using Simple Thresholding and Chan–Vese are evaluated in this research. The metrics for simple thresholding, adaptive thresholding, Sobel edge detector, Canny edge detector, and watershed transform were evaluated in our previous work and reproduced here for comparison purposes [39]. Note: The results are the average values from the analysis of three acquired images.
Layer Number | Method | Accuracy (%) | Precision (%) | Recall (%) | Jaccard Index (%)
L1 | Simple Thresh. and Chan–Vese | 99.620 | 81.620 | 94.540 | 78.340
L1 | Simple thresholding | 99.340 | 59.810 | 89.850 | 56.020
L1 | Adaptive thresholding | 95.060 | 56.420 | 15.490 | 13.830
L1 | Sobel edge detector | 82.220 | 85.170 | 6.380 | 6.310
L1 | Canny edge detector | 99.410 | 69.320 | 85.900 | 62.240
L1 | Watershed transform | 97.730 | 62.750 | 33.600 | 28.010
L3 | Simple Thresh. and Chan–Vese | 99.120 | 84.320 | 76.510 | 67.190
L3 | Simple thresholding | 97.410 | 35.980 | 43.260 | 24.450
L3 | Adaptive thresholding | 94.280 | 45.720 | 19.260 | 15.680
L3 | Sobel edge detector | 79.160 | 73.800 | 7.820 | 7.610
L3 | Canny edge detector | 98.270 | 35.110 | 78.470 | 32.030
L3 | Watershed transform | 97.260 | 63.450 | 43.860 | 35.020
L5 | Simple Thresh. and Chan–Vese | 99.010 | 76.870 | 82.270 | 66.030
L5 | Simple thresholding | 97.410 | 35.500 | 43.050 | 24.150
L5 | Adaptive thresholding | 93.010 | 49.740 | 16.590 | 14.210
L5 | Sobel edge detector | 76.660 | 72.740 | 6.940 | 6.760
L5 | Canny edge detector | 97.950 | 42.380 | 58.220 | 32.500
L5 | Watershed transform | 95.560 | 66.460 | 29.690 | 25.820
L8 | Simple Thresh. and Chan–Vese | 98.770 | 70.770 | 80.170 | 60.340
L8 | Simple thresholding | 97.380 | 35.150 | 42.440 | 23.800
L8 | Adaptive thresholding | 93.270 | 53.580 | 18.090 | 15.640
L8 | Sobel edge detector | 74.950 | 74.630 | 6.630 | 6.480
L8 | Canny edge detector | 97.790 | 47.260 | 52.760 | 33.190
L8 | Watershed transform | 95.330 | 58.460 | 26.840 | 22.540
Table 3. Feature recognition and analysis for layer one (L1) using simple thresholding.
Feature | As-Processed Height | As-Printed Height | Diff. | As-Processed Width | As-Printed Width | Diff. | Number of Sides
External Geometry (pixels) | 800 | 806 | 6 | 800 | 797 | −3 | 4
External Geometry (mm) | 20.000 | 20.150 | 0.150 | 20.000 | 19.925 | −0.075 |
Table 4. Feature recognition and analysis for layer eight (L8) using Simple Thresholding and C–V composite method.
Feature | As-Processed Height | As-Printed Height | Diff. | As-Processed Width | As-Printed Width | Diff. | Number of Sides
External Geometry (pixels) | 800 | 806 | 6 | 800 | 797 | −3 | 4
External Geometry (mm) | 20.000 | 20.150 | 0.150 | 20.000 | 19.925 | −0.075 |
Internal Circle-left (pixels) | 160 | 153 | −7 | 160 | 154 | −6 | 19
Internal Circle-left (mm) | 4.000 | 3.825 | −0.175 | 4.000 | 3.850 | −0.150 |
Internal Circle-right (pixels) | 160 | 150 | −10 | 160 | 152 | −8 | 14
Internal Circle-right (mm) | 4.000 | 3.750 | −0.250 | 4.000 | 3.800 | −0.200 |
Internal Rectangle (pixels) | 200 | 199 | −1 | 400 | 405 | 5 | 4
Internal Rectangle (mm) | 5.000 | 4.975 | −0.025 | 10.000 | 10.125 | 0.125 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
