Article

A Multi-Feature Automatic Evaluation of the Aesthetics of 3D Printed Surfaces

Department of Signal Processing and Multimedia Engineering, West Pomeranian University of Technology in Szczecin, 70-313 Szczecin, Poland
*
Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(9), 4852; https://doi.org/10.3390/app15094852
Submission received: 19 March 2025 / Revised: 22 April 2025 / Accepted: 24 April 2025 / Published: 27 April 2025
(This article belongs to the Special Issue Advanced Digital Signal Processing and Its Applications)

Abstract
Additive manufacturing is one of the continuously developing areas of technology that still requires reliable monitoring and quality assessment of obtained products. Considering the relatively long time necessary for manufacturing larger products, one of the most desired solutions is video quality monitoring of the manufactured object’s surface. This makes it possible to stop the printing process if the quality is unacceptable. It helps to save the filament, energy, and time, preventing the production of items with poor aesthetic quality. In the paper, several approaches to image-based surface quality assessment are discussed and combined towards a high correlation with the subjective perception of typical quality degradations of the 3D printed surfaces, exceeding 0.9. Although one of the most significant limitations of using full-reference image quality-assessment metrics might be the lack of reference images, it can be overcome by using mutual similarity calculated for image regions. For the created dataset containing 107 samples with subjective aesthetic quality scores, it is shown that the combination of even two metrics using their weighted sum and product significantly outperforms any elementary metric or feature when considering correlations with subjective quality scores.

1. Introduction

An increasing interest in additive manufacturing technologies, observed in recent years, makes it possible to enhance the rapid prototyping, customization, and small series production of various parts and elements. Starting from the RepRap project, initiated in 2005, through the expiration of the patent for additive printing in 2009, to further rapid development of this branch of technology, the most popular and widely available technology is based on Fused Deposition Modeling (FDM) utilizing polymer filaments, such as PLA (polylactic acid) and ABS (acrylonitrile butadiene styrene) [1]. Some other important types of 3D printing technology are SLA (stereolithography) and mSLA (masked stereolithography).
The development of additive manufacturing technology is particularly important in the era of Industry 4.0 due to the possibility of using various materials, not only the popular plastic filaments, but also metal or concrete, for 3D printing. Therefore, various types of 3D printers available on the market require more or less precise calibration and, depending on the materials used for their construction as well as the type and quality of filaments, provide varied quality of final products [2].
Three-dimensional printing is a complex process, the quality of which is influenced by many factors such as nozzle diameter, head temperature, layer height, printing speed, and filament diameter [3]. The above parameters are related to the maximum volumetric flow, expressed in mm³ per second, which determines how much material the extruder and head system is able to extrude. Nowadays, slicer programs can set a limit on this parameter, which leads to a reduction in the printing speed, depending on the layer height and line width. However, even for correctly selected parameters, the printing process may be temporarily disrupted or completely interrupted. An example of such problems may be dirt or a foreign body in the filament, which usually leads to the nozzle being partially or completely clogged.
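The flow-based speed cap mentioned above can be illustrated with a short calculation. This is a worked example, not material from the paper; the hotend rating, layer height, and line width below are typical illustrative values.

```python
# Illustrative calculation (example values, not from the paper): how a slicer
# caps print speed so that the requested material throughput stays below the
# hotend's maximum volumetric flow.

def max_print_speed(max_flow_mm3_s: float, layer_height_mm: float,
                    line_width_mm: float) -> float:
    """Highest linear speed (mm/s) keeping extrusion within max_flow_mm3_s."""
    # Volumetric flow = speed * layer height * line width, so the speed
    # limit is the rated flow divided by the extruded cross-section area.
    return max_flow_mm3_s / (layer_height_mm * line_width_mm)

# e.g., a hotend rated for 15 mm^3/s, 0.2 mm layers, 0.4 mm line width
print(max_print_speed(15.0, 0.2, 0.4))  # -> 187.5 (mm/s)
```

With thicker layers or wider lines, the same flow rating forces a proportionally lower speed, which is exactly the reduction the slicer applies automatically.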
The main problems with 3D print quality stem from improperly selected printing parameters, i.e., insufficient filament calibration for the process. A head temperature that is too low leads to under-extrusion due to the nozzle being blocked by unmelted material. On the other hand, too high a temperature usually leads to stringing and the filament's warping on the edges of the printed object. Additionally, poor quality filament with a varying diameter differing from the required one (typically 1.75 mm) may lead to similar quality problems. The extruder motor rotates at a given angle and speed to extrude the appropriate amount of material. If the filament diameter deviates from the required one, then for a larger diameter the extruder will feed more filament, leading to over-extrusion. Nevertheless, under-extrusion is also possible when the stepper motor is unable to push out the excess material and misses some steps, so the material is not supplied in the appropriate amount. For the samples used in the paper, the printing defects were obtained by interfering with the extruder's operation.
Although in many applications, especially those involving the use of relatively cheap and widely available 3D printers, the presence of even minor distortions or cracks on the surface of a 3D printed object results in disqualification of the manufactured item, in some cases it is possible to correct minor surface defects after printing is complete. From an aesthetic point of view, the presence of such minor defects might be acceptable; however, the detection of some more significant issues during the printing process should result in its termination. Hence, it is possible to save material, energy, and time, preventing the production of objects that would be thrown away. Nevertheless, although there exist some 3D printing-monitoring solutions based on the use of cameras, there is still a need for improvement of existing surface quality-assessment methods for 3D prints to achieve better correlation with subjective opinions.
In a recent paper [4], an overview of the current state-of-the-art of the quality control for additive manufacturing has been presented, with a particular emphasis on video feedback and 3D scanning. Its authors indicate the need for quality control for robust and reliable 3D printing due to the lack of a comprehensive solution and the need for further work.
A promising direction of research is the application of Convolutional Neural Networks (CNNs) for such purposes, e.g., for the calibration of Fused Deposition Modeling 3D printers, which has been examined in a recent paper [5]. The applied network has demonstrated its usefulness for the detection of various types of defects, including cracks [6], stringing [7], as well as under-extrusion and over-extrusion [8], which had been detected separately in previously presented solutions. Another advantage of this approach is its ability to detect and quantify the degree of warp deformations [9,10]. Another application of neural networks, related to printing error detection and correction using a multi-camera machine vision system, is presented in the paper [11], where a large number of images of the nozzle tip and material deposition captured at various production stages were used to train the network.
A complicated measurement system designed for dimension accuracy verification was proposed by Pollak et al. [12] that is based on a depth sensor and a precise servomechanism. Nevertheless, precise control of the dimensions of the manufactured object requires the use of dedicated software, a programmable logic controller (PLC), and a servomotor. Lishchenko et al. [13] have proposed an online surface quality-monitoring system based on the analysis of the first and the last layer of the 3D printed samples using a laser profiler. A quite similar approach, however, utilizing micro-lidar, is applied, e.g., in Bambu Lab X1C and X1E devices, where the printing correctness and quality are checked for the first layer. The micro-lidar is also used for the calibration of some other important parameters, such as pressure advance and filament flow.
An original idea of using computer vision for the quality evaluation of 3D printed concrete structures has been proposed in the paper [14]. Similar to earlier papers, e.g., ref. [15] where the plastic filaments have been examined, its authors assumed that higher entropy variation characterizes lower-quality concrete surfaces. Nevertheless, in the approach proposed for concrete structures, Local Binary Patterns (LBPs) were additionally applied for texture analysis. Although the proposed methodology seems promising, its practical implementation and further verification are considered one of the challenges for further research.
The authors of the paper [16] used simple and popular methods, such as color difference calculated for CIELAB color space, Canny edge detector, Hu moments, and median filtering, for quality control of 3D printed parts, achieving an overall accuracy of 86.5%. As stated by the authors, the developed system has some limitations, e.g., related to its sensitivity to lighting conditions and reflections, the necessity of tuning the filter size, as well as some other parameters such as binarization thresholds.
A novel idea of the application of the linear regression, support vector regression (SVR), and long short term memory (LSTM) network for the analysis of the filament flow data has been presented by Mayer et al. [17]. This approach is based on using both scans of the printed flat elements as well as time series from the in-process sensor measuring the filament flow. The method was verified using a dataset of 73 flat samples, similar to the database used in this paper. The authors noticed the necessity of further improvements of the developed system to increase the reliability of the prediction and reduce inaccuracy.
Since the development of 3D printers continues at a rapid pace, there is still a necessity for better quality-assessment methods dedicated to additive manufacturing, which should be as fast and reliable as possible, considering the speed of the 3D printing process. Nevertheless, as stated in Section 4.1, the approach proposed in the paper is fast enough for the assumed application. Some of the methods, such as the detection of spaghetti printing based on artificial intelligence methods, e.g., AlexNet and support vector machine (SVM) [18], may already be found in some devices (e.g., Bambu Lab X1C https://wiki.bambulab.com/en/knowledge-sharing/Spaghetti_detection, accessed on 18 March 2025); however, there is still a lack of a comprehensive quality-assessment solution.
An overview of some monitoring and control methods used in additive manufacturing, based on the idea of Industry 4.0, may be found in the survey paper [19], whereas the description of machine condition-monitoring methods for 3D printing has been presented in the paper [20].
Considering the above analysis, the main objective of the paper is to develop an objective quality metric that is a combination of some image features and known elementary image quality-assessment metrics, providing a high correlation with subjective quality scores, exceeding 0.9. To achieve this goal, a dataset containing images and depth maps of 3D printed specimens, together with Mean Opinion Scores (MOS) related to their perceived quality, is necessary. Optimization of the parameters of the proposed nonlinear combination model requires the use of full-reference image quality metrics in a modified form, using the idea of mutual similarity.

2. Materials and Methods

To achieve the assumed goal of a relatively fast and accurate aesthetic quality assessment of 3D printed surfaces, a database of images and depth maps obtained for 107 planar samples was prepared. The dataset contains the photos and depth maps of planar samples (with dimensions of 35 mm × 35 mm × 4 mm) captured by a Sony DSC-HX100V camera (manufactured by Sony Corporation, Tokyo, Japan) under controlled illumination conditions and the GOM ATOS 3D scanner (manufactured by GOM GmbH, Braunschweig, Germany), respectively. All samples were produced with popular FDM 3D printers, such as Prusa i3, RepRap Ormerod 3, and da Vinci 1.0 Pro 3-in-1, from two popular types of filaments, namely PLA and ABS, using several colors of materials. The images, captured without flash at a 1/125 s exposure time, 5 mm focal length, and automatic white balance, are 1600 × 1600 pixels in size [21]. An illustration of some sample images and depth maps for various quality specimens is presented in Figure 1. The dataset is available on the corresponding author’s website https://okarma.zut.edu.pl/?id=dataset&L=1 (accessed on 16 April 2025).
An integral part of the database is the set of expert assessments expressed on a 4-point scale (good, moderately good, moderately poor, and poor quality), as well as subjective assessment values (Mean Opinion Scores) obtained as a result of perceptual experiments conducted among a group of several dozen volunteers. These MOS values, verified as consistent with expert assessments, were used as a reference for the development and optimization of automatic quality-assessment metrics to achieve the highest possible correlation with subjective quality scores.
The use of MOS values, similar to general-purpose image quality assessment [22], is the result of subjective experiments where the observers were asked about the perceived quality of the 3D printed surfaces. These opinions were related only to the aesthetic assessment in contrast to the assessment of mechanical parameters that may be measured, usually using destructive testing (e.g., using stress, squeezing, tearing, tensile, bending, twisting, etc.) [23]. The word “aesthetic” is used to emphasize this difference, as we do not assess any mechanical properties of the 3D printed samples.
Since the quality evaluation of 3D printed surfaces differs from the general-purpose image quality assessment (IQA) due to the different nature and properties of the assessed images, a direct application of many well-known full-reference IQA metrics is often impossible. An important limitation is the typical lack of a reference image representing the perfect quality surface obtained using the same type and color of the filament, which might be used for comparison purposes. Therefore, several approaches were proposed during our earlier research, leading to some elementary quality metrics and features, which are fundamental for the hybrid approach discussed in the paper.

2.1. Texture-Based Approach

The first idea considered for the evaluation of the 3D printed surfaces is based on the assumption of the regularity of the visible patterns generated during the manufacturing process. Since 3D printers based on the fused deposition modeling technology produce the object by laying the melted filament layer by layer, such layers are usually well visible. In the case of the most typical distortions, caused by lack of filament (so-called “dry printing”) or too high delivery speed of the material, the regularity of these layers is affected. Therefore, considering the photo of the 3D printed surface (obtained from a camera located on the side instead of the typical location above the 3D print) as the texture, its statistical features based on the analysis of the normalized Gray-Level Co-occurrence Matrix (GLCM) may be determined. Among such features, known as Haralick features, the most relevant ones are homogeneity, correlation, energy, and contrast. Since their values depend on the assumed offset used during the GLCM calculations, the analysis of the repeatability of features for varying offsets may be conducted, leading to a good classification of high and poor quality samples as presented in the paper [24].
To analyze the repeatability of the selected Haralick features for various offsets, only the vertical GLCMs should be computed since the use of the horizontal GLCMs leads to unstable results due to the high sensitivity to even relatively small image rotations. Nevertheless, the main drawbacks of this approach are the necessity of calculation of several GLCMs as well as the dependence of the values of individual Haralick features on the quantization [25], and, more importantly, on the color of the filament. Calculation of the GLCM is preceded by the necessary color-to-grayscale conversion of the acquired photos, typically conducted using the well-known ITU Recommendation BT.601-7 implemented, e.g., in MATLAB’s rgb2gray function.
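The grayscale conversion and GLCM-based features described above can be sketched as follows. This is a minimal numpy-only illustration of the technique, not the implementation used in [24]; the feature definitions follow the common Haralick/MATLAB conventions, and the quantization to 8 levels is an example choice.

```python
import numpy as np

def rgb_to_gray(rgb):
    """BT.601 luma conversion, as in MATLAB's rgb2gray."""
    return np.round(0.299 * rgb[..., 0] + 0.587 * rgb[..., 1]
                    + 0.114 * rgb[..., 2]).astype(np.uint8)

def glcm_features(img, dr=1, dc=0, levels=8):
    """Normalized GLCM for one offset, plus four Haralick features.

    img: 2-D array of integers already quantized to [0, levels).
    (dr, dc) = (1, 0) gives the vertical GLCM recommended in the text;
    horizontal offsets are sensitive to small image rotations.
    """
    h, w = img.shape
    ref = img[0:h - dr, 0:w - dc].ravel()   # reference pixels
    nbr = img[dr:h, dc:w].ravel()           # their offset neighbors
    P = np.zeros((levels, levels))
    np.add.at(P, (ref, nbr), 1.0)           # co-occurrence counts
    P /= P.sum()                            # normalized GLCM

    i, j = np.indices((levels, levels))
    contrast = np.sum(P * (i - j) ** 2)
    homogeneity = np.sum(P / (1.0 + np.abs(i - j)))
    energy = np.sum(P ** 2)                 # MATLAB graycoprops 'Energy'
    mu_i, mu_j = np.sum(i * P), np.sum(j * P)
    s_i = np.sqrt(np.sum((i - mu_i) ** 2 * P))
    s_j = np.sqrt(np.sum((j - mu_j) ** 2 * P))
    if s_i * s_j == 0:                      # constant image: define as 1
        correlation = 1.0
    else:
        correlation = np.sum((i - mu_i) * (j - mu_j) * P) / (s_i * s_j)
    return {"contrast": contrast, "homogeneity": homogeneity,
            "energy": energy, "correlation": correlation}
```

Repeating the calculation for several vertical offsets (dr = 1, 2, 3, …) and comparing the resulting feature values gives the repeatability analysis mentioned above.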

2.2. Application of Image Entropy

Another possibility for analyzing the surface structure of 3D prints is using image entropy. Intuitively, low values of image entropy indicate a relatively small amount of information visible on the image plane. Hence, in the case of 3D prints, one may expect low randomness and the lack of distortions of the structure of the visible layers. Therefore, high image entropy should be a characteristic feature of low-quality 3D prints, with visible distortions that reduce the regularity of visible patterns.
Nevertheless, since the image entropy strongly depends on the filament’s color, its direct use for the quality assessment of 3D printed surfaces does not lead to satisfactory results [15]. Therefore, one may use not only the global entropy calculated for various channels or various color models (e.g., Red, Green, Blue, Hue, or luma component Y from the YUV color model) as the entropy-based features but also their average local values calculated using the division of an image into blocks. Additionally, the global or average local entropy may be combined with its local variance, leading to color-independent quality assessment of 3D printed surfaces as proposed in the paper [15]. These types of local entropy-based metrics have also been used as the elementary metrics/features in the experiments discussed in this paper.
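The global and average local entropy features discussed above can be sketched as follows. This is a numpy-only illustration of the general idea, not the code behind [15]; the block size and bin count are arbitrary example choices.

```python
import numpy as np

def image_entropy(channel, bins=256):
    """Shannon entropy (bits) of a channel's intensity histogram."""
    hist, _ = np.histogram(channel, bins=bins, range=(0, bins))
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins (0 * log 0 := 0)
    return float(-np.sum(p * np.log2(p)))

def local_entropy_stats(channel, block=64):
    """Mean and variance of entropy over non-overlapping block x block tiles.

    Combining the average local entropy with its local variance is the kind
    of color-independent feature pair discussed in the text."""
    h, w = channel.shape
    vals = np.asarray([image_entropy(channel[r:r + block, c:c + block])
                       for r in range(0, h - block + 1, block)
                       for c in range(0, w - block + 1, block)])
    return vals.mean(), vals.var()
```

A flat surface photo yields low global and local entropy, whereas a distorted surface raises both the mean and, typically, the variance of the local values.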
Considering the dependence of the image entropy on the color of the filament, instead of conversion into grayscale or the choice of a selected color channel, a color-independent representation of the surface structure of the 3D prints may be used. Such a representation may be obtained by applying the 3D scanning procedure to the manufactured samples, where the varying depth of the 3D scans should reflect the structure of individual layers regardless of the color of the filament. For such depth maps, represented as grayscale images after normalization of the data range, features such as image entropy may also be calculated [26].

2.3. Feature-Based Approaches

Another direction of research conducted towards a reliable quality assessment of 3D printed surfaces is the use of various image features that may be calculated using several detectors and feature extractors, such as the Harris corner detector, Scale-Invariant Feature Transform (SIFT) [27], Speeded Up Robust Features (SURF), or Oriented FAST and Rotated BRIEF (ORB), the last based on the combination of the FAST keypoint detector and the BRIEF feature descriptor [28]. Some other possibilities are the use of the Histogram of Oriented Gradients (HOG) [29] or Local Binary Patterns (LBP) [30], as well as the application of the Hough transform together with the CLAHE algorithm proposed in the paper [31]. Analyzing the results obtained using the above feature detectors and descriptors, some statistical features, such as the median, average, standard deviation, kurtosis, or skewness, may also be calculated for some of them.
Since the best results have been obtained using the HOG descriptor [32] and Hough transform [31], these methods are considered representative for this group in further experiments. The idea of the application of the Hough transform is based on the assumption that for a high quality 3D printed surface, the number and the length of the detected straight lines should be noticeably higher than for the images representing the contaminated surfaces. To compensate for the influence of non-uniform illumination, the additional application of histogram equalization using the CLAHE algorithm has been proposed. Then, further binarization of the analyzed image may be conducted using the classical Otsu method [33], leading to the binary image, which is further subject to the Hough transform.
To increase the independence of results on some geometrical image deformations, we have proposed using the Monte Carlo method for a random selection of image regions where the straight lines are detected on the obtained binary image [31]. Finally, the proposed metric is based on the average length of the detected lines for a specified number of randomly selected regions. For high quality 3D prints, these values should be close to the width of the analyzed regions, whereas in the presence of distortions, the lines detected using the Hough transform are noticeably shorter.
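A simplified stand-in for this Monte Carlo line-length metric can be sketched as follows. For brevity, it replaces CLAHE and the Hough transform with plain Otsu binarization and horizontal run-length measurement inside randomly placed regions, so it only illustrates the general principle (long foreground runs for regular layers, short ones for distortions), not the actual method of [31].

```python
import numpy as np

def otsu_threshold(gray):
    """Classical Otsu threshold for an 8-bit grayscale image."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                    # class-0 probability up to t
    mu = np.cumsum(p * np.arange(256))      # class-0 cumulative mean
    mu_t = mu[-1]
    denom = omega * (1.0 - omega)
    denom[denom == 0] = np.nan              # ignore degenerate thresholds
    sigma_b2 = (mu_t * omega - mu) ** 2 / denom
    return int(np.nanargmax(sigma_b2))      # maximize between-class variance

def longest_run(row):
    """Length of the longest run of True values in a 1-D boolean array."""
    best = cur = 0
    for v in row:
        cur = cur + 1 if v else 0
        best = max(best, cur)
    return best

def avg_line_length(gray, n_regions=50, region=32, seed=0):
    """Monte Carlo proxy for the line-length metric: average of the longest
    horizontal foreground run inside randomly placed square regions."""
    rng = np.random.default_rng(seed)
    binary = gray > otsu_threshold(gray)
    h, w = binary.shape
    total = 0
    for _ in range(n_regions):
        r = rng.integers(0, h - region + 1)
        c = rng.integers(0, w - region + 1)
        patch = binary[r:r + region, c:c + region]
        total += max(longest_run(row) for row in patch)
    return total / n_regions
```

For a clean layered surface, the average approaches the region width; distortions break the runs and lower the value, mirroring the behavior of the Hough-based metric.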
Another idea is the application of the Histogram of Oriented Gradients, where the domination of horizontal or vertical gradients should be observed for high quality surfaces, depending on the orientation of the sample. In the paper [32], the calculation of the local HOG features was proposed, where the cell size was 32 × 32 pixels with a block size of 4 × 4 and an overlap of half the block size, to ensure high classification accuracy. The comparison of the local features for high quality 3D prints should demonstrate relatively high similarity among them; therefore, the values of the calculated standard deviation or variance should be low compared to poor quality samples. Nevertheless, additional statistical measures, such as the average, median, skewness, and kurtosis, may also be calculated for HOG features. A similar approach was also considered for Local Binary Patterns, although the obtained results were significantly worse.
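The per-cell HOG statistics can be illustrated with the following simplified sketch. It uses hard binning and no block normalization or overlap, so it is not equivalent to the descriptor used in [32]; it only shows how the dispersion of cell histograms separates uniform textures from defective ones.

```python
import numpy as np

def hog_cell_features(gray, cell=32, n_bins=9):
    """Per-cell orientation histograms (unsigned gradients, 0-180 degrees).

    Simplified HOG: one magnitude-weighted histogram per non-overlapping
    cell, without the block normalization of the full descriptor."""
    g = gray.astype(float)
    gy, gx = np.gradient(g)
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0
    h, w = g.shape
    cells = []
    for r in range(0, h - cell + 1, cell):
        for c in range(0, w - cell + 1, cell):
            m = mag[r:r + cell, c:c + cell].ravel()
            a = ang[r:r + cell, c:c + cell].ravel()
            hist, _ = np.histogram(a, bins=n_bins, range=(0, 180), weights=m)
            cells.append(hist)
    return np.array(cells)

def hog_dispersion(gray, cell=32):
    """Total std of cell histograms across the image: low when all cells
    look alike (regular layers), higher when some cells contain defects."""
    return float(hog_cell_features(gray, cell).std(axis=0).sum())
```

On a uniformly layered surface, every cell produces a nearly identical histogram, so the dispersion stays close to zero; local defects make some cell histograms diverge and the value rises.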

2.4. Mutual Image Similarity Based on Full-Reference IQA Metrics

The most promising results of the quality evaluation of the 3D printed surfaces may be achieved using the modified approach based on the use of the full-reference general-purpose image quality-assessment metrics. Since their direct application would require knowledge of the reference image, which is usually unavailable, the proposed solution is the calculation of mutual similarities. In this case, the image may be divided into 4, 9, or 16 fragments using the 2 × 2, 3 × 3, or 4 × 4 blocks grid, respectively. Then, mutual similarities may be calculated between each of the blocks using the same methods as for the comparison of two images using full-reference (FR) IQA metrics, typically using the popular Structural Similarity (SSIM) metric or one of its several modifications and extensions, considered in the paper. The number of mutual similarities, further averaged, for N blocks may be expressed as M = N · (N − 1)/2; hence, for 16 blocks, 120 values are obtained. Therefore, the use of a higher number of blocks would cause much longer computation time with a similar classification accuracy or a similar correlation with subjective quality scores. In further experiments, several elementary metrics were calculated for 4, 9, and 16 blocks, considered—together with some other features discussed above—as the inputs for the optimization of the combined (hybrid) metrics proposed in the paper.
The most widely known metric which is the base for many other IQA methods, namely Structural Similarity (SSIM), was proposed by Wang and Bovik [34] as the product of three components reflecting the most typical distortions, namely loss of contrast, luminance distortions, and the structural distortions. This metric is calculated locally using the sliding window approach, applying the 11 × 11 pixels Gaussian window. These values, being the quality map of the image, are further averaged, and the final SSIM value is obtained. In the case considered in the paper, these SSIM values are calculated mutually between the image fragments and then averaged again. The same approach is applied for the other IQA-based methods used in our experiments.
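The mutual-similarity scheme can be sketched as follows. For brevity, a single-window (global-statistics) SSIM replaces the 11 × 11 Gaussian sliding-window version described above, so the numbers differ from the full metric; the point is the pairwise comparison and averaging over grid blocks.

```python
import numpy as np

def ssim_global(x, y, L=255):
    """Single-window SSIM using global statistics (the original metric
    averages a local map computed with an 11x11 Gaussian window)."""
    x = x.astype(float); y = y.astype(float)
    C1, C2 = (0.01 * L) ** 2, (0.03 * L) ** 2   # standard stabilizers
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + C1) * (2 * cov + C2)) / \
           ((mx ** 2 + my ** 2 + C1) * (vx + vy + C2))

def mutual_similarity(gray, grid=2, metric=ssim_global):
    """Average pairwise similarity between the grid x grid image blocks.

    For 16 blocks (grid=4) this averages 16 * 15 / 2 = 120 values."""
    h, w = gray.shape
    bh, bw = h // grid, w // grid
    blocks = [gray[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
              for r in range(grid) for c in range(grid)]
    sims = [metric(blocks[i], blocks[j])
            for i in range(len(blocks)) for j in range(i + 1, len(blocks))]
    return float(np.mean(sims))
```

For a regular surface, the blocks resemble each other and the average approaches 1; a defect confined to one block lowers every pairwise score involving that block. Any of the SSIM-like metrics discussed below can be substituted for the `metric` argument.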
One of the earliest metrics based on the idea of the SSIM utilizes the observation that heavily blurred images are not always properly measured using this metric. Therefore, Edge-Based Structural Similarity (ESSIM) metric was proposed in 2006 by Chen et al. [35], which is based on the Sobel edge filtering and further calculation of edge direction vectors and their histograms for 16 × 16 pixels image blocks. Then, the comparison of edge histograms is used in the original SSIM formula instead of the structural part together with luminance and contrast comparisons. The application of the local variance calculation in the preprocessing step was proposed in the paper [36], leading to the Quality Index based on Local Variance (QILV).
Another modification of the SSIM metric, proposed in 2009 by Sampat et al. [37], is known as Complex Wavelet SSIM (CW-SSIM). This metric is based on the assumption that some distortions cause phase changes reflected in the local wavelet coefficients. Additionally, it is insensitive to small rotations and shifts between the compared images.
On the other hand, one of the most widely known modifications of the SSIM was proposed by Zhang et al. in 2011. This metric, referred to as Feature Similarity (FSIM) [38], is based on comparison of phase congruency and gradient magnitude using a similar formula to the original SSIM implementation. Both these low-level features are complementary as phase congruency reflects the importance of a local structure, whereas gradient magnitude is sensitive to contrast information. Another idea, proposed by Zhang and Li [39] was based on the use of spectral residual similarity. The proposed SR-SIM metric utilized the specific visual saliency model designed to reflect the attention mechanisms and finally the perceived image quality. The local similarity maps between the compared images are calculated using the gradient modulus determined using the Scharr filter. Obtained spectral residual visual saliency (SRVS) maps and gradient values are compared using the same formulas as those used in SSIM and FSIM metrics.
Liu et al. proposed the application of Gradient Similarity (GSIM) [40], also based on the mathematically similar formula to the SSIM metric, where the gradient values are determined using 5 × 5 pixels kernels, making it possible to measure both contrast and structure changes. Later, the idea of DCT Subbands Similarity (DSS) was proposed by Balanov et al. [41] where the structural changes in subbands of Discrete Cosine Transform were measured and appropriately weighted.
Another approach proposed by Wang et al. in 2016 is the use of Multiscale Contrast Similarity Deviation (MCSD) [42]. This metric also has relatively low computational complexity and high correlation with subjective quality scores determined for the most relevant IQA datasets. Its authors proposed the calculation of contrast similarity maps in three scales and further pooling of three contrast similarity deviations determined for those maps. On the other hand, two Analysis of Distortion Distribution-based metrics, namely ADD-SSIM and ADD-GSIM, were proposed by Gu et al. [43], where a new pooling model was the main focus. Specifically, the authors analyzed distributions of distortion position, distortion intensity, frequency changes, and histogram changes as the main elements influencing the overall image quality. Since the proposed pooling scheme was used mainly for the SSIM and GSIM metrics, both these versions were used in our paper.
The SSIM-like formula may also be applied for the comparison of various features obtained using some transforms, similarly to already mentioned CW-SSIM or DSS metrics. Another such example is known as Riesz transform and Visual contrast sensitivity-based feature Similarity index (RVSIM) [44], where the use of Riesz transform was preceded by the application of a Log-Gabor filter for the decomposition of reference and distorted images. The Riesz transform feature matrix was then combined with gradient magnitude similarity and weighted using the visual contrast sensitivity function (CSF). The combination of contrast maps and visual saliency, as an extension of the MCSD metric, was proposed by Jia et al. [45], and is referred to as Contrast and Visual Saliency Similarity-Induced Index (CVSSI).
An original idea for extending the structural similarity metric, known as SSIM4, was proposed by Ponomarenko et al. [46] where the fourth multiplicative component of the metric reflects a predictability of image blocks, defined as a minimum mean squared error (MSE) between the block and its neighboring blocks. Color versions of metrics considered in this paper (CSSIM, CSSIM4) were used in our experiments as well.

3. Proposed Approach

Considering the above-discussed methods, over 100 various metrics and features were examined and their correlations with subjective MOS values were calculated for 107 images from the dataset considered in the paper. It is worth noting that some of the methods proposed in our earlier papers were initially verified for a smaller number of samples, hence the necessity of a reliable comparison of various approaches for the same dataset of images representing the photos of the 3D printed surfaces.
In one of our previous conference papers [26], the simplified combination of some features as the weighted product with optimized exponent weights was proposed, leading to Pearson’s Linear Correlation Coefficient with subjective scores reaching 0.8353. This was achieved for the combination of FSIM (for 4 blocks), averaged local entropy of depth maps, Hough transform-based metric (with the use of CLAHE), and the kurtosis of the HOG descriptor. On the other hand, the application of the same model for the combination of five SSIM-like IQA metrics made it possible to achieve the PLCC = 0.8565 [21]. The five metrics used in this weighted product, assuming the division of the assessed image into four blocks, were FSIM, CW-SSIM, MCSD, ESSIM, and GSIM. A similar correlation (PLCC = 0.8537) was obtained for the combination of the same metrics applying the division of the image into 16 fragments.
Since the nonlinear combination of metrics using the weighted product may not be an optimal solution, we propose to extend this model using additionally the weighted sum of the same metrics. This approach does not increase the computational cost in a significant way, as for N metrics it requires only N additional summation operations, N multiplications, and N operations related to raising to a power, with exponents resulting from the optimization of the model’s parameters. The proposed model may be expressed as:
Q_{\mathrm{combined}} = \sum_{n=1}^{N} a_n \cdot Q_n^{\,b_n} + a_{N+1} \cdot \prod_{n=1}^{N} Q_n^{\,c_n},
where Q_combined denotes the proposed combined metric, and Q_n are the values of the N elementary metrics. The variable n stands for the index of the elementary metric, and the parameters a_n, b_n, and c_n, as well as a_{N+1}, determine the weighting coefficients, which are obtained as the result of the optimization. The optimization is conducted with the use of the absolute value of the PLCC between the combined metric Q_combined and the MOS values for 107 samples as the goal function subject to maximization. For this purpose, the MATLAB® (ver. R2024a) fminsearch function may be used, which is based on the simplex search method, jointly with well-known gradient-based optimization methods. The operating principles of the proposed approach are illustrated in Figure 2.
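The combined model and its optimization can be sketched as follows. This is a minimal illustration: a crude random search stands in for MATLAB's fminsearch, the synthetic data only demonstrate the mechanics, and the elementary metrics are assumed positive (as similarity values are), so fractional exponents stay well defined.

```python
import numpy as np

def combined_metric(Q, a, b, c, a_last):
    """Q: (N, S) array of N elementary metrics over S samples.
    Implements Q_combined = sum_n a_n * Q_n**b_n + a_{N+1} * prod_n Q_n**c_n."""
    Q = np.asarray(Q, dtype=float)
    weighted_sum = np.sum(a[:, None] * Q ** b[:, None], axis=0)
    weighted_prod = a_last * np.prod(Q ** c[:, None], axis=0)
    return weighted_sum + weighted_prod

def plcc(x, y):
    """Pearson's linear correlation coefficient."""
    x = x - x.mean(); y = y - y.mean()
    return float((x * y).sum() / np.sqrt((x * x).sum() * (y * y).sum()))

def fit_combined(Q, mos, iters=3000, seed=0):
    """Crude random-search stand-in for fminsearch: maximize |PLCC| of the
    combined metric against MOS over the 3N + 1 model parameters."""
    N = Q.shape[0]
    rng = np.random.default_rng(seed)
    best_p, best_r = None, -1.0
    for _ in range(iters):
        a = rng.normal(size=N)
        b = rng.uniform(0.25, 3, size=N)      # positive exponents
        c = rng.uniform(0.25, 3, size=N)
        a_last = rng.normal()
        r = abs(plcc(combined_metric(Q, a, b, c, a_last), mos))
        if np.isfinite(r) and r > best_r:
            best_r, best_p = r, (a, b, c, a_last)
    return best_p, best_r
```

In practice, a simplex or gradient-based optimizer started from several points would replace the random search; the objective function and the model evaluation stay the same.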
To find the best possible combination of metrics, the PLCC values obtained after the optimization of seven parameters ( a 1 , a 2 , a 3 , b 1 , b 2 , c 1 , and c 2 ) were checked for all combinations of two metrics (N = 2). Then, the “best” pair was used together with each of the remaining metrics as the third one, and a similar optimization procedure was applied; in this case, adding a metric to the model introduces three additional parameters ( a n , b n , and c n ). The starting point for the optimization was the set of parameters leading to the highest PLCC value obtained without the additional metric. To verify the linearity of the relation between the combined metric and the subjective quality scores, the scatter plots were also analyzed. Those obtained for the “best” combined metrics are presented in Section 4, together with the obtained correlation values and the results of the ablation study.
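The search procedure above can be sketched as a greedy forward selection. For brevity, the illustration below replaces the full nonlinear fit with a least-squares weighted sum as a scoring surrogate, and the pool of six metrics is hypothetical:

```python
import itertools
import numpy as np

def fit_score(Q_subset, mos):
    """Surrogate for the optimized combined metric: absolute PLCC of a
    least-squares weighted sum of the selected metrics (simplified)."""
    A = np.column_stack([Q_subset, np.ones(len(mos))])
    w, *_ = np.linalg.lstsq(A, mos, rcond=None)
    return abs(np.corrcoef(A @ w, mos)[0, 1])

def score(names, pool, mos):
    """Score a named subset of metrics from the pool."""
    return fit_score(np.column_stack([pool[k] for k in names]), mos)

rng = np.random.default_rng(1)
mos = rng.uniform(1, 5, 107)
# Hypothetical pool of six elementary metrics with increasing noise.
pool = {f"M{i}": mos + rng.normal(0, 0.3 + 0.2 * i, 107) for i in range(6)}

# Step 1: exhaustive check of all two-metric combinations (N = 2).
selected = list(max(itertools.combinations(pool, 2),
                    key=lambda p: score(p, pool, mos)))

# Step 2: greedily add the remaining metric that improves the score most.
while len(selected) < 4:
    best = max((m for m in pool if m not in selected),
               key=lambda m: score(selected + [m], pool, mos))
    selected.append(best)

print("selected metrics:", selected)
```

In the actual procedure, each candidate extension restarts the full parameter optimization from the best previously found parameter set rather than a closed-form fit.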

4. Discussion

4.1. Analysis of Experimental Results

The analysis of the correlations obtained for the proposed combined metrics should be preceded by the presentation of the correlation values achieved for elementary metrics and features. The top ten values, including rank-order correlations, are presented in Table 1, whereas the results achieved for the five “best” combinations of two metrics are shown in Table 2.
As presented in Table 1, only two SSIM-like elementary metrics (FSIM and CW-SSIM) may be found among those leading to the highest correlation with subjective quality scores, whereas the others utilize various variants of entropy-based calculations. Nevertheless, the entropy values strongly depend on image contrast; therefore, local calculations, as well as preprocessing operations (e.g., before the calculation of the HOG features), are conducted to reduce the direct influence of contrast on the evaluation results.
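The block-wise entropy calculation can be illustrated with a minimal NumPy sketch; the actual features in the paper (EV, EVAR, MV) are computed analogously for depth maps and individual colour channels, which the synthetic images below merely stand in for:

```python
import numpy as np

def local_entropy(img, blocks=4):
    """Average Shannon entropy computed over a blocks x blocks grid,
    used as a simple texture descriptor (illustrative sketch)."""
    h = img.shape[0] // blocks
    w = img.shape[1] // blocks
    entropies = []
    for i in range(blocks):
        for j in range(blocks):
            patch = img[i * h:(i + 1) * h, j * w:(j + 1) * w]
            hist, _ = np.histogram(patch, bins=256, range=(0, 256))
            p = hist / hist.sum()
            p = p[p > 0]
            entropies.append(-(p * np.log2(p)).sum())
    return np.mean(entropies)

rng = np.random.default_rng(2)
smooth = np.full((128, 128), 128, dtype=np.uint8)          # uniform surface
noisy = rng.integers(0, 256, (128, 128), dtype=np.uint8)   # strongly textured
print(local_entropy(smooth, 4), local_entropy(noisy, 4))
```

The uniform image yields zero entropy while the random texture approaches the 8-bit maximum, which is why entropy-based features respond to surface irregularities.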
As may be noted, although the highest correlation for elementary metrics is obtained for the metric based on the entropy of depth maps, which requires the use of a 3D scanner, the “best” combinations of two elementary metrics are based on the mutual calculations of the IQA metrics or their combination with the standard deviation computed for the HOG features. The increase in Pearson’s correlation from below 0.7 to over 0.87 fully confirms the validity of the proposed approach. It is also worth noting that even the use of two metrics with the proposed model leads to better results than those presented in earlier papers where only the weighted product was used (0.8353 in the paper [26], and 0.8537 in the paper [21], obtained for the combinations of four and five metrics, respectively).
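The mutual calculation of IQA metrics, which removes the need for a reference image, can be sketched as follows; a simplified global SSIM stands in for the full FR IQA metrics used in the paper, and the test images are synthetic:

```python
import itertools
import numpy as np

def global_ssim(x, y, L=255.0):
    """Simplified single-scale SSIM computed globally over two
    equal-sized fragments (no sliding window), for illustration only."""
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cxy + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def mutual_similarity(img, grid=4):
    """Average pairwise similarity of all grid x grid fragments of one
    image, used in place of a comparison with an unavailable reference."""
    h, w = img.shape[0] // grid, img.shape[1] // grid
    patches = [img[i * h:(i + 1) * h, j * w:(j + 1) * w].astype(float)
               for i in range(grid) for j in range(grid)]
    return float(np.mean([global_ssim(a, b)
                          for a, b in itertools.combinations(patches, 2)]))

rng = np.random.default_rng(3)
tile = rng.integers(0, 256, (16, 16))
regular = np.tile(tile, (8, 8)).astype(np.uint8)      # regular printed texture
distorted = regular.copy()
distorted[:32, :32] = rng.integers(0, 256, (32, 32))  # one corrupted region

s_ok, s_bad = mutual_similarity(regular), mutual_similarity(distorted)
print(f"regular: {s_ok:.4f}, distorted: {s_bad:.4f}")
```

A regular surface yields mutual similarity close to 1, whereas local distortions lower the score; this is the property exploited by the block-based FSIM, SR-SIM, or MCSD variants listed in the tables.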
Nevertheless, even better results may be achieved by combining three or four metrics, taking the “best” combination of two metrics as the starting point and adding each of the remaining metrics or features to the model, followed by further optimization of its parameters. The results obtained for the five “best” combinations of three and four metrics are presented in Table 3, where only the additional metrics used together with FSIM and SR-SIM (both computed using 16 blocks) are listed.
Analyzing the results presented in Table 3, the advantage of the proposed approach may be easily observed, as even the use of three metrics makes it possible to exceed PLCC = 0.9, and the use of four mutually calculated IQA metrics leads to PLCC over 0.91. Further optimization with five metrics does not lead to significantly better results; hence, considering the increased computational complexity, the only reasonable combination of five metrics is the use of FSIM (for 16 blocks), SR-SIM (for 16 blocks), MCSD (for 4 blocks), CSSIM4 (for 9 blocks), and MV calculated for the red channel using 256 blocks. Such a combination with optimized parameters makes it possible to achieve PLCC = 0.9116, SROCC = 0.9092, and KROCC = 0.7454, whereas the use of other metrics leads to values similar to those achieved for four metrics (slightly over 0.91).
The increasing linearity of the relation between the combined metrics and the subjective quality scores (MOS values) for a growing number of elementary metrics is illustrated in the scatter plots in Figure 3. It is also clearly visible that, due to the use of the absolute correlation value as the goal function, the correlation is negative in some cases, whereas in others, e.g., for the combinations of 2, 3, and 4 elementary metrics, it is positive.
Comparing the time necessary for the production of an individual specimen with the time required for the computation of five metrics and their combination, the proposed approach may be successfully applied in real-time scenarios. The automatic quality assessment of a single specimen takes no more than a few seconds, depending on the computational power of the computer used, whereas the same time suffices only to print a small fragment of an object containing just a few layers of filament. All the considered algorithms are therefore much faster than the 3D printing process.

4.2. Ablation Study

To confirm the usefulness of the proposed model based on the combination of metrics using the weighted sum and weighted product, according to Equation (1), an additional ablation study was conducted to determine the correlations obtained with two simplified models: one using only the weighted product and one using only the weighted sum. The results achieved for these simplified metrics are presented in Table 4. Analyzing the results obtained for the same sets of metrics as presented as the “best” in Section 4.1, a significant decrease in correlations may be observed in comparison to the proposed model containing both the weighted sum and the weighted product, especially if the weighted sum is removed. A graphical comparison of the obtained PLCC values is presented in Figure 4.
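The ablation can be reproduced with the same model by dropping one of the two terms of Equation (1); the sketch below (synthetic data, SciPy’s Nelder–Mead as an fminsearch analogue) compares the product-only, sum-only, and full variants:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import pearsonr

def model(params, Q, variant):
    """Combined metric with optional ablation of one term."""
    N = Q.shape[1]
    a, b, c = params[:N], params[N:2 * N], params[2 * N:3 * N]
    s = (a * Q ** b).sum(axis=1)               # weighted sum term
    p = params[3 * N] * (Q ** c).prod(axis=1)  # weighted product term
    return {"sum": s, "product": p, "full": s + p}[variant]

def fit(Q, mos, variant):
    """Optimize the parameters of one variant; return the best |PLCC|."""
    def goal(t):
        r = pearsonr(model(t, Q, variant), mos)[0]
        return -abs(r) if np.isfinite(r) else 0.0
    res = minimize(goal, np.ones(3 * Q.shape[1] + 1), method='Nelder-Mead')
    return -res.fun

rng = np.random.default_rng(4)
mos = rng.uniform(1, 5, 107)  # synthetic subjective scores
Q = np.clip(np.column_stack([mos + rng.normal(0, s, 107)
                             for s in (0.5, 0.8)]) / 5.0, 0.05, 1.0)

results = {v: fit(Q, mos, v) for v in ("product", "sum", "full")}
print({v: round(r, 4) for v, r in results.items()})
```

On the real dataset, the full model outperforms both ablated variants, as reported in Table 4; on synthetic data the margin naturally varies.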

5. Conclusions

The application of a nonlinear combination of various methods for the quality assessment of 3D printed surfaces, using both the weighted sum and the weighted product of the same set of metrics, makes it possible to achieve a high correlation with subjective aesthetics assessments collected from volunteer observers. As demonstrated in the paper, the proposed model outperforms not only the elementary metrics but also their combinations based exclusively on the weighted product or exclusively on the weighted sum. Due to the use of various general-purpose FR IQA metrics with the assumed calculation of mutual similarities, the compensation of their nonlinearity is possible without the reference image of the 3D printed surface, which is usually unavailable.
The obtained experimental results fully confirm that the idea of combined metrics is useful not only for general-purpose IQA but also for the automatic quality evaluation of 3D printed surfaces. The limitations of the proposed approach include possible errors caused by the presence of other types of distortions, the use of other materials (filaments), and strongly varying lighting conditions. Nevertheless, a further increase in the correlation with Mean Opinion Scores would require the use of some other metrics or features, since increasing their number in the proposed model does not lead to a significant increase in the PLCC values. This may be considered one of the directions of our further research.

Author Contributions

Conceptualization, J.F. and K.O.; methodology, J.F. and K.O.; software, J.F., M.T. and K.O.; validation, J.F. and K.O.; formal analysis, J.F. and K.O.; investigation, J.F. and K.O.; resources, J.F., M.T. and K.O.; data curation, J.F. and K.O.; writing—original draft preparation, J.F. and K.O.; writing—review and editing, K.O.; visualization, K.O.; supervision, K.O.; project administration, K.O.; funding acquisition, K.O. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The WEZUT 3DPrint Quality Dataset used in the experiments is available at https://okarma.zut.edu.pl/?id=dataset&L=1 (accessed on 16 April 2025).

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
ABS       acrylonitrile butadiene styrene
ADD       Analysis of Distortion Distribution
BRIEF     Binary Robust Independent Elementary Features
CLAHE     Contrast Limited Adaptive Histogram Equalization
CNN       Convolutional Neural Network
CSF       contrast sensitivity function
CVSSI     Contrast and Visual Saliency Similarity-Induced Index
CW-SSIM   Complex Wavelet Structural Similarity
DSS       Discrete Cosine Transform Subbands Similarity
FAST      Features from Accelerated Segment Test
FDM       fused deposition modeling
FR IQA    full-reference image quality assessment
FSIM      Feature Similarity
GLCM      Gray-Level Co-occurrence Matrix
GSIM      Gradient Similarity
KROCC     Kendall Rank Order Correlation Coefficient
LBP       Local Binary Patterns
LSTM      long short-term memory
MCSD      Multiscale Contrast Similarity Deviation
MOS       Mean Opinion Score
MSE       mean squared error
mSLA      masked Stereolithography
ORB       Oriented FAST and rotated BRIEF
PLA       polylactic acid
PLC       programmable logic controller
PLCC      Pearson’s Linear Correlation Coefficient
QILV      Quality Index based on Local Variance
RVSIM     Riesz transform and Visual contrast sensitivity-based feature Similarity index
SIFT      Scale-Invariant Feature Transform
SLA       Stereolithography
SROCC     Spearman Rank Order Correlation Coefficient
SR-SIM    Spectral Residual based Similarity
SRVS      spectral residual visual saliency
SSIM      Structural Similarity
SURF      Speeded Up Robust Features
SVM       support vector machine
SVR       support vector regression

References

  1. Shahrubudin, N.; Lee, T.; Ramlan, R. An Overview on 3D Printing Technology: Technological, Materials, and Applications. Procedia Manuf. 2019, 35, 1286–1296. [Google Scholar] [CrossRef]
  2. Chua, C.; Wong, C.; Yeong, W. Standards, Quality Control, and Measurement Sciences in 3D Printing and Additive Manufacturing; Academic Press: Cambridge, MA, USA, 2017. [Google Scholar]
  3. Dave, H.; Davim, J. Fused Deposition Modeling Based 3D Printing; Materials Forming, Machining and Tribology; Springer International Publishing: Cham, Switzerland, 2021. [Google Scholar]
  4. Seifert, D.; Grzona, P.; Raval, K.; Thürer, M. Quality control techniques in additive manufacturing: Current trends and their prototypical implementation. Procedia Comput. Sci. 2025, 253, 1206–1215. [Google Scholar] [CrossRef]
  5. Ganitano, G.S.; Maruyama, B.; Peterson, G.L. Accelerated Multiobjective Calibration of Fused Deposition Modeling 3D Printers Using Multitask Bayesian Optimization and Computer Vision. Adv. Intell. Syst. 2025, 7, 2400523. [Google Scholar] [CrossRef]
  6. Wang, Y.; Huang, J.; Wang, Y.; Feng, S.; Peng, T.; Yang, H.; Zou, J. A CNN-Based Adaptive Surface Monitoring System for Fused Deposition Modeling. IEEE/ASME Trans. Mechatron. 2020, 25, 2287–2296. [Google Scholar] [CrossRef]
  7. Paraskevoudis, K.; Karayannis, P.; Koumoulos, E.P. Real-Time 3D Printing Remote Defect Detection (Stringing) with Computer Vision and Artificial Intelligence. Processes 2020, 8, 1464. [Google Scholar] [CrossRef]
  8. Jin, Z.; Zhang, Z.; Gu, G.X. Autonomous in-situ correction of fused deposition modeling printers using computer vision and deep learning. Manuf. Lett. 2019, 22, 11–15. [Google Scholar] [CrossRef]
  9. Saluja, A.; Xie, J.; Fayazbakhsh, K. A closed-loop in-process warping detection system for fused filament fabrication using convolutional neural networks. J. Manuf. Process. 2020, 58, 407–415. [Google Scholar] [CrossRef]
  10. Brion, D.A.; Shen, M.; Pattinson, S.W. Automated recognition and correction of warp deformation in extrusion additive manufacturing. Addit. Manuf. 2022, 56, 102838. [Google Scholar] [CrossRef]
  11. Brion, D.A.J.; Pattinson, S.W. Generalisable 3D printing error detection and correction via multi-head neural networks. Nat. Commun. 2022, 13, 4654. [Google Scholar] [CrossRef]
  12. Pollák, M.; Sabol, D.; Goryl, K. Measuring the Dimension Accuracy of Products Created by 3D Printing Technology with the Designed Measuring System. Machines 2024, 12, 884. [Google Scholar] [CrossRef]
  13. Lishchenko, N.; Piteľ, J.; Larshin, V. Online Monitoring of Surface Quality for Diagnostic Features in 3D Printing. Machines 2022, 10, 541. [Google Scholar] [CrossRef]
  14. Senthilnathan, S.; Raphael, B. Using Computer Vision for Monitoring the Quality of 3D-Printed Concrete Structures. Sustainability 2022, 14, 15682. [Google Scholar] [CrossRef]
  15. Okarma, K.; Fastowicz, J. Improved quality assessment of colour surfaces for additive manufacturing based on image entropy. Pattern Anal. Appl. 2020, 23, 1035–1047. [Google Scholar] [CrossRef]
  16. Nascimento, R.; Martins, I.; Dutra, T.A.; Moreira, L. Computer Vision Based Quality Control for Additive Manufacturing Parts. Int. J. Adv. Manuf. Technol. 2022, 124, 3241–3256. [Google Scholar] [CrossRef]
  17. Mayer, J.; Thi Thanh, T.B.; Caglar, T.; Schulz, H.; Lallahom, O.B.; Li, D.; Niemella, M.; Toptas, B.; Jochem, R. Surface quality prediction for FDM-printed parts using in-process material flow data. Procedia CIRP 2024, 126, 645–650. [Google Scholar] [CrossRef]
  18. Yean, F.P.; Chew, W.J. Detection of Spaghetti and Stringing Failure in 3D Printing. In Proceedings of the 2024 International Conference on Green Energy, Computing and Sustainable Technology (GECOST), Miri Sarawak, Malaysia, 17–19 January 2024; pp. 293–298. [Google Scholar] [CrossRef]
  19. Tartici, I.; Kilic, Z.M.; Bartolo, P. A Systematic Literature Review: Industry 4.0 Based Monitoring and Control Systems in Additive Manufacturing. Machines 2023, 11, 712. [Google Scholar] [CrossRef]
  20. He, H.; Zhu, Z.; Zhang, Y.; Zhang, Z.; Famakinwa, T.; Yang, R. Machine condition monitoring for defect detection in fused deposition modelling process: A review. Int. J. Adv. Manuf. Technol. 2024, 132, 3149–3178. [Google Scholar] [CrossRef]
  21. Okarma, K.; Fastowicz, J.; Lech, P.; Lukin, V. Quality Assessment of 3D Printed Surfaces Using Combined Metrics Based on Mutual Structural Similarity Approach Correlated with Subjective Aesthetic Evaluation. Appl. Sci. 2020, 10, 6248. [Google Scholar] [CrossRef]
  22. Bull, D.R.; Zhang, F. Measuring and managing picture quality. In Intelligent Image and Video Compression; Elsevier: Amsterdam, The Netherlands, 2021; pp. 335–384. [Google Scholar] [CrossRef]
  23. Rouf, S.; Raina, A.; Irfan Ul Haq, M.; Naveed, N.; Jeganmohan, S.; Farzana Kichloo, A. 3D printed parts and mechanical properties: Influencing parameters, sustainability aspects, global market scenario, challenges and applications. Adv. Ind. Eng. Polym. Res. 2022, 5, 143–158. [Google Scholar] [CrossRef]
  24. Okarma, K.; Fastowicz, J. No-reference quality assessment of 3D prints based on the GLCM analysis. In Proceedings of the 2016 21st International Conference on Methods and Models in Automation and Robotics (MMAR), Miedzyzdroje, Poland, 29 August–1 September 2016; pp. 788–793. [Google Scholar] [CrossRef]
  25. Löfstedt, T.; Brynolfsson, P.; Asklund, T.; Nyholm, T.; Garpebring, A. Gray-level invariant Haralick texture features. PLoS ONE 2019, 14, e0212110. [Google Scholar] [CrossRef]
  26. Fastowicz, J.; Lech, P.; Okarma, K. Combined Metrics for Quality Assessment of 3D Printed Surfaces for Aesthetic Purposes: Towards Higher Accordance with Subjective Evaluations. In Computational Science, Proceedings of the ICCS 2020, Amsterdam, The Netherlands, 3–5 June 2020; Lecture Notes in Computer Science; Springer International Publishing: Cham, Switzerland, 2020; Volume 12143, pp. 326–339. [Google Scholar] [CrossRef]
  27. Lowe, D.G. Distinctive Image Features from Scale-Invariant Keypoints. Int. J. Comput. Vis. 2004, 60, 91–110. [Google Scholar] [CrossRef]
  28. Rublee, E.; Rabaud, V.; Konolige, K.; Bradski, G. ORB: An efficient alternative to SIFT or SURF. In Proceedings of the 2011 International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 2564–2571. [Google Scholar] [CrossRef]
  29. Dalal, N.; Triggs, B. Histograms of Oriented Gradients for Human Detection. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), San Diego, CA, USA, 20–25 June 2005; Volume 1, pp. 886–893. [Google Scholar] [CrossRef]
  30. Ojala, T.; Pietikäinen, M.; Harwood, D. A comparative study of texture measures with classification based on featured distributions. Pattern Recognit. 1996, 29, 51–59. [Google Scholar] [CrossRef]
  31. Fastowicz, J.; Okarma, K. Quality Assessment of Photographed 3D Printed Flat Surfaces Using Hough Transform and Histogram Equalization. J. Univers. Comput. Sci. 2019, 25, 701–717. [Google Scholar] [CrossRef]
  32. Lech, P.; Fastowicz, J.; Okarma, K. Quality Evaluation of 3D Printed Surfaces Based on HOG Features. In Computer Vision and Graphics, Proceedings of the ICCVG 2018, Warsaw, Poland, 17–19 September 2018; Lecture Notes in Computer Science; Springer International Publishing: Cham, Switzerland, 2018; Volume 11114, pp. 199–208. [Google Scholar] [CrossRef]
  33. Otsu, N. A Threshold Selection Method from Gray-Level Histograms. IEEE Trans. Syst. Man, Cybern. 1979, 9, 62–66. [Google Scholar] [CrossRef]
  34. Wang, Z.; Bovik, A.; Sheikh, H.; Simoncelli, E. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef]
  35. Chen, G.H.; Yang, C.L.; Po, L.M.; Xie, S.L. Edge-Based Structural Similarity for Image Quality Assessment. In Proceedings of the 2006 IEEE International Conference on Acoustics Speed and Signal Processing Proceedings, Toulouse, France, 14–19 May 2006; Volume 2, pp. II-933–II-936. [Google Scholar] [CrossRef]
  36. Aja-Fernandez, S.; Estepar, R.S.J.; Alberola-Lopez, C.; Westin, C.F. Image Quality Assessment based on Local Variance. In Proceedings of the 2006 International Conference of the IEEE Engineering in Medicine and Biology Society, New York, NY, USA, 30 August–3 September 2006. [Google Scholar] [CrossRef]
  37. Sampat, M.; Wang, Z.; Gupta, S.; Bovik, A.; Markey, M. Complex Wavelet Structural Similarity: A New Image Similarity Index. IEEE Trans. Image Process. 2009, 18, 2385–2401. [Google Scholar] [CrossRef] [PubMed]
  38. Zhang, L.; Zhang, L.; Mou, X.; Zhang, D. FSIM: A Feature Similarity Index for Image Quality Assessment. IEEE Trans. Image Process. 2011, 20, 2378–2386. [Google Scholar] [CrossRef]
  39. Zhang, L.; Li, H. SR-SIM: A fast and high performance IQA index based on spectral residual. In Proceedings of the 2012 19th IEEE International Conference on Image Processing, Orlando, FL, USA, 30 September–3 October 2012; pp. 1473–1476. [Google Scholar] [CrossRef]
  40. Liu, A.; Lin, W.; Narwaria, M. Image Quality Assessment Based on Gradient Similarity. IEEE Trans. Image Process. 2012, 21, 1500–1512. [Google Scholar] [CrossRef]
  41. Balanov, A.; Schwartz, A.; Moshe, Y.; Peleg, N. Image quality assessment based on DCT subband similarity. In Proceedings of the 2015 IEEE International Conference on Image Processing (ICIP), Quebec City, QC, Canada, 27–30 September 2015; pp. 2105–2109. [Google Scholar] [CrossRef]
  42. Wang, T.; Zhang, L.; Jia, H.; Li, B.; Shu, H. Multiscale contrast similarity deviation: An effective and efficient index for perceptual image quality assessment. Signal Process. Image Commun. 2016, 45, 1–9. [Google Scholar] [CrossRef]
  43. Gu, K.; Wang, S.; Zhai, G.; Lin, W.; Yang, X.; Zhang, W. Analysis of Distortion Distribution for Pooling in Image Quality Prediction. IEEE Trans. Broadcast. 2016, 62, 446–456. [Google Scholar] [CrossRef]
  44. Yang, G.; Li, D.; Lu, F.; Liao, Y.; Yang, W. RVSIM: A feature similarity method for full-reference image quality assessment. EURASIP J. Image Video Process. 2018, 2018, 6. [Google Scholar] [CrossRef]
  45. Jia, H.; Zhang, L.; Wang, T. Contrast and Visual Saliency Similarity-Induced Index for Assessing Image Quality. IEEE Access 2018, 6, 65885–65893. [Google Scholar] [CrossRef]
  46. Ponomarenko, M.; Egiazarian, K.; Lukin, V.; Abramova, V. Structural Similarity Index with Predictability of Image Blocks. In Proceedings of the 2018 IEEE 17th International Conference on Mathematical Methods in Electromagnetic Theory (MMET), Kyiv, Ukraine, 2–5 July 2018; pp. 115–118. [Google Scholar] [CrossRef]
Figure 1. Illustration of some sample images and depth maps from the developed dataset for various quality specimens together with their MOS values.
Figure 2. Illustration of the idea of the proposed approach.
Figure 3. Scatter plots obtained for the “best” elementary metric and the “best” combined metrics constructed from 2, 3, 4, and 5 elementary metrics or features.
Figure 4. Illustration of the PLCC values with MOS values for the considered dataset obtained during the ablation study for the weighted product, weighted sum and the proposed model.
Table 1. The correlation values of 10 “best” elementary metrics with subjective scores obtained for 107 images from the considered dataset.
Metric                        PLCC     SROCC    KROCC
EV (16 blocks) 1              0.6936   0.6673   0.4811
FSIM (9 blocks)               0.6820   0.6845   0.5185
FSIM (4 blocks)               0.6780   0.6826   0.5114
FSIM (16 blocks)              0.6756   0.6865   0.5195
CW-SSIM (9 blocks)            0.6323   0.6098   0.4232
CW-SSIM (16 blocks)           0.5929   0.5823   0.4027
CW-SSIM (4 blocks)            0.5807   0.5633   0.3981
kurtosis of HOG               0.5075   0.5177   0.3874
MV (RGB+hue 256 blocks) 2     0.4816   0.4920   0.3480
EVAR (16 blocks) 3            0.4462   0.5300   0.3660
1 EV is the average local entropy obtained for the depth maps.
2 MV is the product of the mean of the local entropy and its variance calculated separately for each RGB channel and hue channel. MV values calculated for three RGB channels are then averaged and multiplied by MV obtained for the hue channel.
3 EVAR is the variance of the local entropy calculated for the depth maps.
Table 2. The correlation values of 5 “best” combined metrics based on the weighted sum and weighted product of two elementary metrics with subjective scores obtained for 107 images from the considered dataset.
Metrics                                          PLCC     SROCC    KROCC
FSIM (16 blocks) & SR-SIM (16 blocks)            0.8705   0.8653   0.6879
standard deviation of HOG & MCSD (9 blocks)      0.8676   0.8731   0.6978
standard deviation of HOG & DSS (16 blocks)      0.8647   0.8816   0.7063
standard deviation of HOG & MS-SSIM (9 blocks)   0.8568   0.8727   0.6960
FSIM (4 blocks) & SR-SIM (16 blocks)             0.8554   0.8650   0.6879
Table 3. The correlation values of 5 “best” combined metrics based on the weighted sum and weighted product of three and four elementary metrics with subjective scores obtained for 107 images from the considered dataset.
Additional Metrics                          PLCC     SROCC    KROCC
MCSD (4 blocks)                             0.9012   0.9034   0.7327
CVSSI (9 blocks)                            0.8978   0.9027   0.7352
QILV (9 blocks)                             0.8941   0.9005   0.7377
CVSSI (4 blocks)                            0.8863   0.8952   0.7253
SR-SIM (9 blocks)                           0.8859   0.8869   0.7161
MCSD (4 blocks) & CSSIM4 (9 blocks)         0.9101   0.9092   0.7472
MCSD (4 blocks) & MV (mean 256 blocks) 1    0.9089   0.9062   0.7366
MCSD (4 blocks) & MV (green 256 blocks) 2   0.9088   0.9057   0.7377
MCSD (4 blocks) & Yavg (with CLAHE) 3       0.9058   0.9090   0.7419
MCSD (4 blocks) & EV (16 blocks)            0.9046   0.8980   0.7377
1 MV is the product of the mean of the local entropy and its variance calculated for the average of RGB channels.
2 MV is the product of the mean of the local entropy and its variance calculated for the green channel.
3 Yavg is the average brightness calculated after the histogram equalization using CLAHE algorithm.
Table 4. The correlation values obtained for the “best” combined metrics using only the weighted sum and only the weighted product achieved for 107 images from the considered dataset.
Number of Metrics        PLCC     SROCC    KROCC
2 (weighted product)     0.7862   0.7857   0.6078
3 (weighted product)     0.8238   0.8277   0.6515
4 (weighted product)     0.8266   0.8300   0.6501
5 (weighted product)     0.8265   0.8302   0.6505
2 (weighted sum)         0.8545   0.8678   0.6925
3 (weighted sum)         0.8637   0.8742   0.7041
4 (weighted sum)         0.8841   0.8972   0.7285
5 (weighted sum)         0.8877   0.9010   0.7411

Share and Cite

MDPI and ACS Style

Fastowicz, J.; Tecław, M.; Okarma, K. A Multi-Feature Automatic Evaluation of the Aesthetics of 3D Printed Surfaces. Appl. Sci. 2025, 15, 4852. https://doi.org/10.3390/app15094852
