Article

A Prototype for Computing the Distance of Features of High-Pressure Die-Cast Aluminum Products

by Luis Alberto Arroniz Alcántara 1,2, Óscar Hernández-Uribe 3,*, Leonor Adriana Cárdenas-Robledo 4 and José Alejandro Fernández Ramírez 5

1 Posgrado CIATEQ A.C., Av. del Retablo #150, Constituyentes-Fovissste, Queretaro 76150, Mexico
2 AUMA (BOCAR Group), Carretera San Luis Potosí Matehuala Km 11.1, San Luis Potosi 78439, Mexico
3 CIATEQ A.C., Av. Manantiales #23-A, Parque Industrial Bernardo Quintana, Queretaro 76246, Mexico
4 CIATEQ A.C., Parque Industrial Tabasco Business Center, Tabasco 86693, Mexico
5 Facultad de Ingeniería, Universidad Politécnica de Tlaxcala, Av. Universidad Politécnica #1, San Pedro Xalcaltzinco, Tlaxcala 90180, Mexico
* Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(8), 4230; https://doi.org/10.3390/app15084230
Submission received: 28 February 2025 / Revised: 4 April 2025 / Accepted: 7 April 2025 / Published: 11 April 2025
(This article belongs to the Special Issue Applied Computer Vision in Industry and Agriculture)

Abstract
Automotive manufacturers are changing their product models faster in response to growing demand for customization. Suppliers must react by improving the flexibility of their means of production and making the changeover process more efficient and agile to avoid monetary losses. This article reports a prototype that uses computer vision, deep learning algorithms, and mathematical methods to derive the spatial position (x, y, z) of features of the machined parts of high-pressure die-casting (HPDC) aluminum products. It uses an RGB-D sensor to capture and process an image with the you only look once (YOLO) algorithm to determine the center of specific workpiece features. With this information, the depth of each feature center is obtained from the depth matrix and then introduced into a polynomial regression formula to acquire the spatial position (x, y, z) in millimeters. The prototype is a complementary tool for quickly sampling workpieces in the production line and verifying that they meet the requirements and specifications of spatial distances among features. With this evidence, only if necessary, the piece is sent for further and comprehensive measurement by a coordinate-measuring machine (CMM), in line with the accuracy demanded by the automotive industry.

1. Introduction

The automotive industry relies on trustworthy suppliers that consistently comply with quality, time, and cost demands [1]. In the beginning stages of this industry, the lifetime of vehicle components was 20 years [2]. Currently, to comply with constant changes in user demand and the widespread introduction of electronic devices in the central functions of cars [3], manufacturers have decreased the vehicle development time and lifetime of their products, causing changes in the supply chain [4]. Hence, auto parts manufacturers produce different models of automotive pieces with complex geometries by executing short batches supported by digital transformation [5]. In this vein, coordinate-measuring machines (CMMs) are employed to guarantee the machined quality of vehicle parts from the early stages of production as a preventive way to avoid automotive recalls [6].
The CMM validates the measurement tolerances of the piece, and based on the results, the production department makes fine adjustments to or recalibrates the manufacturing equipment to start the production order. While CMMs, with their micrometer-level precision, are considered the benchmark in the automotive industry, such machines depend on operator skills, tactile methods, or probe calibration [7]. It is worth noting that when parts change frequently or possess complex geometries, operators invest most of their time in analyzing and designing measurement strategies, reprogramming the CMM, or creating a fixture and a clamping strategy [8,9], making the process slow and expensive. In addition, the costs associated with CMM installation and maintenance are very high to prevent measurements from being affected by contamination or temperature changes [10]. Thus, such a machine is impractical for in situ quality control applications [11].
Artificial intelligence (AI) has solved industrial problems and undergone rapid growth via hardware architectures and software frameworks [12,13] that support machine learning (ML) and deep learning (DL) applications [14]. A DL architecture that has evolved quite fast is the convolutional neural network (CNN), which several models employ as a backbone [15]. In the automotive sector, AI algorithms are applied successfully in several computer vision (CV) tasks, such as the virtual construction of vehicles [16], the design of electrodes and electrolytes for vehicles’ batteries [17], and electric vehicle air conditioning control [18]. Thus, AI-driven software and CV systems offer the flexibility and adaptability required by the industry (without interrupting the manufacturing process) to face constant changes due to a short development time and lifetime and the small-batch production of pieces with a diversity of materials and intricate surfaces [19,20].
The flexibility patterns required by the automotive industry make suppliers face the need to produce different types of parts in small batches, creating changes in their workflow. Manual defect detection methods suffer from low efficiency [21], and the CMM's limitations are barriers that restrict rapid adaptation to such changes. Key CV tasks, such as object and feature detection, help verify that manufactured objects meet specific criteria related to quality control, measurement, and monitoring [22,23] and provide an alternative way to face several industrial challenges. For instance, Liu et al. [24], based on the fusion of an RGB and depth image, extracted the spatial position parameters of a tire and identified four features, and Cuesta et al. [25] created a ceramic artifact with a broad assortment of dimensional and geometrical tolerances (e.g., sphere, cylinder, cone) for the metrology benchmarking of four 3D-scanning sensors.
The motivation for this work is the elapsed time (minutes or even hours) needed to confirm with a CMM that an automotive machined workpiece is within the specified tolerances. During this time, the production line is inactive until approval to continue the process is received, driving up the non-productive cost per minute. Thus, this work presents a prototype to support inline verification to detect possible quality defects in machined pieces made of high-pressure die-casting (HPDC) aluminum, serving as a decision-making tool. It acts as a measurement instrument to verify whether the piece complies with the set parameters and to detect errors early, providing evidence before a workpiece is sent to the CMM laboratory. The proposed prototype uses a low-cost depth sensor and AI to locate the machined features on HPDC automotive parts. It takes advantage of the speed of CV systems to compute the distances between features of an HPDC aluminum piece in a shorter time than a CMM. Thus, it reduces time, continuous movements, and stoppage on production lines.
This document is organized as follows: Section 2 presents a literature review with related works; Section 3 describes the materials and methods used in this work; Section 4 outlines the results and discussion, shows a couple of cases to measure workpiece features, and discusses the outcomes and findings; and Section 5 presents the conclusions and future work.

2. Literature Review

The first subsection presents the evolution of the you only look once (YOLO) family of models and research works applying DL to solve industrial problems using RGB-D sensors. The second subsection describes related works in measuring or computing distances and geometric features in the automotive industry.

2.1. Background

Researchers seek to generate DL models with a lower computational cost and higher speed, such as Faster R-CNN [26] and the YOLO family [27], including the YOLOv3 [28], YOLOv5 [29], YOLOv7 [30], and, recently, YOLOv12 [31] algorithms.
The YOLO family has been used to search for and classify objects within images. YOLOv3 delivers high precision and accuracy in detecting small objects, achieved by presenting the same image at three scales using a network called Darknet-53 [28]. However, it struggles to align the bounding boxes with the object perfectly. The algorithms from YOLOv3 up to YOLOv5 share a similar architecture (moving from Darknet-53 to CSPDarknet-53), with slight changes to focus on small objects [32]. Research works report close results among them and other recent architectures [33,34,35], particularly when the YOLOv3 architecture is enhanced [36]. For defect detection on steel pipes, one work reports a mean average precision (mAP) value for YOLOv3 higher than that for YOLOv6 and YOLOv7 [37]. The recent YOLOv8, YOLOv9, YOLOv10, and YOLOv11 versions introduce architectural improvements to the CSPDarknet-53 backbone, as well as new programmable gradient information and generalized efficient layer aggregation network components in the neck [38].
The industrial applications of image processing are often limited by hardware restrictions (e.g., the resolution, quality, and precision of the image) [39], environmental conditions (i.e., the illumination and background of the surroundings) [40], and the distance to the object and its features or defects [41]. Nevertheless, numerous applications successfully overcome these limitations, for instance, by employing high-resolution cameras to locate typical defects in product manufacturing [42,43]. Regarding sensor modalities, RGB-D cameras with a limited operation range operate effectively under diverse lighting conditions [44]. The Intel RealSense sensor family includes a stereo-depth module, an RGB sensor, and an infrared projector, and has been widely used for prototype applications [45]. Specifically, the D400 series has been employed as a fruit spatial position estimator [46] and in ergonomics [47]. Additionally, there are approaches for the segmentation of 3D objects using AI to determine what 3D objects exist in a 2D image [48] and for the digital reconstruction and visualization of 3D objects from a series of 2D images [49].

2.2. Related Works

In the automotive industry, particularly in the context of aluminum or steel products, there are alternatives to CMM that use contact metrology, as in the work of Rajamohan et al. [50], who employed a five-axis CNC machining center and a probe to measure a high-precision workpiece, where most geometric deviations were smaller than the CMM measurements. As for non-contact equipment or devices for the achievement of similar goals, several works use CV systems, DL, and ML techniques in the workpiece detection of defects or the measurement of features. Patel and Kiran [51] used linear regression, whereas Palani and Natarajan [52] implemented a self-organized artificial neural network to predict the surface roughness of pieces. Jiang et al. [53] classified X-ray images of pieces with small inter-class and large intra-class differences as defective or non-defective casting with a CNN. Similarly, Parlak and Emel [33] used YOLOv5 to classify defects as gas holes and shrinkages based on the ASTM E155 standard. Schlotterbeck et al. [54] performed an inline evaluation for tomography scanning to automate defect detection in alloy-casted parts.
Yi-Cheng and Syh-Shiuh [55] implemented an on-machine measurement system for the thread dimensions of a workpiece. They compared its efficiency with measurements from a 3D microscope, obtaining high-accuracy results. Pérez et al. [56] presented a 3D real-time quality inspection platform to measure automotive cast-iron parts using 3D line scan sensors. Likewise, Zou et al. [57] proposed a roughness estimation method via the line blur functions of edges reflected from machined surfaces.
Huang et al. [58] developed a target inspection method to detect, classify, and measure welding studs. They replaced the YOLOv8’s backbone with an HGNetV2 network reparametrized to Rep_HGNetV2, improving small-target detection. Khow et al. [59] proposed an enhanced YOLOv8 that adds a formula to compute the ratio between the object size and the size indicated by bounding boxes. With this method, the output returns the distance of the objects detected with an accuracy of 90%. Similarly, Gąsienica-Józkowy et al. [60] used YOLOv8, homography-based mapping, and polynomial regression algorithms to estimate the distance and height of objects in monocular and thermal images, achieving an accuracy of 98.86%. Huang et al. [61] employed a 3D CV system for the defect detection of an aluminum welding surface and a Gaussian planar correction algorithm to measure the area of different defects, obtaining a measurement error lower than 20%.

3. Materials and Methods

This section presents the elements of the prototype and the algorithms involved in its development, summarized in Figure 1, which illustrates the sequence followed to obtain an output matrix with the labeled spatial location for the main characteristics of an HPDC aluminum product. The subsequent subsections describe the digitizer environment functionality, the hardware and software used, and their central components. The Dataset Preparation Subsection details the creation and content of the files derived from the image capture. The RGB-D images taken from the sample of HPDC aluminum workpieces include three to four features each (e.g., cast bore, drilled bore, machined flange, screw seat). The final subsection presents the algorithm to obtain distances, which outputs a matrix containing the data (i.e., label, spatial position). The proposal provides an inline decision-making tool for partial verification to determine whether a piece is within the specified tolerances without using a CMM.

3.1. Digitizer Environment

Figure 2a depicts the digitizer environment, built to avoid external environmental influence, indicating the main components with letters. It has a pyramidal frustum made of laminated wood and painted internally with matte black. The internal closed base measures 400 mm × 400 mm, and the external open base measures 580 mm × 610 mm. The apothem lengths that join the bases are as follows: inferior, 580 mm; superior, 600 mm; and lateral, 610 mm. A circular rotative plate (a1) spins the piece, and images of each position are captured by an Intel RealSense D415 camera (a2) with an LED ring illumination source (a3). The camera captures a 2D mesh for the color image (RGB) and a matrix with the depths of each pixel, known as a depth matrix. It has a depth range of 300 mm to 3000 mm and a resolution of 1 mm.
The prototype employs an Asus gaming laptop with an Intel Core i5 processor, an Nvidia GeForce video card, the Ubuntu 22.04.2 LTS operating system, and the Python programming language (a4). The HPDC aluminum parts present different geometrical shapes with several cast bores, which are CNC-machined for subsequent component assembly. Figure 2b depicts the features of a machined workpiece, such as formed threads (b1), the machined flange (b2), and the cast bore (b3).

3.2. Dataset Preparation

The authors created a database with images of HPDC aluminum parts based on the workflow depicted in Figure 3. First, the user initializes and configures the RealSense camera to obtain an RGB-D image with a pixel resolution of 640 × 480 and places a cast part on the rotative plate. Next, a data frame stores the information, structured as [[R, G, B, D], [pixel 2], …, [pixel n]]. Then, this data frame is trimmed of its outer parts, reducing its dimensions to 448 × 448 pixels without sacrificing resolution, and three files in CSV format save the data: one contains the image information, another the corresponding configuration of the bounding boxes, and the last the correlation between them. The circular plate rotates a few degrees, restarting the sequence to capture enough images of each cast workpiece. As part of the preprocessing and cleaning task, the user identifies the adequate bounding boxes of each image and labels every feature. Finally, the above files serve as a dataset to train the YOLOv3 and YOLOv11 algorithms.
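As a minimal sketch of this capture-and-trim step (assuming the pyrealsense2 and numpy packages; the stream settings and crop arithmetic are illustrative rather than the prototype's exact code), the frame acquisition could look as follows:

```python
import numpy as np
import pyrealsense2 as rs

# Configure the D415 for 640 x 480 color and depth streams.
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
pipeline.start(config)
align = rs.align(rs.stream.color)  # map depth pixels onto the color image

try:
    frames = align.process(pipeline.wait_for_frames())
    color = np.asanyarray(frames.get_color_frame().get_data())  # (480, 640, 3), BGR
    depth = np.asanyarray(frames.get_depth_frame().get_data())  # (480, 640), typically 1 unit = 1 mm
finally:
    pipeline.stop()

# Stack into the [R, G, B, D] layout described above, then trim the outer
# border to 448 x 448 pixels without rescaling.
rgbd = np.dstack([color[..., ::-1].astype(np.uint16), depth])
y0, x0 = (480 - 448) // 2, (640 - 448) // 2
rgbd_crop = rgbd[y0:y0 + 448, x0:x0 + 448]  # ready to store as CSV rows
```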
Figure 4 displays the workflow to identify labels and bounding boxes. The user selects the image to be analyzed and scales it to identify the features, picks the box that best fits each feature, presses the Enter key to accept the selection, and repeats these actions for the remaining features.
Figure 5 depicts an example of a workpiece after the user has located its features, identified by bounding boxes. The user interface presents five tags for each bounding box: label, x_center, y_center, width, and height. The label is a number between 0 and 3 that identifies the feature; x_center and y_center define the geometrical center; and width and height limit the rectangle's dimensions. The dataset contains 360 images corresponding to 20 workpieces captured in different positions. The image data captured for each piece are normalized so that the bounding boxes can be recovered at every scale.
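A minimal sketch of this manual labeling step is shown below, assuming OpenCV (the window name, function name, and flow are hypothetical, not the prototype's code): the user draws one box per feature, the Enter or Space key accepts it, and the Esc key ends the session, yielding the normalized label, x_center, y_center, width, and height tags described above.

```python
import cv2

def label_features(image_path, class_id):
    """Draw bounding boxes for one feature class and return YOLO-style rows."""
    image = cv2.imread(image_path)
    h, w = image.shape[:2]
    rows = []
    while True:
        # selectROI returns (x, y, box_w, box_h) in pixels; Enter/Space accepts,
        # Esc typically returns an empty (0, 0, 0, 0) selection.
        x, y, bw, bh = cv2.selectROI("label features", image, showCrosshair=True)
        if bw == 0 or bh == 0:
            break
        rows.append((class_id,
                     (x + bw / 2) / w,   # x_center, normalized to [0, 1]
                     (y + bh / 2) / h,   # y_center, normalized to [0, 1]
                     bw / w,             # width, normalized
                     bh / h))            # height, normalized
    cv2.destroyAllWindows()
    return rows
```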

3.3. Training and Testing

The YOLO algorithms utilize 1000 images corresponding to 20 different workpieces (e.g., housing, bracket, or cover) with three to four features each. The proposal uses data augmentation (i.e., translation, rotation, and horizontal and vertical flips) to reach 50 images for each workpiece. Figure 6 presents the workflow for the training process, and a CSV file stores the information at the end. The algorithms apply three prediction blocks (for big, medium, and small objects) as inputs of a max pool layer that predicts the label of the feature, the center point of the feature (x, y), and the width and height of the bounding box. For YOLOv3 and YOLOv11, the parameters used in the proposed prototype are a batch size of 4, an image size of 448, a number of features of 4, and a learning rate of 1 × 10−4. Since the prototype is employed in Mexico, Table 1 shows the Spanish-to-English translation of the feature labels.
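For reference, a YOLOv11 training run with these parameters could be sketched with the Ultralytics API as follows; the checkpoint and dataset YAML names are assumptions, and the augmentation arguments only approximate the translation, rotation, and flip operations mentioned above.

```python
from ultralytics import YOLO

model = YOLO("yolo11n.pt")              # pretrained YOLOv11 nano checkpoint
model.train(
    data="hpdc_features.yaml",          # assumption: dataset YAML with the 4 feature classes
    epochs=250,                         # curves above converge near epochs 200-240
    imgsz=448,                          # matches the 448 x 448 trimmed images
    batch=4,
    lr0=1e-4,                           # best-performing learning rate reported
    degrees=10.0, translate=0.1,        # rotation/translation augmentation (illustrative values)
    flipud=0.5, fliplr=0.5,             # vertical and horizontal flips
)
```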

3.4. Obtaining Distances

The model outputs feed a polynomial algorithm, shown in Equation (1), that converts pixels into mm using the depth matrix. Figure 7a depicts a 3D-printed aluminum pattern used to obtain the indexes by placing it in front of the sensor at 40 different distances. Each iteration builds a matrix and considers a cross-section to locate the initial and final points of the pattern, which spans 200 mm on the piece, and computes its length in pixels; the distance ratio at a given depth results from dividing the pixel span between the points by 200 mm. Figure 7b shows the resulting polynomial behavior.
Finally, the least squares method was applied using Equation (2), and the point represented by [x_center, y_center] in pixels is located in the distance matrix to obtain the spatial location and depth in mm. In a workpiece, given two drilled bore centers measured in pixels, with the first, P0(x, y) = (321, 754), as the origin point and the second at P1(x, y) = (318, 435), using Equation (2) results in P0(x, y) = (109.151547, 525.7794206) with a depth of 493 mm and P1(x, y) = (108.250914, 214.2178846) with a depth of 482 mm.
$y = a_0 + a_1 x_i + a_2 x_i^2 + a_3 x_i^3 + \cdots + a_m x_i^m + \epsilon_i$ (1)
$\begin{bmatrix} 3.063809041 - 6.292055889\, x_{\text{pixels}} + 4.168540127\, x_{\text{pixels}}^2 \\ 2.701041432 - 5.547050558\, y_{\text{pixels}} + 3.674967808\, y_{\text{pixels}}^2 \\ \text{depth} \end{bmatrix} = \begin{bmatrix} x_{\text{mm}} \\ y_{\text{mm}} \\ z_{\text{mm}} \end{bmatrix}$ (2)
Furthermore, the Pythagorean theorem is employed to find the relative distance between P0 and P1, with a value of 239.0 mm. Therefore, to calculate the relative distances for any piece, the point on the x-axis and y-axis that belongs to the feature closest to the center point of the image is selected as the origin of the piece.
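A simplified sketch of this pipeline is shown below. It fits a depth-dependent mm-per-pixel ratio by least squares (a stand-in for the per-axis quadratics of Equation (2)), looks up the depth of a detected center, and applies the Pythagorean theorem; the calibration arrays are placeholders, while the distance function reproduces the first Calculated D of Table 4.

```python
import numpy as np

# Placeholder calibration pairs (the prototype records 40 of these with the
# 200 mm pattern); the values below are illustrative only.
depths = np.array([400.0, 450.0, 500.0, 550.0])   # depth in mm
ratios = np.array([0.62, 0.70, 0.78, 0.86])       # mm per pixel at that depth

# Least squares fit of the ratio curve (Equation (1) with m = 2).
ratio_at = np.poly1d(np.polyfit(depths, ratios, deg=2))

def to_mm(center_px, depth_matrix):
    """Convert a detected feature center (u, v) in pixels to (x, y, z) in mm."""
    u, v = center_px
    z = float(depth_matrix[v, u])       # depth matrix lookup at the feature center
    r = float(ratio_at(z))              # mm-per-pixel ratio at this depth
    return (u * r, v * r, z)

def feature_distance(p0_mm, p1_mm):
    """Euclidean (Pythagorean) distance between two feature centers in mm."""
    return float(np.linalg.norm(np.subtract(p1_mm, p0_mm)))

# The first two drilled bores of Table 4: the origin (-44, 31, 459) mm and
# (50, 116, 475) mm yield 127.738 mm, matching the reported Calculated D.
print(round(feature_distance((-44, 31, 459), (50, 116, 475)), 3))  # 127.738
```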

4. Results and Discussion

This section presents the following: (1) the results of YOLOv3 and YOLOv11 training, validation, and testing with the created dataset; (2) the spatial distances computed by the program for three HPDC aluminum workpieces; (3) an on-site test comparing the specified distances for each piece against the measurements obtained with the prototype; and (4) a discussion highlighting the models employed.

4.1. Training, Validation, and Testing

The YOLO algorithms used 1000 images corresponding to 20 pieces, divided into training, validation, and testing sets. The training set comprises 700 images, 35 for each piece. Of the remaining images, 150 correspond to the validation set and 150 to the test set; each piece has 15 images across these two sets. The training and validation loss curves converge at epoch 200, whereas the mAP curve reaches its best behavior at epoch 240 (YOLOv3) and 220 (YOLOv11). At the mentioned epochs, feature selection and object detection have accuracies above 90% for both algorithms. In addition, YOLOv3 was trained with four learning rate values (lr = 1 × 10−3, 5 × 10−4, 1 × 10−4, and 5 × 10−5) to evaluate the network's performance according to the above metrics. It is worth noting that YOLOv3 presents better results with lr = 1 × 10−4. Therefore, for YOLOv11, the lr used is 1 × 10−4, obtaining better results than YOLOv3. Figure 8 illustrates the performance graphs with validation sampled every ten epochs. Figure 8a shows the loss behavior, which is better when closer to 0: YOLOv3 obtains a value of 1.51 and YOLOv11 a value of 0.78.
Figure 8b depicts the mAP behavior; the best value is that closest to 100%. YOLOv3 achieves a value of 67%, whereas that of YOLOv11 is 69%. Figure 8c indicates how accurate the algorithm is in identifying features; YOLOv3 attains a value of 95%, whereas that of YOLOv11 is 96%. Figure 8d presents the behavior for object detection, where YOLOv3 reaches a performance of 95%, and YOLOv11 delivers a value of 98%. Table 2 summarizes the performance of the four indicators for each YOLO algorithm, assuming an lr = 1 × 10−4 for both versions, with the best performance at epoch 247 for YOLOv3 and 243 for YOLOv11. Both algorithms locate the features of the HPDC aluminum part. As mentioned, the user interface includes a section that presents some images from the validation dataset, with the prediction outcome, their bounding boxes, and labels (useful for manual validation). Figure 9 illustrates two analyzed images, the bracket and housing pieces, and features (i.e., drilled bore, screw seat, and machined flange), identifying them appropriately with their closest centers.

4.2. Spatial Distances

This section presents the results and analysis of using YOLOv3 with three HPDC aluminum workpieces: a housing, a bracket, and a cover. Each piece includes a figure illustrating the sequence of image processing and two tables: one summarizing the detected features (e.g., cast bore, drilled bore, machined flange, and screw seat) and another with the positions of a particular feature (the drilled bore) in pixels and mm for comparison with the specified measurements. In the latter, the first columns x and y represent the center of a feature in pixels, and x, y, and z are the corresponding values in mm. Extra columns show the calculated distance, the distance specified by design, and the absolute error. The first row represents the center of the feature taken as the base to compute the distances in the subsequent rows.
Housing. The frontal image of the housing shows eight different drilled bores (8 mm), two screw seats (10 mm), and one cast bore (10 mm). Figure 10 depicts the workpiece on the circular plate, the detected features, and the depth image. Table 3 summarizes the number of features found in the current workpiece. For this piece, the prototype's detection accuracy reaches an average of 75.0%. Table 4 presents the results for each drilled bore's calculated distance (Calculated D) compared to the distance specified by design (Specified D). It describes six drilled bores, with an average error of 0.219 mm and a maximum error of 0.277 mm for a distance of 150.5 mm, representing a 0.18% error.
Bracket. The frontal image of the bracket displays seven drilled bores (8 mm), seven screw seats (10 mm), and five cast bores (10 mm). Figure 11 depicts the workpiece on the prototype, the detected features, and the depth image. Table 5 presents the number of features of the workpiece. In this case, the prototype's detection accuracy reaches an average of 80.0%. Table 6 shows the results for the drilled bores' distances compared to the specified distances. It describes seven drilled bores, with an average error of 0.133 mm and a maximum error of 0.233 mm for a distance of 56.5 mm, representing a 0.41% error.
Cover. This workpiece was new to the model (not seen before). The frontal image depicts three machined flanges (12 mm), twelve drilled bores (6 mm), six screw seats (8 mm), and three cast bores. Figure 12 illustrates the workpiece on the circular plate, the detected features, and the depth image. Table 7 indicates the number of features of the workpiece. The prototype's detection accuracy reaches an average of 81.2% for the cover piece. Table 8 exhibits the results for the drilled bores' distances compared to the specified distances. It describes eleven drilled bores, with an average error of 0.123 mm and a maximum error of 0.27 mm for a distance of 113.0 mm, representing a 0.24% error.

4.3. On-Site Test with the Prototype

An automotive parts manufacturing plant served as the setting to test the prototype in a machining process line, including 17 CNC machines, two industrial washers, and 33 assembly stations. The production line manufactures two types of cylinder head covers used for vehicles, and the workpiece quality must be verified by sampling every 4 h at the final CNC operation. A worker randomly picks six CNC machines and takes the finished pieces, cleaning and tempering the sample of each machine, which takes 40 min. Following this, the operator sends the samples for measurement to the CMM laboratory. The laboratory has two CMMs available for testing purposes. Each machine measures one piece in 56 min, working for 2 h and 48 min. The remaining time is 1 h and 12 min for the operator to confirm the results and perform either the calibration or preventive maintenance of the CMM before the subsequent sampling. Figure 13 presents on-site tests conducted at the manufacturing plant, using a workpiece not employed previously by the YOLO algorithms, known as a cylinder head cover.
Figure 13a shows a piece not previously seen by the model and clamped on a bench in front of the RGB-D sensor in the production area. Figure 13b depicts the original image captured by the sensor. Figure 13c and Figure 13d present the features detected using YOLOv3 and YOLOv11, respectively.
Both measurement processes are executed five times with a single piece (taking 2 min with the prototype and 25 min with the CMM each time). The process includes preparation, the identification of features, the computation of distances, analysis, and the average computation. Table 9 displays the number of features and their detection accuracy for the drilled bores using YOLOv3 and YOLOv11. Table 10 and Table 11 present the spatial positions of drilled bore features with the specified distance, the average values calculated by the prototype, and those obtained with the CMM. Table 10 presents the results of YOLOv3, with an average feature detection error by the prototype equal to 0.128 mm and a maximum error of 0.24 mm, representing 0.15%. Likewise, Table 11 delivers the outcomes of YOLOv11, with an average error of 0.183 mm and a maximum error of 0.296 mm, representing 0.17%. It is worth mentioning the constraints specified for measurement validation: the piece is reincorporated into the production process only when the absolute error of the analyzed feature between the columns Specified D and Calculated D is lower than 0.3 mm. Additionally, such an error must represent less than 0.30% of Specified D.
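Expressed as code, this acceptance rule reduces to a small check (a hypothetical helper reflecting the two constraints stated above, not the prototype's actual implementation):

```python
def within_tolerance(specified_d_mm, calculated_d_mm):
    """Reincorporate the piece only if the absolute error is below 0.3 mm
    and represents less than 0.30% of the specified distance."""
    error = abs(specified_d_mm - calculated_d_mm)
    if specified_d_mm == 0:
        return error < 0.3          # base feature: only the absolute bound applies
    return error < 0.3 and error / specified_d_mm < 0.003

# Example: the maximum YOLOv3 error in Table 10 (0.240 mm over 165.000 mm) passes.
print(within_tolerance(165.000, 164.760))   # True
```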

4.4. Discussion

The prototype supports operators' decision-making to ensure that workpiece production meets quality standards by verifying that the absolute error is in range. Otherwise, the piece is sent to and measured by the CMM, optimizing this resource. In this sense, the prototype works as an in situ verification support tool for a preliminary sampling quality assessment of manufactured workpieces. Due to the nature of the piece (i.e., morphology and restrictions), a CMM RENISHAW AGILITY s12129 is used to obtain the measurements of each feature, and the average of five measurements is computed as a reference to validate the prototype. In this work, the metrics employed are the mean absolute percentage error (MAPE) and accuracy [62]. For the CMM, the average values were a MAPE of 0.0180%, an accuracy of 99.9820%, and a standard deviation of 0.0003 mm. For the prototype using YOLOv3, the MAPE was 0.1781%, the accuracy was 99.8219%, and the standard deviation was 0.2578 mm. Moreover, with YOLOv11, the MAPE was 0.1257%, the accuracy was 99.8444%, and the standard deviation was 0.1989 mm. In addition, the accuracy of the feature detected (drilled bore) in off-site tests had a mean of 88.9%, whereas the on-site tests reached 80.9% for YOLOv3 and 90.5% for YOLOv11, both promising values.
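As a reference for how these metrics are obtained, the sketch below computes MAPE and the derived accuracy on an illustrative subset of three Table 10 distances; the full measurement set is what produces the averages reported above.

```python
import numpy as np

def mape(reference, measured):
    """Mean absolute percentage error (%) of measurements vs. a reference."""
    reference = np.asarray(reference, dtype=float)
    measured = np.asarray(measured, dtype=float)
    return float(np.mean(np.abs((measured - reference) / reference)) * 100.0)

# Three Table 10 drilled-bore distances (specified vs. prototype, YOLOv3):
spec = [252.000, 221.000, 165.000]
calc = [251.970, 221.172, 164.760]
m = mape(spec, calc)
print(f"MAPE = {m:.4f}%, accuracy = {100.0 - m:.4f}%")
```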
This work used the YOLO architecture to detect a few workpiece features. The user iteratively identifies adequate bounding boxes to tailor the dataset to the use case to ensure the model’s training on relevant features. On the other hand, the literature has reported that YOLOv3 obtained better average precision and mAP values than YOLOv4 in detecting internal defects in aluminum alloy welds [63]. Another example is the work developed by Shao et al. [64], who used a custom-designed dataset tailored to specific fire and smoke detection. They executed several YOLO models and reported that YOLOv3 scored better than YOLOv10 for fire detection and YOLOv7 for smoke detection according to their mAP values. However, the authors acknowledge that replacing YOLO with newer architectures in the fast-evolving object detection field could offer further improvements in accuracy and speed. The results reported in the above subsections demonstrate that the mAP values of YOLOv11 are slightly better than those of YOLOv3.

5. Conclusions

The presented approach provides an insightful perspective on the noninvasive measurement of HPDC aluminum products. It is relevant as a low-cost solution for small and medium enterprises that cannot afford investment in a CMM or 3D scanner devices. The designed prototype contributes to applied science and industrial practice through the implementation of a workflow for spatial location, enabling the identification of the characteristics of interest of an aluminum workpiece. The design and implementation of the prototype involve the computation of the distance between an origin point and features of HPDC aluminum parts for the automotive industry, using the YOLOv3 and YOLOv11 algorithms with different workpieces for training, validation, and testing. The prototype can locate such features using pictures taken with an RGB-D sensor.
One of the benefits of the on-site prototype is that it detects the workpiece's features using a DL method and computes the distances based on a polynomial approach supported by an RGB-D sensor. This allows the identification of the same features in other pieces with different shapes and complexities. The prototype works as a complementary tool for quickly sampling workpieces in a production line and verifying that they meet the requirements and specifications for spatial distances among features. Thus, only when the piece exceeds the limit values is it sent for further and comprehensive measurement at the CMM laboratory, saving time and resources. The results were promising due to the accurate feature identification achieved through immediate computations, which foster prompt decision-making. This is critical for greater flexibility in manufacturing automotive parts in response to new demands, requirements, and needs, mainly in non-repetitive production, where subsequent pieces may differ slightly.
Future work should include enhancements to the prototype, given that not all features are detected, such as the adjustment of the prototype design. For instance, changes in the size of the structure and plate would enable the measurement of pieces of a bigger size. Also, researchers should consider tuning environmental conditions (e.g., lighting quality) to work with the same settings for sampling images and commissioning. Regarding YOLO algorithms, replacing such models with newer architectures and increasing the training dataset’s size to enhance feature detection should be considered. In addition, it is essential to provide the exact location of the bounding boxes by applying accurate manual identification methods. In the pixel-to-mm conversion stage, an improvement could be adding an automated metric extraction method of the features to measure the perimeter and locate its center accurately. Further tests are necessary using an RGB-D system with a micrometer scale to reach the measurement accuracy of a CMM.

Author Contributions

Conceptualization, L.A.A.A. and Ó.H.-U.; methodology, all authors; software, L.A.A.A.; validation, L.A.A.A., Ó.H.-U. and L.A.C.-R.; investigation, all authors; data curation, L.A.A.A.; writing—original draft preparation, L.A.A.A. and Ó.H.-U.; writing—review and editing, L.A.A.A., Ó.H.-U. and L.A.C.-R.; visualization, L.A.A.A., Ó.H.-U. and J.A.F.R.; supervision, Ó.H.-U. and L.A.C.-R. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Acknowledgments

The aluminum cast parts used for the development of this prototype were provided by Bocar Group S.A. de C.V. Thanks to the Secretaría de Ciencia, Humanidades, Tecnología e Innovación (SECIHTI) for Ph.D. scholarship support under the number CVU 771170 and the SECIHTI SNI.

Conflicts of Interest

Author Luis Alberto Arroniz Alcántara was employed by the company AUMA (BOCAR Group). The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  1. Börold, A.; Teucke, M.; Rust, J.; Freitag, M. Recognition of car parts in automotive supply chains by combining synthetically generated training data with classical and deep learning based image processing. Procedia CIRP 2020, 93, 377–382. [Google Scholar] [CrossRef]
  2. Paskert, L. Additive Manufacturing for the Spare Part Management of Classic Cars. Master's Thesis, Stellenbosch University, Stellenbosch, South Africa, 2022. [Google Scholar]
  3. Boissie, K.; Addouche, S.-A.; Baron, C.; Zolghadri, M. Obsolescence management practices overview in automotive industry. IFAC-PapersOnLine 2022, 55, 52–58. [Google Scholar] [CrossRef]
  4. Zamazal, K.; Denger, A. Product lifecycle management in automotive industry. In Systems Engineering for Automotive Powertrain Development; Hick, H., Küpper, K., Sorger, H., Eds.; Springer: Cham, Switzerland, 2021; pp. 443–469. [Google Scholar] [CrossRef]
  5. Rosa, E.S.; Godina, R.; Rodrigues, E.M.G.; Matias, J.C.O. An industry 4.0 conceptual model proposal for cable harness testing equipment industry. Procedia Comput. Sci. 2022, 200, 1392–1401. [Google Scholar] [CrossRef]
  6. AIAG. Potential Failure Mode and Effects Analysis FMEA Reference Manual, 4th ed.; Automotive Industry Action Group: Southfield, MI, USA, 2008. [Google Scholar]
  7. Kugunavar, S.; Iyer, S.V.; Sangwan, K.S.; Bera, T.C. Data-driven model for CMM probe calibration to enhance efficiency and sustainability. Procedia CIRP 2024, 122, 885–890. [Google Scholar] [CrossRef]
  8. Luque-Morales, R.A.; Hernandez-Uribe, O.; Mora-Alvarez, Z.A.; Cardenas-Robledo, L.A. Ontology development for knowledge representation of a metrology lab. Eng. Technol. Appl. Sci. Res. 2023, 13, 12348–12353. [Google Scholar] [CrossRef]
  9. Kiraci, E.; Palit, A.; Attridge, A.; Williams, M.A. The effect of clamping sequence on dimensional variability of a manufactured automotive sheet metal sub-assembly. Int. J. Prod. Res. 2023, 61, 8547–8559. [Google Scholar] [CrossRef]
  10. Thalmann, R.; Meli, F.; Küng, A. State of the art of tactile micro coordinate metrology. Appl. Sci. 2016, 6, 150. [Google Scholar] [CrossRef]
  11. Kalajahi, E.G.; Mahboubkhah, M.; Barari, A. On detailed deviation zone evaluation of scanned surfaces for automatic detection of defected regions. Measurement 2023, 221, 113462. [Google Scholar] [CrossRef]
  12. Dhilleswararao, P.; Boppu, S.; Manikandan, M.S.; Cenkeramaddi, L.R. Efficient hardware architectures for accelerating deep neural networks: Survey. IEEE Access 2022, 10, 131788–131828. [Google Scholar] [CrossRef]
  13. Adewumi, T.; Liwicki, F.; Liwicki, M. State-of-the-art in open-domain conversational ai: A survey. Information 2022, 13, 298. [Google Scholar] [CrossRef]
  14. Sharifani, K.; Amini, M. Machine learning and deep learning: A review of methods and applications. World Inf. Technol. Eng. J. 2023, 10, 3897–3904. [Google Scholar]
  15. Bhatt, D.; Patel, C.; Talsania, H.; Patel, J.; Vaghela, R.; Pandya, S.; Modi, K.; Ghayvat, H. CNN variants for computer vision: History, architecture, application, challenges and future scope. Electronics 2021, 10, 2470. [Google Scholar] [CrossRef]
  16. Ding, Y.; Luo, X. A virtual construction vehicles and workers dataset with three-dimensional annotations. Eng. Appl. Artif. Intell. 2024, 133, 107964. [Google Scholar] [CrossRef]
  17. Sui, C.; Jiang, Z.; Higueros, G.; Carlson, D.; Hsu, P.-C. Designing electrodes and electrolytes for batteries by leveraging deep learning. Nano Res. Energy 2024, 3, e9120102. [Google Scholar] [CrossRef]
  18. He, L.; Li, P.; Zhang, Y.; Jing, H.; Gu, Z. Intelligent control of electric vehicle air conditioning system based on deep reinforcement learning. Appl. Therm. Eng. 2024, 245, 122817. [Google Scholar] [CrossRef]
  19. Islam, M.R.; Zamil, M.Z.H.; Rayed, M.E.; Kabir, M.M.; Mridha, M.F.; Nishimura, S.; Shin, J. Deep learning and computer vision techniques for enhanced quality control in manufacturing processes. IEEE Access 2024, 12, 121449–121479. [Google Scholar] [CrossRef]
  20. Junaid, A.; Siddiqi, M.U.R.; Mohammad, R.; Abbasi, M.U. In-process measurement in manufacturing processes. In Functional Reverse Engineering of Machine Tool, 1st ed.; Khan, W.A., Abbas, G., Rahman, K., Hussain, G., Edwin, C.A., Eds.; CRC Press: Boca Raton, FL, USA, 2019; pp. 105–134. [Google Scholar]
  21. Li, N.; Wang, Z.; Zhao, R.; Yang, K.; Ouyang, R. YOLO-PDC: Algorithm for aluminum surface defect detection based on multiscale enhanced model of YOLOv7. J. Real-Time Image Process. 2025, 22, 86. [Google Scholar] [CrossRef]
  22. Ercetin, A.; Der, O.; Akkoyun, F.; Gowdru Chandrashekarappa, M.P.; Şener, R.; Çalışan, M.; Olgun, N.; Chate, G.; Bharath, K.N. Review of image processing methods for surface and tool condition assessments in machining. J. Manuf. Mater. Process. 2024, 8, 244. [Google Scholar] [CrossRef]
  23. Tzampazaki, M.; Zografos, C.; Vrochidou, E.; Papakostas, G.A. Machine vision—Moving from industry 4.0 to industry 5.0. Appl. Sci. 2024, 14, 1471. [Google Scholar] [CrossRef]
  24. Liu, W.; Li, F.; Jing, C.; Wan, Y.; Su, B.; Helali, M. Recognition and location of typical automotive parts based on the RGB-D camera. Complex Intell. Syst. 2021, 7, 1759–1765. [Google Scholar] [CrossRef]
  25. Cuesta, E.; Meana, V.; Álvarez, B.J.; Giganto, S.; Martínez-Pellitero, S. Metrology benchmarking of 3D scanning sensors using a ceramic GD&T-based artefact. Sensors 2022, 22, 8596. [Google Scholar] [CrossRef]
  26. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards real-time object detection with region proposal networks. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 1137–1149. [Google Scholar] [CrossRef]
  27. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. In Proceedings of the Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016. [Google Scholar] [CrossRef]
  28. Redmon, J.; Farhadi, A. YOLOv3: An incremental improvement. arXiv 2018, arXiv:1804.02767. [Google Scholar] [CrossRef]
  29. Mery, D. Aluminum casting inspection using deep object detection methods and simulated ellipsoidal defects. Mach. Vis. Appl. 2021, 32, 72. [Google Scholar] [CrossRef]
  30. Wang, C.-Y.; Bochkovskiy, A.; Liao, H.-Y.M. YOLOv7: Trainable Bag-Of-Freebies Sets new State-Of-The-Art for Real-Time Object Detectors. In Proceedings of the Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023. [Google Scholar] [CrossRef]
  31. Tian, Y.; Ye, Q.; Doermann, D. Yolov12: Attention-centric real-time object detectors. arXiv 2025, arXiv:2502.12524. [Google Scholar] [CrossRef]
  32. Hussain, M. Yolov1 to v8: Unveiling each variant–a comprehensive review of yolo. IEEE Access 2024, 12, 42816–42833. [Google Scholar] [CrossRef]
  33. Parlak, İ.E.; Emel, E. Deep learning-based detection of aluminum casting defects and their types. Eng. Appl. Artif. Intell. 2023, 118, 105636. [Google Scholar] [CrossRef]
  34. Wang, P.; Jing, P. Deep learning-based methods for detecting defects in cast iron parts and surfaces. IET Image Proc. 2024, 18, 47–58. [Google Scholar] [CrossRef]
  35. Xing, J.; Jia, M. A convolutional neural network-based method for workpiece surface defect detection. Measurement 2021, 176, 109185. [Google Scholar] [CrossRef]
  36. Duan, L.; Yang, K.; Ruan, L. Research on automatic recognition of casting defects based on deep learning. IEEE Access 2021, 9, 12209–12216. [Google Scholar] [CrossRef]
  37. Wang, L.; Song, C.; Wan, G.; Cui, S. A surface defect detection method for steel pipe based on improved YOLO. Math. Biosci. Eng. 2024, 21, 3016–3036. [Google Scholar] [CrossRef] [PubMed]
  38. Terras, N.; Pereira, F.; Ramos Silva, A.; Santos, A.A.; Lopes, A.M.; Silva, A.F.d.; Cartal, L.A.; Apostolescu, T.C.; Badea, F.; Machado, J. Integration of deep learning vision systems in collaborative robotics for real-time applications. Appl. Sci. 2025, 15, 1336. [Google Scholar] [CrossRef]
  39. Sun, Z.; Caetano, E.; Pereira, S.; Moutinho, C. Employing histogram of oriented gradient to enhance concrete crack detection performance with classification algorithm and Bayesian optimization. Eng. Fail. Anal. 2023, 150, 107351. [Google Scholar] [CrossRef]
  40. Wang, W.; Chen, Z.; Yuan, X. Simple low-light image enhancement based on Weber–Fechner law in logarithmic space. Signal Process. Image Commun. 2022, 106, 116742. [Google Scholar] [CrossRef]
  41. Shen, H.; Wei, B.; Ma, Y. Unsupervised anomaly detection for manufacturing product images by significant feature space distance measurement. Mech. Syst. Signal Process. 2024, 212, 111328. [Google Scholar] [CrossRef]
  42. Mery, D. Aluminum casting inspection using deep learning: A method based on convolutional neural networks. J. Nondestruct. Eval. 2020, 39, 12. [Google Scholar] [CrossRef]
  43. Nguyen, T.P.; Choi, S.; Park, S.-J.; Park, S.H.; Yoon, J. Inspecting method for defective casting products with convolutional neural network (CNN). Int. J. Precis. Eng. Manuf. Green Technol. 2021, 8, 583–594. [Google Scholar] [CrossRef]
  44. Brenner, M.; Reyes, N.H.; Susnjak, T.; Barczak, A.L. RGB-D and thermal sensor fusion: A systematic literature review. IEEE Access 2023, 11, 82410–82442. [Google Scholar] [CrossRef]
  45. Servi, M.; Mussi, E.; Profili, A.; Furferi, R.; Volpe, Y.; Governi, L.; Buonamici, F. Metrological characterization and comparison of D415, D455, L515 RealSense devices in the close range. Sensors 2021, 21, 7770. [Google Scholar] [CrossRef]
  46. Andriyanov, N.; Khasanshin, I.; Utkin, D.; Gataullin, T.; Ignar, S.; Shumaev, V.; Soloviev, V. Intelligent system for estimation of the spatial position of apples based on YOLOv3 and Real Sense depth camera D415. Symmetry 2022, 14, 148. [Google Scholar] [CrossRef]
  47. Van Crombrugge, I.; Sels, S.; Ribbens, B.; Steenackers, G.; Penne, R.; Vanlanduit, S. Accuracy assessment of joint angles estimated from 2D and 3D camera measurements. Sensors 2022, 22, 1729. [Google Scholar] [CrossRef] [PubMed]
  48. Xiang, Y.; Kim, W.; Chen, W.; Ji, J.; Choy, C.; Su, H.; Mottaghi, R.; Guibas, L.; Savarese, S. ObjectNet3D: A Large Scale Database for 3D Object Recognition. In Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 8–16 October 2016. [Google Scholar] [CrossRef]
  49. González Izard, S.; Sánchez Torres, R.; Alonso Plaza, Ó.; Juanes Méndez, J.A.; García-Peñalvo, F.J. Nextmed: Automatic imaging segmentation, 3D reconstruction, and 3D model visualization platform using augmented and virtual reality. Sensors 2020, 20, 2962. [Google Scholar] [CrossRef] [PubMed]
  50. Rajamohan, G.; Sangeeth, P.; Nayak, P.K. On-machine measurement of geometrical deviations on a five-axis CNC machining centre. Adv. Mater. Process. Technol. 2022, 8, 269–281. [Google Scholar] [CrossRef]
  51. Patel, D.R.; Kiran, M.B. Vision based prediction of surface roughness for end milling. Mater. Today Proc. 2021, 44, 792–796. [Google Scholar] [CrossRef]
  52. Palani, S.; Natarajan, U. Prediction of surface roughness in CNC end milling by machine vision system using artificial neural network based on 2D Fourier transform. Int. J. Adv. Manuf. Technol. 2011, 54, 1033–1042. [Google Scholar] [CrossRef]
  53. Jiang, L.; Wang, Y.; Tang, Z.; Miao, Y.; Chen, S. Casting defect detection in X-ray images using convolutional neural networks and attention-guided data augmentation. Measurement 2021, 170, 108736. [Google Scholar] [CrossRef]
  54. Schlotterbeck, M.; Schulte, L.; Alkhaldi, W.; Krenkel, M.; Toeppe, E.; Tschechne, S.; Wojek, C. Automated defect detection for fast evaluation of real inline CT scans. Nondestruct. Test. Eval. 2020, 35, 266–275. [Google Scholar] [CrossRef]
  55. Yi-Cheng, L.; Syh-Shiuh, Y. Using Machine Vision to Develop an On-Machine Thread Measurement System for Computer Numerical Control Lathe Machines. In Proceedings of the International Multi-Conference of Engineers and Computer Scientists, Hong Kong, China, 13–15 March 2019. [Google Scholar]
  56. Pérez, J.; León, J.; Castilla, Y.; Shahrabadi, S.; Anjos, V.; Adão, T.; López, M.Á.G.; Peres, E.; Magalhães, L.; Gonzalez, D.G. A cloud-based 3D real-time inspection platform for industry: A case-study focusing automotive cast iron parts. Procedia Comput. Sci. 2023, 219, 339–344. [Google Scholar] [CrossRef]
  57. Zou, L.; Fang, H.; Li, Y.; Wu, S. Roughness estimation of high-precision surfaces from line blur functions of reflective images. Measurement 2021, 182, 109677. [Google Scholar] [CrossRef]
  58. Huang, H.; Peng, X.; Wu, S.; Ou, W.; Hu, X.; Chen, L. An automotive body-in-white welding stud flexible and efficient recognition system. IEEE Access 2025, 13, 51938–51955. [Google Scholar] [CrossRef]
  59. Khow, Z.J.; Tan, Y.F.; Karim, H.A.; Rashid, H.A.A. Improved YOLOv8 model for a comprehensive approach to object detection and distance estimation. IEEE Access 2024, 12, 63754–63767. [Google Scholar] [CrossRef]
  60. Gąsienica-Józkowy, J.; Cyganek, B.; Knapik, M.; Głogowski, S.; Przebinda, Ł. Deep learning-based monocular estimation of distance and height for edge devices. Information 2024, 15, 474. [Google Scholar] [CrossRef]
  61. Huang, H.; Zhou, B.; Cao, S.; Song, T.; Xu, Z.; Jiang, Q. Aluminum reservoir welding surface defect detection method based on three-dimensional vision. Sensors 2025, 25, 664. [Google Scholar] [CrossRef]
  62. Myśliwiec, P.; Kubit, A.; Szawara, P. Optimization of 2024-T3 aluminum alloy friction stir welding using random forest, XGBoost, and MLP machine learning techniques. Materials 2024, 17, 1452. [Google Scholar] [CrossRef] [PubMed]
  63. Chen, Y.; Wu, Y. Detection of welding defects tracked by YOLOv4 algorithm. Appl. Sci. 2025, 15, 2026. [Google Scholar] [CrossRef]
  64. Shao, D.; Liu, Y.; Liu, G.; Wang, N.; Chen, P.; Yu, J.; Liang, G. YOLOv7scb: A small-target object detection method for fire smoke inspection. Fire 2025, 8, 62. [Google Scholar] [CrossRef]
Figure 1. Flowchart of the prototype with subtasks for its deployment.
Figure 2. The prototype setup: (a) digitizer environment; (b) features of a machined workpiece.
Figure 3. Image capture and dataset processing workflow (# indicates the image number with the corresponding bounding box data).
Figure 4. Workflow for manual identification of labels and bounding boxes (# indicates corresponding bounding box data).
Figure 5. Location of features identified by bounding boxes: cast bore (red), screw seat (yellow), and drilled bore (blue).
Figure 6. YOLOv3 and YOLOv11 training workflow (# indicates related bounding box data).
Figure 7. Adjustment to obtain ratio of pixels to mm: (a) 3D pattern used to obtain the polynomial indexes; (b) resulting polynomial curve.
Figure 8. Validation metrics for different learning rate values (1 × 10−3 is gray, 5 × 10−4 is orange, 1 × 10−4 is blue, and 5 × 10−5 is yellow for YOLOv3; and 1 × 10−4 is purple for YOLOv11): (a) loss performance; (b) mAP performance; (c) feature selection accuracy performance; (d) object selection accuracy.
Figure 9. Two examples of analyzed images: (a) a bracket; (b) a housing.
Figure 10. Sequence of image processing for the housing: (a) original image; (b) RGB-analyzed image; (c) depth-processed image.
Figure 11. Sequence of image processing for the bracket: (a) original image; (b) RGB-analyzed image; (c) depth-processed image.
Figure 12. Sequence of image processing for the cover: (a) original image; (b) RGB-analyzed image; (c) depth-processed image.
Figure 13. On-site test, a piece not used in the training or validation phase: (a) prototype installed close to the production area; (b) original image of the piece; (c) YOLOv3-analyzed image of a drilled bore (green); (d) YOLOv11-analyzed image of a drilled bore (green).
Table 1. Features and labels used in prototype.
Feature in Spanish      Feature in English
Barreno de fundición    Cast bore
Barreno maquinado       Drilled bore
Brida maquinada         Machined flange
Asiento de tornillo     Screw seat
Table 2. Summary of validation performance.
Model     Description                        Loss        mAP          Feature Selection   Object Selection
YOLOv3    Value (best epoch)                 1.44 (247)  0.652 (240)  95.91% (246)        94.62% (203)
          Average for last 50 data points    1.58        0.575        95.32%              93.67%
YOLOv11   Value (best epoch)                 1.02 (243)  0.613 (203)  97.54% (220)        97.69% (240)
          Average for last 50 data points    1.16        0.628        87.62%              96.22%
Table 3. Summary of detected characteristics for the housing.
Feature        Quantity   Detected   Accuracy
Cast Bore      1          1          100.0%
Drilled Bore   8          6          75.0%
Screw Seat     2          1          50.0%
Table 4. Detected centers of drilled bores and relative distance to the first point (housing).
x (px)   y (px)   x (mm)   y (mm)   z (mm)   Calculated D (mm)   Specified D (mm)   |Error| (mm)
188      253      −44      31       459      0.000               0.000              0.000
267      337      50       116      475      127.738             128.000            0.262
66       286      −188     65       466      148.125             148.000            0.125
66       207      −186     −18      472      150.777             150.500            0.277
89       333      −159     113      472      141.838             142.000            0.162
94       193      −155     −33      465      128.269             128.000            0.269
Table 5. Summary of detected characteristics for the bracket.
Feature        Quantity   Detected   Accuracy
Cast Bore      5          2          40.0%
Drilled Bore   7          7          100.0%
Screw Seat     7          7          100.0%
Table 6. Detected centers of drilled bores and relative distance to the first point (bracket).
x (px)   y (px)   x (mm)   y (mm)   z (mm)   Calculated D (mm)   Specified D (mm)   |Error| (mm)
225      268      1        42       515      0.000               0.000              0.000
246      214      23       −9       524      56.267              56.500             0.233
409      325      189      91       541      196.021             196.000            0.021
419      267      191      37       560      195.320             195.500            0.180
24       151      −232     −75      479      263.199             263.000            0.199
70       332      −173     107      494      186.928             187.000            0.072
173      198      −59      −27      480      97.908              98.000             0.092
Table 7. Summary of detected characteristics for the cover.
Feature           Quantity   Detected   Accuracy
Cast bore         3          3          100.0%
Drilled bore      12         11         91.7%
Machined flange   3          1          33.3%
Screw seat        6          6          100.0%
Table 8. Detected centers of drilled bores and relative distance to the first point (cover).
x (px)   y (px)   x (mm)   y (mm)   z (mm)   Calculated D (mm)   Specified D (mm)   |Error| (mm)
246      357      37       198      303      0.000               0.000              0.000
272      386      74       221      344      59.825              60.000             0.175
290      315      113      137      299      97.535              97.500             0.035
314      355      150      193      309      113.270             113.000            0.270
424      254      320      42       557      324.193             324.000            0.193
345      137      201      −217     312      364.146             364.000            0.146
349      21       209      −298     309      525.010             525.000            0.010
352      261      217      55       302      229.891             230.000            0.109
148      70       −127     −227     307      455.562             455.500            0.062
102      348      −216     193      282      253.919             254.000            0.081
120      180      −177     −66      301      339.847             340.000            0.153
Table 9. Summary of the detected characteristics of cylinder head covers used for vehicles.
Feature                 Quantity   Detected   Accuracy
Drilled bore: YOLOv3    21         17         80.9%
Drilled bore: YOLOv11   21         19         90.5%
Table 10. Detected centers of the drilled bore and the relative distance to the first point (YOLOv3).
x (px)   y (px)   x (mm)   y (mm)   z (mm)   Calculated D (mm)   Specified D (mm)   |Error| (mm)   CMM (mm)
246      357      37       198      508      0.000               0.000              0.000          0.000
11       122      −229     −97      515      251.970             252.000            0.030          251.866
25       182      −205     −38      536      221.172             221.000            0.172          220.884
378      251      −168     26       508      164.760             165.000            0.240          164.889
58       235      −184     11       501      192.172             192.000            0.172          191.787
76       353      −161     123      511      213.675             213.500            0.175          213.384
74       290      −157     61       530      179.666             179.500            0.166          179.384
90       123      −148     −98      504      178.779             179.000            0.221          179.014
129      180      −97      −40      541      113.428             113.500            0.072          113.616
141      117      −92      −104     502      137.339             137.500            0.161          137.622
185      419      −42      186      512      201.102             201.000            0.102          200.893
180      156      −45      −61      538      79.423              79.500             0.077          79.643
234      292      11       63       527      74.572              74.500             0.072          74.361
332      157      112      −61      534      120.021             120.000            0.021          120.111
374      204      160      −19      518      153.652             153.500            0.152          153.489
374      228      161      4        517      154.810             155.000            0.190          155.012
379      337      184      119      471      290.026             290.000            0.026          289.879
Table 11. Detected centers of the drilled bore and the relative distance to the first point (YOLOv11).
x (px)   y (px)   x (mm)   y (mm)   z (mm)   Calculated D (mm)   Specified D (mm)   |Error| (mm)   CMM (mm)
246      357      37       198      508      0.000               0.000              0.000          0.000
11       124      −230     −95      513      252.046             252.000            0.046          251.866
25       183      −205     −37      536      220.924             221.000            0.076          220.884
378      252      −168     27       508      164.889             165.000            0.111          164.889
58       243      −184     19       501      192.265             192.000            0.265          191.787
75       351      −162     121      511      213.738             213.500            0.238          213.384
74       289      −156     60       530      179.597             179.500            0.097          179.384
91       122      −146     −99      504      178.704             179.000            0.296          179.014
129      180      −97      −40      541      113.265             113.500            0.235          113.616
141      116      −92      −105     502      137.766             137.500            0.266          137.622
186      419      −41      186      512      200.795             201.000            0.205          200.893
179      157      −46      −61      538      79.753              79.500             0.253          79.643
235      292      12       63       527      74.673              74.500             0.173          74.361
333      158      113      −60      534      120.226             120.000            0.226          120.111
374      204      160      −19      518      153.723             153.500            0.223          153.489
374      228      161      4        517      154.709             155.000            0.291          155.012
334      420      118      185      518      210.623             210.500            0.123          210.672
323      405      106      170      519      191.044             191.000            0.044          190.994
379      337      184      119      471      289.879             290.000            0.121          289.879
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
