Automatic Method for Vickers Hardness Estimation by Image Processing

Hardness is one of the most important mechanical properties of materials, since it is used to estimate their quality and to determine their suitability for a particular application. One method of determining quality is the Vickers hardness test, in which the resistance to plastic deformation at the surface of the material is measured after applying a force with an indenter. The hardness is then measured from an image of the sample, a procedure that is tedious, time-consuming, and prone to human error. Therefore, in this work, a new automatic method based on image processing techniques is proposed, which yields results quickly and more accurately even when the indentation mark is highly irregular. For the development and validation of the method, a set of microscopy images was used of samples indented with applied forces of 5 N and 10 N on AISI D2 steel with and without quenching and tempering heat treatment, as well as samples coated with titanium niobium nitride (TiNbN). The proposed method was implemented as a plugin for the ImageJ program and obtains reproducible Vickers hardness results in an average time of 2.05 seconds, with an accuracy of 98.3% and a maximum error of 4.5% with respect to the manually obtained values used as the gold standard.


Introduction
The study of the properties of materials is of great importance to determine their behavior in specific applications. Mechanical properties such as hardness, ductility, or stiffness can be studied through laboratory tests to determine the appropriate characteristics for their use. Hardness is one of the most important mechanical properties of materials, as it indicates the resistance to deformation by a harder material [1,2]. There are different hardness tests, which vary depending on the type of material; e.g., the Brinell hardness test is best suited to determining the hardness of wood-based materials or materials with relatively low hardness [3]. On the other hand, Mohs hardness is most commonly used to identify minerals [4], Shore hardness is applied to polymeric materials [5], and Vickers hardness is used to determine the hardness of metals, ceramics, and other materials. Studies have shown that modifications can be made to Vickers hardness equipment to determine other properties such as elasticity or surface stresses of the material [6,7].
Materials such as the steel used in automotive axles and cutting tools, among other applications, are subjected to constant forces or loads, which can cause deformation or breakage of the material. The search for continuous improvement has led to the use of materials in the form of thin films, which improve the surface properties of a substrate such as steel, providing high hardness, a low coefficient of friction, and wear and corrosion resistance in chemically aggressive environments, thereby increasing its useful life and covering a wide field of applications [8][9][10][11][12].
There are different methods to modify the hardness of a material, either by heat treatment, such as quenching or tempering, to modify the microstructure [11,12], or by depositing thin-film coatings on the surface of the base material. Among the most commonly used coatings in industry are Titanium Nitride (TiN) and Niobium Nitride (NbN). In the present work, D2 steel samples with different heat treatments and steels with Titanium Niobium Nitride (TiNbNx) coatings were used, which are employed in industrial applications as cutting tools [13][14][15].
As mentioned above, one way to determine the quality of a material is through tests such as the Vickers hardness test (HV), which estimates this property by measuring the plastic deformation or imprint produced on its surface after applying a force with a square-based, pyramid-shaped diamond indenter with a face angle of 136° (see Figure 1) [16]. Thus, once the test has been performed, the impression is visualized through an optical microscope and the indentation diagonals are measured to determine the Vickers hardness value. This type of test is widely used for quality control, as it allows for determining whether a material is suitable for a given application. The Vickers hardness value is given by:

HV = (2 P sin(136°/2)) / D² ≈ 1.8544 P / D², (1)

where D represents the average of the diagonal distances (d1 and d2) of the indentation mark and P the force applied to the indenter [16]. Although simple, this measurement is error-prone, as it is obtained manually. Moreover, the task is tedious, repetitive, and time-consuming, and results vary depending on the measurement criteria of each observer. Therefore, several approaches have been proposed to perform the evaluation using image processing and machine learning techniques. Sugimoto and Kawaguchi proposed an automatic process to determine the Vickers hardness of three types of samples (specular, etched specular, and rough finishes), using a method for the determination of indentation edges and corners based on statistical moments; the hardness measurement was obtained with a mean tolerance that varies with the applied force, being 4% when the applied force varies between 500 and 1000 g-force [17]. Dominguez and Wiederhold implemented an algorithm in which the image is binarized based on its mean value, regardless of the shape of the histogram, and the vertices of the indentation are determined using the Harris–Stephens corner detection method.
The lengths of the diagonals are then calculated to determine the Vickers hardness number, with a maximum error of 6% [18]. Fedotkin et al. proposed a method based on comparing the optical properties inside and outside the indentation: a series of circles is generated that allows information to be obtained from the histogram and the Hue distributions inside and outside them; the circle closest to the indentation area is then selected, the value of the diagonals is determined, and finally the value of the hardness, with 95% confidence intervals [19]. On the other hand, Privezentsev et al. developed a method to estimate the hardness value using neural networks, concluding that image processing techniques should be combined with neural networks to obtain better results [20]. Likewise, Tanaka et al. implemented an automatic method based on convolutional neural networks (CNNs), using as inputs images with ideal indentations, surface roughness, distorted indentations, and cracks, to measure the indentation diagonals and Vickers hardness automatically and robustly, obtaining a maximum error of 6% according to the reported results [21]. Jalilian and Uhl applied deep learning techniques, using a fully convolutional network (FCN) to locate and segment the indentation trace, determine the position and value of the diagonals, and subsequently obtain the hardness value, showing high robustness to the size, location, and rotation of the indentation print, as well as to the brightness and surface defects contained in the image [22]. Li and Yin implemented a CNN to segment the indentation footprint from the image background and, in turn, used a bounding box to measure the length of the diagonals and determine the hardness value of different materials, reporting maximum relative errors for the diagonal length of 0.39% for TiO2, 1.67% for Cu, and 1.64% for Nylon [23]. Cheng et al.
used indentation images of medium-carbon chrome-molybdenum steel alloy (SCM 440) with a revealed microstructure, which makes it difficult to see the indentation trace. They implemented convolutional neural networks with different backbones to extract different features, obtaining the hardness value directly, with an absolute error of about 10.2 [24].
Although machine learning techniques such as neural networks have been applied to this problem, they require a large number of images for training and a high computational cost for processing. In general, hardness estimation using image processing and machine learning approaches presents difficulties when the indentation trace is not well defined or is noisy. Therefore, this paper proposes a new indentation measurement method that combines three different solutions, allowing for higher accuracy, reduced analysis time, and reproducible results. The method was implemented as a plugin for the ImageJ software, allowing for determining the Vickers hardness of the indentation mark of D2 steel.
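The hardness computation shared by all of these approaches follows directly from the measured diagonals. A minimal sketch of that computation is shown below; note that the method described in this paper was implemented as a Java plugin for ImageJ, so the Python code and function names here are illustrative only.

```python
import math

def vickers_hardness(force_kgf, d1_mm, d2_mm):
    """Vickers hardness from the two measured indentation diagonals.

    HV = 2 * P * sin(136 deg / 2) / D**2  ~  1.8544 * P / D**2,
    with the load P in kilograms-force and the mean diagonal D in mm.
    """
    d_mean = (d1_mm + d2_mm) / 2.0
    return 2.0 * force_kgf * math.sin(math.radians(136.0 / 2.0)) / d_mean ** 2

def newtons_to_kgf(force_n):
    """Convert an applied load in newtons (e.g. the 5 N and 10 N used here) to kgf."""
    return force_n / 9.80665
```

For example, a 1 kgf load leaving a 0.1 mm mean diagonal gives HV ≈ 185.4.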

Materials
For the development of this work, 28 color images of different sizes of D2 steel in the initial state, with quenching and tempering heat treatment, and with TiNbN coating were acquired with an optical microscope, as illustrated in Figure 2. An expert obtained the hardness value of each sample by zooming in on the image to determine the corners of the indentation more accurately. These values were used as the gold standard for the validation of the proposed method.
For the development of the application, an Intel Core (TM) i5 8300H CPU @ 2.30 GHz computer with 12 GB of RAM, running the x64 Windows 10 platform, was employed. The method was implemented in the Java language as a plugin for the free-access application ImageJ.

As shown in Figure 3, the indentation images have poorly defined edges and corners. In addition, there are elements (noise) in the background that do not correspond to the object of interest, which affects its detection, so the result depends on the perspective of the observer. Therefore, to improve accuracy and reduce measurement time and result variation, a method based on image processing and computer vision is proposed that automatically detects the indentation and estimates the Vickers hardness value. The flowchart describing the process is shown in Figure 4.

The corresponding algorithm works, in essence, as shown in the pseudocode below: the indentation region is obtained by thresholding the color channel with maximum entropy; the corners of the indentation are then obtained using three different methods, and the best one is selected according to the index proposed in this work:

Input: Color image Q
1. Find indentation mark:
   Get color channels R, G, B from Q
   Find channel A with maximum entropy
   Binarize A using the proposed thresholding method
   Fill holes and remove noise using mathematical morphology
   Label objects and select the largest one
2. Indentation corner detection:
   Using local maximum radius: find corners C_R, get quadrature index Q_R
   Using indentation perimeter: find corners C_P, get quadrature index Q_P
   Using Hough transform: find corners C_H, get quadrature index Q_H
3. Find best indentation result:
   Select the solution with the highest quadrature index
4. Calculate hardness estimation HV
Output: HV
Since color does not provide relevant information for the study of microindentation, the channel with the highest entropy was chosen as it contains the most information for further processing.
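The channel-selection step can be sketched as follows; this is a simplified Python illustration (the actual plugin is Java), where `shannon_entropy` and `max_entropy_channel` are illustrative names operating on flattened pixel lists.

```python
import math

def shannon_entropy(channel):
    """Shannon entropy (in bits) of the gray-level histogram of one channel."""
    hist = {}
    for v in channel:
        hist[v] = hist.get(v, 0) + 1
    n = float(len(channel))
    return -sum((c / n) * math.log2(c / n) for c in hist.values())

def max_entropy_channel(r, g, b):
    """Return the channel with the highest entropy, i.e. the most information."""
    return max((r, g, b), key=shannon_entropy)
```

A flat channel has entropy 0, a two-level channel at most 1 bit, and a channel with many distinct gray levels scores highest and is selected.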
Once the channel to be processed is selected, the region of interest (ROI) is separated from the background. As can be seen, the indentation area has a darker color with respect to the surrounding background. Therefore, the use of a thresholding method for segmentation is appropriate.
However, since the size of the ROI is small with respect to the image background and the gray levels of the background and the ROI are similar, the mode corresponding to the ROI is not clearly defined (see Figure 5). Therefore, the use of classical thresholding techniques such as Otsu or maximum entropy is not appropriate in this case, as they do not always give correct segmentation results (see Figure 6). Hence, a new thresholding method was developed based on the histogram characteristics. As can be seen, the mode corresponding to the ROI lies next to the one corresponding to the background and is not always distinguishable. Therefore, the following thresholding procedure was designed.
Initially, the highest value of the histogram is searched for, which identifies the mode (m_f) corresponding to the background. Since the mode corresponding to the ROI lies below m_f and is separated from m_f by a local minimum that appears between them, this value is identified as the upper threshold (th) of the mode (see Figure 5a). In other histograms, the ROI is much smaller and no mode corresponding to it appears; in this case, the threshold is detected by the change of slope, as shown in Figure 5b. The low threshold (tl) is located at the point where a new slope change is found. Once the image has been segmented, some holes are visible in the ROI; to fill them, a mathematical morphology operation called fill holes is used (see Figure 7). Because heat treatment processes were applied to the material, the image shows stains in the indentation zone, which translates into noise or unnecessary elements in the zone of interest (ROI). To reduce this noise, mathematical morphology techniques based on erosion and dilation operations were used, performing an opening in order to obtain the lowest possible noise without modifying the shape and size of the indentation mark, as seen in Figure 7.
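The search for the upper threshold can be sketched as below. This is a deliberately simplified illustration of the idea described above: histogram smoothing and the detection of the lower threshold tl via slope changes are omitted, and `upper_threshold` is an illustrative name, not the plugin's API.

```python
def upper_threshold(hist):
    """Locate the background mode m_f (global histogram peak), then walk
    toward darker gray levels until the first local minimum, which is taken
    as the upper threshold th. `hist` is a list of 256 bin counts."""
    m_f = max(range(len(hist)), key=lambda i: hist[i])
    for i in range(m_f - 1, 0, -1):
        # first gray level that is no higher than both of its neighbours
        if hist[i] <= hist[i - 1] and hist[i] <= hist[i + 1]:
            return i
    return 0
```

On a synthetic histogram with a background peak at gray level 200 falling to a valley at 195 before rising toward a darker ROI mode, the function returns 195.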
Since the indented area corresponds to the largest object, to obtain the indentation trace, all objects were labeled and the largest one, i.e., the one composed of the largest number of pixels, was selected, as seen in Figure 8. In order to evaluate the shape of the indentation mark, it is necessary to obtain the edges of the figure. For this purpose, mathematical morphology techniques are used, taking the difference between the area-filtered image and the same image eroded by one pixel (see Figure 9, first section).
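These two steps, keeping the largest labeled object and extracting its one-pixel boundary, can be sketched in pure Python on a binary mask; the function names below are illustrative, and a real implementation would use an image library rather than nested lists.

```python
from collections import deque

def largest_component(mask):
    """Label 4-connected foreground regions of a binary image (list of lists
    of 0/1) and return a mask containing only the largest object, as the
    method does to isolate the indentation mark."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    best = []
    for sy in range(h):
        for sx in range(w):
            if mask[sy][sx] and not seen[sy][sx]:
                comp, q = [], deque([(sy, sx)])
                seen[sy][sx] = True
                while q:  # breadth-first flood fill of one component
                    y, x = q.popleft()
                    comp.append((y, x))
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                if len(comp) > len(best):
                    best = comp
    out = [[0] * w for _ in range(h)]
    for y, x in best:
        out[y][x] = 1
    return out

def boundary(mask):
    """Edge = object minus its one-pixel erosion (4-neighbourhood)."""
    h, w = len(mask), len(mask[0])
    def eroded(y, x):
        return all(0 <= y+dy < h and 0 <= x+dx < w and mask[y+dy][x+dx]
                   for dy, dx in ((0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)))
    return [[1 if mask[y][x] and not eroded(y, x) else 0 for x in range(w)]
            for y in range(h)]
```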
The indentation mark can take different shapes depending on the type of material and the force exerted on it by the indenter. Because of this, in many cases it is not possible to approximate the shape of the indentation to a square. Therefore, three solutions are proposed to detect the corners of the indentation:
(a) Solution by local maximum radius: this solution works in the case where the shape of the indentation trace can be approximated to a square. The centroid of the figure is calculated and the four major diagonals are found, which correspond to the four corners. The distances from the pixels of the perimeter to the centroid of the figure are computed, so that a corner is identified wherever this distance is a local maximum in the function of distances with respect to the centroid. This strategy is a simplified version of the method used to recognize figures from the distances relative to the centroid [25].
(b) Solution by perimeter: in the case that the region of interest is affected by "noise", which may be due to the heat treatment applied or the type of material, this solution is useful.
(c) Solution by Hough transform: this solution has the advantage that the pixels do not need to be contiguous to determine a line; therefore, it favors detection under a certain level of noise. It also does not limit us to a single solution, as in the case of a linear regression; with this transform, multiple lines can be drawn, adjusting to the irregularity of the object of interest [26].
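Solution (a) can be sketched as follows on an ordered list of perimeter points; this is a simplified illustration (real implementations smooth the distance profile before searching for local maxima), and the function name is illustrative.

```python
import math

def corners_by_local_max_radius(perimeter):
    """Corners of a roughly square indentation: the perimeter points whose
    distance to the centroid is a (circular) local maximum. `perimeter` must
    be an ordered list of (x, y) boundary points."""
    n = len(perimeter)
    cx = sum(p[0] for p in perimeter) / n
    cy = sum(p[1] for p in perimeter) / n
    r = [math.hypot(x - cx, y - cy) for x, y in perimeter]
    # circular comparison with both neighbours along the perimeter
    candidates = [i for i in range(n)
                  if r[i] > r[(i - 1) % n] and r[i] >= r[(i + 1) % n]]
    # keep the four farthest local maxima as the corner estimates
    candidates.sort(key=lambda i: -r[i])
    return [perimeter[i] for i in candidates[:4]]
```

On the sampled perimeter of a perfect square, exactly the four true corners are returned.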
To select the best solution (see Figure 10), the quadrature index (Q) given by Equation (2) was used. The proposed index quantifies, from the coordinates of four corners, their correspondence to a square; it takes a value between zero and one, being one for a perfect square and decreasing toward zero as the shape moves away from the ideal case. Two error coefficients normalized between zero and one, the Maximum Absolute Error (MaAE) and the Dice Coefficient (DC), are used to take into account two characteristics of a square, the perimeter and the area, respectively. The MaAE is a normalized measure of the maximum absolute error, given by Equation (3). Knowing the coordinates of the four corners, the length of each side, s_i, and the average side length, s̄, are calculated. In this way, the value of MaAE reflects the maximum error made when estimating the sides as the perimeter divided into four parts. If all sides are equal, the error given by MaAE is zero, although this alone does not guarantee that the shape is indeed a square; for this reason, an estimation of the sides based on the area is also taken into account. The coefficient DC is proposed as a measure of the error associated with estimating the sides under the assumption that the area corresponds to a square; this side is denoted s_a and is equal to the square root of the area defined by the four corners. Formula (4), based on the Dice score formula, is used to calculate DC.
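Since Equations (2)–(4) are not reproduced in this excerpt, the sketch below is an assumption made for illustration: the MaAE normalization by the mean side and the exact way MaAE and DC combine into Q are guesses, and only the qualitative behavior stated in the text (Q = 1 for a perfect square, decreasing toward zero otherwise) is preserved.

```python
import math

def quadrature_index(corners):
    """Illustrative quadrature index combining a perimeter-based side error
    (MaAE-like) with a Dice-style area agreement term. `corners` must be four
    (x, y) points in traversal order (non-self-intersecting). The paper's
    actual Equations (2)-(4) may normalize and combine these differently."""
    n = len(corners)  # expected: 4
    sides = [math.dist(corners[i], corners[(i + 1) % n]) for i in range(n)]
    s_mean = sum(sides) / n
    maae = max(abs(s - s_mean) for s in sides) / s_mean
    # shoelace area -> side length s_a of the equivalent square
    area = abs(sum(corners[i][0] * corners[(i + 1) % n][1]
                   - corners[(i + 1) % n][0] * corners[i][1]
                   for i in range(n))) / 2.0
    s_a = math.sqrt(area)
    dc = 2 * min(s_mean, s_a) / (s_mean + s_a)  # Dice-style agreement in [0, 1]
    return (1.0 - maae) * dc
```

A unit square scores exactly 1, while a 2:1 rectangle scores noticeably lower, matching the intended ordering.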

Results and Discussion
Tables 1 and 2 show the Vickers hardness results obtained manually (M) and with the proposed method (A), as well as the Error Rate, defined in Equation (5), and the execution time of the algorithm for determining the corners of the indentation mark with the selected technique. To choose the best corner detection solution, the highest quadrature index is selected. In order to improve the accuracy of this selection, a threshold of 0.05 is set for the cases in which the index values are close to each other: for one of the solutions to be selected, its index value must be at least 0.05 above the other two quadrature indices. If this condition is not met, the local maximum radius solution (Q_R) is given priority. As shown in Tables 1 and 2, the proposed method presents an error between 0.21% and 4.45% when using the algorithm based on the local maximum radius, between 1.38% and 2.87% for the perimeter solution, and between 0.02% and 4.00% for the Hough transform.
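The selection rule just described can be stated compactly; the following Python sketch is illustrative (the plugin itself is Java), with the solution labels chosen here for readability.

```python
def select_solution(q_r, q_p, q_h):
    """Pick the corner-detection solution: the highest quadrature index wins
    only if it exceeds BOTH other indices by at least 0.05; otherwise the
    local-maximum-radius solution is used by default."""
    margin = 0.05
    indices = {"radius": q_r, "perimeter": q_p, "hough": q_h}
    best = max(indices, key=indices.get)
    others = [v for k, v in indices.items() if k != best]
    if all(indices[best] >= v + margin for v in others):
        return best
    return "radius"
```

For example, indices of (0.90, 0.96, 0.94) fall back to the radius solution because the 0.96 lead over 0.94 is under the 0.05 margin, whereas (0.80, 0.90, 0.99) selects the Hough solution outright.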
As already mentioned, manual measurement is prone to errors because the results are variable and depend on the observer, the condition of the samples, the difficulty in detecting corners, and the number of measurements to be performed. The time required to determine the hardness manually is an estimated 4 min per sample, including the identification and measurement of the diagonals under the microscope and the determination of the hardness. The proposed method reduces the measurement time, produces consistent results, and can be adapted to perform multiple measurements from images of the indentation marks. The proposed thresholding method achieves good separation of the indentation region in most cases, but it can fail if the acquisition protocol used is inadequate, producing poorly focused images, shadows, or a region of interest that is too small with respect to the background. Figure 11 illustrates cases for which each of the three solutions works. In the first case, when the edge detection processing is performed, a rather irregular figure is obtained, since the illumination gradient is taken into account when thresholding, which does not allow the corners to be defined correctly. In this case, the solution by local maximum radius would be imprecise, as would the solution by perimeter since, being evaluated from the centroid of the figure, it would take the deformities within it into account. The second case is the ideal one, since the figure is very close to a square, so the local maximum radius solution is automatically chosen by the algorithm.
In the third case, one of the corners shows a small deformation due to the characteristics of the material and its coating. If evaluated with the solution by local maximum radius, the tip of this deformation would be taken as a local maximum (see the edge detection section), since the evaluation is made from the centroid of the figure, making the calculation of the hardness of the material imprecise. If the Hough transform is used instead, more processing time is required.
Despite the fact that the images contain irregular surface features such as pores, scratches, or microdroplets, the algorithm had no problems determining the hardness of the material (see Appendix A, which shows the results obtained), validating its high robustness. Compared with other methods, which have been developed for marks whose shape can be approximated to a square, the proposed method offers three different solutions depending on the characteristics of the figure given by the indentation mark.
The execution time for the tested images averaged 2.05 s, taking into account that this time varies depending on the selected solution and the resolution of each image.

Conclusions
In this work, a new automatic method based on image processing is proposed to determine the Vickers hardness of AISI D2 steel in different conditions: with and without quenching and tempering heat treatment, and coated with titanium niobium nitride (TiNbN). The algorithm includes three options for determining the corners of the indentation pattern, automatically selecting the best one according to the calculated squareness indices, which allows for obtaining good results in the presence of background noise, such as spots or pores on the surface, and irregularities in the indentation pattern. The proposed method obtains reproducible Vickers hardness results in an average time of 2.05 s with an accuracy of 98.3% and a maximum error of 4.5% with respect to the values measured manually, used as the gold standard, surpassing the results achieved in previously published works.

Acknowledgments: The authors thank CIDESI (Centro de Ingeniería y Desarrollo Industrial), Querétaro, Qro., Mexico, for providing support in image acquisition.

Conflicts of Interest: The authors declare no conflict of interest.