Article

A Titanium Alloy Defect Detection Method Based on Optical–Acoustic Image Fusion

School of Information Science and Engineering, Harbin Institute of Technology (Weihai), Weihai 264209, China
*
Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(15), 8294; https://doi.org/10.3390/app15158294
Submission received: 12 June 2025 / Revised: 13 July 2025 / Accepted: 16 July 2025 / Published: 25 July 2025
(This article belongs to the Special Issue Industrial Applications of Laser Ultrasonics)

Featured Application

This paper proposes a new nondestructive testing method for titanium alloys that overcomes shortcomings of current testing technologies, such as reliance on a single signal mode and a limited detection range. Practical application has shown that the new method obtains comprehensive and accurate nondestructive testing results.

Abstract

Nowadays, a single detection method is insufficient for comprehensively and clearly identifying both surface defects and inner defects in titanium alloys. To address this limitation, this paper proposes a titanium alloy defect detection method based on optical–acoustic image fusion. A detection system was developed to achieve comprehensive and precise inspection of titanium alloys by integrating advanced deep learning-based optical testing technology, reliable C-scan ultrasonic detection technology, and information fusion techniques. Furthermore, the PC software can output interactive fusion results and generate decision-level detection reports. The experimental results demonstrate that the surface defect detection algorithm achieves an accuracy of 99.0%, with a surface defect size measurement resolution of 0.01 mm, an internal defect size measurement resolution of 1 mm, and a positional error within 2 mm. It was found that the proposed method provides a potential solution for the practical application of inspecting surface defects and inner defects in the materials.

1. Introduction

Titanium alloys are widely used in fields such as aerospace, medical equipment, and energy owing to their high strength, light weight, and corrosion resistance [1]. Additive manufacturing is a representative and popular technology in the field of intelligent manufacturing [2]. However, titanium alloys may be affected by harsh environments during production and service, resulting in surface or internal defects; when these defects occur in the same area, they can accelerate the failure of titanium alloys and lead to serious consequences. Therefore, nondestructive testing (NDT) of titanium alloy defects is key to solving these problems. It is noteworthy that the detection of crystalline defects is outside the scope of our study because of the extreme difficulty of detecting them with acoustic-based techniques.
In recent years, with the rapid development of artificial intelligence and machine vision, target detection technology has attracted widespread attention in academia and industry for metal surface defect detection [3]. Target detection can effectively overcome the limitations of manual inspection by improving detection efficiency and accuracy. In 2025, Xie [4] proposed LDE-YOLO, a lightweight and efficient real-time metal surface defect detection method based on YOLOv8. Compared with traditional approaches, this method offered high precision and efficiency, reaching an accuracy of 80.8% on NEU-DET. In 2024, Zhao [5] proposed a rail defect detection algorithm, YOLO-FCA, improved from YOLOv7, which offered stability, high efficiency, and accuracy. In the rail defect detection experiment, YOLO-FCA reached an accuracy of 80.7% and a detection speed of 212.5 FPS.
Internal metal defects are hard to find via conventional appearance inspection because of their concealment, and NDT has therefore developed rapidly in recent years [6]. Without damaging the components under test or affecting their performance, NDT examines the internal or surface structural state of components by physical or chemical means, using ultrasonic, radiographic, infrared, electromagnetic, and other instruments, and reports the location, size, and shape of any defects.
In 2024, Ban [7] proposed a laser ultrasonic microdefect detection method based on multi-channel optical fiber interference sensing. Using laser ultrasonic nondestructive testing technology, high-precision positioning and accurate size measurement of circular hole defects with a radius of 2 mm inside the metal were realized. In 2024, Seleznev [8] proposed an acoustic emission technology for detecting internal damage in metals. This technology combined the Fourier transform, adaptive filtering, and other techniques to detect internal defects in metals, effectively preventing material failure.
With the development of science and technology, multi-information fusion methods have received extensive attention from experts and scholars. In 2016, Sun [9] proposed a metal material defect detection method based on three signal modes (optical, photoacoustic, and ultrasonic), addressing the single-signal mode and limited detection range of existing metal NDT technology and realizing comprehensive structural health diagnosis of metal materials. In 2022, Canil [10] designed a joint system using millimeter wave radar and infrared imaging sensors. The system fuses and tracks the subject's torso and face information and realizes body temperature screening and contact tracing while protecting privacy. In 2024, Xiong [11] designed a target-level fusion system using millimeter wave radar and an infrared camera to address the sharp decrease in detection and tracking performance of unmanned combat vehicle sensing systems in battlefield environments due to factors such as smoke and dust.
Based on the aforementioned research, we propose a method for detecting defects in titanium alloys through optical–acoustic image fusion. On the one hand, a lightweight and efficient Ti-YOLO titanium alloy optical surface defect detection algorithm is designed to achieve optical surface defect testing, building upon the well-known YOLO framework [12,13,14,15,16,17,18,19]. On the other hand, internal defect imaging is achieved using a straightforward and effective ultrasonic C-scan technique for internal defect detection. Furthermore, an image information fusion algorithm that integrates these detection modalities was introduced to provide comprehensive and accurate nondestructive testing of titanium alloys. The complementary nature of the optical and acoustic detection mechanisms addresses the limitations inherent in existing methods. Additionally, this approach offers new insights for practical applications in detection engineering.

2. Principle

2.1. Optical Detection of Surface Defects

A titanium alloy is a silver-gray metal with a shiny surface. Linear defects such as cracks and scratches occur on its surface during the forging and use of titanium alloys. In addition, these surface defects have obvious gray-level differences compared with the background, as well as significant differences in shape and texture.
The optical testing process for surface defects on titanium alloys is shown in Figure 1. Specifically, the raw images captured by a charge-coupled device (CCD) industrial camera require image preprocessing [20,21,22,23,24], such as image rectification, image cropping, gray-scale conversion, median filtering, image sharpening, and so on.
In summary, image rectification corrects the distortion of image capture; image cropping removes the unnecessary background in the raw pictures; gray-scale conversion converts color images into grayscale images, which reduces the computational load of subsequent processing; median filtering removes background noise and highlights the defect targets, as shown in Equation (1); and image sharpening improves image quality, as shown in Equation (2).
g(x, y) = med{f(x − m, y − n)}, (m, n) ∈ w        (1)
In this equation, g(x, y) is the gray value of the optical image at position (x, y) after median filtering, f(x − m, y − n) is the gray value of the original optical image at position (x − m, y − n), and w is the two-dimensional filtering template.
g(x, y) = f(x + 1, y) + f(x − 1, y) + f(x, y + 1) + f(x, y − 1) − 4f(x, y)        (2)
In this equation, g(x, y) is the gray value of the sharpened image at position (x, y); f(x + 1, y), f(x − 1, y), f(x, y + 1), and f(x, y − 1) are the gray values of the original image at the four neighboring positions; and f(x, y) is the gray value of the original image at position (x, y).
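As a minimal sketch of the preprocessing steps in Equations (1) and (2), the following NumPy implementation applies a median filter and the 4-neighbour Laplacian. The 3 × 3 window size and edge padding are our assumptions, and in practice the Laplacian response is often added back to the original image to complete the sharpening:

```python
import numpy as np

def median_filter(f, w=3):
    """Equation (1): each output pixel is the median of a w x w window."""
    pad = w // 2
    padded = np.pad(f, pad, mode="edge")
    g = np.empty_like(f)
    for x in range(f.shape[0]):
        for y in range(f.shape[1]):
            g[x, y] = np.median(padded[x:x + w, y:y + w])
    return g

def laplacian_sharpen(f):
    """Equation (2): 4-neighbour Laplacian response of the grayscale image."""
    f = f.astype(np.int32)            # avoid uint8 overflow
    g = np.zeros_like(f)
    g[1:-1, 1:-1] = (f[2:, 1:-1] + f[:-2, 1:-1]
                     + f[1:-1, 2:] + f[1:-1, :-2]
                     - 4 * f[1:-1, 1:-1])
    return g

img = np.array([[10, 10, 10], [10, 200, 10], [10, 10, 10]], dtype=np.uint8)
print(median_filter(img))      # the isolated bright pixel is suppressed
print(laplacian_sharpen(img))  # strong negative response at the bright pixel
```

The median filter suppresses impulsive noise without blurring edges, which is why it precedes sharpening in the pipeline above.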
The preprocessed images are fed into the lightweight and efficient Ti-YOLO model for defect detection. This model is improved from the original YOLOv10n model, with its structure shown in Figure 2. Ti-YOLO is mainly composed of the backbone network, the neck network, and the detection head. Ti-YOLO performs multiple convolution operations on the input images through the backbone network to extract local features of the titanium alloy's surface defects. In the neck network, the module elements of the SlimNeck structure, including VoVGSCSP and ghost convolution (GSConv), effectively enhance the network's feature fusion capability, which improves the detection of small defects. Finally, the lightweight LiteHead module is adopted in the detection head, including depth-wise convolution (DWConv) and split-based convolution (SPConv), which improves the model's overall performance while reducing the computational load.
The module can output the detection image and the resulting document after defect testing, which includes the type of surface defects as well as the normalized position and length of each defect. In addition, the optical surface defect detection results of the titanium alloy are the foundation of the following image information fusion method.

2.2. Acoustic Detection of Internal Defects

Ultrasound is a mechanical wave that propagates in an elastic medium with a frequency above 20 kHz. It can be classified into shear waves, longitudinal waves, and surface waves according to the relationship between the particle's vibration direction and the wave's propagation direction. Because of its good directivity and high energy, ultrasound is widely used in many fields of NDT.
The principle of acoustic detection is closely related to the transmitting characteristics of acoustic waves in the medium. When defects occur inside titanium alloys, parameters such as material elastic modulus and density will change, resulting in interfaces with different acoustic impedances. In addition, the acoustic waves will reflect when encountering these interfaces.
During ultrasonic NDT via the reflection method, a probe is used to emit ultrasonic waves toward the titanium alloy plate; the transmitted pulse is described by Equation (3).
f(t) = A·exp(−(t − t_signal)² / (2 / f_center²))        (3)
In this equation, f(t) is the amplitude of the acoustic signal at time t, A is the peak amplitude of the acoustic signal, and t is the transmitting time of the ultrasonic wave. When internal defects are detected, t_signal is the moment at which the signal peak occurs, while f_center is the working center frequency of the ultrasonic probe.
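Equation (3) can be visualized with a short NumPy sketch. The grouping of the exponent as (t − t_signal)² / (2 / f_center²), equivalent to (t − t_signal)²·f_center²/2, follows our reading of the formula, and the amplitude, peak time, and center frequency below are illustrative values, not the authors' settings:

```python
import numpy as np

def pulse(t, A=1.0, t_signal=2e-6, f_center=5e6):
    """Gaussian pulse envelope of Equation (3); exponent grouping is assumed."""
    return A * np.exp(-(t - t_signal) ** 2 / (2.0 / f_center ** 2))

t = np.linspace(0.0, 4e-6, 2001)   # 4 us window, 2 ns sampling
f = pulse(t)
peak = t[np.argmax(f)]
print(f"peak at {peak:.2e} s")     # maximum occurs at t = t_signal
```

The envelope peaks at t_signal with amplitude A and narrows as f_center increases, matching the role of the probe's working center frequency in the text.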
Only the initial wave and the bottom wave appear when there are no defects in the object under inspection. In contrast, when internal defects exist, three kinds of reflected waves occur: the initial wave, the bottom wave, and the defect wave. The principle of ultrasonic NDT via the reflection method is shown in Figure 3.
With the data obtained from ultrasonic NDT, ultrasonic imaging can be carried out. Ultrasonic imaging is a technology that combines ultrasonic NDT with a manual or automatic scanning device to display the inner defects of the object as an image. When carrying out reflection NDT, the detection beam can only cover a small area at a time. By moving the probe with a manual or automatic scanning device and then reconstructing an image from the position and defect signal of each point, the defect distribution image of the entire workpiece can be obtained. Nowadays, there are three common ultrasonic imaging methods, namely the A-scan, B-scan, and C-scan, as shown in Figure 4.
In Figure 5, the result of the A-scan is displayed in the form of a waveform diagram. The horizontal axis represents the transmitting time of the acoustic wave, and the vertical axis represents the amplitude of the acoustic wave. The scanning result records the characteristics of the signal at a detection point. The result of the B-scan is an image of the cross-section perpendicular to the surface of the detecting object. The horizontal axis of the image represents the scanning trajectory of the probe on the detecting object, and the vertical axis represents the propagation time of the acoustic wave. The result of the C-scan is a top view of the defects of the detecting object being inspected. The horizontal axis represents the scanning distance along the X-axis, and the vertical axis represents the scanning distance along the Y-axis.
Although an A-scan is simple and easy to use, its result is not intuitive and lacks information. For ultrasonic NDT of the titanium alloy, a combination of A-scans and C-scans can be adopted to display the defect information of the object visually. Moreover, the probe frequency is closely related to the imaging quality: generally speaking, the higher the probe's working frequency, the higher the imaging sensitivity and quality.
Through ultrasonic C-scan detection, internal pore defects with different diameters can be visually displayed, and the plane coordinates, depth, diameter, and other information of each pore can be further obtained, laying the foundation for subsequent photoacoustic image information fusion. Specifically, the center coordinates and diameter of each pore can be obtained by image post-processing, including threshold segmentation and morphological processing. The buried depth of each pore defect can be calculated via Equation (4).
h = v_Ti × (t₂ − t₁) / 2        (4)
In this equation, v_Ti is the transmitting speed of ultrasonic waves in the titanium alloy plate, which is about 6100 m/s; t₁ is the moment at which the peak of the initial signal occurs; t₂ is the moment at which the peak of the bottom signal occurs; and h is the buried depth of the internal defect at this detection point.
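Equation (4) reduces to a one-line time-of-flight calculation. The following sketch uses the stated wave speed of 6100 m/s; the microsecond inputs are chosen only for illustration:

```python
def burial_depth_mm(t1_us, t2_us, v_ti=6100.0):
    """Equation (4): depth = v_Ti * (t2 - t1) / 2.
    t1_us, t2_us are peak times in microseconds; v_ti is in m/s."""
    dt_s = (t2_us - t1_us) * 1e-6   # microseconds -> seconds
    return v_ti * dt_s / 2.0 * 1000.0  # metres -> millimetres

# A back-wall echo delayed by about 4.9 us corresponds to a 15 mm plate:
print(burial_depth_mm(0.0, 4.918))  # ~15.0 mm
```

The division by two accounts for the round trip of the echo (down to the reflector and back to the probe).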

2.3. Technology of Optical–Acoustic Image Information Fusion

Belonging to the field of multi-source information fusion technology, optical–acoustic image information fusion operates on the result images of two different detection mechanisms, optical and ultrasonic [25,26,27,28,29,30]. The multi-source information fusion method is a signal processing method designed for systems with multiple sensors (or information sources), similar to the comprehensive information processing performed by the brain. The goal of information fusion is to obtain reliable and effective data. Information fusion technology offers a variety of advantages for the NDT of titanium alloys: it enhances monitoring and detection capabilities, reduces data ambiguity, overcomes the limitations of a single detection method, and yields comprehensive, accurate, and intuitive detection results.
The input of the information fusion algorithm includes the optical surface defect detection results and the acoustic internal defect detection results. Because the optical and acoustic detection mechanisms are two different modes, temporal and spatial image registration issues arise during fusion. Since both results are produced by offline post-processing, temporal registration can be ignored. For spatial registration, we propose a spatial image registration method based on coordinate mapping. The diagram of coordinate mapping is shown in Figure 6.
Specifically, the surface defect detection image of the titanium alloy and its key points are located in the plane of Cartesian coordinate system A; the acoustic internal defect imaging of the titanium alloy and its key points are located in the plane of Cartesian coordinate system B; and the background field of the fused detection image is located in the plane of Cartesian coordinate system C. Under this premise, coordinate mapping is performed between the optical surface defect detection image and the internal defect acoustic imaging of the titanium alloy to obtain their correct positions in Cartesian coordinate system C. Finally, defect symbols are drawn to obtain the optical–acoustic defect fusion detection result. The algorithm flow of optical–acoustic image fusion titanium alloy defect detection is shown in Algorithm 1.
Algorithm 1. The flow of optical–acoustic image fusion titanium alloy defect detection
(1) Input the optical surface detection image and data of the titanium alloy.
(2) Input the image and data of titanium alloy acoustic internal detection.
(3) Generate the fusion background field according to the surface detection image.
(4) The optical surface detection image and data results are vertically projected onto the fusion background field.
(5) The spatial registration of optical and acoustic detection image information fusion in the background field is completed by using coordinate mapping technology.
(6) Output fusion detection results.
The reasoning formulas of coordinate mapping are as follows:
A_sx / L_T = C_sx / D_1
A_sy / W_T = C_sy / D_2
B_ix / L_T = C_ix / D_1
B_iy / W_T = C_iy / D_2
In these equations, A_sx and A_sy are, respectively, the abscissa and ordinate of a point on the optical surface defect detection image of the titanium alloy part within planar rectangular coordinate system A; B_ix and B_iy are, respectively, the abscissa and ordinate of a point on the internal defect detection image of the titanium alloy part within planar rectangular coordinate system B; L_T and W_T are, respectively, the overall length and width of the titanium alloy part as represented in coordinate systems A and B; and D_1 and D_2 are, respectively, the real length and width of the titanium alloy part. C_sx and C_sy are, respectively, the abscissa and ordinate of a point on the optical surface defect detection image within planar rectangular coordinate system C of the photoacoustic fusion background field, and C_ix and C_iy are, respectively, the abscissa and ordinate of a point on the internal defect detection image within coordinate system C.
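Solving each mapping ratio for the corresponding coordinate in system C gives C_x = x · D_1 / L_T and C_y = y · D_2 / W_T. The sketch below applies this to a single point; the image and part dimensions are illustrative, not the experimental values:

```python
def map_to_fusion_frame(x, y, frame_len, frame_wid, d1, d2):
    """Coordinate mapping: x / L_T = C_x / D_1 and y / W_T = C_y / D_2,
    solved for the fusion-frame coordinates C_x, C_y (in millimetres)."""
    return x * d1 / frame_len, y * d2 / frame_wid

# A point at pixel (1323, 992) in a 2646 x 1984 px detection image of a
# 100 mm x 75 mm part maps to the centre of fusion frame C:
cx, cy = map_to_fusion_frame(1323, 992, 2646, 1984, d1=100.0, d2=75.0)
print(cx, cy)  # 50.0 37.5
```

The same function serves both coordinate systems A and B, since the mapping into system C differs only in the image used as input.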
The fusion algorithm outputs an interactive detection result image and a decision-level detection report, both of which play an important role in titanium alloy defect detection.

3. Experiment Systems

Surface defect detection relies on the optical detection system, which includes an industrial camera, a pallet, a bracket, and a PC, as shown in Figure 7.
In addition, the camera supports the acquisition of pictures with a maximum resolution of 8000 × 6000 pixels. The bracket makes it easy to adjust the object distance with a range of 50 cm. In addition, the PC has excellent performance and can train the YOLO detection models efficiently, as shown by the specific parameters in Table 1.
The main functions of the optical part of the detection PC 1.0 software include image acquisition, image correction, image cropping, image median filtering, Ti-YOLO detecting, and results saving, as shown in Figure 8.
Internal defect detection relies on the ultrasonic detection system, as shown in Figure 9. The ultrasonic detection system consisted of a programmable motion controller, a 3D guideway, a stepper motor, an encoder, a probe, an ultrasonic signal acquisition card, a PC, a tank, and so on.
Moreover, the effective scanning range of this system is 1000 × 600 × 700 mm, and the sampling rate of the ultrasonic board card can reach up to 100 MHz. The board card is connected to the PC through a Gigabit Ethernet port, and the interface of the data acquisition software is shown in Figure 10. This software is mainly used to cooperate with the board card and the motion device to achieve the timely and accurate collection of the echo signal of each detection point in ultrasonic detection. In addition, it also has a variety of functions such as real-time waveform display, as well as B-scan and C-scan detection.
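The C-scan imaging described above amounts to gating each A-scan in time and keeping the peak amplitude. The following sketch shows this on synthetic data; the scan grid, sample count, and gate are illustrative assumptions rather than the system's actual settings:

```python
import numpy as np

def gated_cscan(ascans, gate):
    """Build a C-scan amplitude map: for each (x, y) position, keep the
    peak absolute amplitude of the A-scan inside the time gate."""
    lo, hi = gate
    return np.abs(ascans[:, :, lo:hi]).max(axis=2)

rng = np.random.default_rng(0)
ascans = 0.05 * rng.standard_normal((4, 4, 500))  # 4 x 4 scan grid, 500 samples each
ascans[1, 2, 240] = 1.0                           # a strong defect echo in one cell
cmap = gated_cscan(ascans, gate=(200, 300))
print(int(cmap.argmax()))  # flat index 6, i.e. grid cell (1, 2)
```

Setting the gate between the initial wave and the bottom wave isolates defect echoes, which is the role of the gate-setting function mentioned below.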
The PC software ultrasonic detection part is responsible for processing the signals, as shown in Figure 11. Its functions mainly include data loading, signal cutting, signal filtering, gate setting, ultrasound imaging, image post-processing, results saving, and so on.
In addition, the optical–ultrasonic fusion part of the PC software is shown in Figure 12. Its functions mainly included loading optical images and data, loading ultrasonic images and data, optical–ultrasonic fusion, obtaining the test report, saving results, and so on.

4. Practical Experiment

4.1. Experiment Setup

In the optical detection part, it is essential to build a good dataset for training the Ti-YOLO detection model. To build the titanium alloy surface defect dataset, a total of 13 titanium alloy samples were prepared, each with a size of 150 mm × 150 mm × 5 mm.
There are artificial linear defects on the surface with lengths of 20 mm, 15 mm, and 10 mm; widths of 1.0 mm and 0.5 mm; and depths of 2.0 mm, 1.0 mm, and 0.5 mm, as shown in Figure 13.
Firstly, 140 raw images with a resolution of 1280 × 720 were acquired via the industrial camera in the optical detection system, and 2800 images were obtained through dataset augmentation. The dataset was then divided into three parts: a training set of 2240 images (80% of the dataset), a validation set of 280 images (10%), and a test set of 280 images (10%). The parameters of the deep learning training program are shown in Table 2.
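The 80/10/10 split described above can be sketched as follows; the filenames are placeholders for the augmented images, and the fixed seed is an assumption for reproducibility:

```python
import random

def split_dataset(items, seed=42):
    """Shuffle and split a list of samples into 80% train, 10% val, 10% test."""
    rng = random.Random(seed)
    items = items[:]                 # do not mutate the caller's list
    rng.shuffle(items)
    n = len(items)
    n_train, n_val = int(n * 0.8), int(n * 0.1)
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])

train, val, test = split_dataset([f"img_{i:04d}.png" for i in range(2800)])
print(len(train), len(val), len(test))  # 2240 280 280
```

Shuffling before splitting ensures that augmented variants of the defect types are distributed across all three subsets.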
For the acoustic internal defect detection of titanium alloys, many probes are available in the laboratory, including focused and non-focused probes with different working center frequencies. The specific parameters of the probes are shown in Table 3.
In addition, the object under inspection is a titanium alloy sample with a size of 260 mm × 200 mm × 15 mm. There are artificial holes with different diameters at different positions and depths, as shown in Figure 14.

4.2. Results and Analysis

After training under the above experimental conditions, the Ti-YOLO model had 2.6 million parameters and a computational cost of 7.3 billion operations, which are 3.7% and 13.1% lower, respectively, than those of the baseline YOLOv10n model. In addition, the detection accuracy of Ti-YOLO reached 99.0% with a detection speed of more than 160 FPS. The training results are shown in Figure 15 and Figure 16.
The training results above showed that the Ti-YOLO model had great performance. The optical detection result image and original data are shown in Figure 17 and Table 4. It can be seen that the three linear defects in the detection image are boxed and displayed. Taking the upper-left vertex of the entire image as the origin of the planar coordinate system, the three rows of the initial data matrix correspond to the three defects. In each row, the first value represents the defect type; the second and third values represent the normalized X and Y coordinates of the upper-left vertex of the target box; the fourth and fifth values represent the widths of the target box in the X and Y directions; and the sixth value represents the detection confidence. To obtain size data from the initial data, the pixel scale conversion must be clarified. Through multiple calibration experiments and averaging, 2646 pixels in the figure can be considered to correspond to 100 mm. In addition, there was 25% redundancy between the test results and the actual results, which should be taken into account.
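Converting one row of the normalized result matrix to millimetres requires only the image size in pixels and the stated calibration of 2646 pixels per 100 mm. The row values below are illustrative, and combining the 1280 × 720 image size with this calibration is our assumption:

```python
# Calibration stated above: 2646 px corresponds to 100 mm.
MM_PER_PX = 100.0 / 2646.0

def box_to_mm(row, img_w_px=1280, img_h_px=720):
    """Convert one result row [type, x, y, w, h, confidence] (x, y: normalized
    upper-left vertex; w, h: normalized box widths) to millimetres."""
    cls, x, y, w, h, conf = row
    to_mm = lambda n_norm, n_px: n_norm * n_px * MM_PER_PX
    return {"type": int(cls),
            "x_mm": to_mm(x, img_w_px), "y_mm": to_mm(y, img_h_px),
            "w_mm": to_mm(w, img_w_px), "h_mm": to_mm(h, img_h_px),
            "confidence": conf}

box = box_to_mm([0, 0.413, 0.25, 0.4134, 0.02, 0.97])  # illustrative row
print(round(box["w_mm"], 2))  # ~20.0 mm, matching a 20 mm artificial defect
```

The same conversion applies to each of the three rows in the result matrix, yielding the size and position values compared in Table 5.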
The comparison between the test results and the actual results (shown in bold) is given in Table 5. It can be seen that optical detection can accurately determine the type of surface defect, with an absolute error of no more than 3 mm in the defects' center position and no more than 1 mm in the defects' length. Thus, the optical detection system performed well within the permissible error.
In the future, measures should be adopted to reduce these errors. Specifically, errors in the defects' position are mainly caused by image rectification and image cropping and can be reduced by improving the image processing program. Errors in the defects' length are mainly caused by the target box selection of the Ti-YOLO model and can be reduced by optimizing the algorithm's parameters.
Acoustic detection imaging is shown in Figure 18. The plane rectangular coordinate system is established by taking the upper-left vertex of the image as the coordinate origin. It can be seen that the imaging inspection results show internal pore defects of different sizes, positions, and depths. After image post-processing, each defect is identified and labeled with a serial number. In addition, an information matrix associated with the detection image was generated, recording the type, position, diameter, and buried depth of each defect in detail.
The comparison between the real data and the test results (shown in bold) is given in Table 6. Clearly, the acoustic C-scan detection technology was efficient and accurate in judging the type of defect. In addition, the absolute error was no more than 1 mm in the defects' diameter and buried depth, and no more than 2 mm in the defects' center position. Thus, the acoustic detection system performed well within the permissible error.
The errors were mainly caused by the signal processing program. Specifically, the program cannot completely process the critical signals at the edges of defects. Therefore, measures should be adopted to improve the performance of the acoustic detection imaging program. In addition, it is vital to use a motion control system with high precision and flexibility.
As mentioned earlier, the optical surface defect detection task produces an information matrix that mainly includes the defect type, position, and length, while the acoustic internal defect detection task produces a data matrix that mainly includes the defect type, position, diameter, and buried depth. The optical–acoustic image information fusion technology took these optical and acoustic detection results as input and output interactive fusion results and a decision-level test report.
The interactive fusion results are shown in Figure 19. The fusion results have an interactive feature: the operator can easily obtain the type, position, size, buried depth, and other information of each defect with a mouse click. In addition, the fusion results were comprehensive and intuitive, playing an important role in the diagnosis of material structural health. The decision-level test report is shown in Figure 20. On the one hand, the test report included the specific detection information of each defect; on the other hand, it showed the risk rating of the detection object and guiding suggestions for subsequent production processes.

5. Discussion

The new nondestructive testing method for titanium alloys proposed in this paper overcomes the shortcomings of current testing technologies, such as reliance on a single signal mode and a limited detection range. The spatial registration of optical and acoustic detection was achieved through coordinate mapping, and comprehensive and accurate detection results were obtained. In summary, this study provides a new approach for the defect detection and health diagnosis of titanium alloys.
In the future, the method on the one hand can be useful for the preliminary identification of defects in complex manufacturing processes such as nanocoating deposition; on the other hand, the method can be useful for the preliminary identification of defects in aviation equipment and materials, such as rockets, airplanes, and missiles.
Admittedly, the defect detection method for titanium alloys proposed in our study has limitations. The detection object is limited to titanium alloys, and the method is poorly suited to complex-shaped parts. In addition, crystalline defects are extremely difficult to detect because the method is acoustic-based.

6. Conclusions

This work describes an optical–acoustic image fusion method used to identify both surface defects and inner defects in titanium alloys. In terms of optical detection, the Ti-YOLO algorithm was developed by enhancing the backbone, neck, and detection head of the YOLOv10n model. The Ti-YOLO algorithm is both efficient and lightweight, achieving an accuracy of 99.0% with a processing speed exceeding 160 FPS. Regarding acoustic detection, C-scan technology was employed to acquire detection data. Furthermore, we successfully resolved the issue of spatial registration between optical and acoustic images. The experimental results demonstrate that the proposed fusion detection method performs effectively and offers a promising approach for practical detection engineering. In the future, it is essential to develop optical–acoustic defect detection methods to minimize actual errors. On the one hand, greater emphasis should be placed on utilizing more advanced equipment. On the other hand, improvements in signal processing programs are necessary to meet the demands of emerging detection tasks.

Author Contributions

Conceptualization, Y.Z. and M.W.; methodology, Y.Z.; software, M.W.; validation, M.W., Y.H. and G.Z.; formal analysis, Y.H.; investigation, M.W.; resources, Y.Z.; data curation, Y.Z.; writing—original draft preparation, M.W.; writing—review and editing, M.W.; visualization, M.W.; supervision, Y.Z.; project administration, M.W.; funding acquisition, Y.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported in part by the National Natural Science Foundation of China under Grant No. 52275524, the Shandong Provincial Natural Science Foundation under Grant No. ZR2024MF082, and the Major Scientific and Technological Innovation Project of Shandong Province under Grant No. 2022ZLGX04.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author due to privacy restrictions.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
YOLO | You only look once
NDT | Nondestructive testing
CCD | Charge-coupled device

References

  1. Zhao, Y.Q.; Ge, P.; Xin, S.W. Research and development progress of titanium alloy materials in recent five years. Mater. China 2020, 39, 527–534. [Google Scholar]
  2. Wang, Y.; Zhou, X.F. Research front and trend of specific laser additive manufacturing techniques. Laser Technol. 2021, 45, 475–484. [Google Scholar]
  3. Bao, X.M.; Wang, S.Q. An overview of target detection algorithms based on deep learning. Transducer Microsyst. Technol. 2022, 41, 5–9. [Google Scholar]
  4. Xie, W.N.; Ma, W.F.; Sun, X.Y. An efficient re-parameterization feature pyramid network on YOLOv8 to the detection of steel surface defect. Neurocomputing 2025, 614, 128775. [Google Scholar] [CrossRef]
  5. Zhao, Y.F.; Song, W.H.; Liu, X.L. Rail defect detection method based on improved YOLOv7. Electron. Meas. Technol. 2024, 47, 177–185. [Google Scholar]
  6. Li, J.W.; Chen, J.M. Nondestructive Testing Manual, 1st ed.; China Machine Press: Beijing, China, 2001; pp. 2–15. [Google Scholar]
  7. Ban, R.; Zhang, R.F.; Tao, Z.Y. Laser ultrasonic nondestructive testing of small defects in metals using multi-channel fiber interference. Chin. J. Lasers 2024, 51, 152–160. [Google Scholar]
  8. De, S.; Gupta, K.; Stanley, R.J. A comprehensive structural analysis process for failure assessment in aircraft lap-joint mimics using intramodal fusion of eddy current data. Res. Nondestruct. Eval. 2012, 23, 146–170. [Google Scholar] [CrossRef]
  9. Canil, M.; Pegoraro, J.; Rossi, M. milliTRACE-IR: Contact tracing and temperature screening via mmWave and infrared sensing. IEEE J. Sel. Top. Signal Process. 2022, 16, 208–223. [Google Scholar] [CrossRef]
  10. Xiong, G.M.; Luo, Z.; Sun, D. Target detection and tracking of smoke obscured driverless vehicles based on fusion of infrared camera and millimeter wave radar. J. Mil. Ind. 2024, 45, 893–906. [Google Scholar]
  11. Sun, M.J.; Liu, T.; Cheng, X.Z. Nondestructive testing method of metal material defects based on multi-model signals. Acta Phys. Sin. 2016, 65, 227–240. [Google Scholar]
  12. Redmon, J.; Divvala, S.; Girshick, R. You only look once: Unified real-time object detection. In Proceedings of the Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 12 December 2016. [Google Scholar]
  13. Redmon, J.; Farhadi, A. YOLO9000: Better, faster, stronger. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 6517–6525. [Google Scholar]
  14. Redmon, J.; Farhadi, A. YOLOv3: An incremental improvement. arXiv 2018, arXiv:1804.02767. [Google Scholar]
  15. Bochkovskiy, A.; Wang, C.Y. YOLOv4: Optimal speed and accuracy of object detection. arXiv 2020, arXiv:2004.10934. [Google Scholar]
  16. Li, C.Y.; Jia, H.L. YOLOv6: A single-stage object detection framework for industrial applications. arXiv 2022, arXiv:2209.02976. [Google Scholar]
  17. Wang, C.Y.; Bochkovskiy, A.; Liao, H.Y.M. YOLOv7: Trainable bag-of-freebies sets new state of the art for real-time object detectors. arXiv 2022, arXiv:2207.02696. [Google Scholar]
  18. Wang, C.Y.; Ye, H.; Liao, H.Y.M. YOLOv9: Learning what you want to learn using programmable gradient information. In Proceedings of the European Conference on Computer Vision (ECCV), Milan, Italy, 29 September–4 October 2024. [Google Scholar]
  19. Wang, A. YOLOv10: Real-time end-to-end object detection. arXiv 2024, arXiv:2405.14458. [Google Scholar]
  20. Shen, H.W.; Fu, M.M. Optical element surface roughness measurement system based on image preprocessing. Laser J. 2024, 42, 252–256. [Google Scholar]
  21. Gong, J.Y.; Fu, W.H.; Liu, N.A. Design of preprocessing module for SAR image target contour enhancement. Syst. Eng. Electron. 2024, 46, 4010–4017. [Google Scholar]
  22. Wu, L.; Hao, H.Y.; Song, Y. Review of industrial metal surface defect detection based on computer vision. Acta Autom. Sin. 2024, 50, 1261–1283. [Google Scholar]
  23. Wang, X.C.; Peng, F.L.; Li, X.Y. Infrared target detection algorithm based on improved Faster R-CNN. J. Appl. Opt. 2024, 45, 346–353. [Google Scholar] [CrossRef]
  24. Qin, H.; Li, Y.J.; Liang, Q.K. Asymmetric geometric correction network for document images. J. Image Graph. 2023, 28, 2314–2329. [Google Scholar]
  25. Liu, T. Research on Multi-Object Tracking Technology Based on Infrared and Radar Fusion. Master’s Thesis, Harbin Institute of Technology, Harbin, China, 1 May 2023. [Google Scholar]
  26. Shang, H.; Sun, L.B.; Qin, W.H. Pedestrian detection at night based on fusion of infrared camera and millimeter wave radar. Chin. J. Sens. Actuators 2021, 34, 1137–1145. [Google Scholar]
  27. Kong, S.; Gan, L.; Wang, R. Target tracking algorithm of radar and infrared sensor based on multi-source information fusion. In Proceedings of the 2022 International Conference on Artificial Intelligence, Information Processing and Cloud Computing (AIIPCC), Kunming, China, 19–21 August 2022. [Google Scholar]
  28. Wang, Q.R.; Tian, X.Y.; Li, D.C. Multimodal soft jumping robot with self-decision ability. Smart Mater. Struct. 2021, 30, 085038. [Google Scholar] [CrossRef]
  29. Masalkhi, M.; Ong, J.; Waisberg, E. Google DeepMind’s gemini AI versus ChatGPT: A comparative analysis in ophthalmology. Eye 2024, 38, 1412–1417. [Google Scholar] [CrossRef] [PubMed]
  30. Sandler, M.; Howard, A.; Zhu, M. MobileNetV2: Inverted residuals and Linear Bottlenecks. arXiv 2018, arXiv:1801.04381. [Google Scholar]
Figure 1. Optical testing process.
Figure 2. Ti-YOLO structure.
Figure 3. Principle of reflection NDT.
Figure 4. Ultrasonic detection imaging methods in common use.
Figure 5. Results of three scanning methods. (a) A-scan; (b) B-scan; (c) C-scan.
Figure 6. The diagram of coordinate mapping.
Figure 7. Optical detection system.
Figure 8. Interface of the PC software optical detection part.
Figure 9. Ultrasonic detection system.
Figure 10. Ultrasonic signal acquisition software interface.
Figure 11. Interface of the PC software ultrasonic detection part.
Figure 12. Interface of the PC software optical–ultrasonic fusion part.
Figure 13. Titanium alloy samples.
Figure 14. Titanium alloy detection object. (a) The front of the detecting object. (b) The back of the detecting object.
Figure 15. The confusion matrix and P-R curve generated by Ti-YOLO training: (a) confusion matrix; (b) P-R curve.
Figure 16. Overview of Ti-YOLO training results.
Figure 17. Optical detection result image.
Figure 18. Acoustic detection imaging.
Figure 19. The interactive fusion results.
Figure 20. The decision-level test report.
Table 1. Software and hardware conditions of the PC for the optical detection system.

Setting | Parameter
Operating system | Windows 10
CPU | Intel Xeon Silver 4210R
GPU | NVIDIA RTX A5000
RAM | 24 GB
Deep learning framework | PyTorch 2.0.1
GPU general parallel computing architecture | CUDA 11.7
Programming language | Python 3.9
Neural network library | cuDNN 8.5.0
Table 2. Program running settings.

Setting | Parameter
Optimization algorithm | stochastic gradient descent
lr0 | 0.01
lrf | 0.01
Momentum | 0.937
Weight decay | 0.0005
Confidence | 0.25
IoU | 0.7
Workers | 8
Mosaic | off
Pretraining model | off
Epochs | 200
Batch size | 16
Image size | 640 × 640
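The hyperparameter names in Table 2 match those used by the Ultralytics YOLO training interface. The sketch below collects them into a configuration dictionary; the dataset file name and the commented-out training call are illustrative assumptions, not details from the paper:

```python
# Hyperparameters from Table 2, expressed as a training configuration.
train_cfg = dict(
    optimizer="SGD",       # stochastic gradient descent
    lr0=0.01,              # initial learning rate
    lrf=0.01,              # final learning-rate factor
    momentum=0.937,
    weight_decay=0.0005,
    workers=8,             # data-loading worker processes
    mosaic=0.0,            # mosaic augmentation off
    pretrained=False,      # no pretraining model
    epochs=200,
    batch=16,
    imgsz=640,             # 640 x 640 input images
)

# Hypothetical usage with the Ultralytics package and a made-up dataset file:
# from ultralytics import YOLO
# YOLO("yolov10n.yaml").train(data="ti_defects.yaml", **train_cfg)
```

The confidence (0.25) and IoU (0.7) entries in the table are inference-time thresholds rather than training hyperparameters, so they are kept separate from the training call.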
Table 3. The specific parameters of the probes.

No. | Type | Frequency (MHz) | Wafer Diameter (mm) | Focal Distance (mm)
1 | non-focused | 5 | 10 | —
2 | non-focused | 15 | 15 | —
3 | non-focused | 20 | 20 | —
4 | focused | 5 | 10 | 50
5 | focused | 10 | 10 | 60
6 | focused | 20 | 6 | 60
7 | focused | 25 | 6 | 25
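As a rule of thumb, a probe's axial resolution is about half the ultrasonic wavelength in the material, λ/2 = c/(2f), which is why the higher-frequency probes in Table 3 resolve smaller internal features. A sketch using a longitudinal sound velocity of about 6100 m/s for titanium alloy, which is a typical literature value assumed here, not a figure from the paper:

```python
def axial_resolution_mm(freq_mhz, velocity_m_s=6100.0):
    """Approximate axial resolution (half a wavelength) in millimetres.

    velocity_m_s is the longitudinal wave speed in the material;
    ~6100 m/s is a typical value for titanium alloys (an assumption).
    """
    wavelength_mm = velocity_m_s / (freq_mhz * 1e6) * 1000.0
    return wavelength_mm / 2.0


res_5mhz = axial_resolution_mm(5.0)    # ≈ 0.61 mm
res_20mhz = axial_resolution_mm(20.0)  # ≈ 0.15 mm
```

Under this estimate, the millimetre-scale internal defect resolution reported in the abstract is consistent with the 5–25 MHz probes listed above.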
Table 4. Original data for optical detection.

No. | Data 1 | Data 2 | Data 3 | Data 4 | Data 5 | Data 6
1 | 0 | 0.925583 | 0.579705 | 0.029259 | 0.067535 | 0.771795
2 | 0 | 0.925583 | 0.879900 | 0.020425 | 0.065595 | 0.733762
3 | 0 | 0.924944 | 0.728856 | 0.024445 | 0.064646 | 0.643998
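The value ranges in Table 4 have the shape of a normalized detection record, plausibly (class, x-center, y-center, width, height, confidence), although the paper does not label the columns. Under that reading, a normalized box extent converts to a physical length once the size of the imaged area is known; a sketch with a hypothetical field of view:

```python
def defect_length_mm(w_norm, h_norm, fov_w_mm, fov_h_mm):
    """Physical extent of a detection box normalized to [0, 1].

    fov_*_mm give the size of the imaged surface area; the 160 mm
    value used below is illustrative, not a calibrated figure.
    """
    return max(w_norm * fov_w_mm, h_norm * fov_h_mm)


# Row 1 of Table 4: normalized width 0.029259 and height 0.067535.
length = defect_length_mm(0.029259, 0.067535, 160.0, 160.0)
# length ≈ 10.8 mm
```

With the assumed field of view, the recovered length is close to the ~10 mm line defects reported in Table 5, which supports the normalized-record interpretation.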
Table 5. Comparison between optical detection data and real data.

No. | Type | Position (mm) | Length (mm)
1 | line defect | X = 241.19, Y = 114.32 | 10.82
  | line defect | X = 240, Y = 115 | 10
2 | line defect | X = 240.36, Y = 145.46 | 10.35
  | line defect | X = 240, Y = 145 | 10
3 | line defect | X = 239.17, Y = 177.40 | 10.50
  | line defect | X = 240, Y = 175 | 10
Absolute error | | δ ≤ 3 mm | δ ≤ 1 mm
Table 6. Comparison between the final results of internal defect detection and the real data.

No. | Type | Position (mm) | Diameter (mm) | Depth (mm)
1 | circle | X = 50, Y = 100 | 8 | 5
  | circle | X = 50, Y = 98 | 8 | 5
2 | circle | X = 100, Y = 100 | 6 | 5
  | circle | X = 99, Y = 98 | 7 | 5
3 | circle | X = 150, Y = 60 | 4 | 8
  | circle | X = 148, Y = 58 | 4 | 8
4 | circle | X = 150, Y = 100 | 4 | 5
  | circle | X = 148, Y = 98 | 4 | 5
5 | circle | X = 150, Y = 20 | 4 | 11
  | circle | X = 149, Y = 18 | 4 | 11
Absolute error | | δ ≤ 2 mm | δ ≤ 1 mm | δ ≤ 1 mm
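The absolute-error rows in Tables 5 and 6 report the worst-case deviation between detected and real values; that bookkeeping can be sketched as:

```python
def max_abs_error(detected, truth):
    """Worst-case absolute deviation between paired measurements."""
    return max(abs(d - t) for d, t in zip(detected, truth))


# Detected vs. real X positions of the five internal defects (Table 6).
detected_x = [50, 100, 150, 150, 150]
real_x = [50, 99, 148, 148, 149]
x_error = max_abs_error(detected_x, real_x)  # 2 mm, within the reported bound
```

Applying the same check to each column reproduces the per-column error bounds stated in the tables.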
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

Wang, M.; Zhao, Y.; Huang, Y.; Zhao, G. A Titanium Alloy Defect Detection Method Based on Optical–Acoustic Image Fusion. Appl. Sci. 2025, 15, 8294. https://doi.org/10.3390/app15158294