Article

Transfer Learning-Based Multi-Sensor Approach for Predicting Keyhole Depth in Laser Welding of 780DP Steel

1 Flexible Manufacturing R&D Department, Korea Institute of Industrial Technology, Incheon 21999, Republic of Korea
2 Department of Mechanical Convergence Engineering, Hanyang University, Seoul 04763, Republic of Korea
3 Department of Mechanical and Materials Engineering, Portland State University, Portland, OR 97229, USA
* Author to whom correspondence should be addressed.
Materials 2025, 18(17), 3961; https://doi.org/10.3390/ma18173961
Submission received: 19 July 2025 / Revised: 19 August 2025 / Accepted: 21 August 2025 / Published: 24 August 2025
(This article belongs to the Special Issue Advances in Plasma and Laser Engineering (Second Edition))

Abstract

Penetration depth is a critical factor determining joint strength in butt welding; however, it is difficult to monitor in keyhole-mode laser welding due to the dynamic nature of the keyhole. Recently, optical coherence tomography (OCT) has been introduced for real-time keyhole depth measurement, though accurate results require meticulous calibration. In this study, deep learning-based models were developed to estimate penetration depth in laser welding of 780 dual-phase (DP) steel. The models utilized coaxial weld pool images and spectrometer signals as inputs, with OCT signals serving as the output reference. Both uni-sensor models (based on coaxial pool images) and multi-sensor models (incorporating spectrometer data) were developed using transfer learning techniques based on pre-trained convolutional neural network (CNN) architectures including MobileNetV2, ResNet50V2, EfficientNetB3, and Xception. The coefficient of determination (R2) values of the uni-sensor CNN transfer learning models without fine-tuning ranged from 0.502 to 0.681, and the mean absolute errors (MAEs) ranged from 0.152 mm to 0.196 mm. In the fine-tuned models, R2 decreased by more than 17%, and MAE increased by more than 11% compared to the models without fine-tuning. In the multi-sensor models, R2 ranged from 0.900 to 0.956, and MAE ranged from 0.058 mm to 0.086 mm, showing better performance than the uni-sensor CNN transfer learning models. This study demonstrated the potential of using CNN transfer learning models for predicting penetration depth in laser welding of 780DP steel.

1. Introduction

Laser welding has become an essential joining technique in modern manufacturing due to its high energy density, narrow heat-affected zone (HAZ), and high processing speed [1]. This technology is particularly prevalent in the automotive industry, where it is widely used for joining advanced high-strength steels (AHSS) [2,3,4], such as dual-phase (DP) steels that offer an optimal balance between strength and ductility [2,5,6]. These materials enable the production of lightweight vehicle structures with improved crashworthiness [7,8]. To ensure the mechanical integrity of welded assemblies, it is critical to precisely control and monitor welding quality parameters—among which penetration depth is especially important [9,10], as it directly impacts the structural soundness and load-bearing capacity of the joint. However, in keyhole-mode laser welding, the dynamic and unstable nature of the keyhole significantly complicates real-time depth monitoring [11,12,13]. Traditional evaluation methods, such as destructive testing or radiographic inspection, though reliable, are labor-intensive, costly, and unsuitable for inline inspection in high-throughput industrial environments [14,15]. To overcome these limitations, optical coherence tomography (OCT) [16,17] has been introduced as a non-destructive, high-resolution technique capable of capturing the internal geometry of the keyhole during welding operations [18,19]. While OCT offers distinct advantages in mapping the keyhole morphology, its practical application is often hindered by noise susceptibility, signal degradation, and alignment sensitivity due to process-induced disturbances such as spatter [20] or keyhole fluctuations [21,22]. In response to these challenges, recent advances in deep learning have opened new possibilities for weld quality prediction [23,24]. 
In particular, convolutional neural networks (CNNs) [25,26], when combined with transfer learning strategies, have shown substantial promise in extracting complex features from high-dimensional sensor data—such as weld pool images and spectral signals—for accurate, real-time estimation of weld characteristics, including penetration depth [27,28,29,30,31].
This study presents a multi-sensor deep learning framework for predicting penetration depth during laser welding of 780DP steel. The proposed model employs coaxial camera images and spectrometer signals as inputs and uses calibrated OCT data as the ground truth. A comparative analysis is conducted using CNN-based transfer learning models—including MobileNetV2 [32,33], ResNet50V2 [34], EfficientNetB3 [35], and Xception [36]—under both uni-sensor and multi-sensor configurations. The results demonstrate that the multi-sensor approach significantly enhances prediction accuracy, highlighting its potential for real-time penetration depth estimation in industrial laser welding applications.

2. Experiments

The base material used in this study was 780 dual-phase (DP) steel with a thickness of 3.5 mm. Bead-on-plate (BOP) welding tests were conducted to evaluate penetration behavior. The chemical composition and mechanical properties of the steel are summarized in Table 1 and Table 2. Steel specimens were prepared as rectangular sheets measuring 50 mm in width and 150 mm in length, with a weld length of 100 mm. The laser system employed a fiber laser (YLS-6000, IPG Photonics, USA) with a beam quality of 2 mm·mrad and a maximum output power of 6 kW. Laser energy was delivered to the welding head through an optical fiber with a core diameter of 200 μm. The laser welding head was equipped with focusing optics (D30 Wobble, IPG Photonics, USA), an illumination laser system, an OCT sensor (LDD-700, IPG Photonics, USA), a charge-coupled device (CCD) camera (UI-3140CP Rev. 2, iDS, Germany), and a spectrometer (HR4000, Ocean Optics, USA) (Figure 1). The focal length of the optics was 200 mm, and the laser beam diameter at the specimen surface was approximately 270 μm. An illumination laser operating at a wavelength of 980 nm was used, with an output power of 100 W and an incident angle of 40° relative to the specimen’s surface. The OCT sensor, operating at a sampling frequency of 135 kHz, was used to monitor the depth of the keyhole in real time. Since the OCT system measures keyhole depth directly, a calibration procedure was conducted to correlate the OCT signal with actual penetration depth, enabling its use as the target output for model training. Images of the weld zone were captured by the CCD camera through a 980 ± 5 nm bandpass filter at a frame rate of 500 Hz. The spectrometer detected signals in the wavelength range of 200 to 1100 nm, with a sampling rate of 100 Hz and an optical resolution of 0.47 nm at half spatial resolution. Both the CCD camera and the spectrometer were optically aligned with the welding head using a dichroic mirror.
Table 3 provides details of the welding parameters used in the experiment. The welding speed was varied from 3 to 7 m/min in 1 m/min increments, and laser power was adjusted accordingly to produce welds with different penetration depths. Data were collected from the CCD camera, OCT sensor, and spectrometer at their respective sampling rates, and were synchronized to 100 Hz and 500 Hz datasets. The first and last 0.3 s of each weld sequence were excluded from model training to eliminate start/end edge effects.
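The edge trimming described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code; only the 0.3 s trim window and the sampling rates are taken from the text:

```python
import numpy as np

def trim_edges(signal, rate_hz, trim_s=0.3):
    """Drop the first and last `trim_s` seconds of a 1-D weld signal
    to eliminate start/end edge effects."""
    n = int(round(rate_hz * trim_s))
    return signal[n:len(signal) - n]

# Example: a 2 s trace sampled at 100 Hz loses 0.3 s (30 samples) at each end.
trace = np.arange(200)           # 200 samples at 100 Hz = 2 s
trimmed = trim_edges(trace, 100)
print(len(trimmed))              # 140 samples (1.4 s) remain
```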

3. Data Preprocessing and Models

3.1. Penetration Depth Calibration

The OCT signal was utilized to estimate the laser weld penetration depth by serving as a proxy for keyhole depth. A calibration process was conducted to establish the correlation between OCT measurements and the actual penetration depth. For this purpose, specimens welded under 15 discrete welding conditions were randomly sectioned to obtain coupons, and the penetration depth was measured through metallographic examination. The mean OCT signal values at the corresponding locations were then plotted against the measured penetration depths (Figure 2). A linear regression yielded a coefficient of determination (R2) of 0.8712, indicating a strong correlation.
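The calibration step amounts to an ordinary least-squares line fit between OCT readings and sectioned depths. A minimal sketch follows, using synthetic stand-in data since the 15 actual measurement pairs are not tabulated in the text:

```python
import numpy as np

# Synthetic stand-in for the 15 calibration pairs: mean OCT reading (mm)
# vs. metallographically measured penetration depth (mm).
rng = np.random.default_rng(0)
oct_mm = np.linspace(1.0, 3.5, 15)
depth_mm = 0.95 * oct_mm + 0.05 + rng.normal(0.0, 0.08, 15)

# Ordinary least-squares line: depth ~ slope * OCT + intercept
slope, intercept = np.polyfit(oct_mm, depth_mm, 1)
pred = slope * oct_mm + intercept

# Coefficient of determination R^2 (the paper reports 0.8712 for its data)
r2 = 1.0 - np.sum((depth_mm - pred) ** 2) / np.sum((depth_mm - depth_mm.mean()) ** 2)
```

With the fitted line in hand, every subsequent OCT sample can be mapped to a depth estimate, which is what allows OCT to serve as the training target.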

3.2. Data Preprocessing

The model inputs—CCD camera images and spectrometer signals—and the target output—penetration depth—were collected at different sampling frequencies. Specifically, data were acquired at 100 Hz and 500 Hz depending on the sensor, and separate models were trained and evaluated based on these two sampling rates. CCD images capturing the weld pool, keyhole, and bead were obtained at 500 Hz using a coaxial camera aligned with the laser optics. The image resolution was 292 × 480 pixels. To align with the 100 Hz sampling rate of other sensors, images were downsampled by averaging every five consecutive frames. Although this resampling process resulted in slightly blurred images, the key structural features remained distinguishable (Figure 3). Spectrometer signals were acquired at a sampling frequency of 100 Hz. For the 100 Hz model, these signals were used directly. For models operating at 500 Hz, spectrometer data were upsampled using the Fourier transform method to match the target frequency. Figure 4 illustrates key spectral features, including peaks at 1070 nm (laser wavelength), 980 nm (illumination laser), and approximately 838 nm (OCT reference). The OCT sensor operated at a high frequency of 135 kHz. To synchronize with the models’ sampling rates, OCT data were averaged over windows of 1350 and 270 samples, yielding effective rates of 100 Hz and 500 Hz, respectively. Figure 5 compares raw OCT data with their downsampled counterpart, demonstrating that essential signal characteristics were preserved despite slight attenuation.
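The three resampling operations described above (averaging five CCD frames, averaging OCT windows of 1350 or 270 samples, and Fourier-based upsampling of the spectrometer signal) can be sketched with NumPy. This is an illustrative reconstruction under the stated window sizes, not the authors' code:

```python
import numpy as np

def downsample_mean(x, factor):
    """Average every `factor` consecutive samples (or frames) along axis 0."""
    n = (len(x) // factor) * factor
    return x[:n].reshape(-1, factor, *x.shape[1:]).mean(axis=1)

def fourier_upsample(x, factor):
    """FFT-based resampling: zero-pad the spectrum, then invert."""
    n_out = len(x) * factor
    spec = np.fft.rfft(x)
    padded = np.zeros(n_out // 2 + 1, dtype=complex)
    padded[:len(spec)] = spec
    return np.fft.irfft(padded, n_out) * factor

# 500 Hz CCD frames (292 x 480 px) -> 100 Hz by averaging 5 frames
frames = np.ones((25, 292, 480))
print(downsample_mean(frames, 5).shape)    # (5, 292, 480)

# 135 kHz OCT samples -> 100 Hz by averaging windows of 1350 samples
oct_sig = np.arange(2700.0)
print(downsample_mean(oct_sig, 1350))      # window means: 674.5 and 2024.5
```

A constant spectrometer trace upsampled this way stays constant, which is a quick sanity check that the frequency-domain padding preserves signal level.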

3.3. Deep Learning Models

The deep learning framework included three types of models: (1) a baseline single-sensor transfer learning model, (2) a fine-tuned single-sensor model trained on both 100 Hz and 500 Hz data, and (3) a fine-tuned multi-sensor model designed to enhance prediction of laser weld penetration depth. The single-sensor model was based on a conventional convolutional neural network (CNN) architecture and was trained using CCD camera images sampled at either 100 Hz or 500 Hz. Transfer learning was applied using several pre-trained models, including MobileNetV2, ResNet50V2, EfficientNetB3, and Xception. For the 100 Hz configuration, CCD images downsampled to 100 Hz served as inputs, and the corresponding downsampled penetration depth data were used as outputs (Figure 6a). In the fine-tuned single-sensor model (Figure 6b), the last block of the pre-trained architecture was adjusted during training. Both original 500 Hz CCD images and their 100 Hz downsampled versions were used as inputs, with penetration depth data processed similarly via upsampling or downsampling to match. The multi-sensor model (Figure 6c) extended the fine-tuned single-sensor framework by integrating spectrometer data sampled at 100 Hz as an additional input. This fusion of CCD imagery and spectrometer signals aimed to improve model accuracy through sensor complementarity. All models employed the Rectified Linear Unit (ReLU) as the activation function for both hidden layers and output nodes.
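As an illustration of the multi-sensor fusion idea (not the authors' actual network), the sketch below concatenates a pooled CNN image-feature vector with a spectrometer feature vector and regresses a single depth value through a small ReLU head; the feature dimensions and weight shapes are assumptions chosen for the example:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def fused_forward(img_feat, spec_feat, W1, b1, W2, b2):
    """Concatenate CNN image features with spectrometer features, pass
    through one ReLU hidden layer, and regress a single depth value.
    ReLU is also applied at the output, matching the paper's setup
    (penetration depth is non-negative)."""
    x = np.concatenate([img_feat, spec_feat])
    h = relu(W1 @ x + b1)
    return relu(W2 @ h + b2)

rng = np.random.default_rng(1)
img_feat = rng.random(1280)     # e.g. MobileNetV2 pooled features (assumed size)
spec_feat = rng.random(64)      # reduced spectrometer vector (assumed size)
W1 = rng.normal(0, 0.01, (128, 1344)); b1 = np.zeros(128)
W2 = rng.normal(0, 0.01, (1, 128));    b2 = np.zeros(1)
depth = fused_forward(img_feat, spec_feat, W1, b1, W2, b2)
```

In the actual models, the image branch is a pre-trained backbone (frozen in the baseline configuration, partially unfrozen in the fine-tuned one) rather than fixed random weights.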

3.4. Dataset and Optimization Method

Following data collection and preprocessing, a total of 1769 and 8845 data samples were obtained for the 100 Hz and 500 Hz datasets, respectively. These datasets were randomly split into training, validation, and testing subsets in a 70:15:15 ratio to ensure balanced model evaluation. Model training was conducted using the Adam optimization algorithm. The optimizer was configured with the following hyperparameters: learning rate = 0.001, β1 = 0.9, β2 = 0.999, and ε = 1 × 10−8. Each model was trained over 1000 epochs to ensure convergence.
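The 70:15:15 random split of the 1769-sample 100 Hz dataset can be sketched as follows (illustrative only; the seed and exact split mechanics are assumptions):

```python
import numpy as np

def split_indices(n, seed=0):
    """Random 70/15/15 train/validation/test split of n sample indices."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)
    n_tr = int(0.70 * n)
    n_va = int(0.15 * n)
    return idx[:n_tr], idx[n_tr:n_tr + n_va], idx[n_tr + n_va:]

train, val, test = split_indices(1769)    # the 100 Hz dataset size
print(len(train), len(val), len(test))    # 1238 265 266
```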

4. Results

The deep learning model based on the MobileNetV2 transfer learning architecture demonstrated rapid convergence of training loss during the initial epochs, as illustrated in Figure 7. Across various input configurations, the model yielded mean absolute error (MAE) values ranging from 0.086 mm to 0.180 mm. In addition to the MAE, the mean absolute percentage error (MAPE) was calculated to represent the prediction error as a percentage of the actual penetration depth, thereby providing a more intuitive assessment of the model’s accuracy. The MAPE is defined as the mean of the absolute differences between the predicted and actual values, normalized by the actual values and expressed as a percentage. Among the evaluated models, the fine-tuned multi-sensor configuration achieved the lowest MAE and the highest coefficient of determination (R2 = 0.901), indicating superior prediction accuracy. Although both training and validation datasets showed relatively low errors, the test dataset exhibited increased error and a decline in R2, suggesting potential overfitting or limited generalizability. In fine-tuned single-sensor models, a sharp increase in loss was observed during early training, which stabilized after approximately 150 epochs.
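The two error metrics used throughout the results can be computed as follows; this is the standard definition of each (the example depths are illustrative, not measured values):

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean absolute error, in the same units as the depths (mm)."""
    return np.mean(np.abs(y_true - y_pred))

def mape(y_true, y_pred):
    """Mean absolute percentage error: absolute errors normalized by the
    actual values and expressed as a percentage, as defined in the text."""
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

y_true = np.array([2.0, 2.5, 3.0])   # illustrative penetration depths in mm
y_pred = np.array([2.1, 2.4, 3.1])
print(round(mae(y_true, y_pred), 3))   # 0.1 (mm)
print(round(mape(y_true, y_pred), 2))  # percentage error
```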
In comparison, models based on the ResNet50V2 architecture exhibited a more gradual reduction in training loss, reaching stable convergence after around 200 epochs. As shown in Figure 8, the ResNet50V2 model maintained relatively stable loss from the beginning, even when fine-tuning was applied. For this model, MAEs ranged from 0.085 mm to 0.180 mm, and R2 values ranged from 0.564 to 0.900.
The EfficientNetB3-based models produced the lowest training errors overall. The MAEs ranged from 0.049 mm to 0.152 mm, and the R2 values spanned 0.681 to 0.956, as shown in Figure 9. This result demonstrates that EfficientNetB3 offered improved predictive performance relative to other architectures.
Similarly, models using the Xception architecture showed consistent convergence behavior after a certain number of epochs (Figure 10). These models yielded MAEs between 0.073 mm and 0.196 mm, and R2 values between 0.502 and 0.927.
Outliers were identified during model training. For instance, in the ResNet50V2 model, data points #510, #741, #817, and #1106 deviated significantly from the ideal x = y line in the scatter plots (Figure 11). These anomalies were attributed to imaging artifacts such as spatter, shadows, or scratches that caused inaccuracies in OCT signal acquisition. These errors were addressed during data refinement. Overall, comparative analysis revealed that MobileNetV2 models often achieved the lowest MAEs and highest R2 values—up to 0.998 in some training cases.
Among all configurations, the single-sensor model trained on 500 Hz CCD data achieved the highest training accuracy (Table 4). However, validation performance was notably lower, with MAEs increasing by factors of 6 to 25 and R2 decreasing by 5% to 48% (Table 5), indicating generalization limitations. Despite this, all architectures showed consistent trends between training and validation. The EfficientNetB3 model demonstrated the most robust accuracy, while the multi-sensor model using 100 Hz data produced the most balanced and generalizable performance (Table 6). Figure 12 and Figure 13 present box plots comparing model performance. EfficientNetB3 had the narrowest interquartile range (−0.05347 mm to 0.04045 mm), whereas MobileNetV2 exhibited the widest spread (−0.08005 mm to 0.06178 mm). Although ResNet50V2 presented more frequent outliers, the largest deviations were observed in MobileNetV2 results. In general, multi-sensor models with fine-tuning yielded narrower error distributions, particularly when operating on 100 Hz data.

5. Discussion

The most accurate model developed in this study achieved a mean absolute error (MAE) of 0.049 mm in predicting penetration depth during laser welding. Given the base material thickness of 3.5 mm, this corresponds to an error margin of approximately 1.4%, underscoring the high precision of the proposed approach. This research explored four distinct transfer learning architectures; however, many additional deep learning models remain to be evaluated in future work. Among the evaluated architectures, EfficientNetB3 consistently achieved the highest prediction accuracy, while the multi-sensor configuration outperformed single-sensor counterparts overall [26]. Although increasing the size of the training dataset generally enhances model accuracy, it also significantly prolongs training time. Therefore, model selection should be informed by the constraints and requirements of specific industrial applications. Additionally, due to varying sampling rates across different sensors, rigorous preprocessing and synchronization are necessary. In this study, the spectrometer signal—despite being a relative measurement that is sensitive to process conditions—proved effective as a supplementary input for assessing weld quality [29,31]. By averaging every five images into one, the CCD image data were downsampled from 500 Hz to 100 Hz, ensuring compatibility across input datasets without losing relevant information. In some cases, models trained on CCD data sampled at 100 Hz achieved higher prediction accuracy than those trained on 500 Hz data. This improvement can be attributed to the reduction of noise and extraneous fluctuations inherent in higher-frequency image acquisition. Thus, the 100 Hz dataset retained sufficient informational content while mitigating noise effects, leading to enhanced performance in certain model architectures. Furthermore, the high acquisition frequency of the OCT sensor (135 kHz) was instrumental in reducing the effects of keyhole instability, a known challenge in laser welding processes.
In certain fine-tuned configurations, a reduction in prediction accuracy was observed. This phenomenon can be attributed to overfitting, where the model became overly specialized to the characteristic weld pool morphology of 780DP steel present in the training dataset. As a result, its ability to generalize to variations in weld pool appearance—caused by process instabilities, sensor noise, or differing visual features—was diminished, leading to reduced performance on unseen data. Most notably, this study demonstrated that deep learning models can accurately predict penetration depth without the need for destructive testing or radiographic inspection, offering a viable alternative for real-time, non-destructive weld quality assessment. It is anticipated that significant changes in surface conditions, such as oxidation, contamination, or increased roughness, could degrade sensor signal quality and reduce prediction accuracy, as evidenced by spatter-induced artifacts that introduced outliers during model training.
Beyond penetration depth prediction, the proposed multi-sensor framework has strong potential to be extended for comprehensive weld quality assessment. Process instabilities in keyhole-mode laser welding—such as spatter generation, porosity formation, and undercut—are often correlated with the same physical phenomena that influence penetration depth. By incorporating additional labeled datasets containing these defect types, the current model architecture could be adapted for multi-target learning, enabling simultaneous prediction of penetration depth and defect occurrence [37]. Moreover, the integration of supplementary sensing modalities, such as high-speed imaging or acoustic emission monitoring, could provide richer process signatures, thereby improving the model’s capability to capture transient anomalies and subtle defect precursors. Such an extended system would contribute to real-time, inline weld quality monitoring, reducing reliance on destructive or radiographic testing and facilitating the implementation of closed-loop process control. Schmoeller et al. (2022) demonstrated an inline OCT-based control system capable of real-time weld depth regulation using closed-loop feedback, suggesting that similar frameworks could be adapted for industrial deployment [38].
While the experimental methodology demonstrated in this study was validated on bead-on-plate laser welding of 780DP steel, its direct applicability to materials with substantially different thermal or mechanical properties, or to markedly different welding conditions, remains to be verified. Consequently, the conclusions drawn here should be interpreted within the scope of the tested material and process parameters. Future research will aim to extend the framework to a broader range of materials and welding setups to assess its generalizability.

6. Conclusions

In this study, a deep learning model incorporating transfer learning techniques was developed to predict penetration depth in bead-on-plate (BOP) laser welding of 780DP steel. The model utilized CCD camera images and spectrometer signals as input features, while the OCT signal—correlated with actual weld depth—was used as the output reference. Both single-sensor and multi-sensor configurations were explored using preprocessed datasets.
The following conclusions can be drawn:
(1)
A transfer learning-based deep learning model was successfully implemented for predicting weld penetration depth. The best-performing configuration achieved a mean absolute error (MAE) of 0.049 mm and a coefficient of determination (R2) of 0.951, corresponding to approximately 1.4% error relative to the material thickness—indicating high prediction accuracy.
(2)
CCD imagery and spectrometer signals were found to be effective input features. The use of a bandpass filter and illumination laser enhanced the quality and reliability of the captured images. Additionally, the high-frequency OCT sensor provided robust reference measurements, minimizing keyhole instability and contributing to the model’s strong performance.
(3)
The experimental methodology, although validated only on BOP testing of 780DP steel, shows potential for extension to other steel materials and welding conditions; this generalizability remains to be verified in future work.
This research demonstrates the feasibility of accurate, real-time prediction of weld penetration depth using deep learning and sensor fusion—offering a promising alternative to conventional destructive or radiographic testing methods.

Author Contributions

Conceptualization, C.K.; Methodology, B.-J.K. and C.K.; Investigation, B.-J.K.; Resources, Y.-M.K.; Data curation, Y.-M.K.; Writing—original draft, B.-J.K.; Writing—review & editing, C.K.; Supervision, Y.-M.K.; Funding acquisition, C.K. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Technology Innovation Program (RS-2024-00442314, Field test demonstration of developed high-speed SPOT welding system for Aluminum BIW) funded by the Ministry of Trade, Industry and Energy (MOTIE, Republic of Korea).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Khan, M.S.; Soleimani, M.; Midawi, A.R.H.; Aderibigbe, I.; Zhou, Y.N.; Biro, E. A review on heat affected zone softening of dual-phase steels during laser welding. J. Manuf. Process. 2023, 102, 663–684. [Google Scholar] [CrossRef]
  2. Xia, M.; Biro, E.; Tian, Z.; Zhou, Y.N. Effects of Heat Input and Martensite on HAZ Softening in Laser Welding of Dual Phase Steels. ISIJ Int. 2008, 48, 809–814. [Google Scholar] [CrossRef]
  3. Zhao, Y.Y.; Zhang, Y.S.; Hu, W. Effect of welding speed on microstructure, hardness and tensile properties in laser welding of advanced high strength steel. Sci. Technol. Weld. Join. 2013, 18, 581–590. [Google Scholar] [CrossRef]
  4. de Andrade Ferreira, C.C.; Braga, V.; de Siqueira, R.H.M.; de Carvalho, S.M.; de Lima, M.S.F. Laser beam welding of DP980 dual phase steel at high temperatures. Opt. Laser Technol. 2020, 124, 105964. [Google Scholar] [CrossRef]
  5. Biro, E.; Vignier, S.; Kaczynski, C.; McDermid, J.R.; Lucas, E.; Embury, J.D.; Zhou, Y.N. Predicting Transient Softening in the Sub-Critical Heat-Affected Zone of Dual-Phase and Martensitic Steel Welds. ISIJ Int. 2013, 53, 110–118. [Google Scholar] [CrossRef]
  6. Senuma, T. Physical metallurgy of modern high strength steel sheets. ISIJ Int. 2001, 41, 520–532. [Google Scholar] [CrossRef]
  7. Lee, H.; Kim, C.; Song, J.H. An evaluation of global and local tensile properties of friction-stir welded DP980 dual-phase steel joints using a digital image correlation method. Materials 2015, 8, 8424–8436. [Google Scholar] [CrossRef] [PubMed]
  8. Perka, A.K.; John, M.; Kuruveri, U.B.; Menezes, P.L. Advanced High-Strength Steels for Automotive Applications: Arc and Laser Welding Process, Properties, and Challenges. Metals 2022, 12, 1051. [Google Scholar] [CrossRef]
  9. Blecher, J.J.; Galbraith, C.M.; Van Vlack, C.; Palmer, T.A.; Fraser, J.M.; Webster, P.J.L.; DebRoy, T. Real time monitoring of laser beam welding keyhole depth by laser interferometry. Sci. Technol. Weld. Join. 2014, 19, 560–564. [Google Scholar] [CrossRef]
  10. Wang, Z.; Shi, Y.; Cui, Y.; Yan, W. Three-Dimensional Weld Pool Monitoring and Penetration State Recognition for Variable-Gap Keyhole Tungsten Inert Gas Welding Based on Stereo Vision. Sensors 2024, 24, 7591. [Google Scholar] [CrossRef]
  11. Kroos, J.; Gratzke, U.; Vicanek, M.; Simon, G. Dynamic behaviour of the keyhole in laser welding. J. Phys. D Appl. Phys. 1993, 26, 481. [Google Scholar] [CrossRef]
  12. Matsunawa, A.; Kim, J.-D.; Seto, N.; Mizutani, M.; Katayama, S. Dynamics of keyhole and molten pool in laser welding. J. Laser Appl. 1998, 10, 247–254. [Google Scholar] [CrossRef]
  13. Matsunawa, A.; Mizutani, M.; Katayama, S.; Seto, N. Porosity formation mechanism and its prevention in laser welding. Weld. Int. 2003, 17, 431–437. [Google Scholar] [CrossRef]
  14. Stavridis, J.; Papacharalampopoulos, A.; Stavropoulos, P. Quality assessment in laser welding: A critical review. Int. J. Adv. Manuf. Technol. 2017, 94, 1825–1847. [Google Scholar] [CrossRef]
  15. Kim, C.-H.; Ahn, D.-C. Coaxial monitoring of keyhole during Yb:YAG laser welding. Opt. Laser Technol. 2012, 44, 1874–1880. [Google Scholar] [CrossRef]
  16. Ji, Y.; Grindal, A.W.; Webster, P.J.; Fraser, J.M. Real-time depth monitoring and control of laser machining through scanning beam delivery system. J. Phys. D Appl. Phys. 2015, 48, 155301. [Google Scholar] [CrossRef]
  17. Sokolov, M.; Franciosa, P.; Sun, T.; Ceglarek, D.; Dimatteo, V.; Ascari, A.; Fortunato, A.; Nagel, F. Applying optical coherence tomography for weld depth monitoring in remote laser welding of automotive battery tab connectors. J. Laser Appl. 2021, 33. [Google Scholar] [CrossRef]
  18. Stadter, C.; Schmoeller, M.; Zeitler, M.; Tueretkan, V.; Munzert, U.; Zaeh, M.F. Process control and quality assurance in remote laser beam welding by optical coherence tomography. J. Laser Appl. 2019, 31, 022408. [Google Scholar] [CrossRef]
  19. Patterson, T.; Panton, B.; Lippold, J. Analysis of the laser welding keyhole using inline coherent imaging. J. Manuf. Process. 2022, 82, 601–614. [Google Scholar] [CrossRef]
  20. Robertson, S.M.; Kaplan, A.F.; Frostevarg, J. Material ejection attempts during laser keyhole welding. J. Manuf. Process. 2021, 67, 91–100. [Google Scholar] [CrossRef]
  21. Volpp, J.; Vollertsen, F. Keyhole stability during laser welding—Part I: Modeling and evaluation. Prod. Eng. 2016, 10, 443–457. [Google Scholar] [CrossRef]
  22. Volpp, J. Keyhole stability during laser welding—Part II: Process pores and spatters. Prod. Eng. 2017, 11, 9–18. [Google Scholar] [CrossRef]
  23. Lee, K.; Yi, S.; Hyun, S.; Kim, C. Review on the recent welding research with application of CNN-based deep learning—Part 1: Models and applications. J. Weld. Join. 2021, 39, 10–19. [Google Scholar] [CrossRef]
  24. Lee, K.; Yi, S.; Hyun, S.; Kim, C. Review on the recent welding research with application of CNN-based deep learning—Part II: Model evaluation and visualizations. J. Weld. Join. 2021, 39, 20–26. [Google Scholar] [CrossRef]
  25. Lee, K.; Kang, S.; Kang, M.; Yi, S.; Kim, C. Estimation of Al/Cu laser weld penetration in photodiode signals using deep neural network classification. J. Laser Appl. 2021, 33, 042009. [Google Scholar] [CrossRef]
  26. Kang, S.; Lee, K.; Kang, M.; Jang, Y.H.; Kim, C. Weld-penetration-depth estimation using deep learning models and multisensor signals in Al/Cu laser overlap welding. Opt. Laser Technol. 2023, 161. [Google Scholar] [CrossRef]
  27. Khumaidi, A.; Yuniarno, E.M.; Purnomo, M.H. Welding defect classification based on convolution neural network (CNN) and Gaussian kernel. In Proceedings of the 2017 International Seminar on Intelligent Technology and Its Applications (ISITIA), Surabaya, Indonesia, 28–29 August 2017; pp. 261–265. [Google Scholar]
  28. Jiao, W.; Wang, Q.; Cheng, Y.; Zhang, Y. End-to-end prediction of weld penetration: A deep learning and transfer learning based method. J. Manuf. Process. 2020. [Google Scholar] [CrossRef]
  29. Kang, S.; Kang, M.; Jang, Y.H.; Kim, C. Deep learning-based penetration depth prediction in Al/Cu laser welding using spectrometer signal and CCD image. J. Laser Appl. 2022, 34, 042035. [Google Scholar] [CrossRef]
  30. Li, C.; Wang, Q.; Jiao, W.; Johnson, M.; Zhang, Y. Deep learning-based detection of penetration from weld pool reflection images. Weld. J. 2020, 99, 239S–245S. [Google Scholar] [CrossRef]
  31. Kang, S.; Kang, M.; Jang, Y.H.; Kim, C. Spectrometer as a quantitative sensor for predicting the weld depth in laser welding. Opt. Laser Technol. 2024, 175, 110855. [Google Scholar] [CrossRef]
  32. Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv 2017, arXiv:1704.04861. [Google Scholar] [CrossRef]
  33. Sandler, M.; Howard, A.G.; Zhu, M.; Zhmoginov, A.; Chen, L.-C. MobileNetV2: Inverted residuals and linear bottlenecks. arXiv 2018, arXiv:1801.04381. [Google Scholar]
  34. He, K.; Zhang, X.; Ren, S.; Sun, J. Identity mappings in deep residual networks. In Proceedings of the European conference on computer vision, Amsterdam, The Netherlands, 11–14 October 2016; pp. 630–645. [Google Scholar]
  35. Tan, M.; Le, Q.V.E. rethinking model scaling for convolutional neural networks. arXiv 2019, arXiv:1905.11946. [Google Scholar]
  36. Kaiser, L.; Gomez, A.N.; Chollet, F. Depthwise separable convolutions for neural machine translation. arXiv 2017, arXiv:1706.03059. [Google Scholar] [CrossRef]
  37. Luo, Z.; Wu, D.; Zhang, P.; Ye, X.; Shi, H.; Cai, X.; Tian, Y. Laser welding penetration monitoring based on time-frequency characterization of acoustic emission and CNN-LSTM hybrid network. Materials 2023, 16, 1614. [Google Scholar] [CrossRef]
  38. Schmoeller, M.; Weiss, T.; Goetz, K.; Stadter, C.; Bernauer, C.; Zaeh, M.F. Inline weld depth evaluation and control based on OCT keyhole depth measurement and fuzzy control. Processes 2022, 10, 1422. [Google Scholar] [CrossRef]
Figure 1. Schematic diagram of data collection system.
Figure 2. Relationship between OCT signal and penetration depth.
Figure 3. Pretreatment process of image data: downsampling of images using the averaging algorithm.
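Figure 3 describes downsampling the weld pool images with an averaging algorithm. A minimal NumPy sketch of block-average downsampling is shown below; the 512 × 512 input size and reduction factor of 4 are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def downsample_average(img: np.ndarray, factor: int) -> np.ndarray:
    """Downsample a 2-D image by averaging non-overlapping factor x factor blocks."""
    h, w = img.shape
    h, w = h - h % factor, w - w % factor  # crop so the block size divides evenly
    blocks = img[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))

# Illustrative use: shrink a 512 x 512 weld pool frame to 128 x 128
frame = np.random.rand(512, 512)
print(downsample_average(frame, 4).shape)  # (128, 128)
```

Averaging (rather than simple decimation) acts as a low-pass filter, so high-frequency pool features are smoothed instead of aliased.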
Figure 4. Spectrum intensity of laser welding.
Figure 5. Comparison of raw data and 100 Hz sampled data on OCT signal.
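Figure 5 compares the raw OCT trace with its 100 Hz sampled counterpart. One common way to produce such a trace is to average consecutive windows of the raw signal; the sketch below assumes an integer rate ratio, and the 70 kHz input rate is an illustrative assumption rather than the paper's acquisition rate.

```python
import numpy as np

def resample_mean(signal: np.ndarray, fs_in: int, fs_out: int) -> np.ndarray:
    """Downsample a 1-D signal by averaging consecutive windows.

    Assumes fs_in is an integer multiple of fs_out.
    """
    step = fs_in // fs_out
    n = len(signal) - len(signal) % step  # drop the incomplete trailing window
    return signal[:n].reshape(-1, step).mean(axis=1)

# Illustrative: 1 s of a 70 kHz keyhole-depth trace reduced to 100 Hz
oct_raw = np.random.rand(70_000)
print(len(resample_mean(oct_raw, 70_000, 100)))  # 100
```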
Figure 6. Structure of the models: (a) model of transfer learning (100 Hz); (b) model of transfer learning with fine-tuning (100, 500 Hz); (c) model of transfer learning with fine-tuning on multi-sensor framework (100 Hz).
Figure 7. Results of deep learning with transfer learning (MobileNetV2).
Figure 8. Results of deep learning with transfer learning (ResNet50V2).
Figure 9. Results of deep learning with transfer learning (EfficientNetB3).
Figure 10. Results of deep learning with transfer learning (Xception).
Figure 11. Outliers occurring in deep learning model during training.
Figure 12. Outliers occurring in deep learning models during training (transfer learning with fine-tuned multi-sensor model, 100 Hz).
Figure 13. Comparison of test results depending on dataset.
Table 1. Chemical composition of base material (wt.%).
                   C       Mn      Si      P       S
Steel (780DP)      0.12    2.6     0.6     0.3     0.003
Table 2. Mechanical properties of base material.
                   Tensile Strength (MPa)    Elongation (%)
Steel (780DP)      Min. 780                  14
Table 3. Laser welding parameters.
Laser power (W)             1429–2750
Welding speed (m/min)       3, 4, 5, 6, 7
Laser beam diameter (mm)    0.27
Focal length (mm)           200
Table 4. Training results according to deep learning models.
MAE (mm)
Model configuration                                        M       R       E       X
Transfer learning; uni-sensor 100 Hz                       0.007   0.007   0.007   0.014
Transfer learning with fine-tuning; uni-sensor 100 Hz      0.009   0.022   0.010   0.018
Transfer learning with fine-tuning; uni-sensor 500 Hz      0.009   0.009   0.007   0.008
Transfer learning with fine-tuning; multi-sensor 100 Hz    0.010   0.016   0.017   0.013

R2
Model configuration                                        M       R       E       X
Transfer learning; uni-sensor 100 Hz                       0.999   0.999   0.999   0.998
Transfer learning with fine-tuning; uni-sensor 100 Hz      0.999   0.998   0.998   0.998
Transfer learning with fine-tuning; uni-sensor 500 Hz      0.999   0.999   0.998   0.999
Transfer learning with fine-tuning; multi-sensor 100 Hz    0.999   0.999   0.998   0.999
M: MobileNetV2; R: ResNet50V2; E: EfficientNetB3; X: Xception.
Table 5. Validation results according to deep learning models.
MAE (mm)
Model configuration                                        M       R       E       X
Transfer learning; uni-sensor 100 Hz                       0.175   0.150   0.142   0.195
Transfer learning with fine-tuning; uni-sensor 100 Hz      0.159   0.113   0.127   0.159
Transfer learning with fine-tuning; uni-sensor 500 Hz      0.140   0.097   0.052   0.078
Transfer learning with fine-tuning; multi-sensor 100 Hz    0.085   0.159   0.063   0.078

R2
Model configuration                                        M       R       E       X
Transfer learning; uni-sensor 100 Hz                       0.614   0.705   0.677   0.510
Transfer learning with fine-tuning; uni-sensor 100 Hz      0.668   0.834   0.771   0.668
Transfer learning with fine-tuning; uni-sensor 500 Hz      0.739   0.871   0.945   0.915
Transfer learning with fine-tuning; multi-sensor 100 Hz    0.902   0.668   0.943   0.917
M: MobileNetV2; R: ResNet50V2; E: EfficientNetB3; X: Xception.
Table 6. Test results according to deep learning models.
MAE (mm)
Model configuration                                        M       R       E       X
Transfer learning; uni-sensor 100 Hz                       0.180   0.180   0.152   0.196
Transfer learning with fine-tuning; uni-sensor 100 Hz      0.159   0.113   0.128   0.159
Transfer learning with fine-tuning; uni-sensor 500 Hz      0.165   0.095   0.049   0.077
Transfer learning with fine-tuning; multi-sensor 100 Hz    0.086   0.085   0.058   0.073

R2
Model configuration                                        M       R       E       X
Transfer learning; uni-sensor 100 Hz                       0.564   0.564   0.681   0.502
Transfer learning with fine-tuning; uni-sensor 100 Hz      0.688   0.837   0.800   0.688
Transfer learning with fine-tuning; uni-sensor 500 Hz      0.617   0.875   0.951   0.915
Transfer learning with fine-tuning; multi-sensor 100 Hz    0.901   0.900   0.956   0.927
M: MobileNetV2; R: ResNet50V2; E: EfficientNetB3; X: Xception.
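Tables 4–6 evaluate the models with MAE and R2 between the predicted and OCT-referenced keyhole depths. Both metrics can be computed directly from the two depth series; a minimal sketch is shown below, where the example depth values are illustrative and not taken from the paper.

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean absolute error, in the units of the inputs (here, mm)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.mean(np.abs(y_true - y_pred)))

def r2(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)

# Illustrative depths in mm (hypothetical values)
depth_ref = np.array([1.20, 1.35, 1.10, 1.50])   # OCT reference
depth_pred = np.array([1.25, 1.30, 1.15, 1.45])  # model prediction
print(round(mae(depth_ref, depth_pred), 3))  # 0.05
```

Note that MAE is scale-dependent (a 0.05 mm error means different things at different penetration depths), while R2 normalizes by the variance of the reference depths.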

Kim, B.-J.; Kim, Y.-M.; Kim, C. Transfer Learning-Based Multi-Sensor Approach for Predicting Keyhole Depth in Laser Welding of 780DP Steel. Materials 2025, 18, 3961. https://doi.org/10.3390/ma18173961