Article

Simultaneous Burr and Cut Interruption Detection during Laser Cutting with Neural Networks

Benedikt Adelmann * and Ralf Hellmann
Applied Laser and Photonics Group, Faculty of Engineering, University of Applied Sciences Aschaffenburg, Wuerzburger Straße 45, 63739 Aschaffenburg, Germany
* Author to whom correspondence should be addressed.
Sensors 2021, 21(17), 5831; https://doi.org/10.3390/s21175831
Submission received: 8 June 2021 / Revised: 24 August 2021 / Accepted: 25 August 2021 / Published: 30 August 2021
(This article belongs to the Special Issue Machine Learning in Sensors and Imaging)

Abstract
In this contribution, we compare basic neural networks with convolutional neural networks for cut failure classification during fiber laser cutting. The experiments are performed by cutting thin electrical sheets with a 500 W single-mode fiber laser while taking coaxial camera images for the classification. The quality is grouped into the categories good cut, cut with burr formation and cut interruption. Our results reveal that both cut failures can be detected with one system. Independent of the neural network design and size, a minimum classification accuracy of 92.8% is achieved, which can be increased to 95.8% with more complex networks. Convolutional neural networks thus reveal a slight performance advantage over basic neural networks, which, however, is accompanied by a higher calculation time, albeit still below 2 ms. In a separate examination, cut interruptions are detected with much higher accuracy than burr formation. Overall, the results demonstrate that burr formation and cut interruptions can be detected simultaneously during laser cutting with high accuracy, as is desirable for industrial applications.

1. Introduction

Laser cutting of thin metal sheets using fiber or disk lasers is now a customary process in the metal industry. The key advantages of laser cutting are high productivity and flexibility, good edge quality and the option for easy process automation. Especially for highly automated unmanned machines that are seamlessly combined in line with bending, separation or welding machines, a consistently high cut quality is essential to avoid material waste, downtime or damage to subsequent machine steps in mechanized process chains. As a consequence, besides optimizing the cutting machine in order to reduce the influence of disturbance variables, cut quality monitoring is also of utmost interest.
The most common and disruptive quality defects are cut interruptions and burr formation [1]. To obtain high-quality cuts, process parameters such as laser power, feed rate, gas pressure, working distance of the nozzle and focus position have to be set appropriately. Imprecise process parameters and typical disturbance values like thermal lenses, unclean optics, damaged gas nozzles, gas pressure fluctuations and variations of material properties may lead to poor cut quality and, thus, nonconforming products. To ensure high quality, an online quality monitoring system that can detect multiple defects would be the best choice in order to respond quickly and reduce downtime, material waste or cost-intensive rework. Until now, most reported sensor systems for monitoring laser cutting focus on only a single fault.
For detecting burr formation during laser cutting, different approaches using cameras, photodiodes or acoustic emission have been investigated. In [2,3], burr formation, roughness and striation angle during laser cutting with a 6 kW CO2 laser are determined using an NIR camera sampling at 40 Hz. Using two cameras, laser cutting with a CO2 laser is monitored in [4] by observing the spark trajectories underneath the sheet and the melt bath geometries and correlating these to burr formation or overburning defects. A novel approach is used in [5], employing a convolutional neural network to determine burr formation from camera images with a high accuracy of 92%. By evaluating the thermal radiation of the process zone with photodiodes [6], the burr height during fiber laser cutting can be deduced from the standard deviation of a filtered photodiode signal. Results obtained with photodiode-based sensors integrated in the cutting head [7] showed that the mean photodiode current increases with lower cut quality, while similar experiments revealed increasing mean photodiode currents at lower cut surface roughness [8]. An acoustic approach was investigated by monitoring the acoustic emission during laser cutting and deducing burr formation from the acoustic bursts [9].
For cut interruption detection as well, most approaches are based on photodiode signals or camera images. Photodiode-based methods for cut interruption detection are signal-threshold-based [10], rely on the comparison of different photodiodes [11] or are based on cross-correlations [12]. However, all these methods have the disadvantage of requiring thresholds that vary with sheet thickness or laser parameters. In addition, adapting them to other materials or sheet thicknesses requires a large engineering effort to define the respective threshold values through extensive investigations. To avoid this problem, [13] uses a convolutional neural network to detect cut interruptions from camera images during fiber laser cutting of different sheet thicknesses with an accuracy of 99.9%. Another approach uses a polynomial logistic regression model [14] to predict interruptions from laser machine parameters only.
This literature review reveals that individual detection schemes for both burr formation and cut interruption have previously been reported, but a combined and simultaneous detection of both failure patterns has not been reported so far. In addition, many of the previous studies applied CO2 lasers, which nowadays are often replaced by modern fiber or disk lasers, for which, in turn, fewer reports are available. To detect both failures with the same system, we chose the evaluation of camera images with neural networks, as they are able to achieve high accuracy in detecting both cut failures [5,13]. The use of neural networks, especially convolutional neural networks (CNN), has been demonstrated for various image classification purposes, such as face recognition and object detection [15,16], in medicine for cancer detection [17] and electroencephalogram (EEG) evaluations [18], or in geology for earthquake detection [19]. For failure analysis in technical processes, neural networks have also been successfully used for, e.g., concrete crack detection [20], road crack detection [21] or wafer map defect classification [22]. In addition, detecting different failure types with the same system has been successfully demonstrated with neural networks, for instance for various wood veneer surface defects [23] or different welding defects during laser welding [24].
The objective of this publication is to detect both burr formation and cut interruptions during single-mode laser cutting of electrical sheets from camera images with neural networks. The advantages of our system are, firstly, easy adaption to industrial cutting heads, which often already have a camera interface. Secondly, images are taken coaxially to the laser beam and are therefore independent of the laser cut direction. Thirdly, due to the use of a learning system, the engineering effort is low when the system has to be adapted to other materials or sheet thicknesses. Two different neural network types are used, namely a basic neural network and a convolutional neural network. The basic neural network is faster and can detect bright or dark zones but is less able to extract abstractions of 2D features and needs a lot of parameters when the network gets more complex. Convolutional neural networks, on the other hand, are much better at learning and extracting abstractions of 2D features and usually need fewer parameters. However, they require a higher calculation effort due to the many multiplications in the convolution layers [25,26].
The cutting of electrical sheets is chosen because it is an established process in the production and prototyping of electric motors and transformers [27,28,29,30], i.e., it is a relevant process step for fostering e-mobility. In order to reduce the electrical losses caused by eddy currents, the rotor is assembled from a stack of thin electrical sheets with electrical isolation layers in between. The sheet thickness typically varies between 0.35 mm and 0.5 mm, with the eddy currents being lower for thinner sheets. As a result, an electric motor requires a large number of sheets with high quality requirements. Burr formations in particular result in gaps between sheets, or the burr can pierce the electrical isolation layer and connect the sheets electrically, both of which reduce the performance of motors drastically. Therefore, quality monitoring during laser cutting is of great interest for industrial applications.

2. Experimental

2.1. Laser System and Cutting Setup

In this study, a continuous-wave 500 W single-mode fiber laser (IPG Photonics, Burbach, Germany) is used to perform the experiments. The laser system is equipped with linear stages (X, Y) for positioning the workpiece (Aerotech, Pittsburgh, PA, USA), and a fine cutting head (Precitec, Gaggenau, Germany) is attached to a third linear drive (Z). Nitrogen assist gas with a purity greater than 99.999% flows coaxially with the laser beam. The gas nozzle has a diameter of 0.8 mm, and its distance to the workpiece is maintained by a capacitive closed-loop control of the z-linear drive. The emitting wavelength of the laser is specified as 1070 nm in conjunction with a beam propagation factor of M² < 1.1. The raw beam diameter of 7.25 mm is focused by a lens with a focal length of 50 mm. The corresponding Rayleigh length is calculated to be 70 µm and the focus diameter to be 10 µm.
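As a brief plausibility check (assuming a near-ideal Gaussian beam, M² ≈ 1; our calculation, not part of the original text), the stated spot size and Rayleigh length follow from the standard focusing relations:
```latex
d_f = \frac{4\,\lambda f}{\pi D}
    = \frac{4 \cdot 1.07\,\mu\mathrm{m} \cdot 50\,\mathrm{mm}}{\pi \cdot 7.25\,\mathrm{mm}}
    \approx 9.4\,\mu\mathrm{m} \approx 10\,\mu\mathrm{m},
\qquad
z_R = \frac{\pi d_f^2}{4\lambda}
    \approx \frac{\pi \cdot (10\,\mu\mathrm{m})^2}{4 \cdot 1.07\,\mu\mathrm{m}}
    \approx 73\,\mu\mathrm{m},
```
in line with the stated values of 10 µm and 70 µm.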
The design of the cutting head with the high-speed camera and a photo of the laser system are illustrated in Figure 1. The dashed lines depict the primary laser radiation from the fiber laser, which is collimated by a collimator and reflected by a dichroic mirror downwards to the processing zone. There, the laser radiation is focused by the processing lens through the protective glass onto the workpiece, which is placed on the XY stages. The process radiation from the sheet radiates omnidirectionally (dash-dotted line), thus partly through the nozzle and protective glass, and is collimated by the processing lens upwards. The process radiation passes the dichroic mirror and is focused by a lens onto the high-speed camera. The focus of the camera is set to the bottom side of the sheet in order to have a sharp view of possible burr formations.

2.2. Laser Cutting

The laser cuts are performed in electrical sheets of type M270 (according to EN 10106, this denotes a loss of 2.7 W/kg during magnetization reversal at 50 Hz and 1.5 T) with a sheet thickness of 0.35 mm. This sheet thickness is chosen because it fits well with the laser focus properties, e.g., the Rayleigh length, and it is one of the most frequently used sheet thicknesses for electric motors and transformers, as it provides a good compromise between low eddy currents and high productivity. Stacks of thicker sheets are faster to produce because fewer sheets are required per stack, but with increasing sheet thickness the unwanted eddy currents also increase. Thinner sheets require a higher production effort per stack and are more difficult to cut because they are very flexible and warp under the gas pressure and thermal influence. In these experiments, only one sheet thickness is used, but note that in previous publications with similar systems, an adaptation of the results to other sheet thicknesses was possible with only minor additional expense [5,13].
As ad hoc pre-experiments reveal, the parameter combination for a good quality cut is a laser power of 500 W, a feed rate of 400 mm/s and a laser focus position on the bottom side of the metal sheet. The gas nozzle has a diameter of 0.8 mm and is placed 0.5 mm above the sheet surface, and the gas pressure is 7 bar. For the experimental design, the parameters are varied to intentionally enforce cut failures. Burr formations are caused by reduced gas flow into the cut kerf due to a higher nozzle-to-sheet distance, lower gas pressure, an excessive power-to-feed-rate ratio or damaged nozzles. Cut interruptions are enforced by too high feed rates or too low laser power.
In the experimental design, 39 cuts with different laser parameters are performed for training the neural networks and 22 cuts are performed for testing, with the cuts being evenly distributed over the three cut categories (good cut, cut with burr formation and cut interruption). A table of all cuts with laser machine parameters, category and use can be found in Appendix A. Each cut is a straight line including the acceleration and deceleration paths of the linear stages. Exemplary images of the sheets from all three cut categories, taken with an optical microscope after the cutting process, are shown in Figure 2. Firstly, for a good quality cut, both the top and bottom side of the cut kerf are characterized by clear edges without damage. Secondly, for a cut with burr, the top side is similar to the good quality cut; however, on the bottom side, drops of burr formation are clearly visible. Thirdly, the images of the cut interruption reveal a molten line on the sheet top side and only a slightly discolored stripe on the bottom side, with the sheet not being separated at all.

2.3. Camera and Image Acquisition

For image acquisition during laser cutting, we used a high-speed camera (Fastcam AX50, Photron, Tokyo, Japan) with a maximum frame rate of 170,000 frames per second. The maximum resolution is 1024 × 1024 pixels, with a square pixel size of 20 × 20 μm² in combination with a Bayer CFA color matrix. For process image acquisition, videos of the laser cutting process are grabbed at a frame rate of 10,000 frames per second with an exposure time of 2 µs and a resolution of 128 × 64 pixels. Even at this high frame rate, no oversampling occurs and consecutive images are not similar, because the relevant underlying melt flow dynamics are characterized by high melt flow velocities in the range of 10 m/s [31] and therefore vary at estimated frequencies between 100 kHz and 300 kHz [32]. Please note that, due to the lack of external illumination in the cutting head, the brightness in the images is caused by the thermal radiation of the process zone.
Two exemplary images of each cut category are shown in Figure 3, with the cut direction always upwards. The orientation of the images is always the same because the straight lines are cut in the same direction. For complex cuts, images with the same orientation can be obtained from differently oriented images by rotation based on the movement direction of the drives. In these images, brightness is caused by the thermal radiation of the hot melt. Good cuts are characterized by a bright circle at the position of the laser focus and, below it, two tapered stripes indicating the melt flowing at the side walls of the cut kerf, because in the middle the melt is blown out first. The cuts with burr are similar to the good quality cuts, but the tapered stripes are formed differently. The cut interruptions differ strongly from the other categories and are characterized by larger bright areas and a more elliptical shape without tapered stripes.
From the 39 laser cuts, the experimental design delivers the same number of training videos with overall 52 thousand training images, while the 22 test cuts provide 34 thousand test images. It is worth mentioning that a size of several ten thousand training images is typical for training neural networks [33]. For both training and testing, the images are almost evenly distributed over the three categories, with cut interruptions being slightly underrepresented. The reason for this underrepresentation is that cut interruptions only occur at high feed rates, i.e., images from acceleration and deceleration paths can be used only partially and, in turn, fewer images per video can be captured.

2.4. Computer Hardware and Neural Network Design

For training and evaluating the neural networks, a computer with an Intel Core i7-8700 processor with a 3.2 GHz clock rate in combination with 16 GB DDR4 RAM was used. All calculations are performed on the CPU rather than the GPU to show that the machine learning steps can also run on the standard computers usually integrated with laser cutting machines. The software used was TensorFlow version 2.0.0 in combination with Keras version 2.2.4 (software available from: https://tensorflow.org (accessed on 24 March 2021)).
In most publications about image classification with neural networks, the images show major differences. In contrast, in the images captured in our experiments, the object to analyze always has the same size, orientation and illumination conditions, which should simplify the classification compared to classifying common, moving objects like vehicles or animals [34,35]. Furthermore, our images have a rectangular shape with 128 × 64 pixels, while most classification algorithms, such as MobileNet, SqueezeNet or AlexNet, are optimized for square images, mostly with a resolution of 224 × 224 pixels [36,37]. Because enlarging the image size slows the system drastically, two self-designed and completely different neural networks are used, with many elements adapted from other commonly used neural networks. The first network, shown in Figure 4, is a basic network without convolution and consists only of image flattening followed by two fully connected layers with N nodes and ReLU activation. To classify the three cut categories, a fully connected layer with 3 nodes and softmax activation completes the network. The second network is a convolutional neural network with four convolution blocks followed by the same three fully connected layers as in the basic network. Each block consists of a convolution layer with a kernel size of 3 × 3 and M filters, whose output is added to the input of the block. Such bypasses are common in, e.g., MobileNet [36]. To reduce the number of parameters, a max pooling layer with a common pool size of 2 × 2 is used [26]. In contrast to neural networks often used in the literature, we use a constant instead of an increasing filter number for subsequent convolution layers, and we use normal convolutions rather than separable or pointwise convolutions. Because every block halves the image size in both dimensions, after 4 blocks the image size is 8 × 4 × M. The fully connected layers after the flattening have the same number of nodes as the number of values delivered by the flattened layer. The model optimizer used is Adam, which, according to [38], together with SGD (stochastic gradient descent) provides superior optimization results. Furthermore, we use the loss function "categorical crossentropy" to enable categorical outputs (one-hot encoding) and the metric "accuracy".
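To make the two architectures concrete, the following Keras sketch reproduces them as described above. It is a minimal sketch under stated assumptions: the paper does not specify the padding, the three-channel input (inferred here from the Bayer color camera and the parameter counts in Section 3.2) or how the first bypass handles the channel mismatch, so the "same" padding and the initial 1 × 1 projection are our additions.
```python
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SHAPE = (128, 64, 3)  # assumption: three color channels (Bayer CFA)

def basic_network(n_nodes):
    """Basic network: flattening, two dense ReLU layers, softmax output."""
    return models.Sequential([
        layers.Flatten(input_shape=IMG_SHAPE),
        layers.Dense(n_nodes, activation="relu"),
        layers.Dense(n_nodes, activation="relu"),
        layers.Dense(3, activation="softmax"),  # three cut categories
    ])

def conv_block(x, m_filters):
    """3 x 3 convolution with M filters, bypass addition, 2 x 2 max pooling."""
    y = layers.Conv2D(m_filters, (3, 3), padding="same", activation="relu")(x)
    x = layers.Add()([x, y])               # bypass: add block input to output
    return layers.MaxPooling2D((2, 2))(x)  # halves both image dimensions

def convolutional_network(m_filters):
    inputs = layers.Input(shape=IMG_SHAPE)
    # 1 x 1 projection to M channels so the first bypass addition is
    # shape-compatible (our assumption; not specified in the paper).
    x = layers.Conv2D(m_filters, (1, 1), padding="same")(inputs)
    for _ in range(4):                     # four blocks: 128 x 64 -> 8 x 4
        x = conv_block(x, m_filters)
    x = layers.Flatten()(x)                # delivers 8 * 4 * M values
    n_nodes = 8 * 4 * m_filters            # dense width equals flattened size
    x = layers.Dense(n_nodes, activation="relu")(x)
    x = layers.Dense(n_nodes, activation="relu")(x)
    outputs = layers.Dense(3, activation="softmax")(x)
    return models.Model(inputs, outputs)

model = convolutional_network(16)
model.compile(optimizer="adam",                 # Adam optimizer as above
              loss="categorical_crossentropy",  # one-hot encoded labels
              metrics=["accuracy"])
```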

2.5. Methodology

The methodology of our experiments is shown in the workflow diagram in Figure 5. In a first step, the laser cuts are performed, and during cutting, videos of the process zone are taken with the high-speed camera; some of these images are shown in Figure 3. After cutting, the cut kerfs are analyzed with an optical microscope and categorized manually as a good cut, burr formation or cut interruption (examples of these images are shown in Figure 2). Based on this classification, the videos taken during laser cutting are labeled with the corresponding class. In case the cut quality changes within one cut, the video is divided so that the quality is constant within a video. Then the videos are separated into training videos and test videos, so that the images for testing do not stem from videos used for training. From the training videos, the single frames are extracted, and with these images the neural network is trained. Likewise, the single frames are extracted from the test videos, and the resulting images are used to test the trained neural network.
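As a minimal sketch of the frame-extraction step (the paper does not name the tooling; OpenCV and the directory layout, one directory per class label, are our assumptions):
```python
import pathlib
import cv2

def extract_frames(video_path: str, out_dir: str) -> int:
    """Split a labeled process video into single frames for training/testing."""
    out = pathlib.Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:                       # end of video reached
            break
        cv2.imwrite(str(out / f"frame_{idx:06d}.png"), frame)
        idx += 1
    cap.release()
    return idx

# e.g., extract_frames("train_videos/burr/cut07.avi", "train_images/burr")
```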

3. Results

3.1. Training Behavior

The different neural networks are trained on the training dataset, and their performance is calculated on the test dataset. As an example, the training behavior of the convolutional neural network with 16 filters in each convolution layer is shown in Figure 6. Apparently, the training accuracy rises continuously with the training epochs, reaching 99% after 10 epochs and 99.5% after 20 epochs, respectively. The test accuracy, on the other hand, reaches 94% after three epochs and fluctuates around this level with further training, which is a typical behavior for neural network training [39]. Even further training, beyond 20 epochs, results only in a fluctuation of the accuracy rather than a continuous increase. To reduce the deviation of the test results for comparisons between different networks, the mean of the test results between 10 and 20 epochs is used.
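A hedged sketch of this evaluation protocol using the Keras fit history (`model`, `train_ds` and `test_ds` are assumed to be defined as in Section 2.4):
```python
# Train for 20 epochs, evaluating on the test set after each epoch,
# then average the test accuracy over epochs 10-20 as described above.
history = model.fit(train_ds, validation_data=test_ds, epochs=20)
test_acc = history.history["val_accuracy"]            # one value per epoch
mean_acc = sum(test_acc[9:20]) / len(test_acc[9:20])  # epochs 10..20
print(f"Mean test accuracy, epochs 10-20: {mean_acc:.3f}")
```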

3.2. Basic Neural Network

To determine the performance of the basic neural networks, networks with node numbers N between 5 and 1000 are trained on the training dataset and tested on the test dataset. The mean test accuracy between 10 and 20 training epochs and the required calculation time per image are shown in Figure 7. Notably, the accuracy for a very small network with only five nodes is already quite high at 92.8%, and the calculation time of 0.1 ms per image is very fast. With an increasing number of nodes, the accuracy increases to a maximum of 95.2% at 1000 nodes, which is accompanied by a higher calculation time of 0.32 ms. In parallel with the calculation time, the number of trainable parameters also increases with the number of nodes, starting from 122 thousand parameters for five nodes and reaching 25 million parameters at 1000 nodes. A further increase of the parameters is not considered useful, because the training dataset consists of 420 million pixels (number of images × pixels per image), so the neural network would tend to overfit the training dataset rather than develop generalized features. Overall, accuracies of 94% (mean) are achievable with the basic neural network.
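These parameter counts can be verified with a short back-of-the-envelope calculation. Note that they only line up if the images carry three color channels; this is our inference from the Bayer color camera, not an explicit statement in the text:
```python
def basic_params(n_nodes, n_inputs=128 * 64 * 3):
    """Trainable parameters of the basic network (weights + biases)."""
    dense1 = (n_inputs + 1) * n_nodes  # flattened image -> N nodes
    dense2 = (n_nodes + 1) * n_nodes   # N -> N
    output = (n_nodes + 1) * 3         # N -> 3 classes
    return dense1 + dense2 + output

print(basic_params(5))     # 122,933    -> "122 thousand"
print(basic_params(1000))  # 25,581,003 -> "25 million"
```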

3.3. Convolutional Neural Network

Under the same conditions as the basic neural network, the convolutional neural network is also trained and tested. The results for accuracy and calculation time for filter numbers between 4 and 64 are depicted in Figure 8. The accuracy of the neural network is quite high for all filter numbers and fluctuates between 94.6% and 95.8% with no clear trend. In addition, the accuracy also varies for the same network when it is trained several times. However, the calculation time increases clearly with the number of filters, from 0.36 ms per image to 1.77 ms. The number of trainable parameters starts at 34 thousand for four filters and increases to 8.4 million for 64 filters (details on how to calculate the number of parameters are described in [25]). On average, the convolutional neural network is able to classify about 95% of the images correctly.

3.4. Comparison between Cut Failures

Since literature is available for both burr detection and cut interruption detection during laser cutting, with strongly varying accuracies, the performance of our neural networks in detecting each cut failure individually is determined. To this end, the accuracy in classifying good cuts versus cuts with burr as well as good cuts versus cut interruptions is calculated separately. For this investigation, the convolutional neural network with 16 filters is chosen, because it provides high accuracy at a comparably moderate calculation time. The results of the separated classification are shown in Figure 9. Evidently, the detection of cut interruptions is very reliable with an accuracy of 99.5%, compared to 93.1% when detecting burr formation. The reason for this can also be seen in Figure 3, where good cuts are much more similar to cuts with burr, while cut interruptions look very different from both of the other classes. Both values individually agree with the literature values, which are 99.9% for cut interruptions [13] and 92% for burr detection [5], although for burr detection a more complex burr definition is chosen in the literature. This shows that cut interruptions are much easier to detect from camera images than burr formations.
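The text does not state whether these pairwise accuracies stem from separately trained two-class networks or from restricting the three-class predictions to two classes; the following sketch assumes the latter:
```python
import numpy as np

def pairwise_accuracy(y_true, y_pred, class_a, class_b):
    """Accuracy on the subset of images whose true label is one of two classes."""
    mask = np.isin(y_true, [class_a, class_b])
    return float(np.mean(y_true[mask] == y_pred[mask]))

# e.g., with labels 0 = good cut, 1 = burr, 2 = interruption:
# pairwise_accuracy(y_true, y_pred, 0, 1)  -> good cut vs. burr
# pairwise_accuracy(y_true, y_pred, 0, 2)  -> good cut vs. interruption
```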

3.5. Error Analysis

For the error analysis, single misclassified images and the distribution of misclassifications are analyzed. For the temporal distribution, a video of a cut with varying cut quality is produced. The quality measured with the optical microscope and the prediction of the convolutional neural network with 16 filters are shown in Figure 10. Misclassifications are indicated by red dots that are not placed on the blue line, and it can clearly be seen which misclassifications occur more often than others. The most frequent misclassifications are good cuts predicted as burr. Interruptions are rarely misclassified, and other images are seldom misclassified as interruptions, which agrees with the results in Section 3.4. The distribution of the misclassifications reveals no concentration in a specific sector, but minor accumulations of several misclassifications are observed. In addition, some areas without any misclassification or with only single misclassifications can be found. These results reveal that misclassifications do not occur all at once or at a special event but are widely distributed.
To analyze the misclassified images, two exemplary images from a good cut classified as a cut with burr are shown in Figure 11. In contrast to the images in Figure 3, where the bright area is followed by two tapered stripes, in Figure 11 these stripes are hardly observable. However, these trailing stripes are important for the classification, because the burr is generated in this area. Therefore, in the case of missing stripes, the classification between cuts with and without burr is difficult and thus characterized by many misclassifications.

4. Discussion

With the classification into three different categories during laser cutting, good cuts can be distinguished from cuts with burr and cut interruptions. The convolutional neural network has, depending on the number of filters, a classification accuracy that is better by about 1% compared to the basic networks. The maximum accuracy of the basic neural networks (1000 nodes) is also lower, being 95.2% as compared to the 95.8% accuracy of the convolutional neural network with 16 filters. Nevertheless, the difference between both neural network types is small, which can be explained by the objects in the images always having the same size, orientation and brightness, which is usually not the case for many other classification tasks [34,35]. As a consequence, the basic neural network can classify the images by bright or dark zones and does not necessarily require learning and extracting abstractions of 2D features, which is the main advantage of convolutional neural networks [25,26].
For the required accuracy, the size of the cut failure has to be considered. Because the accuracy is below 100%, a post-processing algorithm is necessary, which should report an error only when a certain number of failures occurs within a sequence of images. To detect geometrically long failures, which can occur, e.g., due to unclean optics, our classification system is adequate. Very short failures, like single burr drops when cutting an edge, are probably not detectable with our system. It is remarkable, however, that with both neural networks at least 92.8% accuracy (cf. Figure 7) can be achieved with any network configuration, independent of network type, number of nodes or filters. This means that about 93% of the images are easy to classify because they differ strongly between the categories. Furthermore, about 2% of the images can be classified by more complex neural networks (cf. Section 3.2 and Section 3.3). About 5% of the images, mostly between the categories good cut and cut with burr formation, are very difficult to classify because the images are quite similar (cf. Figure 3). For an industrial application, it has to be further considered whether the intentionally enforced cut failures are representative of typical industrial cut failures, e.g., those resulting from unclean optics, which are not reproducible in scientific studies.
The main advantage of the basic neural network is the much lower computation time of between 0.1 ms and 0.32 ms, while the convolutional neural network requires 0.36 ms to 1.77 ms, respectively. For typical industrial cameras with maximum frame rates in the range of 1000 Hz, a calculation time of about 1 ms per classification is sufficient, which is fulfilled by all basic and most of our convolutional neural networks. A similar frame rate was also used in [5] for detecting burr formations during laser cutting. With maximum cutting speeds of modern laser machines in the range of 1000 mm/s, a local resolution of 1 mm is still achieved, which can clearly be considered adequate for industrial use.
Following this fundamental and comparative analysis, future investigations have to address field trials of the proposed sensor system and classification scheme in industrial cutting processes. Within such industrial environments, additional error sources may appear and further reduce the cut quality, such as damaged gas nozzles or partially unclean optics, which in turn are difficult to reproduce under laboratory conditions. Images from these error sources can be added to the training data to improve the detection rate of the classification system. To improve the detection rate, it is also possible to classify not a single image but a series of 3 to 10 subsequent images, which reduces the influence of a single misleading image.
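One possible realization of such a post-processing step is a sliding majority vote over the per-frame predictions; the window size and the example below are illustrative choices, not values from the study:
```python
from collections import Counter

def majority_filter(frame_classes, window=5):
    """Replace each frame's class by the majority over a sliding window,
    suppressing single misleading images."""
    smoothed = []
    for i in range(len(frame_classes)):
        lo = max(0, i - window // 2)
        hi = min(len(frame_classes), i + window // 2 + 1)
        smoothed.append(Counter(frame_classes[lo:hi]).most_common(1)[0][0])
    return smoothed

# e.g., majority_filter(["cut", "burr", "cut", "cut", "burr", "burr", "burr"])
# -> ["cut", "cut", "cut", "burr", "burr", "burr", "burr"]
```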

5. Conclusions

Overall, with our neural network approach, two cut failures during laser cutting can be detected simultaneously by evaluating camera images with artificial neural networks. With different neural network designs, up to 95.8% classification accuracy can be achieved. Generally, convolutional neural networks have only minor classification advantages of about 1% over basic neural networks, while the basic neural networks are considerably faster in calculation. The detection accuracy for cut interruptions is remarkably higher than for burr formation, because the images of cut interruptions differ more strongly from good cuts than the images with burr formation do. In general, the detection rate is high enough to advance industrial applications.

Author Contributions

Conceptualization, B.A.; methodology, B.A.; software, B.A.; validation, B.A.; investigation, B.A.; writing—original draft preparation, B.A. and R.H.; project administration, R.H. All authors have read and agreed to the published version of the manuscript.

Funding

This project was funded by the Bavarian Research Foundation.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. All performed laser cuts with machine parameters, cut category and use for training or testing.
Nr | Laser (W) | Feed Rate (mm/s) | Nozzle Distance (mm) | Focus Position (mm) | Category | Use
1 | 500 | 600 | 0.5 | −1.25 | Cut | Training
2 | 300 | 400 | 0.5 | −1.25 | Cut | Training
3 | 500 | 500 | 0.5 | −1.25 | Cut | Training
4 | 500 | 300 | 0.5 | −1.25 | Cut | Training
5 | 300 | 600 | 0.5 | −1.25 | Interruption | Test
6 | 200 | 500 | 1.0 | −1.75 | Interruption | Training
7 | 500 | 500 | 0.8 | −1.55 | Burr | Training
8 | 500 | 600 | 0.5 | −1.25 | Cut | Test
9 | 250 | 500 | 0.5 | −1.25 | Interruption | Test
10 | 500 | 400 | 0.5 | −1.25 | Cut | Test
11 | 500 | 600 | 0.5 | −1.25 | Interruption | Training
12 | 500 | 500 | 1.0 | −1.75 | Burr | Training
13 | 500 | 500 | 0.5 | −1.25 | Cut | Test
14 | 500 | 300 | 0.5 | −1.25 | Cut | Training
15 | 500 | 200 | 0.5 | −1.25 | Cut | Test
16 | 500 | 500 | 0.5 | −1.25 | Cut | Training
17 | 500 | 500 | 0.5 | −1.25 | Interruption | Test
18 | 400 | 500 | 0.9 | −1.65 | Burr | Training
19 | 500 | 500 | 0.8 | −1.55 | Burr | Training
20 | 200 | 500 | 1.0 | −1.75 | Interruption | Training
21 | 500 | 300 | 0.5 | −1.45 | Burr | Training
22 | 150 | 500 | 0.5 | −1.25 | Interruption | Test
23 | 500 | 400 | 0.5 | −1.25 | Cut | Training
24 | 500 | 500 | 0.5 | −1.25 | Cut | Test
25 | 500 | 400 | 0.5 | −1.25 |  | Training
26 | 400 | 500 | 0.8 | −1.55 | Burr | Training
27 | 500 | 500 | 0.8 | −1.55 | Burr | Test
28 | 150 | 500 | 0.5 | −1.25 | Interruption | Training
29 | 200 | 500 | 1.0 | −1.75 | Interruption | Test
30 | 400 | 500 | 0.9 | −1.65 | Burr | Training
31 | 300 | 600 | 0.5 | −1.25 | Interruption | Training
32 | 500 | 500 | 1.0 | −1.75 | Burr | Training
33 | 500 | 300 | 0.5 | −1.25 | Cut | Test
34 | 400 | 500 | 1.0 | −1.75 | Burr | Test
35 | 500 | 500 | 0.5 | −1.25 | Interruption | Training
36 | 500 | 400 | 0.5 | −1.25 | Cut | Training
37 | 150 | 500 | 0.5 | −1.25 | Interruption | Training
38 | 400 | 500 | 0.5 | −1.45 | Burr | Training
39 | 500 | 600 | 0.5 | −1.25 | Interruption | Training
40 | 400 | 400 | 0.5 | −1.25 | Cut | Training
41 | 500 | 600 | 0.5 | −1.25 | Cut | Training
42 | 500 | 600 | 0.5 | −1.25 | Interruption | Test
43 | 400 | 500 | 0.8 | −1.55 | Burr | Test
44 | 400 | 500 | 0.5 | −1.45 | Burr | Test
45 | 400 | 400 | 0.5 | −1.25 | Cut | Test
46 | 500 | 500 | 0.5 | −1.25 | Cut | Training
47 | 500 | 200 | 0.5 | −1.25 | Cut | Training
48 | 300 | 400 | 0.5 | −1.25 | Interruption | Training
49 | 400 | 500 | 0.5 | −1.45 | Burr | Training
50 | 500 | 400 | 0.5 | −1.25 | Cut | Test
51 | 500 | 500 | 0.8 | −1.55 | Burr | Training
52 | 400 | 500 | 0.9 | −1.65 | Burr | Training
53 | 400 | 500 | 0.9 | −1.65 | Burr | Test
54 | 500 | 300 | 0.5 | −1.45 | Burr | Training
55 | 300 | 400 | 0.5 | −1.25 | Cut | Training
56 | 500 | 300 | 0.5 | −1.45 | Burr | Test
57 | 250 | 500 | 0.5 | −1.25 | Interruption | Training
58 | 300 | 400 | 0.5 | −1.25 | Cut | Training
59 | 300 | 400 | 0.5 | −1.25 | Interruption | Training
60 | 300 | 600 | 0.5 | −1.25 | Interruption | Training
61 | 300 | 400 | 0.5 | −1.25 | Interruption | Test

References

  1. Kratky, A.; Schuöcker, D.; Liedl, G. Processing with kW fibre lasers: Advantages and limits. In Proceedings of the XVII International Symposium on Gas Flow, Chemical Lasers, and High-Power Lasers, Lisboa, Portugal, 15–19 September 2008; p. 71311X. [Google Scholar]
  2. Sichani, E.F.; de Keuster, J.; Kruth, J.; Duflou, J. Real-time monitoring, control and optimization of CO2 laser cutting of mild steel plates. In Proceedings of the 37th International MATADOR Conference, Manchester, UK, 25–27 July 2012; pp. 177–181. [Google Scholar]
  3. Sichani, E.F.; de Keuster, J.; Kruth, J.-P.; Duflou, J.R. Monitoring and adaptive control of CO2 laser flame cutting. Phys. Procedia 2010, 5, 483–492. [Google Scholar] [CrossRef] [Green Version]
  4. Wen, P.; Zhang, Y.; Chen, W. Quality detection and control during laser cutting progress with coaxial visual monitoring. J. Laser Appl. 2012, 24, 032006. [Google Scholar] [CrossRef]
  5. Franceschetti, L.; Pacher, M.; Tanelli, M.; Strada, S.C.; Previtali, B.; Savaresi, S.M. Dross attachment estimation in the laser-cutting process via Convolutional Neural Networks (CNN). In Proceedings of the 2020 28th Mediterranean Conference on Control and Automation (MED), Saint-Raphaël, France, 16–18 September 2020; pp. 850–855. [Google Scholar]
  6. Schleier, M.; Adelmann, B.; Neumeier, B.; Hellmann, R. Burr formation detector for fiber laser cutting based on a photodiode sensor system. Opt. Laser Technol. 2017, 96, 13–17. [Google Scholar] [CrossRef]
  7. Garcia, S.M.; Ramos, J.; Arrizubieta, J.I.; Figueras, J. Analysis of Photodiode Monitoring in Laser Cutting. Appl. Sci. 2020, 10, 6556. [Google Scholar] [CrossRef]
  8. Levichev, N.; Rodrigues, G.C.; Duflou, J.R. Real-time monitoring of fiber laser cutting of thick plates by means of photodiodes. Procedia CIRP 2020, 94, 499–504. [Google Scholar] [CrossRef]
  9. Tomaz, K.; Janz, G. Use of AE monitoring in laser cutting and resistance spot welding. In Proceedings of the EWGAE 2010, Vienna, Austria, 8–10 September 2010; pp. 1–7. [Google Scholar]
  10. Adelmann, B.; Schleier, M.; Neumeier, B.; Hellmann, R. Photodiode-based cutting interruption sensor for near-infrared lasers. Appl. Opt. 2016, 55, 1772–1778. [Google Scholar] [CrossRef]
  11. Adelmann, B.; Schleier, M.; Neumeier, B.; Wilmann, E.; Hellmann, R. Optical Cutting Interruption Sensor for Fiber Lasers. Appl. Sci. 2015, 5, 544–554. [Google Scholar] [CrossRef] [Green Version]
  12. Schleier, M.; Adelmann, B.; Esen, C.; Hellmann, R. Cross-Correlation-Based Algorithm for Monitoring Laser Cutting with High-Power Fiber Lasers. IEEE Sens. J. 2017, 18, 1585–1590. [Google Scholar] [CrossRef]
  13. Adelmann, B.; Schleier, M.; Hellmann, R. Laser Cut Interruption Detection from Small Images by Using Convolutional Neural Network. Sensors 2021, 21, 655. [Google Scholar] [CrossRef]
  14. Tatzel, L.; León, F.P. Prediction of Cutting Interruptions for Laser Cutting Using Logistic Regression. In Proceedings of the Lasers in Manufacturing Conference 2019, Munich, Germany, 24 July 2019; pp. 1–7. [Google Scholar]
  15. Guo, Y.; Liu, Y.; Oerlemans, A.; Lao, S.; Wu, S.; Lew, M.S. Deep learning for visual understanding: A review. Neurocomputing 2016, 187, 27–48. [Google Scholar] [CrossRef]
  16. Voulodimos, A.; Doulamis, N.; Doulamis, A.; Protopapadakis, E. Deep Learning for Computer Vision: A Brief Review. Comput. Intell. Neurosci. 2018, 2018, 7068349. [Google Scholar] [CrossRef] [PubMed]
  17. Ting, F.F.; Tan, Y.J.; Sim, K.S. Convolutional neural network improvement for breast cancer classification. Expert Syst. Appl. 2018, 120, 103–115. [Google Scholar] [CrossRef]
  18. Acharya, U.R.; Oh, S.L.; Hagiwara, Y.; Tan, J.H.; Adeli, H. Deep convolutional neural network for the automated detection and diagnosis of seizure using EEG signals. Comput. Biol. Med. 2018, 100, 270–278. [Google Scholar] [CrossRef] [PubMed]
  19. Perol, T.; Gharbi, M.; Denolle, M. Convolutional neural network for earthquake detection and location. Sci. Adv. 2018, 4, e1700578. [Google Scholar] [CrossRef] [Green Version]
  20. Dung, C.V.; Anh, L.D. Autonomous concrete crack detection using deep fully convolutional neural network. Autom. Constr. 2019, 99, 52–58. [Google Scholar] [CrossRef]
  21. Zhang, L.; Yang, F.; Zhang, Y.D.; Zhu, Y.J. Road crack detection using deep convolutional neural network. In Proceedings of the 2016 IEEE International Conference on Image Processing (ICIP), Phoenix, AZ, USA, 25–28 September 2016; pp. 3708–3712. [Google Scholar]
  22. Nakazawa, T.; Kulkarni, D. Wafer Map Defect Pattern Classification and Image Retrieval Using Convolutional Neural Network. IEEE Trans. Semicond. Manuf. 2018, 31, 309–314. [Google Scholar] [CrossRef]
  23. Urbonas, A.; Raudonis, V.; Maskeliūnas, R.; Damaševičius, R. Automated Identification of Wood Veneer Surface Defects Using Faster Region-Based Convolutional Neural Network with Data Augmentation and Transfer Learning. Appl. Sci. 2019, 9, 4898. [Google Scholar] [CrossRef] [Green Version]
  24. Khumaidi, A.; Yuniarno, E.M.; Purnomo, M.H. Welding defect classification based on convolution neural network (CNN) and Gaussian kernel. In Proceedings of the 2017 International Seminar on Intelligent Technology and Its Applications (ISITIA), Surabaya, Indonesia, 28–29 August 2017; pp. 261–265. [Google Scholar] [CrossRef]
  25. Alom, M.Z.; Taha, T.M.; Yakopcic, C.; Westberg, S.; Sidike, P.; Nasrin, M.S.; van Esesn, B.C.; Awwal, A.A.S.; Asari, V.K. The History Began from AlexNet: A Comprehensive Survey on Deep Learning Approaches. March 2018. Available online: http://arxiv.org/pdf/1803.01164v2 (accessed on 26 August 2021).
  26. O’Shea, K.; Nash, R. An Introduction to Convolutional Neural Networks. November 2015. Available online: http://arxiv.org/pdf/1511.08458v2 (accessed on 26 August 2021).
  27. Emura, M.; Landgraf, F.J.G.; Ross, W.; Barreta, J.R. The influence of cutting technique on the magnetic properties of electrical steels. J. Magn. Magn. Mater. 2002, 254, 358–360. [Google Scholar] [CrossRef]
  28. Schoppa, A.; Schneider, J.; Roth, J.-O. Influence of the cutting process on the magnetic properties of non-oriented electrical steels. J. Magn. Magn. Mater. 2000, 215, 100–102. [Google Scholar] [CrossRef]
  29. Adelmann, B.; Hellmann, R. Process optimization of laser fusion cutting of multilayer stacks of electrical sheets. Int. J. Adv. Manuf. Technol. 2013, 68, 2693–2701. [Google Scholar] [CrossRef]
  30. Adelmann, B.; Lutz, C.; Hellmann, R. Investigation on shear and tensile strength of laser welded electrical sheet stacks. In Proceedings of the 2018 IEEE 14th International Conference on Automation Science and Engineering (CASE), Munich, Germany, 20–24 August 2018; pp. 565–569. [Google Scholar]
  31. Arntz, D.; Petring, D.; Stoyanov, S.; Quiring, N.; Poprawe, R. Quantitative study of melt flow dynamics inside laser cutting kerfs by in-situ high-speed video-diagnostics. Procedia CIRP 2018, 74, 640–644. [Google Scholar] [CrossRef]
  32. Tenner, F.; Klämpfl, F.; Schmidt, M. How fast is fast enough in the monitoring and control of laser welding? In Proceedings of the Lasers in Manufacturing Conference, Munich, Germany, 22–25 June 2015. [Google Scholar]
  33. Keshari, R.; Vatsa, M.; Singh, R.; Noore, A. Learning Structure and Strength of CNN Filters for Small Sample Size Training. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018. [Google Scholar]
  34. Chen, G.; Han, T.X.; He, Z.; Kays, R.; Forrester, T. Deep convolutional neural network based species recognition for wild animal monitoring. In Proceedings of the 2014 IEEE International Conference on Image Processing (ICIP), Paris, France, 27–30 October 2014; pp. 858–862. [Google Scholar]
  35. Dong, Z.; Wu, Y.; Pei, M.; Jia, Y. Vehicle Type Classification Using a Semisupervised Convolutional Neural Network. IEEE Trans. Intell. Transp. Syst. 2015, 16, 2247–2256. [Google Scholar] [CrossRef]
  36. Zhang, X.; Zhou, X.; Lin, M.; Sun, J. ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices. July 2017. Available online: http://arxiv.org/pdf/1707.01083v2 (accessed on 26 August 2021).
  37. Iandola, F.N.; Han, S.; Moskewicz, M.W.; Ashraf, K.; Dally, W.J.; Keutzer, K. SqueezeNet: AlexNet-Level Accuracy with 50x Fewer Parameters and <0.5MB Model Size. February 2016. Available online: http://arxiv.org/pdf/1602.07360v4 (accessed on 26 August 2021).
  38. Bello, I.; Zoph, B.; Vasudevan, V.; Le, Q.V. Neural Optimizer Search with Reinforcement Learning. September 2017. Available online: http://arxiv.org/pdf/1709.07417v2 (accessed on 26 August 2021).
  39. An, S.; Lee, M.; Park, S.; Yang, H.; So, J. An Ensemble of Simple Convolutional Neural Network Models for MNIST Digit Recognition. arXiv 2020, arXiv:2008.10400. [Google Scholar]
Figure 1. Optical setup of the cutting head (left) and image of the system (right).
Figure 2. Images of the top and bottom side of laser cuts with and without cut errors, taken with an optical microscope after laser cutting.
Figure 3. Examples of camera images of the three cut categories, taken during laser cutting with the high-speed camera.
Figure 4. Design of the two neural networks.
Figure 5. Workflow diagram.
Figure 6. Training accuracy and test accuracy of a convolutional neural network with 16 filters.
Figure 7. Accuracy of the basic neural network as a function of nodes per fully connected layer.
Figure 8. Test accuracy for the convolutional neural network as a function of the number of filters.
Figure 9. Comparison of the test accuracy between interruptions and burr formations.
Figure 10. Measured image class and prediction by the neural network.
Figure 11. Two examples of cuts misclassified as burr.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
