# Research on a Tool Wear Monitoring Algorithm Based on Residual Dense Network


## Abstract


## 1. Introduction

## 2. The Detection Method of Tool Wear Value

#### 2.1. Signal Preprocessing

#### 2.2. Residual Network

#### 2.3. Tool Wear Detection Method

#### 2.3.1. Network Structure

1. Calculate the mean of all features in the mini-batch:
   $$\mu_B = \frac{1}{m}\sum_{i=1}^{m} x_i$$
2. Calculate the variance of all features:
   $$\sigma_B^2 = \frac{1}{m}\sum_{i=1}^{m} (x_i - \mu_B)^2$$
3. Normalize:
   $$\widehat{x}_i = \frac{x_i - \mu_B}{\sqrt{\epsilon + \sigma_B^2}}$$
4. Reconstruct the feature (scale and shift):
   $$y_i = \gamma \widehat{x}_i + \beta \equiv \mathrm{BN}_{\gamma,\beta}(x_i)$$
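The four batch-normalization steps above can be sketched in NumPy (the function name, shapes, and sample data are illustrative, not from the paper):

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Batch normalization over a mini-batch of m feature vectors.

    x: array of shape (m, n_features); gamma, beta: learnable
    scale/shift parameters of shape (n_features,).
    """
    mu = x.mean(axis=0)                      # (1) mini-batch mean
    var = x.var(axis=0)                      # (2) mini-batch variance
    x_hat = (x - mu) / np.sqrt(eps + var)    # (3) normalization
    return gamma * x_hat + beta              # (4) scale and shift

# Toy mini-batch with non-zero mean and large variance.
x = np.random.randn(8, 3) * 5.0 + 2.0
y = batch_norm(x, gamma=np.ones(3), beta=np.zeros(3))
# With gamma = 1 and beta = 0, each output feature has
# approximately zero mean and unit variance.
```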

The local residual $F_1(x)$ and the global residual $F_2(x)$ are calculated, from which the local feature $H_1$ and the global feature $H$ are obtained. $H_1$ represents the local feature from the third convolutional layer, as shown in Figure 4, and $H$ represents the global feature from the fifth convolutional layer.
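The equations defining $H_1$ and $H$ did not survive extraction. A plausible reconstruction, based on the standard identity-shortcut residual formulation of He et al. and offered here only as an assumption rather than the paper's exact equations, is:

$$H_1 = F_1(x) + x, \qquad H = F_2(H_1) + H_1$$

where the input $x$ is added back to the local residual, and the local feature $H_1$ is added back to the global residual.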

#### 2.3.2. Network Training

## 3. Experimental Results and Analysis

#### 3.1. Signal Processing Unit

#### 3.2. Evaluation Indicators

#### 3.3. Experimental Results of Network Structure Comparison

#### 3.3.1. Impact of BN and Dropout Strategies on Model Performance

#### 3.3.2. Impact of Network Layer Number on Model Performance

#### 3.4. Comparison Results of Deep Learning Models

#### 3.5. Comparison Results of Machine Learning Models

1. The wavelet threshold denoising method is applied to the acquired three-axis acceleration signal for noise reduction.
2. Time-domain, frequency-domain, and time-frequency-domain features are extracted; the specific extraction methods are shown in Table 4.
3. Pearson's correlation coefficient (PCC) is used to measure the correlation between each feature and the wear value; features with a correlation coefficient greater than 0.9 are selected, reducing the feature dimension.
4. The selected features are used as the inputs of the machine learning models.
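Step 3, the PCC-based feature selection, can be sketched as follows (the function name and toy data are illustrative, not the paper's):

```python
import numpy as np

def select_by_pcc(features, wear, threshold=0.9):
    """Keep feature columns whose absolute Pearson correlation
    with the wear value exceeds the threshold.

    features: (n_samples, n_features); wear: (n_samples,).
    Returns the indices of the selected columns.
    """
    selected = []
    for j in range(features.shape[1]):
        r = np.corrcoef(features[:, j], wear)[0, 1]
        if abs(r) > threshold:
            selected.append(j)
    return selected

# Toy data: column 0 tracks wear almost linearly, column 1 is noise.
rng = np.random.default_rng(0)
wear = np.linspace(0.0, 0.3, 50)
features = np.column_stack([
    wear * 2.0 + rng.normal(0, 0.001, 50),   # highly correlated
    rng.normal(0, 1.0, 50),                  # uncorrelated
])
print(select_by_pcc(features, wear))  # → [0]
```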

## 4. Conclusions

## Author Contributions

## Funding

## Conflicts of Interest

## References


| Layer Name | Output Feature Size | Quantity | RDN Network |
|---|---|---|---|
| Input | 3 × 5000 | 1 | — |
| Convolution | 3 × 5000 | 1 | Conv1D, 1; kernel size = 13; stride = 1 |
| RD Block 1 | 3 × 5000 | 2 | Conv1D, 2; kernel size = 3; stride = 1 |
| | | | Conv1D, 2; kernel size = 5; stride = 1 |
| | | | Conv1D, 2; kernel size = 3; stride = 1 |
| | | | Conv1D, 2; kernel size = 5; stride = 1 |
| | | | Conv1D, 2; kernel size = 3; stride = 1 |
| Transition Layer 1 | 3 × 2500 | 1 | Conv 1×1, 1; stride = 1 |
| | | | AvePooling1D, 3; stride = 2 |
| RD Block 2 | 3 × 157 | 4 | Conv1D, 4; kernel size = 3; stride = 1 |
| | | | Conv1D, 4; kernel size = 5; stride = 1 |
| | | | Conv1D, 4; kernel size = 3; stride = 2 |
| | | | Conv1D, 4; kernel size = 5; stride = 1 |
| | | | Conv1D, 4; kernel size = 3; stride = 1 |
| Transition Layer 2 | 3 × 78 | 1 | Conv 1×1, 1; stride = 1 |
| | | | AvePooling1D, 3; stride = 2 |
| RD Block 3 | 3 × 20 | 2 | Conv1D, 2; kernel size = 3; stride = 1 |
| | | | Conv1D, 2; kernel size = 5; stride = 1 |
| | | | Conv1D, 2; kernel size = 3; stride = 2 |
| | | | Conv1D, 2; kernel size = 5; stride = 1 |
| | | | Conv1D, 2; kernel size = 3; stride = 1 |
| Pooling layer | 3 × 10 | 1 | AvePooling1D, 3; stride = 2 |
| Fully connected layer | 1 | 2 | Dense, 1000, 1 |
| Output layer | 1 | 1 | Linear regression; loss function: mean square error |

| Spindle Speed | Feed Rate | Cutting Width | Cutting Depth | Cooling Condition | Processing Mode |
|---|---|---|---|---|---|
| 8000 RPM | 1000 mm/min | 0.5 mm | 1 mm | Dry cut | Face milling |

| Parameters/Indicators | RNN | LSTM | VGG-16 | ResNet-50 | RDN |
|---|---|---|---|---|---|
| Initial learning rate | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 |
| Epochs | 100 | 100 | 100 | 100 | 100 |
| Optimizer | Adam | Adam | Adam | Adam | Adam |
| RMSE | 18.54 | 15.56 | 13.07 | 11.14 | 9.73 |
| R² | 0.714 | 0.825 | 0.874 | 0.916 | 0.935 |
| MAPE | 14.92% | 12.78% | 10.65% | 8.87% | 7.24% |
| Single operation time (ms) | 45 | 32 | 65 | 145 | 112 |
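The RMSE, R², and MAPE indicators reported in the comparison tables can be computed as follows (the sample wear values are toy data, not the experimental results):

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean square error."""
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def r2(y_true, y_pred):
    """Coefficient of determination."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent."""
    return np.mean(np.abs((y_true - y_pred) / y_true)) * 100.0

# Toy wear values in micrometers (illustrative only).
y_true = np.array([100.0, 120.0, 150.0, 180.0])
y_pred = np.array([ 95.0, 125.0, 145.0, 185.0])
print(rmse(y_true, y_pred))            # → 5.0
print(round(mape(y_true, y_pred), 2))  # → 3.82
```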

| Feature Attribute | Transformation Mode | Feature Category |
|---|---|---|
| Time domain | — | Maximum, Mean, Root mean square, Variance, Standard deviation, Skewness, Kurtosis, Peak, Peak coefficient |
| Frequency domain | Fourier transform | Maximum, Mean, Variance, Skewness, Peak, Frequency band peak |
| Time-frequency domain | Wavelet decomposition | Node energy |

| Parameters/Indicators | BPNN | RBFN | SVR |
|---|---|---|---|
| Learning rate | 0.1 | 0.1 | 0.1 |
| Number of layers | 4 | 3 | — |
| Number of nodes per layer | 46, 100, 50, 1 | 46, 200, 1 | — |
| Iterations | 100 | 100 | 100 |
| RMSE | 15.73 | 17.95 | 19.41 |
| R² | 0.818 | 0.746 | 0.673 |
| MAPE | 12.84% | 14.54% | 15.46% |
| Single operation time (ms) | 28 | 15 | 18 |

© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Li, Y.; Xie, Q.; Huang, H.; Chen, Q.
Research on a Tool Wear Monitoring Algorithm Based on Residual Dense Network. *Symmetry* **2019**, *11*, 809.
https://doi.org/10.3390/sym11060809
