# Dynamic Monitoring of Grinding Circuits by Use of Global Recurrence Plots and Convolutional Neural Networks


## Abstract


## 1. Introduction

## 2. Convolutional Neural Networks

## 3. Dynamic Process Monitoring Methodology

#### 3.1. Dynamic Process Monitoring Framework

The CNN-based monitoring methods are compared with a classical process monitoring scheme based on principal component analysis (PCA). The three methods considered in this investigation are summarized in Figure 3. Methods I–III are described in more detail in Section 3.2, Section 3.3 and Section 3.4, respectively.

The monitoring performance of the PCA models is validated using the 95% confidence limits of the ${T}^{2}$ and Q statistics computed on the NOC data. These confidence limits are used as control limits on control charts, and an alarm is raised once either limit is exceeded, indicating a fault.
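The alarm logic can be sketched with a small NumPy example. The function names are ours, and the use of empirical NOC quantiles for the 95% limits (rather than F- or chi-squared-based limits) is an illustrative assumption:

```python
import numpy as np

def pca_scores(Z, P, lam):
    """Hotelling's T^2 and Q (SPE) statistics for standardized data Z,
    given PCA loadings P and retained eigenvalues lam."""
    T = Z @ P                              # scores in the retained subspace
    t2 = np.sum(T**2 / lam, axis=1)        # Hotelling's T^2
    resid = Z - T @ P.T                    # variation not captured by the model
    q = np.sum(resid**2, axis=1)           # Q statistic (SPE)
    return t2, q

def fit_pca_monitor(X_noc, n_components, alpha=0.95):
    """Fit a PCA model on NOC data and derive empirical alpha-level
    control limits for T^2 and Q from the NOC statistics themselves."""
    mu, sd = X_noc.mean(axis=0), X_noc.std(axis=0)
    Z = (X_noc - mu) / sd
    _, s, Vt = np.linalg.svd(Z, full_matrices=False)
    P = Vt[:n_components].T                        # loadings
    lam = s[:n_components] ** 2 / (len(Z) - 1)     # retained eigenvalues
    t2, q = pca_scores(Z, P, lam)
    return dict(mu=mu, sd=sd, P=P, lam=lam,
                t2_lim=np.quantile(t2, alpha), q_lim=np.quantile(q, alpha))
```

New observations are standardized with the NOC mean and standard deviation, scored with `pca_scores`, and an alarm is raised whenever `t2 > t2_lim` or `q > q_lim`.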

#### 3.2. Method I: CNN Feature Extraction Based on Transfer Learning

#### 3.3. Method II: CNN Feature Extraction Based on Contrast-Based Learning

- (a) Generate a sorted copy and a random permutation of the time series $X$, called $\tilde{X}$ and ${S}^{\left(0\right)}$, respectively.
- (b) Calculate the Fourier amplitudes of $X$ by the Discrete Fourier Transform (DFT):$${\left|{A}_{k}\right|}^{2}={\left|\frac{1}{\sqrt{N}}{\displaystyle \sum}_{n=1}^{N}{X}_{n}{e}^{-\frac{i2\pi}{N}kn}\right|}^{2}$$
- (c) The following steps are iterated until convergence:
    - (i) Transform ${S}^{\left(i\right)}$ to the Fourier domain using the DFT:$$\mathcal{F}{\left({S}^{\left(i\right)}\right)}_{k}=\frac{1}{\sqrt{N}}{\displaystyle \sum}_{n=1}^{N}{S}_{n}^{\left(i\right)}{e}^{-\frac{i2\pi}{N}kn}$$
    - (ii) Generate $\widehat{\mathcal{F}}\left({S}^{\left(i\right)}\right)$ by replacing the Fourier amplitudes of $\mathcal{F}\left({S}^{\left(i\right)}\right)$ with the original amplitudes $\left|{A}_{k}\right|$, while keeping the complex phases. This step restores the original power spectrum at each iteration.
    - (iii) Take the inverse DFT back to the time domain:$${\tilde{S}}^{\left(i\right)}=\frac{1}{\sqrt{N}}{\displaystyle \sum}_{k=1}^{N}\widehat{\mathcal{F}}{\left({S}^{\left(i\right)}\right)}_{k}{e}^{\frac{i2\pi}{N}kn}$$
    - (iv) Generate ${S}^{\left(i+1\right)}$ by ranking the values of ${\tilde{S}}^{\left(i\right)}$ in ascending order and replacing them with the values of $\tilde{X}$ having the same ranking, so that the amplitude distribution of the original series is retained.
    - (v) Convergence is achieved once the rank order of ${\tilde{S}}^{\left(i\right)}$ no longer changes between iterations, and ${\tilde{S}}^{\left(i\right)}$ is returned as the surrogate time series.
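The steps above can be implemented compactly with FFTs. A sketch for a real-valued, evenly sampled series (the function name is ours):

```python
import numpy as np

def iaaft_surrogate(x, max_iter=200, seed=None):
    """IAAFT surrogate of a real-valued series: preserves the power
    spectrum and the amplitude distribution of x while randomizing
    nonlinear temporal structure (Schreiber & Schmitz, 1996)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    x_sorted = np.sort(x)                    # (a) sorted copy
    s = rng.permutation(x)                   # (a) random permutation S(0)
    target_amps = np.abs(np.fft.rfft(x))     # (b) original amplitudes |A_k|
    prev_ranks = None
    for _ in range(max_iter):
        # (i)-(ii): impose the original amplitudes, keep the current phases
        phases = np.angle(np.fft.rfft(s))
        # (iii): inverse DFT back to the time domain
        s = np.fft.irfft(target_amps * np.exp(1j * phases), n=len(x))
        # (iv): rank-order remap onto the original distribution
        ranks = np.argsort(np.argsort(s))
        s = x_sorted[ranks]
        if prev_ranks is not None and np.array_equal(ranks, prev_ranks):
            break                            # (v): rank order has stabilized
        prev_ranks = ranks
    return s
```

The returned surrogate has exactly the same amplitude distribution as the original and a closely matched power spectrum, which is what makes it a suitable "contrast" class for training the CNN in Method II.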

#### 3.4. Method III: Dynamic Principal Component Analysis
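DPCA applies ordinary PCA to a data matrix augmented with lagged copies of each variable, so that linear auto- and cross-correlations are captured by the model (Ku et al. [19]). A minimal sketch of the lag-matrix construction (the function name and API are illustrative, not from the paper):

```python
import numpy as np

def lag_embed(X, n_lags, lag=1):
    """DPCA data matrix: augment each observation of X (N x m) with
    n_lags earlier observations spaced `lag` samples apart, giving a
    matrix of shape (N - n_lags*lag, m*(n_lags + 1))."""
    X = np.asarray(X, dtype=float)
    if X.ndim == 1:
        X = X[:, None]                       # univariate -> column vector
    N = X.shape[0]
    start = n_lags * lag
    blocks = [X[start - j * lag: N - j * lag] for j in range(n_lags + 1)]
    # columns are ordered [x_t, x_{t-lag}, ..., x_{t-n_lags*lag}]
    return np.hstack(blocks)
```

Ordinary PCA with ${T}^{2}$ and Q monitoring is then applied to this augmented matrix; the embedding dimensions and lags reported in Section 4 correspond to the number of stacked copies and their spacing.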

## 4. Case Studies

#### 4.1. Simulated Sinusoidal Dataset

#### 4.1.1. Dataset Description

#### 4.1.2. Method I: CNN Feature Extraction Based on Transfer Learning

#### 4.1.3. Method II: CNN Feature Extraction Based on Contrast-Based Learning

#### 4.1.4. Method III: Dynamic Principal Component Analysis

#### 4.2. Monitoring the Power Draw of an Autogenous Milling Circuit

#### 4.2.1. Method I: CNN Feature Extraction Based on Transfer Learning

#### 4.2.2. Method II: CNN Feature Extraction Based on Contrast-Based Learning

#### 4.2.3. Method III: Dynamic Principal Component Analysis

#### 4.3. Monitoring the Operational State of an Autogenous Milling Circuit

#### 4.3.1. Method I: CNN Feature Extraction Based on Transfer Learning

#### 4.3.2. Method II: CNN Feature Extraction Based on Contrast-Based Learning

#### 4.3.3. Method III: Dynamic Principal Component Analysis

## 5. Discussion

## 6. Conclusions

## Author Contributions

## Funding

## Conflicts of Interest

## References

- Wei, D.; Craig, I.K. Grinding mill circuits—A survey of control and economic concerns. Int. J. Miner. Process.
**2009**, 90, 56–66. [Google Scholar] [CrossRef] - Muller, B.; De Vaal, P.L. Development of a model predictive controller for a milling circuit. J. South. Afr. Inst. Min. Metall.
**2000**, 100, 449–453. [Google Scholar] - Chen, X.S.; Zhai, J.Y.; Li, S.H.; Li, Q. Application of model predictive control in ball mill grinding circuit. Miner. Eng.
**2007**, 20, 1099–1108. [Google Scholar] [CrossRef] - Botha, S.; le Roux, J.D.; Craig, I.K. Hybrid non-linear model predictive control of a run-of-mine ore grinding mill circuit. Miner. Eng.
**2018**, 123, 49–62. [Google Scholar] [CrossRef] [Green Version] - Chen, X.; Zhai, J.; Li, Q.; Fei, S. Fuzzy logic based on-line efficiency optimization control of a ball mill grinding circuit. In Proceedings of the Fourth International Conference on Fuzzy Systems and Knowledge Discovery, Haikou, China, 24–27 August 2007. [Google Scholar] [CrossRef]
- Gomez, A.; Aracena, C.; Cornejo, F.; Festa, A.; Vasquez, A. Rule and fuzzy-logic based expert control of Barrick Lagunas Norte mine. Automining 2010. In Proceedings of the 2nd International Congress on Automation in the Mining Industry, Santiago, Chile, 10–12 November 2010. [Google Scholar]
- Van Drunick, W.I.; Penny, B. Expert mill control at AngloGold Ashanti. J. South. Afr. Inst. Min. Metall.
**2005**, 105, 497–506. [Google Scholar] - Inapakurthi, R.K.; Miriyala, S.S.; Mitra, K. Recurrent neural networks based modelling of industrial grinding operation. Chem. Eng. Sci.
**2020**, 219, 115585. [Google Scholar] [CrossRef] - Aldrich, C.; Burchell, J.J.; De, J.W.; Yzelle, C. Visualization of the controller states of an autogenous mill from time series data. Miner. Eng.
**2014**, 56, 1–9. [Google Scholar] [CrossRef] - Chen, X.S.; Li, Q.; Fei, S.-M. Supervisory expert control for ball mill grinding circuits. Expert Syst. Appl.
**2008**, 34, 1877–1885. [Google Scholar] [CrossRef] - Chen, X.; Li, S.; Zhai, J.; Li, Q. Expert system based adaptive dynamic matrix control for ball mill grinding circuit. Expert Syst. Appl.
**2009**, 36, 716–723. [Google Scholar] [CrossRef] - Groenewald, J.D.V.; Coetzer, L.P.; Aldrich, C. Statistical monitoring of a grinding circuit: An industrial case study. Miner. Eng.
**2006**, 19, 1138–1148. [Google Scholar] [CrossRef] - Haasbroek, A.L.; Barnard, J.P.; Auret, L. Performance Audit of a Semi-autogenous Grinding Mill Circuit. IFAC Proc. Vol.
**2014**, 47, 9798–9803. [Google Scholar] [CrossRef] - Wakefield, B.J.; Lindner, B.S.; McCoy, J.T.; Auret, L. Monitoring of a simulated milling circuit: Fault diagnosis and economic impact. Miner. Eng.
**2018**, 120, 132–151. [Google Scholar] [CrossRef] - Pekpe, K.M.; Mourot, G.; Ragot, J. Subspace method for sensor fault detection and isolation-Application to grinding circuit monitoring. IFAC Proc. Vol.
**2004**, 37, 47–52. [Google Scholar] [CrossRef] - Zeng, Y.; Forssberg, E. Monitoring grinding parameters by signal measurements for an industrial ball mill. Int. J. Miner. Process.
**1993**, 40, 1–16. [Google Scholar] [CrossRef] - Aldrich, C.; Theron, D.A. Acoustic estimation of the particle size distributions of sulphide ores in a laboratory ball mill. J. South. Afr. Inst. Min. Metall.
**2000**, 100, 243–248. [Google Scholar] - Tang, J.; Qiao, J.; Wu, Z.W.; Chai, T.; Zhang, J.; Yu, W. Vibration and acoustic frequency spectra for industrial process modeling using selective fusion multi-condition samples and multi-source features. Mech. Syst. Signal Process.
**2018**, 99, 142–168. [Google Scholar] [CrossRef] - Olivier, L.E.; Maritz, M.G.; Craig, I.K. Deep Convolutional Neural Network for Mill Feed Size Characterization. IFAC-PapersOnLine
**2019**, 52, 105–110. [Google Scholar] [CrossRef] - Ku, W.; Storer, R.H.; Georgakis, C. Disturbance detection and isolation by dynamic principal component analysis. Chemom. Intell. Lab.
**1995**, 30, 179–196. [Google Scholar] [CrossRef] - Aldrich, C.; Auret, L. Unsupervised Process Monitoring and Fault Diagnosis with Machine Learning Methods; Springer: Berlin/Heidelberg, Germany, 2013. [Google Scholar]
- Zhang, Q.; Li, P.; Lang, X.; Miao, A. Improved dynamic kernel principal component analysis for fault detection. Meas. J. Int. Meas. Confed.
**2020**, 158, 107738. [Google Scholar] [CrossRef] - Lee, J.M.; Yoo, C.K.; Lee, I.B. Statistical monitoring of dynamic processes based on dynamic independent component analysis. Chem. Eng. Sci.
**2004**, 59, 2995–3006. [Google Scholar] [CrossRef] - Huang, J.; Yan, X. Dynamic process fault detection and diagnosis based on dynamic principal component analysis, dynamic independent component analysis and Bayesian inference. Chemom. Intell. Lab.
**2015**, 148, 115–127. [Google Scholar] [CrossRef] - Rashid, M.M.; Yu, J. A new dissimilarity method integrating multidimensional mutual information and independent component analysis for non-Gaussian dynamic process monitoring. Chemom. Intell. Lab.
**2012**, 115, 44–58. [Google Scholar] [CrossRef] - Pilario, K.E.S.; Cao, Y.; Shafiee, M. Mixed kernel canonical variate dissimilarity analysis for incipient fault monitoring in nonlinear dynamic processes. Comput. Chem. Eng.
**2019**, 123, 143–154. [Google Scholar] [CrossRef] [Green Version] - Huang, J.; Ersoy, O.K.; Yan, X. Fault detection in dynamic plant-wide process by multi-block slow feature analysis and support vector data description. ISA Trans.
**2019**, 85, 119–128. [Google Scholar] [CrossRef] - Song, B.; Ma, Y.; Shi, H. Multimode process monitoring using improved dynamic neighborhood preserving embedding. Chemom. Intell. Lab.
**2014**, 135, 17–30. [Google Scholar] [CrossRef] - Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning; MIT Press: Cambridge, MA, USA, 2016; Available online: http://www.deeplearningbook.org (accessed on 10 September 2020).
- Russakovsky, O.; Deng, J.; Su, H.; Krause, J.; Satheesh, S.; Ma, S.; Huang, Z.; Karpathy, A.; Khosla, A.; Bernstein, M.; et al. ImageNet Large Scale Visual Recognition Challenge. Int. J. Comput. Vis.
**2015**, 115, 211–252. [Google Scholar] [CrossRef] [Green Version] - Zeiler, M.D.; Fergus, R. Visualizing and understanding convolutional networks. In Lecture Notes in Computer Science 8689, Part 1, Proceedings of the 13th European Conference on Computer Vision, Zurich, Switzerland, 6–12 September 2014; Springer: Berlin, Germany, 2014. [Google Scholar] [CrossRef] [Green Version]
- Fu, Y.; Aldrich, C. Froth image analysis by use of transfer learning and convolutional neural networks. Miner. Eng.
**2018**, 115, 68–78. [Google Scholar] [CrossRef] - Fu, Y.; Aldrich, C. Flotation froth image recognition with convolutional neural networks. Miner. Eng.
**2019**, 132, 183–190. [Google Scholar] [CrossRef] - Bardinas, J.; Aldrich, C.; Napier, L. Predicting the Operating States of Grinding Circuits by Use of Recurrence Texture Analysis of Time Series Data. Processes
**2018**, 6, 17. [Google Scholar] [CrossRef] [Green Version] - Fu, Y.; Aldrich, C. Quantitative Ore Texture Analysis with Convolutional Neural Networks. IFAC-PapersOnLine
**2019**, 52, 99–104. [Google Scholar] [CrossRef] - Liu, X.; Zhang, Y.; Jing, H.; Wang, L.; Zhao, S. Ore image segmentation method using U-Net and Res_Unet convolutional networks. RSC Adv.
**2020**, 10, 9396–9406. [Google Scholar] [CrossRef] [Green Version] - Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. In Proceedings of the 3rd International Conference on Learning Representations, San Diego, CA, USA, 7–9 May 2015. [Google Scholar]
- Eckmann, J.P.; Oliffson Kamphorst, O.; Ruelle, D. Recurrence plots of dynamical systems. EPL
**1987**, 4, 973–977. [Google Scholar] [CrossRef] [Green Version] - Marwan, N.; Carmen Romano, M.; Thiel, M.; Kurths, J. Recurrence plots for the analysis of complex systems. Phys. Rep.
**2007**, 438, 237–329. [Google Scholar] [CrossRef] - Webber, C.L.; Norbert Marwan, J. Understanding Complex Systems Recurrence Quantification Analysis Theory and Best Practices; Springer: Berlin/Heidelberg, Germany, 2015. [Google Scholar]
- Chollet, F. Keras. 2015. Available online: https://keras.io (accessed on 1 June 2020).
- Abadi, M.; Barham, P.; Chen, J.; Chen, Z.; Davis, A.; Dean, J.; Devin, M.; Ghemawat, S.; Irving, G.; Isard, M.; et al. TensorFlow: A system for large-scale machine learning. In Proceedings of the 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), Savannah, GA, USA, 2–4 November 2016. [Google Scholar]
- Schreiber, T.; Schmitz, A. Improved surrogate data for nonlinearity tests. Phys. Rev. Lett.
**1996**, 77, 635–638. [Google Scholar] [CrossRef] [Green Version] - Lancaster, G.; Iatsenko, D.; Pidde, A.; Ticcinelli, V.; Stefanovska, A. Surrogate data for hypothesis testing of physical systems. Phys. Rep.
**2018**, 748, 1–60. [Google Scholar] [CrossRef] - Li, R.; Rong, G. Fault isolation by partial dynamic principal component analysis in dynamic process. Chin. J. Chem. Eng.
**2006**, 14, 486–493. [Google Scholar] [CrossRef] - Russell, E.L.; Chiang, L.H.; Braatz, R.D. Fault detection in industrial processes using canonical variate analysis and dynamic principal component analysis. Chemom. Intell. Lab.
**2000**, 51, 81–93. [Google Scholar] [CrossRef] - Choi, S.W.; Lee, I.B. Nonlinear dynamic process monitoring based on dynamic kernel PCA. Chem. Eng. Sci.
**2004**, 59, 5897–5908. [Google Scholar] [CrossRef] - Jia, M.; Chu, F.; Wang, F.; Wang, W. On-line batch process monitoring using batch dynamic kernel principal component analysis. Chemom. Intell. Lab.
**2010**, 101, 110–122. [Google Scholar] [CrossRef] - Vanhatalo, E.; Kulahci, M.; Bergquist, B. On the structure of dynamic principal component analysis used in statistical process monitoring. Chemom. Intell. Lab.
**2017**, 167, 1–11. [Google Scholar] [CrossRef] - Rato, T.J.; Reis, M.S. Defining the structure of DPCA models and its impact on process monitoring and prediction activities. Chemom. Intell. Lab.
**2013**, 125, 74. [Google Scholar] [CrossRef] - Kingma, D.P.; Ba, J.L. Adam: A method for stochastic optimization. In Proceedings of the 3rd International Conference for Learning Representations, San Diego, CA, USA, 7–9 May 2015. [Google Scholar]
- Rhodes, C.; Morari, M. The false nearest neighbors algorithm: An overview. Comput. Chem. Eng.
**1997**, 21. [Google Scholar] [CrossRef] - Gallager, R. Information Theory and Reliable Communication; John Wiley and Sons: Hoboken, NJ, USA, 1968. [Google Scholar]
- Chen, W.; Shi, K. A deep learning framework for time series classification using Relative Position Matrix and Convolutional Neural Network. Neurocomputing
**2019**, 359, 384–394. [Google Scholar] [CrossRef] - Henry, Y.Y.S.; Aldrich, C.; Zabiri, H. Detection and severity identification of control valve stiction in industrial loops using integrated partially retrained CNN-PCA frameworks. Chemom. Intell. Lab.
**2020**, 206, 104143. [Google Scholar] [CrossRef] - Garcia, G.R.; Michau, G.; Ducoffe, M.; Gupta, J.S.; Fink, O. Time Series to Images: Monitoring the Condition of Industrial Assets with Deep Learning Image Processing Algorithms. arXiv
**2020**, arXiv:2005.07031. [Google Scholar]

**Figure 1.** VGG19 network architecture adapted for transfer learning. Layers include convolutional (white), pooling (blue) and an average pooling layer (red). Classification layers were replaced with an average pooling layer to output a flat vector of extracted features.

**Figure 2.** General approach to feature extraction from time series data with CNNs, showing normal operating condition (NOC) and validation (VAL) data used for training, as well as independent test data (TEST) not used in the construction of the model.

**Figure 3.** Three approaches to dynamic process monitoring: Method I is based on the use of a principal component model derived from the features extracted from the time series with convolutional neural networks and transfer learning; Method II is the same as Method I, except for extraction of enhanced features; and Method III is based on dynamic principal component analysis.

**Figure 5.** Generation of a distance matrix associated with a window sliding across a univariate process signal.
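The construction in Figure 5 amounts to computing an unthresholded (global) recurrence plot for each window. A univariate sketch using absolute differences (the multivariate case would use Euclidean distances between observation vectors; function names are ours):

```python
import numpy as np

def distance_matrix(window):
    """Unthresholded (global) recurrence plot of one window:
    D[i, j] = |x_i - x_j| for a univariate signal window."""
    w = np.asarray(window, dtype=float)
    return np.abs(w[:, None] - w[None, :])

def sliding_grps(x, window_len, step=1):
    """Distance matrices for a window sliding across the signal."""
    return [distance_matrix(x[i:i + window_len])
            for i in range(0, len(x) - window_len + 1, step)]
```

Each matrix is symmetric with a zero diagonal and can be rendered as an image (as in Figure 9) for the CNN to consume.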

**Figure 9.** GRPs for CNN feature extraction through transfer learning: (**a**) NOC data (occurring before t = 1000); and (**b**) fault data (occurring after t = 1000 in Figure 8). The color scale defines the Euclidean distances represented by different RGB pixel values.

**Figure 10.** Cumulative fraction variance explained by each principal component from CNN feature extraction with transfer learning, for a window length of 150 time steps.

**Figure 11.** ${T}^{2}$ (**top**) and Q (**bottom**) process monitoring charts of the simulated dataset for a window length of 150 time steps using Method I. NOC data are shown in blue (up to index 749), validation data in green (indices 750–999) and test (fault) data in red (indices 1000–2000).

**Figure 13.** Distribution (**a**) and autocorrelation function (**b**) of the simulated data and the distribution (**c**) and autocorrelation function (**d**) of the IAAFT surrogate data.

**Figure 14.** Distance plots for training the CNN in Method II, with a window length of 150 time steps: (**a**) original time series; and (**b**) surrogate time series.

**Figure 15.** Cumulative fraction variance explained by each principal component from CNN feature extraction with contrast-based learning (Method II), for a window length of 150 time steps.

**Figure 16.** ${T}^{2}$ (**top**) and Q (**bottom**) process monitoring charts of the simulated dataset for a window length of 150 time steps using Method II. NOC data are shown in blue (up to index 749), validation data in green (indices 750–999) and test (fault) data in red (indices 1000–2000).

**Figure 17.** ${T}^{2}$ (**top**) and Q (**bottom**) process monitoring charts of the simulated dataset using Method III, with an embedding dimension of four and an embedding lag of 30. NOC data are shown in blue (indices 0–600), validation data in green (indices 601–800) and test (fault) data in red (indices 800–1720).

**Figure 18.** AG mill power draw gradually transforming from non-fault data (blue) into white noise through the fault region (red).

**Figure 19.** ${T}^{2}$ (**top**) and Q (**bottom**) process monitoring charts of AG mill power draw for a window length of 200 time steps using Method I. NOC data are shown in blue and test (fault) data in red.

**Figure 20.** ${T}^{2}$ (**top**) and Q (**bottom**) process monitoring charts of AG mill power draw for a window length of 200 time steps using Method II. Training non-fault data are shown in blue, untrained non-fault data in green and test (fault) data in red.

**Figure 21.** ${T}^{2}$ (**top**) and Q (**bottom**) process monitoring charts of AG mill power draw using Method III, with an embedding dimension of four and an embedding lag of 10. NOC data are shown in blue and test (fault) data in red.

**Figure 23.** Scaled multivariate grinding circuit data. Circuit operational state changes from NOC (blue) to overloaded mill (red), as logged by the expert mill controller.

**Figure 24.** Distance plots for CNN feature extraction, with a window length of 100 time steps: (**a**) NOC state; and (**b**) overloaded mill state.

**Figure 25.** ${T}^{2}$ (**top**) and Q (**bottom**) process monitoring charts of multivariate grinding circuit data for a window length of 200 time steps using Method I. NOC (trained) data are shown in blue and test (fault) data in red.

**Figure 26.** ${T}^{2}$ (**top**) and Q (**bottom**) process monitoring charts of multivariate grinding circuit data for a window length of 100 time steps using Method II. Training non-fault data are shown in blue, untrained non-fault data in green and test (fault) data in red.

**Figure 27.** ${T}^{2}$ (**top**) and Q (**bottom**) process monitoring charts of multivariate grinding circuit data using Method III. NOC data are shown in blue and test (fault) data in red.

Layers | Output Shape | Description |
---|---|---|
VGG19 pretrained feature extraction layers | $\left(w\times w\times 512\right)$ | Feature map dimension, $w$, dependent on window length |
Global average pooling | $\left(1\times 512\right)$ | Calculates average value over each $w\times w$ feature map |
Fully connected | $\left(1\times 128\right)$ | 128 nodes with Rectified Linear Unit (ReLU) nonlinearity |
Output node | $\left(1\right)$ | Single output node with sigmoidal nonlinearity |
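As the architecture above indicates, global average pooling collapses each of the 512 feature maps of size $w\times w$ to its spatial mean, so the extracted feature vector has 512 elements regardless of window length. A NumPy sketch of the pooling operation (mirroring Keras' `GlobalAveragePooling2D`; the function name is ours):

```python
import numpy as np

def global_average_pool(feature_maps):
    """Collapse a (w, w, c) stack of feature maps to a flat (c,)
    vector by averaging each map over its spatial dimensions."""
    return np.asarray(feature_maps, dtype=float).mean(axis=(0, 1))
```

This window-length invariance is what allows the same pretrained feature extractor to be reused across the different window lengths compared in Section 4.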

Window Length | FAR ${T}^{2}$ | FAR Q (SPE) | TAR ${T}^{2}$ | TAR Q (SPE) | DD ${T}^{2}$ | DD Q (SPE) |
---|---|---|---|---|---|---|
50 | 0.063 | 0.035 | 0.062 | 0.076 | 15 | 16 |
100 | 0.044 | 0.052 | 0.288 | 0.280 | 10 | 10 |
150 | 0.047 | 0.031 | 0.481 | 0.487 | 24 | 24 |
200 | 0.033 | 0.042 | 0.537 | 0.623 | 23 | 20 |

Parameter | Pretrain Newly-Added Classification Section | Fine-Tune Classification and Feature Extraction Sections |
---|---|---|
Loss function | Binary cross-entropy | Binary cross-entropy |
Optimizer | Adam [51] | Adam [51] |
Learning rate | 0.01 | 0.0001 |
Training epochs | 20 | 100 |

Window Length | FAR ${T}^{2}$ | FAR Q (SPE) | TAR ${T}^{2}$ | TAR Q (SPE) | DD ${T}^{2}$ | DD Q (SPE) |
---|---|---|---|---|---|---|
50 | 0.050 | 0.050 | 0.109 | 0.142 | 13 | 13 |
100 | 0.010 | 0.003 | 0.322 | 0.392 | 15 | 14 |
150 | 0.013 | 0.040 | 0.849 | 0.880 | 41 | 31 |
200 | 0.113 | 0.123 | 0.415 | 0.341 | 3 | 9 |

Embedding Lag | FAR ${T}^{2}$ | FAR Q (SPE) | TAR ${T}^{2}$ | TAR Q (SPE) | DD ${T}^{2}$ | DD Q (SPE) |
---|---|---|---|---|---|---|
1 | 0.081 | 0.040 | 0.052 | 0.040 | 172 | - |
5 | 0.063 | 0.053 | 0.061 | 0.039 | 407 | - |
10 | 0.070 | 0.067 | 0.107 | 0.009 | 407 | - |
30 | 0.033 | 0.076 | 0.154 | 0.580 | 292 | 3 |
50 | 0.133 | 0.047 | 0.104 | 0.004 | 292 | - |

Window Length | TAR ${T}^{2}$ | TAR Q (SPE) | DD ${T}^{2}$ | DD Q (SPE) |
---|---|---|---|---|
50 | 0.061 | 0.067 | 78 | 79 |
100 | 0.163 | 0.173 | 20 | 20 |
150 | 0.350 | 0.392 | 50 | 50 |
200 | 0.758 | 0.777 | 3 | 3 |

Window Length | TAR ${T}^{2}$ | TAR Q (SPE) | DD ${T}^{2}$ | DD Q (SPE) |
---|---|---|---|---|
50 | 0.026 | 0.050 | 431 | 41 |
100 | 0.737 | 0.760 | 3 | 3 |
150 | 0.515 | 0.801 | 170 | 3 |
200 | 1.000 | 0.996 | 3 | 3 |

**Table 8.** Monitoring results on AG mill power draw using Method III with embedding dimension of four.

Embedding Lag | TAR ${T}^{2}$ | TAR Q (SPE) | DD ${T}^{2}$ | DD Q (SPE) |
---|---|---|---|---|
2 | 0.0573 | 0.07645 | - | 776 |
5 | 0.0822 | 0.0792 | 797 | - |
10 | 0.0856 | 0.0907 | 791 | 805 |
30 | 0.0626 | 0.0484 | - | 819 |
50 | 0.0718 | 0.0588 | 803 | - |

Window Length | TAR ${T}^{2}$ | TAR Q (SPE) | DD ${T}^{2}$ | DD Q (SPE) |
---|---|---|---|---|
50 | 0.723 | 0.753 | 3 | 3 |
100 | 0.965 | 0.975 | 3 | 3 |
150 | 0.820 | 0.870 | 62 | 34 |
200 | 0.900 | 0.937 | 3 | 3 |

Window Length | TAR ${T}^{2}$ | TAR Q (SPE) | DD ${T}^{2}$ | DD Q (SPE) |
---|---|---|---|---|
50 | 0.610 | 0.797 | 3 | 3 |
100 | 1.000 | 0.990 | 3 | 3 |
150 | 0.553 | 0.497 | 3 | 3 |
200 | 1.000 | 1.000 | 3 | 3 |

Embedding Dimension/Lag | TAR ${T}^{2}$ | TAR Q (SPE) | DD ${T}^{2}$ | DD Q (SPE) |
---|---|---|---|---|
2/4 | 0.993 | 1.000 | 6 | 3 |

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Olivier, J.; Aldrich, C.
Dynamic Monitoring of Grinding Circuits by Use of Global Recurrence Plots and Convolutional Neural Networks. *Minerals* **2020**, *10*, 958.
https://doi.org/10.3390/min10110958
