# An Algorithm for Finding Process Identification Intervals from Normal Operating Data

## Abstract


## 1. Introduction

#### Problem Formulation

A data set $Z^N = [z(1)^T, \cdots, z(N)^T]^T$ is available, where $z(k)$ is defined in Equation (1). The objective is to find intervals $[k_{init}, k_{end}]$ where the data in $Z^N$ may be suitable for identification of the process parameters. Remark: if the mode signal $m(k)$ is not available, the mode can usually be inferred from the behaviour of $r(k)$ and $u(k)$.

- **Minimal knowledge about the plant is required.** That is, no (or little) input is expected from the user.
- **The resulting algorithm should process the data quickly.** For example, a database containing a month of data from a large-scale plant operation should not take longer than a few minutes to process.
- **For each interval found, a numeric measure of its quality should be given.** This can be used by the user to select which intervals to use for identification.

**Assumption 1.1** (SISO). Since the purpose is PID tuning, it is assumed that only SISO control loops are to be estimated.

**Assumption 1.2** (Linear models). It is assumed that the process can be well described by a linear model $\mathcal{M}(\theta)$.

**Assumption 1.3** (Monotone step response). In the process industry, most uncontrolled processes are non-oscillatory, i.e., transfer functions with real-valued poles are sufficient.

Informative data typically arise in one of two situations:

- The process is operating in manual mode and the input signal u(k) is varied enough to excite the process.
- The controller is in automatic mode and there are enough changes in the setpoint r(k) to make identification possible.

## 2. Theoretical Guiding Principles and Preliminaries

Consider a true system $\mathcal{S}$, where y(k) is the output and u(k) is the input, which might be given in open loop or by a stabilizing controller K(q). The transfer functions $G_0(q)$, $H_0(q)$ and $K(q)$ are rational and proper, and $H_0(q)$ is minimum phase and normalized such that $H_0(\infty) = 1$. A parametric linear model structure $\mathcal{M}(\theta)$, with $\theta \in \mathbb{R}^n$ the vector of unknown parameters, is used to describe the system $\mathcal{S}$. The true system $\mathcal{S}$ is said to belong to the model set $\mathcal{M} \triangleq \{\mathcal{M}(\theta) \,|\, \theta \in D_\theta\}$ for some parameter set $D_\theta$ if there is $\theta' \in D_\theta$ such that $\mathcal{M}(\theta') = \mathcal{S}$. The equality here means that $G_0(z) = G(z, \theta')$ and $H_0(z) = H(z, \theta')$ for almost all $z$, i.e., the system and model are equivalent from an input-output point of view. The set defined by all $\theta'$ such that $\mathcal{S} = \mathcal{M}(\theta')$ is denoted $D(\mathcal{S}, \mathcal{M})$ and contains a unique element $\theta_0$, the "true parameters", if the model structure is globally identifiable; the concept of identifiability is reviewed in Section 2.2.

#### 2.1. Black-Box Modeling with Laguerre Models

The parameter vector is $\theta^T = [a_1, \cdots, a_n, b_1, \cdots, b_n]$ and its identification in a prediction error sense is very tractable, as will be discussed in Section 2.3. A continuous-time pole $p$ maps to the discrete-time pole $e^{pT}$, and thus, for a small sampling interval $T$, the approximation will require a large $n$. Similarly, for plants with an unknown delay $d$, $d/T$ additional coefficients are required for $B_n(q)$. This motivates the use of Laguerre models, where $L_i(q, \alpha)$ is the ith Laguerre filter.
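As an illustration, the Laguerre regressors $\ell_i(k) = L_i(q, \alpha)u(k)$ can be generated recursively. The sketch below (a plain state-update form assuming the standard discrete-time Laguerre bank, with our own function name) uses one low-pass first stage followed by a chain of all-pass sections:

```python
import numpy as np

def laguerre_regressors(u, alpha, nb):
    """Generate ell_i(k) = L_i(q, alpha) u(k), i = 1..nb, for the discrete
    Laguerre bank L_1 = sqrt(1 - a^2)/(q - a),
    L_i = L_{i-1} * (1 - a q)/(q - a). Returns an (N, nb) array."""
    u = np.asarray(u, dtype=float)
    N = len(u)
    ell = np.zeros((N, nb))
    gain = np.sqrt(1.0 - alpha**2)
    for k in range(1, N):
        # first stage: low-pass with pole alpha
        ell[k, 0] = alpha * ell[k - 1, 0] + gain * u[k - 1]
        # remaining stages: cascaded all-pass sections
        for i in range(1, nb):
            ell[k, i] = (alpha * ell[k - 1, i]
                         + ell[k - 1, i - 1] - alpha * ell[k, i - 1])
    return ell
```

For a unit step input, every stage settles to the common DC gain $\sqrt{1-\alpha^2}/(1-\alpha)$, since the all-pass sections have unit gain at $z = 1$.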

#### 2.2. Identifiability, Informative Data Sets and Persistence of Excitation

Identifiability of a model structure over the set $D_\theta$ is thus defined as whether distinct parameter values yield distinct predictors, i.e., whether $W(z, \theta_1) = W(z, \theta_2)$ implies $\theta_1 = \theta_2$ for any pair of predictors $W(z, \theta_1)$, $W(z, \theta_2)$ in the set.

**Definition 1** (Persistent excitation). A quasi-stationary regressor $\varphi(k)$ is persistently exciting (PE) if $\bar{E}[\varphi(k)\varphi(k)^T] > 0$.

**Definition 2** (Richness of a signal). A scalar quasi-stationary signal $u(k)$ is sufficiently rich of order $n$ (SRn) if $\varphi(k)^T = [u(k-1), \cdots, u(k-n)]$ is PE.

**Theorem 1** (ARX, open loop). Let the system operate in open loop; the data set is informative if and only if $u(k)$ is SR$n_b$.

**Theorem 2** (ARX, servo). Let $\max(n_x - n_a, n_y - n_b) < 0$; the data are informative for almost all SR$n_r$ reference signals $r(k)$ if and only if $n_r \geq \min(n_a - n_x, n_b - n_y)$.

**Theorem 3** (ARX, disturbance rejection). Let $r(k) \equiv 0$; then the data are informative if and only if $\max(n_x - n_a, n_y - n_b) \geq 0$.

Equivalently, a signal whose spectrum satisfies $\Phi_\varphi(\omega) > 0$ for almost all $\omega$ is persistently exciting. The next results follow for any linear plant [12]:

**Theorem 4**(Open loop). A persistently exciting input u(k) is informative enough for open loop experiments.

**Theorem 5**(Closed loop). A persistently exciting reference r(k) is informative enough for closed loop experiments.
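Definitions 1 and 2 suggest a direct numerical check: build the delay regressor $\varphi(k)$ and test whether its sample covariance is positive definite. The sketch below is illustrative (the function name, finite-sample estimate, and tolerance are ours, not from the paper):

```python
import numpy as np

def is_sufficiently_rich(u, n, tol=1e-8):
    """Check Definition 2 numerically: u is SRn if the sample covariance
    of phi(k) = [u(k-1), ..., u(k-n)] is positive definite (Definition 1)."""
    u = np.asarray(u, dtype=float)
    N = len(u)
    # one row per regressor phi(k), k = n..N-1
    Phi = np.array([u[k - n:k][::-1] for k in range(n, N)])
    Rbar = Phi.T @ Phi / len(Phi)        # sample estimate of E[phi phi^T]
    return float(np.linalg.eigvalsh(Rbar).min()) > tol
```

As expected from the theory, white noise is sufficiently rich of any order, while a single sinusoid is SR2 but not SR3 (its samples satisfy an exact second-order recursion, creating a null direction in the covariance).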

#### 2.3. (Recursive) Prediction Error Methods

The regressor $\varphi^T(k)$ is a function of past inputs $u(k-1), \ldots, u(1)$ and outputs $y(k-1), \ldots, y(1)$. In a linear regression, the parameters $\theta \in \mathbb{R}^n$ enter linearly in the regressors, which allows for a simple identification procedure. A common choice of identification approach is the prediction error method, where the prediction error $\varepsilon(k, \theta) = y(k) - \hat{y}(k|\theta) = y(k) - \varphi(k)^T\theta$ is minimized according to some criterion. To reduce the storage requirements of the resulting algorithm and to allow for adaptive solutions, recursive methods, which can be updated in a data stream, are preferred. Many recursive prediction error methods are possible; see, e.g., [12]. For the presentation here, we consider an exponentially weighted quadratic prediction error criterion, where the estimate is given as
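A minimal sketch of such a recursive least-squares update with exponential forgetting (standard covariance form; the class name and initialization constant are illustrative, not from the paper):

```python
import numpy as np

class RLS:
    """Recursive least squares with exponential forgetting factor lam,
    minimizing sum_i lam**(k - i) * (y(i) - phi(i)^T theta)**2.
    delta sets the initial covariance (large = very uncertain)."""

    def __init__(self, n, lam=0.99, delta=1e3):
        self.lam = lam
        self.theta = np.zeros(n)
        self.P = delta * np.eye(n)

    def update(self, phi, y):
        phi = np.asarray(phi, dtype=float)
        eps = y - phi @ self.theta              # prediction error
        Pphi = self.P @ phi
        gain = Pphi / (self.lam + phi @ Pphi)   # update gain
        self.theta = self.theta + gain * eps
        self.P = (self.P - np.outer(gain, Pphi)) / self.lam
        return self.theta
```

On noiseless data generated by a linear regression, the estimate converges to the true parameters after a few hundred samples.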

#### 2.3.1. Frequency Description of Prediction Error

Replace the exponential weight $\lambda^{k-i}$ in Equation (18) by the constant weight $\frac{1}{2N}$, where $N$ is the number of data samples, and let $N \to \infty$. The limiting cost is denoted $\overline{V}(\theta)$ and it follows from the definition of the Fourier transform and Equation (16) that

Using $G_0$, $H_0$ to describe the system's transfer functions in Equation (2) and $G_\theta$, $H_\theta$ to describe the transfer functions of the model in Equation (4), it follows, see [12], that the residuals' spectrum can be written as

where $\Phi_{ue}$ denotes the cross-spectrum of $u$ and $e$, defined in analogy to Equation (13). The expression allows us to study the effects of the chosen model and of the excitation on the cost function, and thus on the estimated parameters. Notice, for instance, that if $\mathcal{S} \in \mathcal{M}$, then $\theta'$ minimizes the criterion. Furthermore, $G_\theta$ will be pushed towards $(G_0 + B_\theta)$, and for that reason $B_\theta$ will introduce a bias into the estimate of $G_0$.

In open loop, $\Phi_{ue} \equiv 0$ and so the bias term $B_\theta \equiv 0$. The input spectrum $\Phi_u$ will thus control the frequencies where ${|G_{\widehat{\theta}} - G_0|}^2$ is minimized. For instance, sometimes the noise model is unimportant and is thus fixed to a constant, i.e., $H(q, \theta) = H_*(q)$; this was used in our previous paper [9]. In this case we have

Here, $|G_\theta - G_0|^2$ is decreased according to the frequency weight given by the ratio of the input spectrum $\Phi_u$ and the chosen noise model $|H_*|^2$. Furthermore, the relevance of a persistently exciting input becomes clear in the expression.

In closed loop, $\Phi_{ue} \not\equiv 0$, and thus the bias term $B_\theta$ should be addressed by adjusting the noise model $H_\theta$ towards $H_0$. Splitting the input spectrum into the parts originating from the reference $r$ and the noise $e$ as ${\Phi}_{u}(\omega)={\Phi}_{u}^{r}(\omega)+{\Phi}_{u}^{e}(\omega)$, if the input can be written as $u(k) = K_1(q)r(k) + K_2(q)e(k)$, then ${|{\Phi}_{ue}\left(\omega\right)|}^{2}={\gamma}_{0}{\Phi}_{u}^{e}(\omega)$ and the bias decreases as $H_\theta$ approaches $H_0$. Furthermore, for a controller in disturbance rejection mode, with ${\Phi}_{u}^{r}\equiv 0$, the bias can only be controlled by the ratio of the noise energy and the contribution of the noise to the input energy due to feedback, i.e., ${\gamma}_{0}/{\Phi}_{u}^{e}$. To reduce the bias, the influence of the noise on the input should thus increase, which contradicts many control objectives. Reducing the bias for a system operating in disturbance rejection mode is therefore difficult. In servo mode, the reference $r$ can reduce ${|H_0 - H_\theta|}^2$ through the ratio ${\Phi}_{u}^{r}/{\Phi}_{u}$, thus reducing the bias term.

#### 2.3.2. Asymptotic Properties of the RLS

In practice, the asymptotic expressions are evaluated by replacing the expectations and the true parameters $\theta_0$ with their sample estimates, given by

## 3. User Choices, Data Features

#### 3.1. Choice of Model Structure

where $\ell_i(k) \triangleq L_i(q, \alpha)u(k)$. The resulting model presents the key characteristics:

- for an adequate choice of the model orders $n_a$, $n_b$, it is a flexible representation of any linear system,
- for an adequate choice of $\alpha$, lower orders $n_b$ are possible compared to a high-order ARX model, because Laguerre polynomials are used to describe the input relations,
- it can be written as a linear regression, which allows for efficient identification using, e.g., the RLS algorithm.

#### 3.1.1. Tuning

The order $n_b$ can be chosen as

#### 3.1.2. Plants with an Integrator

For plants with an integrator, a separate Laguerre pole $\alpha_I$ is used. The same model order $n$ as for non-integrating plants can be kept, or a specific choice $n_I$ can be made for integrating plants, e.g., using Equation (29) with $\alpha_I$.

#### 3.2. Choice of Operational Modes to Consider

#### 3.3. Simple Heuristic Tests

#### 3.3.1. Input Step Change Test

A step change larger than $\eta_1$ can be detected with a threshold check against $\eta_1$. The threshold $\eta_1$ should be chosen very small, as this test is only used to avoid scanning steady-state data.

#### 3.3.2. Variability of y Test

Using $\mu_y(k)$ as an estimate for the mean and $\gamma_y(k)$ as an estimate for the variance, recursive estimates can be found by [27]

where $\lambda_\mu, \lambda_\gamma < 1$ are forgetting factors, controlling the effective size of the averaging window for the estimation of the mean and the variance of $y$, respectively. Since $y'$ is normalized between 0 and 1, the threshold $\eta_2$ can be chosen relative to its range; e.g., $\eta_2 = 0.01^2$ corresponds to a standard deviation of 1% of the output range.
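A sketch of these exponentially weighted recursive estimates (the exact update form is our reading of the incremental scheme of [27]; the forgetting factors follow Table 1, while the default $\eta_2 = 0.01^2$ is the illustrative value above, not the $0.00125^2$ of Table 1):

```python
def variance_test(y_norm, lam_mu=0.99, lam_gamma=0.9, eta2=0.01**2):
    """Test T2 (sketch): exponentially weighted recursive estimates of the
    mean mu and variance gamma of the normalized output y' in [0, 1];
    returns the first sample where gamma exceeds eta2, or None."""
    mu, gamma = y_norm[0], 0.0
    for k in range(1, len(y_norm)):
        mu = lam_mu * mu + (1.0 - lam_mu) * y_norm[k]
        gamma = lam_gamma * gamma + (1.0 - lam_gamma) * (y_norm[k] - mu) ** 2
        if gamma > eta2:
            return k
    return None
```

A constant output never triggers the test, while a level shift triggers it within a few samples of the change.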

#### 3.4. Finding Excitation in the Data

#### Numerical Conditioning of $\overline{R}(k)$ Test

where $\kappa_p(A) \geq 1$ is the condition number of the matrix $A$. Small values of $\kappa_p(A)$ indicate a well-conditioned matrix. For $p = 2$, $\kappa(A) = \sigma_{\max}(A)/\sigma_{\min}(A)$, with $\sigma_{\max}(A)$ and $\sigma_{\min}(A)$ denoting the largest and smallest singular values of $A$, respectively.

Since $\sigma_{\min}$ can be close to zero, $\kappa(k)$ can vary over $[1, \infty)$ and it might be difficult to set a threshold for it. An alternative is to monitor its inverse, also known as the reciprocal condition number, as it takes values in $[0, 1]$. Notice that when using the reciprocal condition number, high values relate to good conditioning. The reciprocal condition number is tested with a threshold

A typical limit for the condition number is $10^4$, i.e., $10^{-4}$ when using its reciprocal. We note that increasing the model orders may increase the number of small eigenvalues in $\overline{R}(k)$ due to over-fitting, and thus $\eta_3$ may be chosen proportional to the number of estimated parameters.
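The test can be sketched as follows, with the threshold value from Table 1 (the function name is ours):

```python
import numpy as np

def conditioning_test(Rbar, eta3=0.002):
    """Test T3 (sketch): compute the reciprocal condition number
    sigma_min / sigma_max of Rbar, which lies in [0, 1] (1 = perfectly
    conditioned), and compare it against eta3 (Table 1: 0.002).
    Returns (triggered, rcond)."""
    s = np.linalg.svd(np.asarray(Rbar, dtype=float), compute_uv=False)
    rcond = s[-1] / s[0]   # singular values are returned in descending order
    return rcond > eta3, rcond
```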

#### 3.5. Granger Causality Test

Consider the null hypothesis $\mathscr{H}_0: \theta_b = 0$, which should be rejected in case a Granger-causal relation is present. A statistical test for this hypothesis is readily available from the asymptotic results for the estimate $\widehat{\theta}$ given by Equation (24), since under the null hypothesis $\mathscr{H}_0$,

where $(\Sigma^b)^{-1}$ is the inverse of the submatrix of $\Sigma_\theta$ associated with $\theta_b$ only, and $\chi^2_d$ is the chi-square distribution with $d$ degrees of freedom. The statistic can be computed based on the finite-sample estimates in Equation (26), giving

where $\eta_4$ is taken as the $p$ quantile of $\chi^2_{n_b}$. This condition must be satisfied in our algorithm for a data interval to be assigned as useful.
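A sketch of the resulting Wald-type test (the function name is ours; the default quantile matches Table 1, where $\eta_4 = 23.2$ is the 0.99 quantile of the chi-square distribution with $n_b = 10$ degrees of freedom):

```python
import numpy as np

def granger_test(theta_b, Sigma_b, eta4=23.2):
    """Test T4 (sketch): Wald statistic s = theta_b^T (Sigma_b)^{-1} theta_b
    for H0: theta_b = 0; under H0, s is asymptotically chi-square with
    n_b degrees of freedom. Returns (triggered, s)."""
    theta_b = np.asarray(theta_b, dtype=float)
    s = float(theta_b @ np.linalg.solve(Sigma_b, theta_b))
    return s > eta4, s
```

With $\theta_b = 0$ the statistic is zero and the test is not triggered; a large estimate with small covariance triggers it.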

#### Choice of Implementation of RLS Based on the QR Factorization

where $\Lambda_{ii} = \lambda^{k-i}$ and $Q(k)$ is orthonormal, $Q(k)^TQ(k) = I$; find the QR factorization of

where $R_0(k)$ is square upper triangular of dimension $n+1$ and $R_3(k)$ is a scalar. Applying the orthonormal transformation $Q(k)^T$ from the left in Equation (42) then gives

The minimizer of $V_k(\theta)$ is given by the solution to

which involves $R_1(k)$, and comparing the singular values of $R_1(k)$ with those of ${\overline{R}}_{k}=\varphi{\left(k\right)}^{T}\Lambda(k)\varphi(k)={R}_{1}{\left(k\right)}^{T}{R}_{1}(k)$ gives

which can be solved by back-substitution since $R_1(k)$ is upper triangular. Sample estimates for ${\widehat{\gamma}}_{k}$, ${\widehat{\overline{R}}}_{k}$ and ${\widehat{\Sigma}}_{k}$ as in Equation (26) are given by

valid for $N_{ef} \gg n = n_a + n_b$.

The criterion $V_k(\theta)$ can be written as

and minimizing $V_k(\theta)$ gives

with the initialization $R_0(0) = R_0$. The matrix $R_0$ can be chosen diagonal to ensure well-posedness, and with small values to reflect the large uncertainties present during initialization; recall the expression for the covariance of the parameters given in Equation (50c).

**Algorithm 1** (QR-RLS):

- Set $R_0(0) = R_0$.
- **for** each sample $k$ **do**
  - Compute $\left[\begin{array}{l}{\lambda}^{-1/2}{R}_{0}\left(k-1\right)\\ \varphi(k)\phantom{\rule{1em}{0ex}}y(k)\end{array}\right]=Q(k)R(k)$
  - Solve ${R}_{1}(k){\widehat{\theta}}_{k}={R}_{2}(k)$, compute $s(k)$ as in Equation (51).
- **end for**
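A compact sketch of this update using a library QR factorization. Note one deviation: the algorithm above scales the previous factor by $\lambda^{-1/2}$, while the sketch uses the equivalent scaling $\sqrt{\lambda}\,R$; the solved triangular system, and hence $\widehat{\theta}_k$, is the same, but the factor stays numerically bounded over long streams. The function name is ours.

```python
import numpy as np

def qr_rls(Phi, y, lam=0.99, r0=0.005):
    """Batch replay of the QR-RLS recursion: propagate the triangular
    factor R(k) by QR-factorizing the forgetting-scaled previous factor
    stacked with the new data row [phi(k), y(k)].
    R_0(0) = r0 * I (Table 1: 0.005 * I)."""
    Phi = np.asarray(Phi, dtype=float)
    n = Phi.shape[1]
    R = r0 * np.eye(n + 1)                        # R_0(0) = R_0
    for phi, yk in zip(Phi, y):
        stacked = np.vstack([np.sqrt(lam) * R,    # past data weighted by lam
                             np.append(phi, yk)[None, :]])
        R = np.linalg.qr(stacked, mode="r")       # new triangular factor
    R1, R2 = R[:n, :n], R[:n, n]
    theta = np.linalg.solve(R1, R2)               # back-substitution
    return theta, R1
```

On noiseless data the estimate recovers the generating parameters, and the returned factor $R_1$ is upper triangular as claimed.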

## 4. Outline of the Algorithm

The algorithm processes the data stream ${\left\{z(k)\right\}}_{{k}_{-1}}^{N}$, starting at $k_{-1}$ and finishing at $N$, where $z(k)$ is given in Equation (1). The parameter $n_0$ is the minimal length of data under the same mode to be considered, and can be adjusted to avoid too short intervals.

The tests $T_{1:4}$ have increasing orders of complexity and increasing demands on the data quality for identification. While there is no causal relation between the tests, it is reasonable to only compute the more complex tests once the simpler tests have been triggered. Denoting by $k_i$ the first sample at which test $T_i$ is triggered, a test $T_i$ is only checked from $k_{i-1}$ to the current sample $k$. Once $T_{i-1}$ is triggered, the initialization of the next test $T_i$ is taken from $k_1 - n_1$, i.e., $n_1 \leq n_0$ samples before the step change, to the current sample $k$. The tests $T_0$ and $T_1$ do not require initialization. Test $T_2$ is updated as in Equation (33c), test $T_3$ is given by Equation (38) with $\overline{R}(k)$ updated as in the standard RLS solution given in Equation (19b) due to its simplicity, and test $T_4$ is updated as in Equation (51) using the QR-RLS implementation given by Algorithm 1.

- $E_0$: a change of operating mode happened,
- $E_1$: $T_1$ is not triggered,
- $E_{2:4}$: $T_{2:4}$ returns to false after having reached a true value for some $k$ in ${\mathcal{C}}_{2:4}$, respectively,
- $E_5$: the data stream reaches the end.

Each exit condition defines the exit sample $k_e$. Once the algorithm exits, it is called again with $k_{-1} = k_e + 1$ until $E_5$ is achieved, i.e., until all data in the stream have been processed. An interval is considered relevant only if the deepest test, $T_4$, is triggered at some point. In this case, we consider the interval $\Delta = [k_1 - n_1, k_e]$ as possibly suitable for identification. The interval $\Delta$ is returned to the user, together with the largest value of $s(k)$,
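The cascaded evaluation of the tests and the exit logic described above can be sketched as follows (purely illustrative: the callable interface, names, and simplified exit rule are ours, not the paper's implementation):

```python
def scan_interval(z, tests, n0):
    """Sketch of the Section 4 cascade: test i is only evaluated after
    test i-1 has triggered; the scan exits when the deepest test falls
    back to false (cf. E_{2:4}) or the stream ends (E_5).
    Returns (k_e, depth, k1): exit sample, deepest test reached, and the
    sample where the first test triggered."""
    k1 = None
    depth = 0                       # number of tests triggered so far
    for k in range(n0, len(z)):
        if depth < len(tests) and tests[depth](z, k):
            if depth == 0:
                k1 = k              # start of the candidate interval
            depth += 1
        elif depth == len(tests) and not tests[-1](z, k):
            return k, depth, k1     # deepest test returned to false
    return len(z) - 1, depth, k1    # end of stream
```

With two dummy tests, the second of which stops holding at some sample, the scan exits there with the full depth reached.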

## 5. Illustration Based on Real Data from an Entire Process Plant

To illustrate the role of the tests $T_{0:4}$ in the proposed algorithm, we count the number of times a certain test was the deepest to be triggered before an exit condition. We also present the counts for the exit conditions $E_i$. The results are shown in Table 3 for open and closed loop. The algorithm is well balanced, since every step affects its behavior in selecting and rejecting intervals. The large count for $T_4$ is due to the large number of intervals retrieved for flow loops; recall Table 2. The operation of the plant is mainly carried out in closed loop, in which case most loops operate in steady state. This is reflected by the large count for $E_1$, meaning that no significant step change was found for those intervals. The simple test $T_1$ can thus considerably reduce the number of computations needed to scan the data.

#### 5.1. Selected Examples

The step change triggers $T_1$ and, compared to the previous example in Figure 3c, significant changes in the variance of $y$ take considerably longer to be detected, as seen in Figure 3d. Despite the large delay, the Laguerre expansion can capture the excitation in the data and is well adjusted to the first changes in $y$. The Granger causality test also indicates the presence of excitation, and the algorithm exits when its value returns below the threshold.

## 6. Conclusions

## Appendix

## A. Proof of Equation (51)


## Author Contributions

## Conflicts of Interest

## Acknowledgments

## References

- Carrette, P.; Bastin, G.; Genin, Y.; Gevers, M. Discarding data may help in system identification. IEEE Trans. Signal Process. **1996**, 44, 2300–2310.
- Horch, A. Condition Monitoring of Control Loops. Ph.D. Thesis, Department of Signals, Sensors and Systems, The Royal Institute of Technology (KTH), Stockholm, Sweden, 2000.
- Isaksson, A.; Horch, A.; Dumont, G. Event-triggered deadtime estimation from closed-loop data. In Proceedings of the American Control Conference, Arlington, VA, USA, 25–27 June 2001.
- Green, M.; Moore, J. Persistence of excitation in linear systems. Syst. Control Lett. **1986**, 7, 351–360.
- Cho, Y.J.; Ramakrishnan, N.; Cao, Y. Reconstructing chemical reaction networks: Data mining meets system identification. In Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD '08), Las Vegas, NV, USA, 24–27 August 2008.
- Schmidt, M.D.; Lipson, H. Data-mining dynamical systems: Automated symbolic system identification for exploratory analysis. ASME Conf. Proc. **2008**, 2008, 643–649.
- Amirthalingam, R.; Sung, S.W.; Lee, J.H. Two-step procedure for data-based modeling for inferential control applications. AIChE J. **2000**, 46, 1974–1988.
- Shardt, Y.A.; Huang, B. Data quality assessment of routine operating data for process identification. Comput. Chem. Eng. **2013**, 55, 19–27.
- Peretzki, D.; Isaksson, A.J.; Bittencourt, A.C.; Forsman, K. Data mining of historic data for process identification. In Proceedings of the 2011 AIChE Annual Meeting, Minneapolis, MN, USA, 16–21 October 2011.
- Shardt, Y.A.; Shah, S.L. Segmentation methods for model identification from historical process data. In Proceedings of the 14th IFAC World Congress, Beijing, China, 5 July 1999; Vol. 19, pp. 2836–2841.
- Ng, Y.S.; Srinivasan, R. Data mining for the chemical process industry. In Encyclopedia of Data Warehousing and Mining; Wang, J., Ed.; IGI Global: Hershey, PA, USA, 2009; pp. 458–464.
- Ljung, L. System Identification: Theory for the User, 2nd ed.; Prentice-Hall: Englewood Cliffs, NJ, USA, 1998.
- Wahlberg, B. System identification using Laguerre models. IEEE Trans. Autom. Control **1991**, 36, 551–562.
- Gevers, M.; Bazanella, A.; Bombois, X.; Miskovic, L. Identification and the information matrix: How to get just sufficiently rich? IEEE Trans. Autom. Control **2009**, 54, 2828–2840.
- Ljung, L.; Yuan, Z.D. Asymptotic properties of black-box identification of transfer functions. IEEE Trans. Autom. Control **1985**, 30, 514–530.
- Wang, L.; Cluett, W. Building transfer function models from noisy step response data using the Laguerre network. Chem. Eng. Sci. **1995**, 50, 149–161.
- Oliveira e Silva, T. On the determination of the optimal pole position of Laguerre filters. IEEE Trans. Signal Process. **1995**, 43, 2079–2087.
- Björklund, S.; Ljung, L. A review of time-delay estimation techniques. In Proceedings of the 42nd IEEE Conference on Decision and Control, Maui, HI, USA, 9–12 December 2003; Vol. 3, pp. 2502–2507.
- Isaksson, M. A Comparison of Some Approaches to Time-Delay Estimation. Master's Thesis, Department of Automatic Control, Lund University, Lund, Sweden, 1997.
- Park, H.; Sung, S.; Lee, I.; Lee, J. On-line process identification using the Laguerre series for automatic tuning of the proportional-integral-derivative controller. Ind. Eng. Chem. Res. **1997**, 36, 101–111.
- Fu, Y.; Dumont, G. On determination of Laguerre filter pole through step or impulse response data. In Proceedings of the 12th IFAC World Congress; Pergamon: Oxford, UK, 1994; pp. 35–39.
- Wahlberg, B.; Mäkilä, P. On approximation of stable linear dynamical systems using Laguerre and Kautz functions. Automatica **1996**, 32, 693–708.
- Gevers, M.; Bazanella, A.; Miskovic, L. Informative data: How to get just sufficiently rich? In Proceedings of the 47th IEEE Conference on Decision and Control (CDC 2008), Cancun, Mexico, 9–11 December 2008; pp. 1962–1967.
- Söderström, T.; Gustavsson, I.; Ljung, L. Identifiability conditions for linear systems operating in closed-loop. Int. J. Control **1975**, 21, 243–255.
- Shardt, Y.A.; Huang, B. Closed-loop identification with routine operating data: Effect of time delay and sampling time. J. Process Control **2011**, 21, 997–1010.
- Shardt, Y.; Huang, B. Statistical properties of signal entropy for use in detecting changes in time series data. J. Chemom. **2013**, 27, 394–405.
- Finch, T. Incremental Calculation of Weighted Mean and Variance; Technical Report; Computing Service, University of Cambridge: Cambridge, UK, 2009.
- Golub, G.; Van Loan, C.F. Matrix Computations, 3rd ed.; Johns Hopkins University Press: Baltimore, MD, USA, 1996.
- Grillenzoni, C. Testing for causality in real time. J. Econom. **1996**, 73, 355–376.

**Figure 2.**Algorithm execution sequence. Lines denote the different logical tests. Boxes relate to the computations of quantities used in the different tests. Shaded boxes denote the samples used during initialization of the different quantities and white boxes refer to their online update.

**Figure 3.** Examples of retrieved intervals and the quantities used during the algorithm execution. (**a**) Level, open loop; (**b**) Density, closed loop; (**c**) Test for variance of y; (**d**) Test for variance of y; (**e**) Test for numerical conditioning of $\overline{R}(k)$; (**f**) Test for numerical conditioning of $\overline{R}(k)$; (**g**) Granger causality test; (**h**) Granger causality test.

**Figure 4.** Data under a significant disturbance that was rejected by the algorithm. (**a**) Temperature, open loop; (**b**) Test for numerical conditioning of $\overline{R}(k)$; (**c**) Test for variance of y; (**d**) Granger causality test.

| Parameter | Symbol | Value |
|---|---|---|
| Filter coefficient for µ_{y} | λ_{µ} | 0.99 |
| Filter coefficient for γ_{y} | λ_{γ} | 0.9 |
| RLS forgetting factor | λ | 0.99 |
| Initialization matrix for QR-RLS | R_{0} | 0.005 · I |
| Laguerre poles | α, α_{I} | 0.8, 0.6 |
| Order of Laguerre input model | n_{b} | 10 |
| Order of noise model | n_{a} | 10 |
| Minimal number of samples under the same mode | n_{0} | 2(n_{a} + n_{b}) |
| Number of initialization samples before step change | n_{1} | n_{a} + n_{b} |

| Test description | Threshold | Value |
|---|---|---|
| Step change size | η_{1} | 0.002 |
| Variance of output | η_{2} | 0.00125^{2} |
| Reciprocal condition number | η_{3} | 0.002 |
| Chi-square | η_{4} | 23.2 (p = 0.99) |

**Table 2.** Some statistics characterizing the performance of the method when applied to a historical dataset.

| Loop type | Intervals scanned (open) | Intervals scanned (closed) | Intervals found (open) | Intervals found (closed) | Avg. length of Δ, samples (open) | Avg. length of Δ, samples (closed) |
|---|---|---|---|---|---|---|
| Density | 2973 | 9074 | 425 | 102 | 75 | 86 |
| Flow | 34,177 | 79,334 | 460 | 39,141 | 198 | 421 |
| Level | 18,008 | 59,030 | 616 | 146 | 72 | 105 |
| Pressure | 6286 | 15,846 | 312 | 47 | 64 | 109 |
| Temperature | 9118 | 48,739 | 72 | 41 | 67 | 76 |

**Table 3.** Counts for the deepest test T_{i} triggered before the algorithm exits, and counts for the condition E_{i} causing the algorithm to exit the scanning.

| i | Deepest T_{i} triggered (open) | Deepest T_{i} triggered (closed) | E_{i} triggered (open) | E_{i} triggered (closed) |
|---|---|---|---|---|
| 0 | 17,305 | 158,186 | 27,711 | 6746 |
| 1 | 7342 | 435 | 17,305 | 158,186 |
| 2 | 16,189 | 5398 | 2936 | 10,766 |
| 3 | 130 | 1881 | 1066 | 6645 |
| 4 | 1885 | 39,477 | 435 | 22,383 |

© 2015 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/4.0/).

Bittencourt, A.C.; Isaksson, A.J.; Peretzki, D.; Forsman, K. An Algorithm for Finding Process Identification Intervals from Normal Operating Data. *Processes* **2015**, *3*, 357–383. https://doi.org/10.3390/pr3020357