Article

A Complex Model via Phase-Type Distributions to Study Random Telegraph Noise in Resistive Memories

Juan E. Ruiz-Castro, Christian Acal, Ana M. Aguilera and Juan B. Roldán
1 Department of Statistics and O.R. and Math Institute, University of Granada, 18071 Granada, Spain
2 Department of Electronics and Computing Technology, University of Granada, 18071 Granada, Spain
* Author to whom correspondence should be addressed.
Mathematics 2021, 9(4), 390; https://doi.org/10.3390/math9040390
Submission received: 22 January 2021 / Revised: 8 February 2021 / Accepted: 10 February 2021 / Published: 16 February 2021

Abstract

A new stochastic process was developed by considering the internal performance of macro-states in which the sojourn time in each one is phase-type distributed depending on time. The stationary distribution was calculated through matrix-algorithmic methods, and multiple measures of interest were worked out. The distribution of the number of visits to a determined macro-state was analyzed from the corresponding differential equations and the Laplace transform. The mean number of visits to a macro-state between any two times was given. The results were implemented computationally and were successfully applied to study random telegraph noise (RTN) in resistive memories. RTN is an important concern in resistive random access memory (RRAM) operation. On the one hand, it could limit some of the technological applications of these devices; on the other hand, RTN can be used for the physical characterization of the devices. Therefore, an in-depth statistical analysis to model the behavior of these devices is of essential importance.

1. Introduction

In several fields, such as computing and electronics engineering, it is of great interest to analyze complex devices with several macro-states that evolve over time. It is usual to consider Markov processes for this analysis but, on many occasions, the times spent in each macro-state are not exponentially distributed. In this context, a new approach is to consider that the time spent in each macro-state is phase-type (PH) distributed. In this case, it is assumed that each macro-state is composed of internal performance states that behave in a Markovian way. One interesting aspect is that the modeling must often be carried out when only the macro-state process can be observed; for this new observed process, the Markovianity is lost.
A phase-type distribution is defined as the distribution of the absorption time in an absorbing Markov chain. These probability distributions constitute a class of distributions on the positive real axis that strikes a balance between generality and tractability, thanks to their good properties. This class of distributions was introduced by Neuts [1,2] and allows the modeling of complex problems in an algorithmic and computational way. It has been widely applied in fields such as engineering to model complex systems [3,4,5], queueing theory [6], risk theory [7] and electronics engineering [8].
The good properties of PH distributions, due to their appealing probabilistic arguments, constitute their main feature of being mathematically tractable. Several well-known probability distributions, e.g., the exponential, Erlang, generalized Erlang, hyperexponential, and Coxian distributions, among others, are particular cases of PH. One of the most important PH properties is that any nonnegative probability distribution can be approximated as closely as desired by a PH distribution, owing to the fact that the PH class is dense in the set of probability distributions on the nonnegative half-line [9]. This allows general distributions to be considered through PH approximations. Exact solutions to many complex problems in stochastic modeling can be obtained either explicitly or numerically by using matrix-analytic methods. The main features of PH distributions are described in depth in [10].
A macro-state stochastic process is built here to model the behavior of different random telegraph noise (RTN) signals. Given that the sojourn times in the different macro-states (levels) are not exponentially distributed, we assume each level to be composed of multiple internal phases that can be related to other levels. We show that even though the internal process, defined over the internal states inside the macro-states, is Markovian, the process observed between levels is not. For the latter, the sojourn time in each level is phase-type distributed depending on the initial time. This fact is of great interest and gives us more information about the device's internal process.
Advanced statistical techniques are key tools to model complex physical and engineering problems in many different areas of expertise. In this context, a new statistical methodology is presented; we will concentrate on the study of resistive memories [11,12], also known as resistive random access memories (RRAMs), a subgroup of a wide class of electron devices named memristors [13]. They constitute a promising technology with great potential in several applications in the integrated circuit realm.
Circuit design is a tough task because of the high number of electronic components included in integrated circuits; therefore, electronic design automation (EDA) tools are essential. These software tools need compact models that represent the physics of the devices in order to fulfil their role in aiding circuit design. Although there are many works on compact modeling in the RRAM literature [14,15], there is a lot to be done, in particular in certain areas related to variability and noise. Variability is essential in modeling because of the inherent stochasticity of RRAM operation, since the physics is linked to random physical mechanisms [16,17]. Another issue of great importance in these devices is noise. Among the different types of noise, random telegraph noise is of great concern [18]. The disturbances produced by one or several traps (active physical defects within the dielectric) inside the conductive filament, or close to it, alter the charge transport mechanisms and, consequently, current fluctuations can occur (Figure 1) that lead to RTN [14,15,19,20]. This noise can affect correct device operation in applications linked to memory-cell arrays and neuromorphic hardware [14,16,21], posing important hurdles to the use of this technology in highly scaled integrated circuits. Nevertheless, RTN fluctuations can also be beneficial, for instance, when used as entropy sources in random number generators, an application of great interest in cryptography [22,23].
This issue is addressed in the current work, in particular through the statistical description of RTN signals in RRAMs. This study, in addition to physically characterizing the devices, can be used for compact modeling and, therefore, as explained above, for developing the software tools needed for circuit design. Accounting for the aforementioned intrinsic stochasticity of these devices, the choice of a correct statistical strategy to attack this problem is essential in the analysis. In this respect, we use PH distributions, which have already been employed to depict some facets of RRAM variability [24]; nonetheless, as far as we know, they have not been used in RTN analysis.
In our study, we deal with devices whose structure is based on the Ni/HfO2/Si stack. The fabrication details, as well as their electric characterization, are given in [25]. RTN measurements were recorded in the time domain by means of an HP-4155B semiconductor parameter analyzer (Figure 1). The current fluctuation between levels (see the levels marked in red in Figure 1), ΔI, and the time spent in each of the levels are key parameters to analyze. The time associated with the low current level is known as the emission time (τe), which is linked to the time during which the active defect keeps a captured electron trapped until it releases it (during this time, the defect is said to be occupied). On the other hand, the capture time (τc) represents the time taken by an empty defect to capture an electron.
The manuscript is organized as follows: in Section 2, the proposed statistical procedure is described; in Section 3, we elaborate on the associated measures; and in Section 4, the distribution of the number of visits to a determined macro-state is presented. The parameter estimation is developed in Section 5, and the application of the methodology developed here is given in Section 6. Finally, conclusions related to the main contributions of this work are presented in Section 7.

2. Statistical Methodology

Different RTN signals have been employed here, such as the ones shown in Figure 1. We have determined the number of current levels in order to be able to calculate the emission and capture times for a certain time interval. The current thresholds (red lines in Figure 1) were chosen to allow the extraction algorithm to calculate the time intervals corresponding to the levels defined previously.
In order to explain these issues, a new model is first built in transient and stationary regimes.

2.1. The Model

We assume a stochastic process {X(t); t ≥ 0} with macro-state space E = {1, 2, …, r}. Each macro-state k (level k) is composed of $n_k$ internal phases or states, denoted as $i_h^k$ for h = 1, …, $n_k$. We assume that the device's internal performance is governed by a Markov process {J(t); t ≥ 0} with state space $\{i_1^1, \ldots, i_{n_1}^1, i_1^2, \ldots, i_{n_2}^2, \ldots, i_1^r, \ldots, i_{n_r}^r\}$, initial distribution θ, and the following generator matrix expressed by blocks:
$$Q=\begin{pmatrix} Q_{11} & \cdots & Q_{1k} & \cdots & Q_{1r}\\ \vdots & & \vdots & & \vdots\\ Q_{k1} & \cdots & Q_{kk} & \cdots & Q_{kr}\\ \vdots & & \vdots & & \vdots\\ Q_{r1} & \cdots & Q_{rk} & \cdots & Q_{rr} \end{pmatrix}$$
where the matrix block $Q_{ij}$ contains the transition intensities from the states of macro-state i to the states of macro-state j.
Throughout this work, we will denote by $Q^{k}$ the matrix that is equal to Q in its k-th block column and zero elsewhere, and by $Q^{\bar{k}}$ the matrix Q with its k-th block column replaced by zeros. These matrices give the transition intensities into macro-state k and into any macro-state other than k, respectively.
Thus:
$$Q^{k}=\begin{pmatrix} 0 & \cdots & 0 & Q_{1k} & 0 & \cdots & 0\\ \vdots & & \vdots & \vdots & \vdots & & \vdots\\ 0 & \cdots & 0 & Q_{rk} & 0 & \cdots & 0 \end{pmatrix}$$
$$Q^{\bar{k}}=\begin{pmatrix} Q_{11} & \cdots & Q_{1,k-1} & 0 & Q_{1,k+1} & \cdots & Q_{1r}\\ \vdots & & \vdots & \vdots & \vdots & & \vdots\\ Q_{r1} & \cdots & Q_{r,k-1} & 0 & Q_{r,k+1} & \cdots & Q_{rr} \end{pmatrix}$$
It can be seen that the following equality holds: $Q = Q^{k} + Q^{\bar{k}}$.
Given the Q-matrix of the Markov process, the transition probability matrix for the process {J(t); t ≥ 0} is given by P(t) = exp{Qt}. From this and from the initial distribution θ, the transient distribution at time t is given by $\big(P\{J(t)=i_1^1\}, P\{J(t)=i_2^1\}, \ldots, P\{J(t)=i_{n_r}^r\}\big) = a(t) = \theta P(t)$. The order of this vector is $1 \times \sum_{i=1}^{r} n_i$.
The transition probabilities for the process {X(t); t ≥ 0} can be obtained from the transition probabilities of the Markov process {J(t); t ≥ 0}. Denoting them by H(·,·), the transition probability between macro-states i and j is given by:
$$h_{ij}(s,t)=P\{X(t)=j \mid X(s)=i\}=\frac{\sum_{k\in i}\sum_{h\in j} P\{J(t)=h \mid J(s)=k\}\,P\{J(s)=k\}}{\sum_{k\in i} P\{J(s)=k\}}=\frac{\sum_{k\in i}\big[a_k(s)\sum_{h\in j} p_{kh}(t-s)\big]}{\sum_{k\in i} a_k(s)}.$$
This transition probability can be expressed in a matrix-algorithmic form as follows. Denoting by $A_k$ the matrix of zeros of order $\sum_{i=1}^{r} n_i \times \sum_{i=1}^{r} n_i$, except for the block corresponding to macro-state k, which is the identity matrix, then:
$$h_{ij}(s,t)=P\{X(t)=j \mid X(s)=i\}=\frac{a(s)A_i P(t-s)A_j e}{a(s)A_i e}$$
where $a(t)A_i e$ is the probability of being in macro-state i at time t, with e being a column vector of ones of appropriate order. A similar reasoning can be applied to the embedded jumping probability.
Note that the matrix H is a stochastic matrix, and therefore it is a transition matrix of a Markov process. However, an important remark is that the Markov process related to the matrix H does not correspond to the process defined in this section. In fact, this process is non-homogeneous and not Markovian.
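As a minimal numerical sketch (not the authors' code), the quantities above can be evaluated directly from P(t) = exp{Qt} and the selector matrices $A_k$. The generator, initial distribution, and macro-state sizes below are illustrative placeholders.

```python
import numpy as np
from scipy.linalg import expm

# Illustrative example with two macro-states of 2 and 1 internal phases (placeholder values).
Q = np.array([[-3.0,  1.0,  2.0],
              [ 0.5, -2.0,  1.5],
              [ 1.0,  1.0, -2.0]])
theta = np.array([0.6, 0.4, 0.0])          # initial distribution of J(t)
sizes = [2, 1]                             # internal phases per macro-state

def selector(k, sizes):
    """A_k: zeros except an identity block on the phases of macro-state k (0-based)."""
    n = sum(sizes)
    A = np.zeros((n, n))
    start = sum(sizes[:k])
    A[start:start + sizes[k], start:start + sizes[k]] = np.eye(sizes[k])
    return A

def a(t):
    """Transient distribution a(t) = theta * exp(Qt) of the internal process J."""
    return theta @ expm(Q * t)

def h(i, j, s, t):
    """h_ij(s, t) = P{X(t) = j | X(s) = i} = a(s) A_i P(t - s) A_j e / a(s) A_i e."""
    Ai, Aj = selector(i, sizes), selector(j, sizes)
    e = np.ones(sum(sizes))
    return (a(s) @ Ai @ expm(Q * (t - s)) @ Aj @ e) / (a(s) @ Ai @ e)

print(h(0, 1, s=0.5, t=2.0))               # probability of moving from level 1 to level 2
```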

2.2. Stationary Distribution

The stationary distribution for the process {X(t); t ≥ 0} can be calculated by blocks from the stationary distribution for the process {J(t); t ≥ 0}. It is well known that this distribution verifies the balance equations $\pi Q = 0$ and the normalization equation $\pi e = 1$. We denote by $\pi_k$ the block of $\pi$ corresponding to macro-state k. The stationary probability of macro-state k of the process {X(t); t ≥ 0} is then given by $\pi_k e$ (k = 1, …, r).
The stationary distribution has been worked out by blocks, according to the macro-states, by applying matrix-analytic methods in order to reduce the computational cost.
As stated above, the stationary distribution verifies the following system:
$$\begin{aligned} \pi_1 Q_{11}+\pi_2 Q_{21}+\pi_3 Q_{31}+\cdots+\pi_r Q_{r1} &= 0\\ \pi_1 Q_{12}+\pi_2 Q_{22}+\pi_3 Q_{32}+\cdots+\pi_r Q_{r2} &= 0\\ \pi_1 Q_{13}+\pi_2 Q_{23}+\pi_3 Q_{33}+\cdots+\pi_r Q_{r3} &= 0\\ &\;\;\vdots\\ \pi_1 Q_{1r}+\pi_2 Q_{2r}+\pi_3 Q_{3r}+\cdots+\pi_r Q_{rr} &= 0\\ \pi e &= 1. \end{aligned}$$
It has been solved by using matrix-analytic methods, and the solution is given by
$$\pi_j = -\sum_{i=1}^{j-1}\pi_i R_{ij}^{r-j+1};\quad j=2,\ldots,r$$
where $R_{ij}^{1} = Q_{ij}Q_{jj}^{-1}$ for any i, j = 1, …, r with i ≠ j,
$$R_{ij}^{r-j+1} = \big(R_{ij}^{1} + H_{ij}^{r-j+1}\big)\big(I + H_{jj}^{r-j+1}\big)^{-1}\quad\text{for } 1\le i<j<r,\ \text{where}$$
$$H_{ij}^{r-j+1} = -\sum_{j<u_1\le r} R_{iu_1}^{r-u_1+1}R_{u_1 j}^{1} + \sum_{j<u_1<u_2\le r} R_{iu_1}^{r-u_1+1}R_{u_1u_2}^{r-u_2+1}R_{u_2 j}^{1} - \sum_{j<u_1<u_2<u_3\le r} R_{iu_1}^{r-u_1+1}R_{u_1u_2}^{r-u_2+1}R_{u_2u_3}^{r-u_3+1}R_{u_3 j}^{1} + \cdots \pm \sum_{j<u_1<\cdots<u_{r-j}\le r} R_{iu_1}^{r-u_1+1}R_{u_1u_2}^{r-u_2+1}\cdots R_{u_{r-j-1}u_{r-j}}^{r-u_{r-j}+1}R_{u_{r-j} j}^{1}\quad\text{for } 1\le i\le j<r.$$
The vector π 1 is worked out as follows:
$$\pi_1 = (1,\mathbf{0})\big[\,B \mid (I + H_{11}^{r})^{*}\,\big]^{-1}$$
where, given a matrix A, A* denotes the matrix A without its last column, and:
B = e j < u 1 r R i u 1 r u 1 + 1 e + j < u 1 < u 2 r R i u 1 r u 1 + 1 R u 1 u 2 r u 2 + 1 e j < u 1 < u 2 < u 3 r R i u 1 r u 1 + 1 R u 1 u 2 r u 2 + 1 R u 2 u 3 r u 3 + 1 e + j < u 1 < u 2 < u 3 < u 4 r R i u 1 r u 1 + 1 R u 1 u 2 r u 2 + 1 R u 2 u 3 r u 3 + 1 R u 3 u 4 r u 4 + 1 e ± j < u 1 < u 2 < < u r j r R i u 1 r u 1 + 1 R u 1 u 2 r u 2 + 1 R u r 3 u r 2 r u r 2 + 1 e .
Finally, the stationary distribution of the process {X(t); t ≥ 0} is given by $(\pi_1 e, \pi_2 e, \ldots, \pi_r e)$.
Algorithm to calculate the stationary distribution
Step 1. For i, j = 1, …, r with i ≠ j, compute $R_{ij}^{1}$.
Step 2. For j = r−1, …, 2:
  For i = 1, …, j, compute $H_{ij}^{r-j+1}$.
  For i = 1, …, j with i ≠ j, compute $R_{ij}^{r-j+1}$.
Step 3. Compute $H_{11}^{r}$ and B.
Step 4. Compute $\pi_1$.
Step 5. Compute $\pi_2, \ldots, \pi_r$ and $\pi_1 e, \ldots, \pi_r e$.
Example. Case r = 4
Step 1.
$R_{12}^{1}=Q_{12}Q_{22}^{-1}$; $R_{13}^{1}=Q_{13}Q_{33}^{-1}$; $R_{14}^{1}=Q_{14}Q_{44}^{-1}$
$R_{21}^{1}=Q_{21}Q_{11}^{-1}$; $R_{23}^{1}=Q_{23}Q_{33}^{-1}$; $R_{24}^{1}=Q_{24}Q_{44}^{-1}$
$R_{31}^{1}=Q_{31}Q_{11}^{-1}$; $R_{32}^{1}=Q_{32}Q_{22}^{-1}$; $R_{34}^{1}=Q_{34}Q_{44}^{-1}$
$R_{41}^{1}=Q_{41}Q_{11}^{-1}$; $R_{42}^{1}=Q_{42}Q_{22}^{-1}$; $R_{43}^{1}=Q_{43}Q_{33}^{-1}$
Step 2.
$H_{13}^{2}=-R_{14}^{1}R_{43}^{1}$; $H_{23}^{2}=-R_{24}^{1}R_{43}^{1}$; $H_{33}^{2}=-R_{34}^{1}R_{43}^{1}$
$R_{13}^{2}=\big(R_{13}^{1}+H_{13}^{2}\big)\big(I+H_{33}^{2}\big)^{-1}$; $R_{23}^{2}=\big(R_{23}^{1}+H_{23}^{2}\big)\big(I+H_{33}^{2}\big)^{-1}$
$H_{12}^{3}=-R_{13}^{2}R_{32}^{1}-R_{14}^{1}R_{42}^{1}+R_{13}^{2}R_{34}^{1}R_{42}^{1}$
$H_{22}^{3}=-R_{23}^{2}R_{32}^{1}-R_{24}^{1}R_{42}^{1}+R_{23}^{2}R_{34}^{1}R_{42}^{1}$
$R_{12}^{3}=\big(R_{12}^{1}+H_{12}^{3}\big)\big(I+H_{22}^{3}\big)^{-1}$
Step 3.
$$H_{11}^{4}=-R_{12}^{3}R_{21}^{1}-R_{13}^{2}R_{31}^{1}-R_{14}^{1}R_{41}^{1}+R_{12}^{3}R_{23}^{2}R_{31}^{1}+R_{12}^{3}R_{24}^{1}R_{41}^{1}+R_{13}^{2}R_{34}^{1}R_{41}^{1}-R_{12}^{3}R_{23}^{2}R_{34}^{1}R_{41}^{1}$$
$$B=e-R_{12}^{3}e-R_{13}^{2}e-R_{14}^{1}e+R_{12}^{3}R_{23}^{2}e+R_{12}^{3}R_{24}^{1}e+R_{13}^{2}R_{34}^{1}e-R_{12}^{3}R_{23}^{2}R_{34}^{1}e$$
Step 4.
$$\pi_1=(1,\mathbf{0})\big[\,B\mid(I+H_{11}^{4})^{*}\,\big]^{-1}$$
Step 5.
$\pi_2=-\pi_1 R_{12}^{3}$;  $\pi_3=-\pi_1 R_{13}^{2}-\pi_2 R_{23}^{2}$;  $\pi_4=-\pi_1 R_{14}^{1}-\pi_2 R_{24}^{1}-\pi_3 R_{34}^{1}$
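The algorithm can be checked numerically: for moderate state spaces, the stationary distribution of the internal process can be obtained directly from the balance equations $\pi Q = 0$, $\pi e = 1$ and then aggregated by macro-state blocks. The sketch below takes this direct route with a placeholder generator; it is a cross-check, not an implementation of the block recursion itself.

```python
import numpy as np

def stationary_by_macrostate(Q, sizes):
    """Solve pi Q = 0, pi e = 1 for J(t), then aggregate pi over each macro-state block."""
    n = Q.shape[0]
    # Replace one balance equation by the normalization condition pi e = 1.
    A = np.vstack([Q.T[:-1, :], np.ones(n)])
    b = np.zeros(n); b[-1] = 1.0
    pi = np.linalg.solve(A, b)
    cuts = np.cumsum(sizes)[:-1]
    return pi, np.array([blk.sum() for blk in np.split(pi, cuts)])

# Placeholder generator with two macro-states of 2 and 1 internal phases.
Q = np.array([[-3.0, 1.0, 2.0],
              [0.5, -2.0, 1.5],
              [1.0, 1.0, -2.0]])
pi, pi_levels = stationary_by_macrostate(Q, sizes=[2, 1])
print(pi_levels)   # long-run proportion of time spent in each level
```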

3. Associated Measures

Several associated measures, such as the sojourn time in each level (macro-state) and the number of visits to each macro-state by time, have been worked out.

3.1. Sojourn Time Phase-Type Distribution

One interesting aspect to address at this point is the probability distribution of the sojourn time in a macro-state. It is well known that for the Markov process J(t), the sojourn time in any state is exponentially distributed. For the stochastic process X(t), it is different.
We denote T(X(s)) as the random sojourn time in macro-state X(s) from time s. If the macro-state is known at time s, then:
P { T ( X ( s ) ) > t | X ( s ) = i } = a ( s ) A i exp { A i Q A i t } e a ( s ) A i e
Therefore, the probability distribution of $T(X(s)) \mid X(s)=i$ is phase-type distributed for any s with representation $\left(\frac{a(s)A_i}{a(s)A_i e},\, A_i Q A_i\right)$. Obviously, this distribution is the same as the one with representation $\left(\frac{b(s)}{b(s)e},\, Q_{ii}\right)$, with b(s) being the vector $\theta P(s)$ restricted to the states of macro-state i.
If the process {X(t); t ≥ 0} has reached the stationary regime, then, when the process is in macro-state i, the probability distribution of the sojourn time is PH with representation $\left(\frac{\pi A_i}{\pi A_i e},\, A_i Q A_i\right)$.
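As an illustration (placeholder inputs, not the authors' code), the conditional sojourn-time law can be evaluated through the restricted representation $\left(\frac{b(s)}{b(s)e},\, Q_{ii}\right)$ noted above:

```python
import numpy as np
from scipy.linalg import expm

def sojourn_survival(Q, theta, sizes, i, s, t_grid):
    """P{T(X(s)) > t | X(s) = i}: PH survival with representation (b(s)/b(s)e, Q_ii)."""
    start = sum(sizes[:i]); stop = start + sizes[i]
    a_s = theta @ expm(Q * s)              # transient distribution of J at time s
    b = a_s[start:stop]                    # restriction to the phases of macro-state i
    alpha = b / b.sum()                    # normalized PH initial vector
    Qii = Q[start:stop, start:stop]        # sub-generator of macro-state i
    e = np.ones(stop - start)
    return np.array([alpha @ expm(Qii * t) @ e for t in t_grid])

# Placeholder generator and initial law as in the previous sketch.
Q = np.array([[-3.0, 1.0, 2.0],
              [0.5, -2.0, 1.5],
              [1.0, 1.0, -2.0]])
theta = np.array([0.6, 0.4, 0.0])
print(sojourn_survival(Q, theta, sizes=[2, 1], i=0, s=0.5, t_grid=[0.1, 0.5, 1.0]))
```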

3.2. First Step Time

It is well known that the first step time distribution for the process {J(t); t ≥ 0} from a state l (outside macro-state k) to macro-state k is phase-type distributed with representation $(\theta_l, Q^{2k})$, with $\theta_l = (0,\ldots,0,1,0,\ldots,0)$, where the value 1 corresponds to state l and $Q^{2k}$ is the matrix Q with the row and column blocks corresponding to macro-state k set to zero. If we denote by $T_h(s,k)$ the first step time from macro-state h, at time s, to macro-state k, then:
P ( T h ( s , k ) > t ) = j k i h P ( T ( j ) > t | J ( s ) = i ) P ( J ( s ) = i ) i h P ( J ( s ) = i ) = a ( s ) A h exp { Q 2 k t } e a ( s ) A h e .

4. Number of Visits to a Macro-State

One interesting measure is the number of visits to a determined macro-state between any two times s and t. We denote by $N_k(s,t)$ the number of visits to macro-state k from time s up to time t, and by $p_k(n,s,t)$ the matrix whose (i, j) element is:
$$[p_k(n,s,t)]_{ij}=P\{N_k(s,t)=n, J(t)=j \mid J(s)=i\}=P\{N_k(t-s)=n, J(t-s)=j \mid J(0)=i\}=[p_k(n,t-s)]_{ij}.$$
The probability matrix verifies the following differential equations:
$$p_k^{\prime}(n,t)=p_k(n,t)\big(Q^{\bar{k}}+\tilde{Q}_{kk}\big)+p_k(n-1,t)\big(Q^{k}-\tilde{Q}_{kk}\big);\quad n\ge 1,$$
with initial condition $p_k(n,0)=0$, where $\tilde{Q}_{kk}$ is a matrix of zeros, of the same order as Q, except for the matrix block $Q_{kk}$.
For n = 0:
$$p_k^{\prime}(0,t)=p_k(0,t)\big(Q^{\bar{k}}+\tilde{Q}_{kk}\big)$$
$$p_k(0,0)=I$$
where, from the last two expressions we have that:
$$p_k(0,t)=\exp\big\{\big(Q^{\bar{k}}+\tilde{Q}_{kk}\big)t\big\}$$
To obtain the probability matrix, we use Laplace transforms. It is well known that, given a locally integrable function f(t), its Laplace transform is defined as $f^{*}(u)=\int_0^{\infty}e^{-ut}f(t)\,dt$. Therefore:
$$u\,p_k^{*}(0,u)-I=p_k^{*}(0,u)\big(Q^{\bar{k}}+\tilde{Q}_{kk}\big)$$
$$u\,p_k^{*}(n,u)=p_k^{*}(n,u)\big(Q^{\bar{k}}+\tilde{Q}_{kk}\big)+p_k^{*}(n-1,u)\big(Q^{k}-\tilde{Q}_{kk}\big);\quad n\ge 1$$
Then:
$$p_k^{*}(0,u)=\big[uI-\big(Q^{\bar{k}}+\tilde{Q}_{kk}\big)\big]^{-1}$$
$$p_k^{*}(n,u)=p_k^{*}(n-1,u)\big(Q^{k}-\tilde{Q}_{kk}\big)\big[uI-\big(Q^{\bar{k}}+\tilde{Q}_{kk}\big)\big]^{-1};\quad n\ge 1$$
Thus, it can be proved that:
$$p_k^{*}(n,u)=\big[uI-\big(Q^{\bar{k}}+\tilde{Q}_{kk}\big)\big]^{-1}A^{n}(u);\quad n\ge 0$$
with A(u) being:
$$A(u)=\big(Q^{k}-\tilde{Q}_{kk}\big)\big[uI-\big(Q^{\bar{k}}+\tilde{Q}_{kk}\big)\big]^{-1}$$
Taking the inverse Laplace transform, the function $p_k(n,t)$ is obtained, and for the non-homogeneous process {X(t); t ≥ 0} we have:
$$P\{N_k(s,t)=n, X(t)=j \mid X(s)=i\}=\frac{\theta P(s)A_i\,p_k(n,t-s)\,A_j e}{\theta P(s)A_i e}$$
Therefore:
$$P\{N_k(s,t)=n\}=\theta P(s)\,p_k(n,t-s)\,e,\quad\text{for } n=0,1,2,\ldots$$
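For numerical work, these probabilities can also be obtained without symbolic Laplace inversion: truncating the count at a maximum value and exponentiating a block upper-bidiagonal matrix built from $Q^{\bar{k}}+\tilde{Q}_{kk}$ and $Q^{k}-\tilde{Q}_{kk}$ reproduces the matrices $p_k(n,t)$ in its first block row, since the blocks of that exponential satisfy exactly the differential equations above. The sketch below, with placeholder inputs, illustrates this alternative numerical route; it is not the derivation used in this section.

```python
import numpy as np
from scipy.linalg import expm

def visit_distribution(Q, theta, sizes, k, s, t, n_max):
    """P{N_k(s,t) = n} for n = 0..n_max via a truncated block counting matrix."""
    n = Q.shape[0]
    start = sum(sizes[:k]); stop = start + sizes[k]
    D1 = np.zeros_like(Q)
    D1[:, start:stop] = Q[:, start:stop]     # Q^k - Q~_kk: jumps that enter macro-state k ...
    D1[start:stop, start:stop] = 0.0         # ... excluding movements inside k
    D0 = Q - D1                              # Q^bar{k} + Q~_kk: no new visit is started
    # Block upper-bidiagonal matrix; its exponential carries p_k(n, t) in the first block row.
    M = np.kron(np.eye(n_max + 1), D0) + np.kron(np.eye(n_max + 1, k=1), D1)
    F = expm(M * (t - s))
    a_s = theta @ expm(Q * s)
    e = np.ones(n)
    return np.array([a_s @ F[:n, m * n:(m + 1) * n] @ e for m in range(n_max + 1)])

Q = np.array([[-3.0, 1.0, 2.0],
              [0.5, -2.0, 1.5],
              [1.0, 1.0, -2.0]])
theta = np.array([0.6, 0.4, 0.0])
print(visit_distribution(Q, theta, sizes=[2, 1], k=1, s=0.0, t=2.0, n_max=10))
```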

4.1. Expected Number of Visits to a Determined Macro-State

The mean number of visits to the macro-state k can be obtained as follows. This measure is given by:
$$E\{N_k(t)\}=\theta\sum_{n=0}^{\infty}n\,p_k(n,t)\,e=\theta M_k(t)e$$
From the differential equations above, we have:
$$\sum_{n=1}^{\infty}n\,p_k^{\prime}(n,t)=\sum_{n=1}^{\infty}n\,p_k(n,t)\big(Q^{\bar{k}}+\tilde{Q}_{kk}\big)+\sum_{n=1}^{\infty}n\,p_k(n-1,t)\big(Q^{k}-\tilde{Q}_{kk}\big)$$
$$\sum_{n=1}^{\infty}n\,p_k^{\prime}(n,t)=\sum_{n=1}^{\infty}n\,p_k(n,t)\big(Q^{\bar{k}}+\tilde{Q}_{kk}\big)+\sum_{n=1}^{\infty}n\,p_k(n,t)\big(Q^{k}-\tilde{Q}_{kk}\big)+P(t)\big(Q^{k}-\tilde{Q}_{kk}\big)$$
$$M_k^{\prime}(t)=M_k(t)\big[\big(Q^{\bar{k}}+\tilde{Q}_{kk}\big)+\big(Q^{k}-\tilde{Q}_{kk}\big)\big]+P(t)\big(Q^{k}-\tilde{Q}_{kk}\big)$$
$$M_k^{\prime}(t)=M_k(t)Q+P(t)\big(Q^{k}-\tilde{Q}_{kk}\big)$$
Given that Q is a conservative matrix, then:
$$M_k^{\prime}(t)\,e=P(t)\big(Q^{k}-\tilde{Q}_{kk}\big)e$$
with initial condition $M_k(0)=0$ if the initial state is not considered, or $M_k(0)=A_k$ if the initial state is counted.
Therefore, for the first and second cases, we have, respectively:
$$E[N_k(t)]=\theta M_k(t)e=\theta\int_0^{t}P(u)\,du\;\big(Q^{k}-\tilde{Q}_{kk}\big)e$$
or
$$E[N_k(t)]=\theta A_k e+\theta\int_0^{t}P(u)\,du\;\big(Q^{k}-\tilde{Q}_{kk}\big)e$$
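A brief numerical sketch of these two expressions follows (placeholder inputs, not the authors' code). The integral $\int_0^t P(u)\,du$ is computed here with the standard augmented-matrix device, i.e., as the upper-right block of the exponential of the matrix [[Q, I], [0, 0]] scaled by t; that device is a convenience choice of this sketch, not a step taken from the paper.

```python
import numpy as np
from scipy.linalg import expm

def expected_visits(Q, theta, sizes, k, t, count_initial=False):
    """E[N_k(t)] = theta * (int_0^t P(u) du) * (Q^k - Q~_kk) * e  (+ theta A_k e if counted)."""
    n = Q.shape[0]
    start = sum(sizes[:k]); stop = start + sizes[k]
    D1 = np.zeros_like(Q)
    D1[:, start:stop] = Q[:, start:stop]
    D1[start:stop, start:stop] = 0.0         # Q^k - Q~_kk: jumps entering macro-state k
    # int_0^t exp(Qu) du is the upper-right block of expm([[Q, I], [0, 0]] * t).
    M = np.block([[Q, np.eye(n)], [np.zeros((n, n)), np.zeros((n, n))]])
    intP = expm(M * t)[:n, n:]
    e = np.ones(n)
    value = theta @ intP @ D1 @ e
    if count_initial:                        # add theta A_k e when the initial state counts
        value += theta[start:stop].sum()
    return value

Q = np.array([[-3.0, 1.0, 2.0],
              [0.5, -2.0, 1.5],
              [1.0, 1.0, -2.0]])
theta = np.array([0.6, 0.4, 0.0])
print(expected_visits(Q, theta, sizes=[2, 1], k=1, t=5.0))
```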

5. Parameter Estimation

The likelihood function is derived for the case in which the exact transition times between macro-states are known. For a device l, we observe $m_l$ transition times, denoted as:
$$0=t_0^l, t_1^l, t_2^l, \ldots, t_{m_l-1}^l, t_{m_l}^l$$
where the last time is a complete or a censoring time.
The time $t_a^l$ corresponds to the transition from $x_a^l$ to $x_{a+1}^l$. These macro-states are $x_0^l, x_1^l, x_2^l, \ldots, x_{m_l-1}^l, x_{m_l}^l$.
This device contributes the following factor to the likelihood function:
$$L_l=\alpha_{x_0^l}\left[\prod_{a=0}^{m_l-1}e^{Q_{x_a^l x_a^l}(t_{a+1}^l-t_a^l)}\big(Q_{x_a^l x_{a+1}^l}\big)^{\tau_l}\right]e$$
where τ l is 0 if the last time is a censoring time, and is 1 otherwise.
When m independent devices are considered:
$$L=\prod_{l=1}^{m}L_l=\prod_{l=1}^{m}\alpha_{x_0^l}\left[\prod_{a=0}^{m_l-1}e^{Q_{x_a^l x_a^l}(t_{a+1}^l-t_a^l)}\big(Q_{x_a^l x_{a+1}^l}\big)^{\tau_l}\right]e$$
This function is maximized by considering that each $Q_{aa}$ is a square matrix whose main diagonal entries are negative and whose remaining entries are nonnegative, and each $Q_{ab}$ (a ≠ b) a matrix with nonnegative elements, with $\sum_{b=1}^{r}Q_{ab}e=0$ for any a.
Another log-likelihood function is given in terms of the transition probabilities of the process:
$$\log L=\sum_{l=1}^{m}\sum_{a=0}^{m_l-1}\log\Big(h_{x_a^l x_{a+1}^l}\big(t_a^l,t_{a+1}^l\big)\Big).$$
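A direct transcription of the single-device likelihood is sketched below (an illustration with hypothetical inputs, not the authors' estimation code). The jump factor $Q_{x_a x_{a+1}}$ is included for every observed transition and omitted only for the final sojourn when it is censored; maximization over the block parameters would then be delegated to a numerical optimizer.

```python
import numpy as np
from scipy.linalg import expm

def log_likelihood(alpha_blocks, Q_blocks, path, durations, censored_last=True):
    """log L_l for one observed macro-state path with exact sojourn times.

    alpha_blocks[i] : initial phase distribution inside macro-state i (row vector)
    Q_blocks[i][j]  : block Q_ij of the generator (sub-generator when i == j)
    path            : visited macro-states x_0, x_1, ... (one more entry than durations
                      when the last sojourn ends with an observed jump)
    durations       : sojourn times t_1 - t_0, ..., t_{m_l} - t_{m_l - 1}
    """
    v = alpha_blocks[path[0]]                       # row vector over the phases of x_0
    for a, dt in enumerate(durations):
        i = path[a]
        v = v @ expm(Q_blocks[i][i] * dt)           # remain dt time units inside macro-state i
        is_last = (a == len(durations) - 1)
        if not (is_last and censored_last):         # tau = 0 drops the final jump factor
            v = v @ Q_blocks[i][path[a + 1]]        # observed jump from i to x_{a+1}
    return np.log(v.sum())                          # right-multiply by e and take logs

# For m independent devices, the total log-likelihood is the sum of the individual terms.
```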

6. Application of the Developed Methodology

We have made use of measurements of current-time traces in a unipolar RRAM. The RTN signals had a simple pattern, so different features of the device could be studied. The current traces showed, for instance, the number of current levels in the signal, and also the frequency with which each of these levels was active. The developed methodology allows the modeling of the internal states under an approach based on hidden Markov processes that produce the observed output (the measured RTN signal). By analyzing the signals shown in Figure 1, we are able to determine these states and characterize them probabilistically in order to understand their nature. It is also possible to reproduce similar signals in case they are needed for a circuit simulation or for the physical characterization of the traps that generate the signals.
Specifically, we are going to analyze the behavior of several different current-time traces (RTN25, RTN26, RTN27) that show RTN for the devices described in the introduction. In addition, a long RTN current-time trace (more than three hours, with millions of measured data points) was measured and used here for the same device. Since all signals come from devices fabricated with the same characteristics, the differences between them lie in the applied voltage that produces the variations of electric current. Naturally, the measurement time for the long RTN trace was different. These measurements have previously been characterized in different ways [19,20]; however, in this new approach, the internal Markov chain that leads to the observed data set was identified, and the model whose methodology is developed in this work was estimated. In particular, in this application, it was shown that the short signals (RTN25, RTN26, and RTN27) had a Markovian internal behavior, whereas the new methodology developed here was applied to model the long RTN trace.

6.1. Series RTN25-26-27

As stated above, the signals RTN25, RTN26, and RTN27 were produced by devices whose structure is based on the Ni/HfO2/Si stack. Nevertheless, different voltages were applied: 0.34 volts, 0.35 volts, and 0.36 volts, respectively. On the other hand, the measurement time was similar for each one of these series.

Hidden Markov Models and the Latent Markov chain for RTN25/26/27 traces

To study the number of possible latent levels hidden in the signals, we considered hidden Markov models (HMMs) [26,27]. Whereas in simple Markov models the state of the device is visible to the observer at each time, in HMMs only the output of the device is visible, while the state that leads to a given output is hidden. Each state is associated with a set of transition probabilities (one for each state) defining how likely it is for the system, being in a given state at a given instant of time, to switch to another possible state (including a transition to the same state) at the successive instants of time.
We analyzed different data sets for RTN25, RTN26, and RTN27, and the hidden states underlying the corresponding signals were identified. The best fits were achieved with two and three latent levels for RTN25, RTN26, and RTN27. Figure 2 shows the original observed RTN signals and the corresponding latent levels given by the model for both cases. This analysis was performed using the depmixS4 package of R-cran [28].
The RTN25/26/27 series was analyzed. The proportional number of times that the chain is in the latent states for the different proposed models is shown in Table 1.
For each model, the continuous Markov chain associated to the latent states was estimated, and the corresponding stationary distribution was achieved. This is shown in Table 2.
These values can be interpreted as the proportion of time that the device spends in each latent state in the stationary regime for the embedded continuous Markov chain. We observed that, for the three-latent-state models, the stationary probability of the additional latent state was very close to zero for the RTN25/26/27 devices. We tested this fact, and the hypothesis that it is null could not be rejected.
Therefore, we assumed that the internal performance of the devices behaved as a Markov model with two latent states for the RTN25/26/27 traces. The Markov chains with two latent states were estimated for the different cases. The exponentiality of the sojourn time in each state cannot be rejected according to the p-values obtained by means of a Kolmogorov–Smirnov test, and the expected number of visits up to a given time, as explained in Section 4.1, was also estimated. These estimates are shown in Table 3.

6.2. Long RTN Trace

A similar exploratory analysis was carried out for this trace. This signal was supplied with 0.5 V, and its behavior was measured for more than three hours. To study the number of possible latent levels hidden in the signal, we again considered hidden Markov models. The best fits were achieved with three and four latent levels for this long RTN trace. Figure 3 shows the original observed signal (a time interval of the whole signal was considered for the sake of computational feasibility) and the corresponding latent levels given by the model for both cases. This analysis was also performed using the depmixS4 package of R-cran.
We have focused on the four latent states. If the HMM is considered, the proportion of time that the process spent in each latent state was 0.3243048, 0.1219429, 0.1635429, and 0.3902095. If more latent states were assumed, negligible proportions would appear. Therefore, four different latent states were assumed. Before applying the new model to the data set, a statistical analysis based on classical techniques was performed. For these latent states, the exponentiality of the sojourn time was tested and rejected through a Kolmogorov–Smirnov test for every latent state, with the following p-values: 0.00027, 0.0045, 0.0000, and 0.0001, respectively. Next, we studied whether these times could be described by PH distributions. After multiple analyses, the best PH distributions for the latent states had 2, 2, 4, and 3 internal states, respectively. The structures of these PH distributions were generalized Coxian/Erlang distributions; that is, the internal behavior within each latent state passed through multiple internal states sequentially, one by one. The Anderson–Darling test was applied to assess the goodness of fit, and p-values of 0.8972, 0.4405, 0.0752, and 0.9876 were obtained for the respective latent states.

Generator of the Internal Markov Process

Given the previous analysis, we have observed that the sojourn time distribution in each macro-state (latent state) was phase-type distributed with a sequential structure (generalized Coxian degradation). That is, macro-states 1 and 2 were composed of two phases (internal states), macro-state 3 of four phases, and macro-state 4 of three phases. Thus, we assume that the sojourn time in macro-state i (level i) is PH distributed with representation $(\alpha_i, T_i)$ and the following structure:
Macro-state 1: $\alpha_1=\big(\alpha_1^1,\, 1-\alpha_1^1\big)$, $T_1=\begin{pmatrix}-t_{12}^1 & t_{12}^1\\ 0 & -t_{13}^1\end{pmatrix}$, $T_1^0=\big(0,\, t_{13}^1\big)$.
Macro-state 2: $\alpha_2=\big(\alpha_1^2,\, 1-\alpha_1^2\big)$, $T_2=\begin{pmatrix}-t_{12}^2 & t_{12}^2\\ 0 & -t_{13}^2\end{pmatrix}$, $T_2^0=\big(0,\, t_{13}^2\big)$.
Macro-state 3: $\alpha_3=\big(\alpha_1^3,\, \alpha_2^3,\, \alpha_3^3,\, 1-\alpha_1^3-\alpha_2^3-\alpha_3^3\big)$, $T_3=\begin{pmatrix}-t_{12}^3 & t_{12}^3 & 0 & 0\\ 0 & -t_{23}^3 & t_{23}^3 & 0\\ 0 & 0 & -t_{34}^3 & t_{34}^3\\ 0 & 0 & 0 & -t_{35}^3\end{pmatrix}$, $T_3^0=\big(0, 0, 0, t_{35}^3\big)$.
Macro-state 4: $\alpha_4=\big(\alpha_1^4,\, \alpha_2^4,\, 1-\alpha_1^4-\alpha_2^4\big)$, $T_4=\begin{pmatrix}-t_{12}^4 & t_{12}^4 & 0\\ 0 & -t_{23}^4 & t_{23}^4\\ 0 & 0 & -t_{34}^4\end{pmatrix}$, $T_4^0=\big(0, 0, t_{34}^4\big)$.
We assume that when the device leaves macro-state i, it moves to macro-state j with probability $p_{ij}$, where $p_{ii}=0$ and $p_{i4}=1-\sum_{j=1}^{3}p_{ij}$, and the sojourn time in this new macro-state begins with the corresponding initial distribution $\alpha_j$.
Thus, the behavior of the device is governed by a stochastic process {X(t); t ≥ 0} with an embedded Markov process {J(t); t ≥ 0}, as described in Section 2.1. The macro-state space is {1, 2, 3, 4} and the state space is $\{1=i_1^1,\, 2=i_2^1;\ 3=i_1^2,\, 4=i_2^2;\ 5=i_1^3,\, 6=i_2^3,\, 7=i_3^3,\, 8=i_4^3;\ 9=i_1^4,\, 10=i_2^4,\, 11=i_3^4\}$.
In this case, the matrix blocks are $Q_{ii}=T_i$ and $Q_{ij}=p_{ij}T_i^0\alpha_j$ for i, j = 1, …, r and i ≠ j. Therefore, the generator matrix is given by:
$$Q=\begin{pmatrix} T_1 & p_{12}T_1^0\alpha_2 & p_{13}T_1^0\alpha_3 & p_{14}T_1^0\alpha_4\\ p_{21}T_2^0\alpha_1 & T_2 & p_{23}T_2^0\alpha_3 & p_{24}T_2^0\alpha_4\\ p_{31}T_3^0\alpha_1 & p_{32}T_3^0\alpha_2 & T_3 & p_{34}T_3^0\alpha_4\\ p_{41}T_4^0\alpha_1 & p_{42}T_4^0\alpha_2 & p_{43}T_4^0\alpha_3 & T_4 \end{pmatrix}$$
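A small helper that assembles Q from the PH representations and the switching probabilities may make the block structure concrete; the numerical values below are placeholders for illustration, not the fitted estimates reported next.

```python
import numpy as np

def build_generator(alphas, Ts, T0s, P):
    """Assemble the block generator: Q_ii = T_i and Q_ij = p_ij * T_i^0 * alpha_j (i != j)."""
    r = len(Ts)
    rows = []
    for i in range(r):
        row = []
        for j in range(r):
            if i == j:
                row.append(Ts[i])
            else:
                # outer product: exit rates of level i redistributed over the phases of level j
                row.append(P[i, j] * np.outer(T0s[i], alphas[j]))
        rows.append(row)
    return np.block(rows)

# Two-level toy example with the two-phase generalized Coxian structure (placeholder values).
alphas = [np.array([0.6, 0.4]), np.array([0.5, 0.5])]
Ts = [np.array([[-0.7, 0.7], [0.0, -4.1]]), np.array([[-3.2, 3.2], [0.0, -10.4]])]
T0s = [np.array([0.0, 4.1]), np.array([0.0, 10.4])]
P = np.array([[0.0, 1.0], [1.0, 0.0]])
Q = build_generator(alphas, Ts, T0s, P)
print(np.allclose(Q.sum(axis=1), 0.0))      # rows of a generator sum to zero
```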
The parameters were estimated by considering the likelihood function built in Section 5. The estimated parameters, with a log-likelihood value of log L = −4066.118120, are:
$$P=\begin{pmatrix} 0 & 0.6667 & 0.0407 & 0.2926\\ 0.3870 & 0 & 0.1969 & 0.4161\\ 0.0296 & 0.2238 & 0 & 0.7466\\ 0.1605 & 0.3395 & 0.5 & 0 \end{pmatrix}$$
$$\alpha_1=(0.5730374,\ 0.4269626);\quad T_1=\begin{pmatrix}-0.6790043 & 0.6790043\\ 0 & -4.1343018\end{pmatrix}$$
$$\alpha_2=(0.4699825,\ 0.5300175);\quad T_2=\begin{pmatrix}-3.249849 & 3.249849\\ 0 & -10.426533\end{pmatrix}$$
$$\alpha_3=(0.0741494,\ 0.4258142,\ 0.5000364,\ 0)$$
$$T_3=\begin{pmatrix}-0.5533471 & 0.5533471 & 0 & 0\\ 0 & -2.2755462 & 2.2755462 & 0\\ 0 & 0 & -29.535695 & 29.535695\\ 0 & 0 & 0 & -242.7465\end{pmatrix}$$
$$\alpha_4=(0.3593538,\ 0.6406462,\ 0.0000);\quad T_4=\begin{pmatrix}-0.964899 & 0.964899 & 0\\ 0 & -4.112655 & 4.112655\\ 0 & 0 & -28.638854\end{pmatrix}$$
Therefore, the generator of the Markov process {J(t); t ≥ 0} is:
$$\hat{Q}=\left(\begin{smallmatrix} -0.6790 & 0.6790 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & -4.1343 & 1.2954 & 1.4609 & 0.0125 & 0.0717 & 0.0841 & 0 & 0.4347 & 0.7750 & 0\\ 0 & 0 & -3.2498 & 3.2498 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 2.3122 & 1.7228 & 0 & -10.4265 & 0.1522 & 0.8742 & 1.0266 & 0 & 1.5590 & 2.7794 & 0\\ 0 & 0 & 0 & 0 & -0.5533 & 0.5533 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & -2.2755 & 2.2755 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & -29.5357 & 29.5357 & 0 & 0 & 0\\ 4.1174 & 3.0679 & 25.5326 & 28.7941 & 0 & 0 & 0 & -242.7465 & 65.1273 & 116.1072 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -0.9649 & 0.9649 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -4.1127 & 4.1127\\ 2.6340 & 1.9625 & 4.5696 & 5.1533 & 1.0618 & 6.0974 & 7.1602 & 0 & 0 & 0 & -28.6389 \end{smallmatrix}\right)$$
with initial distribution $\hat{\theta}=(0, 0, 0, 0, 0.0741494, 0.4258142, 0.5000364, 0, 0, 0, 0)$.
The stationary distribution of the process {X(t); t ≥ 0} was estimated following Section 2.2 (given in Table 4). It can be interpreted as the long-run proportion of time spent in each macro-state.
Finally, the mean number of visits to each macro-state up to a certain time t was obtained following Section 4.1. It is shown in Table 5.
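Since one stated use of the fitted model is reproducing similar signals for circuit simulation, a minimal simulation sketch is given below. It is a generic continuous-time Markov chain simulator (not code from the paper); mapping each level to a measured current value, and adding measurement noise, is left out.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_trace(Q, theta, sizes, t_end):
    """Simulate the internal process J(t) and report the visited macro-states (RTN levels)."""
    n = Q.shape[0]
    level_of = np.repeat(np.arange(len(sizes)), sizes)   # phase index -> macro-state (level)
    state = rng.choice(n, p=theta)
    t, times, levels = 0.0, [0.0], [level_of[state]]
    while t < t_end:
        t += rng.exponential(1.0 / -Q[state, state])     # exponential sojourn in the phase
        probs = np.clip(Q[state], 0.0, None)
        probs[state] = 0.0
        state = rng.choice(n, p=probs / probs.sum())     # jump according to the intensities
        times.append(t)
        levels.append(level_of[state])
    return np.array(times), np.array(levels)

# Usage with the estimates of this section: pass the fitted generator, the initial
# distribution, and sizes = [2, 2, 4, 3]; a synthetic current trace is then obtained by
# assigning one current value per level.
```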

7. Conclusions

A real problem motivated the construction of a new stochastic process that accounts for the internal performance of different macro-states by assuming that they follow an internal Markovian behavior. We have shown that homogeneity and Markovianity are lost in the new macro-state model developed. The sojourn time in each macro-state is phase-type distributed depending on the initial observed time. The stationary distribution was calculated through matrix-algorithmic methods, and the distribution of the number of visits to a determined macro-state between any two different times was calculated using the Laplace transform. The mean number of visits as a function of time was worked out explicitly.
The newly developed methodology enables the modeling of complex systems in an algorithmic way by solving classic calculus problems. Thanks to the proposed development, the results and measures were worked out in an easier way and could be readily interpreted. Matrix analysis and Laplace transform techniques were used to determine the properties of the model, and algorithms to obtain quantitative results were provided. Given that everything was carried out algorithmically, the model was implemented computationally and was successfully applied to study different random telegraph noise signals measured for unipolar resistive memories.
Resistive memory random telegraph noise signals were analyzed in depth in order to characterize them from a probabilistic point of view. These signals are essential, since they can pose a limit on the performance of certain applications; in addition, this type of noise can be valuable as an entropy source in the design of random number generators for cryptography. From the proposed model, we showed that a latent state of a long resistive memory RTN signal is composed of multiple internal states. Of course, the applications of a phase-type model, as given in this work, are not restricted to the RRAM context.

Author Contributions

J.B.R. was in charge of the physical theoretical explanation and its conclusions. J.E.R.-C., C.A., and A.M.A. developed all the statistical theory, computationally implemented the methodology, and obtained the results shown. All authors contributed equally to the writing part of the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by projects MTM2017-88708-P and TEC2017-84321-C4-3-R of the Spanish Ministry of Science, Innovation and Universities (also supported by the FEDER program), the PhD grant (FPU18/01779) awarded to Christian Acal, and project FQM-307 of the Government of Andalusia (Spain). This study also was partially financed by the Andalusian Ministry of Economy, Knowledge, Companies and Universities under project A-TIC-117-UGR18.

Acknowledgments

We would like to thank F. Campabadal and M. B. González from the IMB-CNM (CSIC) in Barcelona for fabricating and measuring the devices employed here.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
PH    Phase-type
RTN   Random Telegraph Noise
RRAM  Resistive Random Access Memory
EDA   Electronic Design Automation
HMM   Hidden Markov Model

References

  1. Neuts, M.F. Probability distributions of phase type. In Liber Amicorum Professor Emeritus H. Florin; University of Louvain: Louvain-la-Neuve, Belgium, 1975. [Google Scholar]
  2. Neuts, M.F. Matrix Geometric Solutions in Stochastic Models: An Algorithmic Approach; John Hopkins University Press: Baltimore, MD, USA, 1981. [Google Scholar]
  3. Ruiz-Castro, J.E. Complex multi-state systems modelled through Marked Markovian Arrival Processes. Eur. J. Oper. Res. 2016, 252, 852–865. [Google Scholar] [CrossRef]
  4. Ruiz-Castro, J.E. A complex multi-state k-out-of-n: G system with preventive maintenance and loss of units. Reliab. Eng. Syst. Safe. 2020, 197, 106797. [Google Scholar] [CrossRef]
  5. Ruiz-Castro, J.E.; Dawabsha, M. A multi-state warm standby system with preventive maintenance, loss of units and an indeterminate multiple number of repairpersons. Comput. Ind. Eng. 2020, 142, 106348. [Google Scholar] [CrossRef]
  6. Artalejo, J.R.; Chakravarthy, S.R. Algorithmic Analysis of the MAP/PH/1 Retrial Queue. Top 2006, 14, 293–332. [Google Scholar] [CrossRef]
  7. Asmussen, S.; Bladt, M. Phase-type distribution and risk processes with state-dependent premiums. Scand. Actuar. J. 1996, 1, 19–36. [Google Scholar] [CrossRef]
  8. Ruiz-Castro, J.E.; Acal, C.; Aguilera, A.M.; Aguilera-Morillo, M.C.; Roldán, J.B. Linear-Phase-Type probability modelling of functional PCA with applications to resistive memories. Math. Comput. Simulat. 2020. [Google Scholar] [CrossRef]
  9. Asmussen, S. Ruin Probabilities; World Scientific: Singapore, 2000. [Google Scholar]
  10. He, Q.M. Fundamentals of Matrix-Analytic Methods; Springer: New York, NY, USA, 2014. [Google Scholar]
  11. Carboni, R.; Ielmini, D. Stochastic Memory Devices for Security and Computing. Adv. Electron. Mater. 2019, 5, 1900198. [Google Scholar] [CrossRef] [Green Version]
  12. Aldana, S.; Roldán, J.B.; García-Fernández, P.; Suñe, J.; Romero-Zaliz, R.; Jiménez-Molinos, F.; Long, S.; Gómez-Campos, F.; Liu, M. An in-depth description of bipolar resistive switching in Cu/HfOx/Pt devices, a 3D Kinetic Monte Carlo simulation approach. J. Appl. Phys. 2018, 123, 154501. [Google Scholar] [CrossRef]
  13. Chua, L.O. Memristor-the missing circuit element. IEEE Trans. Circuits Syst. 1971, 18, 507–519. [Google Scholar]
  14. Puglisi, F.M.; Zagni, N.; Larcher, L.; Pavan, P. Random Telegraph Noise in Resistive Random Access Memories: Compact Modeling and Advanced Circuit Design. IEEE Trans. Electron Dev. 2018, 65, 2964–2972. [Google Scholar] [CrossRef]
  15. Puglisi, F.M.; Larcher, L.; Padovani, A.; Pavan, P. A Complete Statistical Investigation of RTN in HfO2-Based RRAM in High Resistive State. IEEE Trans. Electron Dev. 2015, 62, 2606–2613. [Google Scholar] [CrossRef]
  16. Alonso, F.J.; Maldonado, D.; Aguilera, A.M.; Roldán, J.B. Memristor variability and stochastic physical properties modeling from a multivariate time series approach. Chaos Soliton Fract. 2021, 143, 110461. [Google Scholar] [CrossRef]
  17. Aguilera-Morillo, M.C.; Aguilera, A.M.; Jiménez-Molinos, F.; Roldán, J.B. Stochastic modeling of Random Access Memories reset transitions. Math. Comput. Simulat. 2019, 159, 197–209. [Google Scholar] [CrossRef]
  18. Simoen, E.; Claeys, C. Random Telegraph Signals in Semiconductor Devices; IOP Publishing: Bristol, UK, 2017. [Google Scholar]
  19. González-Cordero, G.; González, M.B.; Jiménez-Molinos, F.; Campabadal, F.; Roldán, J.B. New method to analyze random telegraph signals in resistive random access memories. J. Vac. Sci. Technol. B 2019, 37, 012203. [Google Scholar] [CrossRef]
  20. González-Cordero, G.; González, M.B.; Morell, A.; Jiménez-Molinos, F.; Campabadal, F.; Roldán, J.B. Neural network based analysis of Random Telegraph Noise in Resistive Random Access Memories. Semicond. Sci. Tech. 2020, 35, 025021. [Google Scholar] [CrossRef]
  21. Grasser, T. Noise in Nanoscale Semiconductor Devices; Springer: Cham, Switzerland, 2020. [Google Scholar]
  22. Wei, Z.; Katoh, Y.; Ogasahara, S.; Yoshimoto, Y.; Kawai, K.; Ikeda, Y.; Eriguchi, K.; Ohmori, K.; Yoneda, S. True random number generator using current difference based on a fractional stochastic model in 40-nm embedded ReRAM. In Proceedings of the 2016 IEEE International Electron Devices Meeting (IEDM), San Francisco, CA, USA, 3–7 December 2016. [Google Scholar]
  23. Chen, X.; Wang, L.; Li, B.; Wang, Y.; Li, X.; Liu, Y.; Yang, H. Modeling Random Telegraph Noise as a Randomness Source and its Application in True Random Number Generation. IEEE Trans. Comput. Aided Des. 2016, 35, 1435–1448. [Google Scholar] [CrossRef]
  24. Acal, C.; Ruiz-Castro, J.E.; Aguilera, A.M.; Jiménez-Molinos, F.; Roldán, J.B. Phase-type distributions for studying variability in resistive memories. J. Comput. Appl. Math. 2019, 345, 23–32. [Google Scholar] [CrossRef]
  25. González, M.B.; Rafí, J.M.; Beldarrain, O.; Zabala, M.; Campabadal, F. Analysis of the Switching Variability in Ni/HfO2 -Based RRAM Devices. IEEE Trans. Device Mat. Reliab. 2014, 14, 769–771. [Google Scholar]
  26. Rabiner, L.R. A tutorial on hidden Markov models and selected applications in speech recognition. Proc. IEEE 1989, 77, 257–286. [Google Scholar] [CrossRef]
  27. Puglisi, F.M.; Pavan, P. Factorial Hidden Markov Model analysis of Random Telegraph Noise in Resistive Random Access Memories. ECTI Trans. Electr. Eng. Electron. Commun. 2014, 12, 24–29. [Google Scholar]
  28. Visser, I.; Speekenbrink, M. depmixS4: An R Package for Hidden Markov Models. J. Stat. Softw. 2010, 36, 1–21. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Current versus time trace for a Ni/HfO2/Si device in the high resistance state HRS, in which the RTN noise can clearly be seen. A two-level signal is shown; the two different current levels are marked with the corresponding current thresholds. Another RTN trace is shown in the inset, measured for the same device. Three current levels are seen in this case.
Figure 2. Fit obtained with the HMM for multiple latent states for the different RTN signals under study: (a) RTN25; (b) RTN26; (c) RTN27.
Figure 3. Fit obtained with the HMM for multiple latent states for the long RTN trace under study.
Table 1. Proportional number of times that each chain was in the latent states.
Signal   Model             Latent State 1   Latent State 2   Latent State 3
RTN25    2 latent states   0.784            ---              0.216
RTN25    3 latent states   0.762            0.037            0.201
RTN26    2 latent states   0.756            ---              0.244
RTN26    3 latent states   0.753            0.0305           0.2165
RTN27    2 latent states   0.7665           ---              0.2335
RTN27    3 latent states   0.7575           0.0255           0.2170
Table 2. Stationary distribution for the different traces.
Signal   Model             Latent State 1   Latent State 2   Latent State 3
RTN25    2 latent states   0.7974           ---              0.2026
RTN25    3 latent states   0.7847           0.0185           0.1968
RTN26    2 latent states   0.7708           ---              0.2292
RTN26    3 latent states   0.7746           0.0165           0.2089
RTN27    2 latent states   0.7736           ---              0.2264
RTN27    3 latent states   0.7785           0.0026           0.219
Table 3. Expected number of visits to each level for the different traces.
Signal   Level     t = 5     t = 10    t = 15    t = 20
RTN25    Level 1   12.3497   23.8609   35.3721   46.8833
RTN25    Level 2   11.5523   23.0635   34.5747   46.0859
RTN26    Level 1   12.8198   24.8162   36.8127   48.8092
RTN26    Level 2   12.0490   24.0455   36.0419   48.0384
RTN27    Level 1   11.5645   22.5305   33.4965   44.4626
RTN27    Level 2   11.7909   22.7569   33.7229   44.6890
Table 4. Stationary distribution for the process {X(t); t ≥ 0}.
                                          Level 1   Level 2   Level 3   Level 4
Selected interval in the long RTN trace   0.3273    0.1197    0.1612    0.3919
Table 5. Expected number of visits up to a certain time for different times.
Time      Level 1    Level 2    Level 3    Level 4
t = 50    16.0207    25.0716    20.3974    30.0827
t = 100   31.0837    49.9364    40.9591    60.1877
t = 200   61.1925    99.6404    82.0612    120.3666
t = 500   151.4018   248.5475   205.1981   300.6553
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.


