Article

Near-Optimal Multirun March Memory Test Algorithms for Neighborhood Pattern-Sensitive Faults in Random-Access Memories

by
Petru Cașcaval
1,* and
Doina Cașcaval
2
1
Department of Computer Science and Engineering, “Gheorghe Asachi” Technical University of Iași, Bulevardul Profesor Dimitrie Mangeron 65, 700050 Iași, Romania
2
Faculty of Industrial Design and Business Management, “Gheorghe Asachi” Technical University of Iași, Bulevardul Profesor Dimitrie Mangeron 65, 700050 Iași, Romania
*
Author to whom correspondence should be addressed.
Mathematics 2025, 13(16), 2594; https://doi.org/10.3390/math13162594
Submission received: 6 July 2025 / Revised: 5 August 2025 / Accepted: 8 August 2025 / Published: 13 August 2025

Abstract

This research paper addresses the problem of testing N × 1 random-access memories (RAMs) in which complex models of unlinked static neighborhood pattern-sensitive faults (NPSFs) are considered. Specifically, two well-known fault models are addressed: the classical NPSF model that includes only memory faults sensitized by transition write operations and an extended NPSF model that covers faults sensitized by transition write operations as well as faults sensitized by non-transition writes or read operations. For these NPSF models, near-optimal multirun march memory tests suitable for implementation in embedded self-test logic are proposed. Each of the two new memory tests completely covers the NPSF model considered. The assessment of optimality is based on the fact that for any group of cells corresponding to the NPSF model, the state graph is completely covered and each arc is traversed only once, which means that the graph is of the Eulerian type. However, we say that these memory tests are near-optimal and not optimal because some additional write operations are required for data background changes. A characteristic of a memory test algorithm where multiple data backgrounds are applied is that the test data is always correlated with the address of the accessed location. For easy implementation in embedded self-test logic, the proposed tests use 4 × 4 memory initialization patterns rather than the more difficult-to-implement 3 × 3 patterns, as is the case with other currently known near-optimal memory tests.

1. Introduction

As shown in the International Roadmap for Devices and Systems (IRDS) edited by the IEEE Computer Society or in works such as [1,2,3], in a System on a Chip (SoC) or Network on a Chip (NoC), the memory part has a growing share, currently covering 75–85% of the integrated area. Therefore, in the production of reliable integrated circuits at competitive costs, efficient testing of the memory part is a very important requirement.
In this paper, the authors address the issue of testing N × 1 RAMs with regard to the complex fault model of static unlinked NPSFs as a subclass of the coupling fault (CF) model [4,5]. According to the classification of memory faults, as presented for example in [2,6,7,8], the “static fault” class refers to faults sensitized by performing at most one memory operation, while the “dynamic fault” class refers to faults sensitized by two or more operations performed sequentially. According to the same taxonomy, presented in [6] or [7], for example, CFs are said to be unlinked when they do not influence each other. In this paper, the class of static unlinked CFs is addressed.
The CF model reflects imperfections in the memory cell area that cause two or more cells to influence each other in their operation [2,3]. Depending on the number of cells involved in a memory fault, there are several classes of coupling faults, such as two-cell coupling, three-cell coupling, four-cell coupling, or five-cell coupling. In the two-cell CF model, it is considered that the cells involved in a CF can be anywhere in the memory [9]. In this way, this model also addresses memory faults [6]. For more complex k-cell CF models, where k ≥ 3, it is assumed that the cells involved in a memory fault are physically adjacent [5,10,11]. It is therefore necessary to know the memory structure, or more precisely the correspondence between logical addresses and physical addresses, in the form of row and column addresses. This realistic limitation allows the complexity of these fault models to be more easily managed [12,13,14]. For these complex CF models, testing is usually performed with additional integrated architectures called built-in self-test (BIST) logic [15,16,17,18,19,20].
Regarding the k-cell CFs, where k ≥ 2, depending on the technology used, the accuracy of the testing, and the costs involved, two classes of fault models have been defined to cover the most common memory errors, namely, (a) the classical model class, in which it is considered that a memory fault can be sensitized only by a transition write operation, and (b) an extended class of models, in which it is admitted that memory faults can be sensitized by transition write operations as well as by non-transition writes or read operations—extended CF (ECF) models [9,12,13,14]. With the increase in density and the decrease in supply voltage, the range of memory defects has diversified, and, as a result, for high testing accuracy, ECF models are increasingly used. We note, however, that a memory test covering the classical CF model can be adapted relatively easily to cover the corresponding ECF model [12]. This is also highlighted in this work.
For three-cell or four-cell classical CF models, reference tests are reported in [10,11,21,22,23,24]. For three-cell or four-cell ECF models, near-optimal tests are presented in [14].
In this paper, we focus exclusively on the NPSF model, which is a subclass of k-cell CFs, where k = 5. The NPSF model involves five physically adjacent memory cells arranged in a configuration like the one shown in Figure 1. Note that this five-cell configuration also covers the six configurations of the three-cell CF model involving neighboring memory cells arranged in a corner or along a row or column, as considered in [5,13,14], for example, but does not completely cover the configuration of the four-cell CF model in which neighboring memory cells are arranged in a square [14].
This fault model, which reflects the influence of neighboring cells on the central memory cell, has been defined since the 1970s [25]. Specifically, we refer here to the classical NPSF model in which a fault sensitization involves a transition write operation in a cell in the group, and the other four cells are in a certain state that favors the occurrence of the memory error affecting the central cell. The scientific community has continuously studied the NPSF model because of its importance and complexity. Memory testing for NPSF models is covered extensively in books, like [2,3,4,26], or in review papers, such as [16,17,19].
For the NPSF model, some memory tests proposed as early as the 1980s were quite effective for the memory sizes of that period [27,28]. For example, the memory test given by Suk and Reddy [28], which completely covers this model, is based on an algorithm for dividing the set of memory cells into two halves, based on row and column addresses, and is 165N long. A reduction in testing time became possible by applying march-type tests on different memory initialization patterns (data backgrounds). This idea was first proposed by Cockburn [23]. This technique, called “multirun march memory testing”, was then used to identify efficient memory tests for this complex NPSF model [29,30,31,32]. For example, in [31], to reduce the length of the test, Buslowska and Yarmolik propose a multirun technique with a random background variation. In that case, however, the NPSF model is only partially covered. A first near-optimal deterministic memory test for the NPSF model was proposed by Cheng, Tsai, and Wu in 2002 [32]. This memory test, called CM-79N, is 79⅓N long and uses 16 memory initialization patterns of size 3 × 3. Ten years later, Huzum and Cașcaval [33] improved this result with another near-optimal memory test (called March-76N) that is slightly shorter, of length 76N. Compared to CM-79N, the March-76N memory test applies a different testing technique but also uses 3 × 3 data background patterns. CM-79N and March-76N are the shortest deterministic memory tests that completely cover this complex fault model. Their authors consider these memory tests to be near-optimal rather than optimal, as additional write operations are required for multiple memory initializations. Although near-optimal, these tests are not well suited to BIST implementation due to the complexity of the address-based data generation logic. More on this issue can be found in the works [34,35,36,37,38,39], but an in-depth analysis is also presented in this paper.
For this reason, Cheng, Tsai and Wu also proposed in the same article [32] another memory test (March-100N), which, although it is longer (and obviously non-optimal), is easier to implement in BIST-RAM architectures due to its 4 × 4 memory initialization patterns. A modified version of the March-100N memory test for the diagnosis of SRAM is given by Julie, Sidek, and Wan Zuha [36].
In this work, a new near-optimal test (MT_NPSF_81N) able to completely cover the classical NPSF model is proposed. This new test is of length 81N and uses 16 data backgrounds with 4 × 4 patterns, making it suitable for implementation in BIST-RAM architectures. Compared to the March-100N test, this new memory test is significantly shorter and has a simpler structure. The synthesis of the address-based data generation logic for this memory test, considering a possible BIST implementation, is also discussed. This work also highlights the simplicity of address-based data generation logic in the case of 4 × 4 data backgrounds compared to 3 × 3 data backgrounds. For the extended NPSF model (ENPSF), a near-optimal test of length 177N is proposed.
The remainder of this paper is organized as follows: Section 2 includes assumptions, notations, a short description of the NPSF model, and some considerations related to multirun memory tests and to the necessary and sufficient conditions for detecting coupling faults. Section 3 describes new near-optimal memory tests for the classical NPSF model and the extended NPSF model that use 4 × 4 data background patterns. Section 4 presents a synthesis of address-based data generation logic for memory operations for a possible BIST implementation of the proposed memory tests. To emphasize the advantage of the proposed tests over other known memory tests that use 3 × 3 background patterns, where modulo 3 residues are required for row and column addresses, Section 5 presents a synthesis of address-based data generation logic for the March-76N memory test. A discussion related to the proposed new memory tests, covering several different aspects, is included in Section 6. Some experimental results that complete the comparative analysis between the proposed tests and other known tests are presented in Section 7. Section 8 concludes the paper.

2. Assumptions, Notations, and Preliminaries

2.1. Assumptions

As in other works dedicated to NPSF models, it is assumed that the memory structure is fully known, so that based on the physical address information, a multirun march test can be applied using different data backgrounds. Of course, in the memory, there can be one or more groups of coupled cells. As in other similar works, such as [4,26,30,31,32,33,34,35], it is assumed that the coupled cell groups are disjoint.

2.2. Notations

The notations used to describe memory operations or certain memory errors are shown in Table 1.

2.3. The NPSF Model and Preliminary Considerations for Multirun Memory Testing

Generally, for a k-cell CF model, the affected cell (the cell where the memory error occurs) is considered the victim cell (v-cell), and the other k − 1 cells in the group are considered aggressor cells (a-cells) [7]. In order to describe more clearly how a memory fault is sensitized, sometimes the cell where the fault-sensitizing operation is performed is considered the aggressor cell, and the other k − 2 cells in the group which, based on the state they are in, allow the appearance of the memory error, are called enabling cells (e-cells) [6,7]. These aspects are illustrated in Figure 2 and Figure 3. Since neighboring cells can exert a passive or active influence on the central cell, the NPSF model includes two types of faults, passive NPSF and active NPSF, as shown in Figure 2 and Figure 3, respectively [27,28].
Specifically, in the case of a passive NPSF, a configuration of neighboring bits (a data pattern) prevents a certain transition in the central cell, from zero to one or from one to zero, as appropriate. Hence the “pattern-sensitive” character of this fault model. In the case of an active NPSF, a transition in one of the neighboring cells (the a-cell), with the other cells in a certain state that facilitates the occurrence of the error (the e-cells), can cause a state change in the central cell as well (↑ or ↓, as appropriate). Examples of primitive faults for the classical NPSF model are presented in Table 2.
For all active faults shown in Table 2 (rows 1–32), cell i plays the role of the aggressor cell, while cells j, k, and m are considered enabling cells. For the passive faults in this table (rows 129–160), cells i, j, k, and m are considered aggressor cells. The classical NPSF model comprises 2⁵ passive primitive faults and 4 × 2⁵ active primitive faults [28].
To detect a memory error, the test must sensitize the fault with a suitable transition write operation and then read the victim cell to observe the error. For the classical NPSF model, in terms of satisfying the sensitization condition of any fault, a memory test must entirely cover the state transition graph for any group of five cells in the known configuration.
Let g = {i, j, b, k, m} be a group of five cells as shown in Figure 1, where i < j < b < k < m, which means that when memory addresses are traversed in ascending order (⇑), the cells are accessed in this order, and when memory addresses are traversed in descending order (⇓), the cells are accessed in the reverse order. Given this aspect, it can be said that a memory test is optimal if, for any group g of cells, any transition write operation in the state graph is performed once and only once (i.e., for any group g of cells, the state graph is Eulerian).
Considering the two types of NPSFs (passive and active faults), as also mentioned in other works, e.g., Refs. [2,4,9,10], in order to detect any error resulting from memory fault sensitization, two necessary and sufficient observability conditions must be met:
(OC1) A memory cell must be read after performing a fault-sensitizing operation, prior to a new write, to verify that this operation was performed correctly.
(OC2) Before writing to a cell, it must first be read to verify that a state change has not occurred as a result of an operation previously performed in another, possibly aggressor, cell.
For RAM testing, the most commonly used tests are march-type tests. These memory tests are distinguished by their simplicity of implementation and the fact that they belong to the O(N) complexity class. Therefore, the length of the testing algorithms and, implicitly, the duration of the memory testing process are reduced.
A march memory test consists of a set of march items, each of them describing a sequence of operations that must be performed successively on all memory locations (moving to the next cell is performed only after performing all operations on the current cell). The first symbol of a march element specifies the order in which memory locations are accessed. Given a sequential enumeration of memory addresses, the symbol ⇑ indicates that these addresses are traversed in ascending order, while the symbol ⇓ indicates that they are traversed in reverse order.
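The march-element notation described above can be modeled behaviorally. The sketch below is illustrative Python (not part of the paper's BIST logic) and uses the well-known MATS+ test as an example; on a fault-free memory model, no read should ever mismatch its expected value.

```python
# Illustrative model of a march test: a list of march elements (direction, ops),
# where 'w0'/'w1' write a value, 'r0'/'r1' read and compare against the expected
# value, and 'wt' performs a transition write (complements the cell).
def run_march_test(memory, march_test):
    """Apply a march test to a memory model; return addresses of read mismatches."""
    errors = []
    n = len(memory)
    for direction, ops in march_test:
        addresses = range(n) if direction == "up" else range(n - 1, -1, -1)
        for addr in addresses:
            for op in ops:  # all ops on the current cell before moving on
                if op == "w0":
                    memory[addr] = 0
                elif op == "w1":
                    memory[addr] = 1
                elif op == "wt":          # transition write (complement)
                    memory[addr] ^= 1
                elif op in ("r0", "r1"):  # read and verify expected value
                    if memory[addr] != int(op[1]):
                        errors.append(addr)
    return errors

# Example: MATS+ = {⇑(w0); ⇑(r0, w1); ⇓(r1, w0)}
mats_plus = [("up", ["w0"]), ("up", ["r0", "w1"]), ("down", ["r1", "w0"])]
```

Running `mats_plus` on a fault-free 8-cell memory returns an empty error list and leaves the memory filled with zeros.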
In general, to cover k-cell CF models, where k ≥ 3, a multi-stage testing technique with different memory initializations is applied. This technique is called “multirun memory testing”.
In the case of multirun march memory testing, a given march test is applied repeatedly on different data backgrounds. A data background is obtained by replicating a data pattern across the memory both horizontally and vertically. Data backgrounds with row-stripe, column-stripe, or checkerboard patterns are commonly used in multirun memory testing [12,13]. For these data backgrounds, the initialization patterns are of size 2 × 2. The NPSF model considered here, however, requires more complicated initialization patterns, namely of size 3 × 3 or 4 × 4, as shown in [32,33] and as presented extensively in this work. Specifically, a data background pattern must completely cover the five-cell configuration shown in Figure 1 and, for this reason, the pattern must be at least 3 × 3 in size.
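The tiling of a pattern into a full data background can be sketched as follows (illustrative Python; the 2 × 2 checkerboard pattern is used here as an example, while the actual 4 × 4 patterns are those of Figure 5):

```python
# A data background is obtained by tiling an initialization pattern across the
# memory cell array, both horizontally and vertically.
def make_background(pattern, rows, cols):
    """Tile a pattern over a rows × cols cell array."""
    p_rows, p_cols = len(pattern), len(pattern[0])
    return [[pattern[r % p_rows][c % p_cols] for c in range(cols)]
            for r in range(rows)]

checkerboard_2x2 = [[0, 1],
                    [1, 0]]
bg = make_background(checkerboard_2x2, 8, 8)
```

The value of any cell depends only on the residues of its row and column addresses modulo the pattern size, which is exactly the property exploited later for the 4 × 4 patterns.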

3. Near-Optimal March Tests for the NPSF Model

To cover the classical NPSF model, where a fault sensitization involves a transition write operation in a cell of the coupled cell group with the configuration shown in Figure 1, a new memory test is proposed, called MT_NPSF_81N. In this test, two march elements are applied sixteen times on different data backgrounds (i.e., a multirun technique). The description of this new march multirun memory test is given in Figure 4, and the sixteen patterns used for memory initialization are shown in Figure 5.
In this description, BGC1 indicates the primary memory initialization sequence, while BGCi, i ∈ {2, 3, …, 16}, represent test sequences with background changes, according to the patterns shown in Figure 5. Note that any change in the data background involves a status update for only half of the memory cells. On the other hand, in order to satisfy the OC2 observability condition, any write operation to change the state of a memory cell is preceded by a read operation. Thus, the initialization sequence BGC1, of the form ⇑(w0), involves N write operations, while a test sequence with a background change, BGCi, i ∈ {2, 3, …, 16}, requires N/2 read operations and N/2 write operations (so, the same number N of memory operations). Thus, the length of the memory test is
L = 16 × (1 + 4)N + N = 81N.   (1)
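The test-length arithmetic above can be checked directly (a trivial sketch; the counts are expressed in multiples of N, and the trailing N-operation term is the closing sequence of the test):

```python
# Test length of MT_NPSF_81N, counted in multiples of N:
# 16 runs, each a background sequence (N ops) plus two march elements
# of the form (r, wt), i.e., 2N ops each, followed by a final N-op sequence.
runs = 16
ops_per_run = 1 + 2 * 2          # background + two 2-operation march elements
length_in_N = runs * ops_per_run + 1
```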
The ability of the MT_NPSF_81N memory test to fully cover the classical NPSF model is discussed in the following.
Theorem 1.
The MT_NPSF_81N test algorithm is able to detect all unlinked static faults of the classical NPSF model.
Proof. 
Let g = { i , j , b , k , m } be an arbitrary group of five neighboring memory cells corresponding to the well-known NPSF configuration shown in Figure 1. The ability of the MT_NPSF_81N test algorithm to sensitize and observe any fault that may affect a cell in the g-group of cells is demonstrated in the following.
I. MT_NPSF_81N is able to sensitize any fault in a g-group of coupled cells
To prove this statement, it is shown that during memory testing, MT_NPSF_81N performs all possible operations in this g-group of cells. In other words, MT_NPSF_81N completely covers the graph of states describing the normal operation of cells in this group.
But the g-group of cells can be anywhere in memory. The operations performed during memory testing differ from one group of cells to another because the initializations of the cells in a group depend on their position in memory (a position fully specified by the base cell address). How, then, can we monitor the operations performed within a group of cells when the number of distinct cases is so large? The following analysis aims to answer this question or, at least, to provide a solution for mastering the complexity of this problem.
As already noted, a data background is obtained by replicating a data pattern across the memory both horizontally and vertically. Given that this test algorithm uses 4 × 4 memory initialization patterns, it follows that the cells selected 4 by 4 per row or column are brought to the same initial state. This aspect is illustrated in Figure 6, where the BG12 pattern is applied. For easier understanding, two groups of identically initialized cells are highlighted in this figure: a group of cells marked in gray, initialized with zero, and another group of cells marked in yellow, initialized with one.
Note that for cells initialized with the same value, the same memory operations will be performed when a march element is applied. Starting from this observation, the memory cells are divided into 16 classes (subsets), denoted by C1, C2, …, C16, depending on the residues of the row address modulo 4 (Ar mod 4) and the column address modulo 4 (Ac mod 4), as shown in Table 3. For example, the cells highlighted in gray in Figure 6 belong to class C6 because, for any of these cells, Ar mod 4 = 1 and Ac mod 4 = 1, while the cells highlighted in yellow belong to class C11, for which Ar mod 4 = 2 and Ac mod 4 = 2. This division of the memory cell set into 16 classes according to row and column addresses is illustrated in Figure 7.
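The class of a cell follows directly from the two residues. The sketch below assumes the classes in Table 3 are numbered row by row over the (Ar mod 4, Ac mod 4) pairs; this numbering convention is an assumption, but it reproduces the two examples given above (C6 for residues (1, 1) and C11 for residues (2, 2)):

```python
# Class index C1..C16 from the residues of the row and column addresses mod 4,
# assuming row-by-row numbering of the 16 (Ar mod 4, Ac mod 4) pairs.
def cell_class(row_addr, col_addr):
    """Return the class number (1..16) of the cell at (row_addr, col_addr)."""
    return 4 * (row_addr % 4) + (col_addr % 4) + 1
```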
Now let Gi be the class (subset) of all g-groups of five cells in the known configuration for which the base cell b ∈ Ci, i ∈ {1, 2, …, 16}. This class division of the g-groups of five cells is also illustrated in Figure 7 in different colors. This figure shows that the cells in the neighborhood of the base cells also belong to the same classes. For example, for the groups of cells in the subset G6, the cells identified with i belong to C2, those identified with j belong to C5, and so on. As a result, for all groups within the same class, exactly the same operations will be performed during the testing process, and therefore the same state transitions will result. Given this, the study of state transitions in a group of cells g is limited to 16 distinct groups, one for each class Gi, i ∈ {1, 2, …, 16}. This idea is the basis for understanding this proof.
Given this simplification, in the following, let us analyze the initial state for a g-group of five memory cells in the known configuration, depending on the data background (BG1–BG16) and the class of groups to which it belongs (G1–G16).
The status code for a g-group of cells is a five-bit word corresponding to the component cells, in the form vi vj vb vk vm. It is worth noting that a group g of cells in the state S = vi vj vb vk vm will reach the complementary state S̄ = v̄i v̄j v̄b v̄k v̄m when the march element ⇑(r, wt) is applied. We say that S and S̄ are complementary states taking into account the following property:
S + S̄ = 11111B = 1FH.   (2)
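The complement property can be checked with a small behavioral model (illustrative Python; the group state is a 5-bit integer with the i-bit as the most significant bit, which is an encoding assumption):

```python
# One pass of the march element (r, wt) over a five-cell group: each cell is
# read and then toggled (transition write), so the state S becomes S̄ with
# S + S̄ = 11111 (binary) = 0x1F.
def apply_march_element(state):
    """Apply (r, wt) to a 5-bit group state; return the final state and transitions."""
    transitions = []
    for cell in range(5):            # cells accessed in order i, j, b, k, m
        transitions.append((state, cell))
        state ^= 1 << (4 - cell)     # wt: complement the accessed cell's bit
    return state, transitions

final, _ = apply_march_element(0b01011)
```

Applying the element a second time returns the group to its initial state, which mirrors the pairing of the two identical march elements in each run of the test.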
Figure 8 illustrates the initial and complementary states, Ix and Īx, for two classes of groups, G1 and G11, depending on the data background BGx, x ∈ {1, 2, …, 16}. For the other 14 classes of groups, the values of Ix and Īx are obtained in the same way. Table 4 lists the initial and complementary states for any group of cells in the known configuration, depending on the initialization pattern (BG1, BG2, …, BG16) and the class to which the group belongs (G1, G2, …, G16).
The hexadecimal values in Figure 8 are listed in Table 4 in columns G1 and G11. In this table, it can be verified that for any group g of five neighboring cells in the known configuration, regardless of the class to which the group belongs (i.e., any column), the 16 background changes result in 16 distinct initial states, such that
Ix ≠ Iy, ∀x, y ∈ {1, 2, …, 16}, x ≠ y.   (3)
Furthermore, for any column in Table 4, the following relation is valid:
Ix ≠ Īy, ∀x, y ∈ {1, 2, …, 16}.   (4)
If S represents the set of all 2⁵ = 32 states (as in the truth table), based on Equations (3) and (4) we can write the following equation:
⋃x=1…16 {Ix, Īx} = S.   (5)
Note that the states in any of the 16 columns of Table 4 satisfy Equation (5), which means that all initial complementary states are different from each other and all together form the set S .
As a remark, identifying those initialization models for which this property is fulfilled has involved a considerable research effort.
It is also important to note that, starting from an initial state Ix, x ∈ {1, 2, …, 16}, upon the execution of the first march element (of the form ⇑(r, wt)), a g-group of cells is brought to the complementary state Īx, and after the next march element (identical to the first), that group of cells is returned to the initial state Ix.
Based on all these aspects, we can conclude that any g-group of cells in the known configuration is brought into each of the 32 possible states, and then a test sequence of the form ⇑(r, wt) is applied. In a full memory scan with the march element ⇑(r, wt), five different transitions are performed in any group of five cells. On the whole, in a group of five cells, 32 × 5 = 160 transitions are performed.
Figure 9 shows the transitions performed in a g-group of cells of class G1 in the first six of the sixteen stages of the memory test. The initial states after the six background changes are highlighted in bold. For easier identification, the transitions performed at each iteration are shown in different colors, and for the first transition within an iteration, an identification number is written on the edge.
For example, the 10 blue arcs that appear on the outer edge of the graph reflect the transitions made in a g-group of cells in the first stage of the test, when the memory is initialized with zero everywhere (BG1-0 solid data background) and then the full memory is scanned twice in ascending order by applying the test sequence ⇑(r, wt); ⇑(r, wt). The cells in group g are accessed in the order i, j, b, k, m. During the first scan of the memory, as these cells are accessed, the state evolution for group g is as follows: 00000, 10000, 11000, 11100, 11110, and 11111. Then, during the second scan, the group of cells successively reaches the states 01111, 00111, 00011, 00001, and finally 00000. So, the group of cells is returned to the initial state from which it started. In the same way, the transitions in the graph can be tracked for the other iterations of the memory test. Isolated nodes in the graph, without incident edges, reflect states that have not yet been reached in the first six iterations of the memory test. It should be noted that a complete graph with 160 edges would be difficult to draw and track.
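The state evolution just described can be reproduced mechanically (illustrative Python; the group state is encoded as a 5-bit integer with the i-bit as the most significant bit):

```python
# State evolution of a G1 group under the all-zero background BG1:
# two consecutive ascending scans with the march element (r, wt).
def scan_states(start):
    """Return the successive group states over two full (r, wt) scans."""
    states, s = [start], start
    for _ in range(2):               # two ascending scans per background
        for cell in range(5):        # cells accessed in order i, j, b, k, m
            s ^= 1 << (4 - cell)     # wt toggles the accessed cell
            states.append(s)
    return states

trace = [format(s, "05b") for s in scan_states(0b00000)]
```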
Next, it is shown that all 160 transitions performed in a group g of cells are different transitions, and therefore the state transition graph is completely covered. Figure 10 shows the state evolution for a g-group of cells starting from two different initial states, Ix0 ≠ Iy0, when the march element ⇑(r, wt) is applied.
The intermediate states that appear in the figure, Ixu and Iyu, u = 1, 2, 3, 4, are detailed in Table 5.
From Table 5 it follows that, since Ix0 ≠ Iy0, we also have Ixu ≠ Iyu, u ∈ {1, 2, 3, 4}. Note that although often Ixu = Iyv, u ≠ v, in the two cases the write operations are performed on different memory cells, and therefore the respective transitions are also different. Several such cases can be found in Figure 9, such as the transitions made in the states 10000, 10100, 10010, or 01001.
In conclusion, by applying the MT_NPSF_81N memory test, 16 × 2 × 5 = 160 different transitions are performed in the g-group of cells, completely covering the state transition graph. Moreover, each of these transitions is performed only once. As a result, the sensitization condition for all coupling faults specific to this classical NPSF model is met.
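The coverage argument can also be checked mechanically. The actual initial states are those of Table 4 and are not reproduced here; as an illustrative stand-in, any family of 16 five-bit states that are pairwise distinct and pairwise non-complementary satisfies Equations (3)–(5), and the 16 states whose i-bit is 0 form such a family. A minimal sketch under this assumption:

```python
# Run two ascending (r, wt) scans from each of 16 initial states and collect
# every transition as an edge (state, cell). If the initial states satisfy
# the distinctness and non-complementarity properties, the 160 collected
# edges are all different and cover the complete 32-state transition graph.
def collect_transitions(initial_states):
    edges = set()
    for s0 in initial_states:
        s = s0
        for _ in range(2):            # two scans per background
            for cell in range(5):
                edges.add((s, cell))  # a wt transition in 'cell' at state 's'
                s ^= 1 << (4 - cell)
        assert s == s0                # the group returns to its initial state
    return edges

edges = collect_transitions(range(16))   # the 16 states with the i-bit clear
```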
II. MT_NPSF_81N detects any sensitized fault in a g-group of coupled cells
Proving this property involves verifying the fulfillment of the two observability conditions, OC1 and OC2, presented in Section 2. For an easier and more direct verification of the observability conditions, Figure 11 shows all the operations performed in a group of cells g = {i, j, b, k, m} during the execution of the MT_NPSF_81N memory test. In the sequence of operations performed on the cells in group g, write operations for initialization or background change are written in normal font, and those with the role of memory fault sensitization are highlighted in blue. Read operations that have the role of detecting possible errors resulting from fault sensitization are written in red.
Analyzing the sequence of operations shown in Figure 11, it can be seen that when a cell is accessed, it is first read, thus checking whether a state change has occurred in the cell as a result of an influence from another, possibly aggressor, cell. This check is required by condition OC2. Also, by analyzing the sequence of operations illustrated in this figure, it can be seen that after a write operation is performed on a cell, the cell is then read (immediately or later) to verify that the operation was performed correctly. Thus, condition OC1 is also met.
With the verification of both conditions regarding the sensitization of all considered memory faults and the ability to detect any resulting errors, the proof of the theorem is complete. □
Remark 1.
For any group of cells that corresponds to the considered NPSF model, the state transition graph is completely covered, and furthermore, each transition is performed only once. That means that the graph of states is Eulerian. The only redundant operations are those used for background changes. With this argument, it can be said that this memory test is near-optimal.
The proposed memory test can be adapted to cover the extended NPSF model (ENPSF), in which memory faults can be sensitized by transition write operations but also by read or non-transition write operations [12,14]. To this end, the two march elements of the form ⇑(r, wt) are extended to the form ⇑(r, wnt, r, r, wt). This extension leads to a near-optimal memory test for the ENPSF model, called MT_ENPSF, of length 177N, as shown in Figure 12.
Note that in this case, when a certain state is reached, the wnt and r operations are first executed and checked, and then the state is changed by a wt operation.
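The 177N length follows from the same counting as in the 81N case, with each extended march element now performing five operations per cell (a trivial check, counted in multiples of N):

```python
# Test length of MT_ENPSF, counted in multiples of N: 16 runs, each a
# background sequence (N ops) plus two extended march elements of the form
# (r, wnt, r, r, wt), i.e., 5N ops each, followed by a final N-op sequence.
runs = 16
ops_per_run_enpsf = 1 + 2 * 5    # background + two 5-operation march elements
enpsf_length_in_N = runs * ops_per_run_enpsf + 1
```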
Remark 2.
The ability of each of the two memory tests to cover the considered NPSF model was also verified by simulation.
Verifying by simulation the ability of a memory test to detect faults within the considered fault model is not a difficult problem. The following clarifications are made regarding this experiment:
(1)
The memory is simulated by a binary matrix of appropriate size (in our case, a matrix of 16 × 16 was sufficient). The operations described in the test algorithm are performed on this matrix.
(2)
Within the matrix, a group of five cells is chosen for each of the sixteen classes, G1, G2, …, G16, which are then studied and monitored in turn.
(3)
The faults considered (see Table 2) are simulated/injected within the current g-group of cells and verified one by one. For each simulated fault, the test program is run to verify whether the fault is detected or not.
(4)
For the classical NPSF model, simulating a fault requires continuously tracking the state of the five cells in group g: i, j, b, k, and m. Monitoring all write operations in group g allows any change in the state of a cell in the group to be captured. Based on the state of the other cells, it is checked whether the sensitizing condition of the current fault is met. When this condition is met, the memory error is simulated by changing the value in the binary matrix at the location corresponding to the victim cell (cell b in this model).
(5)
To simulate a memory error, it is advantageous to use the facility provided by the TRAP interrupt. In some microcontroller-based systems (such as the STM32 Nucleo-64 development board with the STM32F103RB MCU), this facility is available to users. Thus, the operations for simulating a fault are implemented in the TRAP interrupt handling routine and, in this way, the test program remains unchanged.
(6)
For the extended NPSF model, the simulation experiment is more complex but does not pose any particular problems for an experienced programmer. The facility provided by the TRAP interrupt is no longer useful in this case.
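Steps (1)–(4) above can be summarized in a minimal Python sketch; the five-cell group coordinates, the sensitizing condition, and all names are illustrative assumptions, not the actual simulation program:

```python
# Minimal sketch of the fault-injection experiment described in
# steps (1)-(4). A 16x16 binary matrix simulates the memory; every
# write to the monitored five-cell group is intercepted, and when
# the sensitizing condition of the injected fault holds, the victim
# cell b is flipped. The group coordinates and the example fault
# are illustrative assumptions, not taken from the paper.

N = 16
mem = [[0] * N for _ in range(N)]

# Hypothetical five-cell group: victim b and neighbors i, j, k, m.
group = {"i": (7, 8), "j": (8, 7), "b": (8, 8), "k": (8, 9), "m": (9, 8)}

def sensitized(cell, old, new):
    """Example fault: an up-transition in cell i while j, k, m all
    hold 1 flips the victim b (a purely illustrative condition)."""
    neighbors_one = all(mem[r][c] == 1 for r, c in
                        (group["j"], group["k"], group["m"]))
    return cell == group["i"] and old == 0 and new == 1 and neighbors_one

def write(cell, value, inject=False):
    """Monitored write: performs the write, then injects the memory
    error on the victim cell if the fault is sensitized."""
    r, c = cell
    old = mem[r][c]
    mem[r][c] = value
    if inject and sensitized(cell, old, value):
        br, bc = group["b"]
        mem[br][bc] ^= 1   # memory error: the victim cell flips

# Drive the group into the sensitizing state, then trigger the fault:
for name in ("j", "k", "m"):
    write(group[name], 1, inject=True)
write(group["i"], 1, inject=True)   # up-transition on i sensitizes
print(mem[8][8])                    # victim b was flipped to 1
```

A test program would then read the matrix back and flag the unexpected value in cell b as a detected fault.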

4. Considerations for BIST Implementation of the Proposed Memory Tests

In this section, the issue of a possible BIST implementation of the proposed memory tests is addressed. Considering the number of memory areas tested in parallel and the number of bits that form the memory words (the number of bits accessed simultaneously), there are four memory BIST architectures [8,17,18,19,20]: SASB (Single-Array Single-Bit), SAMB (Single-Array Multiple-Bit), MASB (Multiple-Array Single-Bit), and MAMB (Multiple-Array Multiple-Bit).
Since SASB is the basic architecture, in the following section, we will limit ourselves to this architecture only. As mentioned in [19], adaptation for other architectures does not pose particular problems.
A block diagram of the self-test logic considered for implementing the proposed memory tests is shown in Figure 13. This self-test logic has three parts: microcode control logic, address generation logic, and data generation and response verification logic. Two additional flip-flops reflect the status of the test process and the test result.
The address generation logic is essentially composed of two counters, one for the row address, denoted by RA, and the other for the column address, denoted by CA. If during the testing process the memory is scanned only in ascending order, the counters are simple, whereas if the memory is scanned in both directions, the counters must be reversible. In our case, the counters are simple. The commands in the address generation logic are for resetting the address (ResetAddr) or for incrementing the address (UpAddr). When the upper limit of the tested memory area is reached, the EOS status signal is generated.
Recall that the tests proposed in this paper are multirun march memory tests. Specific to such a multirun test is the dependency between the value of a memory cell and its address. The proposed tests are distinguished from other known memory tests dedicated to the NPSF model especially by the data patterns used for memory initialization. For this reason, this analysis focuses mainly on the synthesis of the address-based data generation logic (data generator, in brief).
The dependency between data and address differs from one test step to another (from one iteration to another) because a different memory initialization pattern is used each time. To adapt the data generation logic to the current data background, a test sequence counter (TSC) of length four is used. The commands for this counter are reset (ResetTSC) and increment (UpTSC). Next, we will show that the 4 × 4 background patterns used by the proposed memory tests lead to the synthesis of a simple data generation logic, incomparably simpler than that implied by the 3 × 3 background patterns. This is because the logical value for the current cell is generated based only on the two least significant bits of the row address Ar and/or the column address Ac.
Let us denote by Ar[1] and Ar[0] the two least significant bits of the row address and by Ac[1] and Ac[0] the two least significant bits of the column address. Depending on the background pattern, the relationship giving the initialization value of a memory cell, denoted by V, as a function of the address is shown in Table 6. The background patterns are shown in the table for easier verification.
The MT_NPSF_81N memory test repeats two march elements 16 times, on different data backgrounds. Consequently, in order to adapt the logical value of the accessed cell to the data background, a 16-input multiplexer is required. The 4-bit code provided by the test sequence counter, denoted by TS, is applied to the selection inputs. Considering the relationships between data and address depending on the data background, as presented in Table 6, a possible solution for the data generation logic is shown in Figure 14.
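Since Table 6 is not reproduced here, the following Python sketch illustrates the principle with three assumed background patterns, each computing the initialization value V only from the two least significant address bits:

```python
# Address-based data generation from the two LSBs of the row and
# column addresses, in the spirit of Table 6. The three background
# patterns below are illustrative assumptions, not the actual table.

def bit(x, k):
    return (x >> k) & 1

backgrounds = {
    # pattern name: V as a function of Ar[1:0] and Ac[1:0]
    "column stripes": lambda ar, ac: bit(ac, 0),
    "checkerboard":   lambda ar, ac: bit(ar, 0) ^ bit(ac, 0),
    "2x2 blocks":     lambda ar, ac: bit(ar, 1) ^ bit(ac, 1),
}

def render(pattern, size=4):
    """Initialization values for a size x size memory area."""
    return [[backgrounds[pattern](r, c) for c in range(size)]
            for r in range(size)]

for name in backgrounds:
    print(name)
    for row in render(name):
        print("".join(map(str, row)))
```

In hardware, each such relationship is a small XOR network on the address LSBs, and the multiplexer driven by the TS code selects the one corresponding to the current background.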
The Invert command, which appears in the data generation logic and is provided by the control logic, is required to perform transition write operations in the first memory scan and read operations in the second scan, as highlighted in green in this test sequence: (r, w_t); (r, w_t).
This synthesis shows that the MT_NPSF_81N memory test, despite using a large number of initialization patterns with some irregular structures, can be implemented quite easily in BIST-RAM architectures. For comparison, the following section presents aspects regarding the synthesis of address-based data generation logic for a test that uses 3 × 3 memory initialization patterns, such as March-76N, the shortest near-optimal test currently known [34].

5. Considerations for Address-Based Data Generation in the Case of 3 × 3 Background Patterns

In memory tests using 3 × 3 background patterns, address-based data generation requires the modulo 3 residues of the row and column addresses. In the following, we recall how the modulo 3 residues can be generated, stressing that this is a difficult problem.

5.1. Hardware Generation of Modulo 3 Residues

For a word W of length n (a non-negative integer), let $R_3(W) = W \bmod 3$ be the residue (remainder) of dividing W by three. As W is of the form
$W = \sum_{i=0}^{n-1} c_i 2^i$,
$R_3(W)$ can be calculated starting from the well-known equation [40] (pp. 37–59) as follows:
$R_3(W) = R_3\!\left(\sum_{i=0}^{n-1} c_i 2^i\right) = \left(\sum_{i=0}^{n-1} c_i R_3(2^i)\right) \bmod 3.$ (7)
For numbers of the form $2^i$, the values of $R_3(2^i)$ are shown in Table 7.
Based on Equation (7) and the values in Table 7, the following equation results:
$R_3(W) = (c_0 + 2c_1 + c_2 + 2c_3 + c_4 + 2c_5 + \cdots) \bmod 3.$ (8)
Equation (8) highlights the fact that all coefficients in even positions have weight 1 and all coefficients in odd positions have weight 2 in calculating the value of $R_3(W)$. Therefore, the evaluation of Equation (7) starts from calculating the residue modulo 3 of the sum of two consecutive terms, one in an even position and the other in the immediately following odd position, $c_{2i} + 2c_{2i+1}$, $i = 0, 1, 2, \ldots$
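The alternating weights in Equation (8) can be verified exhaustively for 16-bit words with a short Python check:

```python
# Verify Equation (8): even-position bits weigh 1 and odd-position
# bits weigh 2 when reducing a binary word modulo 3.

def r3_weighted(w, n=16):
    bits = [(w >> i) & 1 for i in range(n)]
    return sum(b * (2 if i % 2 else 1) for i, b in enumerate(bits)) % 3

# Exhaustive check over all 16-bit words:
assert all(r3_weighted(w) == w % 3 for w in range(1 << 16))
print(r3_weighted(0b1011), 0b1011 % 3)   # both equal 2
```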
Let us consider the value $R_3(W)$ expressed in redundant (one-hot) form as follows:
$R_3(W) = (R_2, R_1, R_0) = \begin{cases} (1, 0, 0) & \text{if the result is } 2 \\ (0, 1, 0) & \text{if the result is } 1 \\ (0, 0, 1) & \text{if the result is } 0 \end{cases}$ (9)
For the sum of two consecutive terms $c_{2i} + 2c_{2i+1}$, $i = 0, 1, 2, \ldots$, the logical values $R_0, R_1, R_2$ are determined based on the truth table presented in Table 8.
From this truth table, the following equations result for the output variables $R_0, R_1, R_2$:
$R_0 = \overline{c_{2i} \oplus c_{2i+1}}, \qquad R_1 = c_{2i}\,\overline{c_{2i+1}}, \qquad R_2 = \overline{c_{2i}}\,c_{2i+1}.$ (10)
Let us denote by CLC1 the network of logic gates that performs these operations. The structure of this combinational logic circuit is given in Figure 15.
For two pairs of coefficients of this type, identified as a and b, the outputs are denoted by $R_{2a}, R_{1a}, R_{0a}$ and $R_{2b}, R_{1b}, R_{0b}$, respectively. These intermediate results are then processed by another combinational logic, called CLC2, which generates the residue modulo 3 of their sum in the form $R_2, R_1, R_0$, as shown in Figure 16.
The logical values $R_0, R_1, R_2$ are obtained based on the following equations:
$R_0 = R_{0a} R_{0b} + R_{1a} R_{2b} + R_{2a} R_{1b}$, $R_1 = R_{0a} R_{1b} + R_{1a} R_{0b} + R_{2a} R_{2b}$, $R_2 = R_{0a} R_{2b} + R_{1a} R_{1b} + R_{2a} R_{0b}$. (11)
The network of logic gates that performs these operations is given in Figure 17.
For a word of length n (with n a power of two), calculating the value of $R_3(W)$ requires $n/2$ CLC1 cells on the first level and $n/2 - 1$ CLC2 cells on the following levels. This results in a series–parallel logical structure. Operations on the same level are performed in parallel, and those on different levels are performed in series. Such a series–parallel structure for n = 16 is shown in Figure 18.
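A bit-level Python sketch of this series–parallel structure, with CLC1 reducing a bit pair to a one-hot residue and a tree of CLC2 cells adding one-hot residues modulo 3, can be checked against ordinary integer arithmetic; the function names are illustrative:

```python
# Gate-level sketch of the series-parallel mod-3 structure: CLC1
# reduces a bit pair (c_2i + 2*c_2i+1) to a one-hot (R2, R1, R0),
# and a tree of CLC2 cells adds one-hot residues modulo 3.

def clc1(c0, c1):
    """One-hot residue of c0 + 2*c1 (c0 = even bit, c1 = odd bit)."""
    r0 = 1 - (c0 ^ c1)        # result 0: bits equal (0+0, or 1+2=3)
    r1 = c0 & (1 - c1)        # result 1: c0=1, c1=0
    r2 = (1 - c0) & c1        # result 2: c0=0, c1=1
    return (r2, r1, r0)

def clc2(a, b):
    """One-hot mod-3 sum of two one-hot residues (CLC2 equations)."""
    r2a, r1a, r0a = a
    r2b, r1b, r0b = b
    r0 = (r0a & r0b) | (r1a & r2b) | (r2a & r1b)
    r1 = (r0a & r1b) | (r1a & r0b) | (r2a & r2b)
    r2 = (r0a & r2b) | (r1a & r1b) | (r2a & r0b)
    return (r2, r1, r0)

def r3_sps(w, n=16):
    """Series-parallel reduction: n/2 CLC1 cells, then a CLC2 tree."""
    level = [clc1((w >> i) & 1, (w >> (i + 1)) & 1)
             for i in range(0, n, 2)]
    while len(level) > 1:     # for n=16: 8 -> 4 -> 2 -> 1 cells
        level = [clc2(level[i], level[i + 1])
                 for i in range(0, len(level), 2)]
    r2, r1, r0 = level[0]
    return 2 * r2 + 1 * r1    # decode the one-hot result to 0..2

# Exhaustive check over all 16-bit words:
assert all(r3_sps(w) == w % 3 for w in range(1 << 16))
print(r3_sps(0xFFFF))         # 65535 mod 3 = 0
```

For n = 16, the first level uses eight CLC1 cells and the tree uses seven CLC2 cells, matching the structure in Figure 18.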
For a quantitative analysis, let us follow a classical approach, taking as reference a primary two-input logic gate, for which the complexity is considered to be 1 eqg (equivalent logic gate). For the combinational logics CLC1 and CLC2, the complexity is given in Table 9.
The series–parallel structure (SPS) in Figure 18 comprises eight CLC1 logics and seven CLC2 logics. The complexity for SPS is given by the following equations:
$C_{SPS} = 8 C_1 + 7 C_2 = 8 \times 5.5 + 7 \times 13.5 = 138.5$ eqg. (12)
Figure 18 and Equation (12) highlight the difficulty of determining the modulo 3 residues.
With these basic elements, let us return to the synthesis of address-based data generation logic for the March_76N memory test.

5.2. Address-Based Data Generation for the March_76N Memory Test

The March_76N memory test has the same structure as the MT_NPSF_81N test in the sense that it uses the same test sequence, (r, w_t); (r, w_t), applied 16 times, but on other data backgrounds. Specifically, for memory initialization, the March_76N test uses 3 × 3 data patterns, as presented in Table 10. Applying these 3 × 3 data patterns requires generating the modulo 3 residues of the row and column addresses Ar and Ac. For the two addresses, the values of the modulo 3 residues are expressed in the following form:
$R_3(A_r) = (R_{2r}, R_{1r}, R_{0r})$, $R_3(A_c) = (R_{2c}, R_{1c}, R_{0c})$.
For the calculation of these sets of logical values, $(R_{2r}, R_{1r}, R_{0r})$ and $(R_{2c}, R_{1c}, R_{0c})$, two series–parallel structures, such as the one presented in Figure 18, are required. These additional logics, denoted by SPSr and SPSc, are illustrated in Figure 19.
Table 10 also presents the relationships for data calculation according to the address for the 16 background patterns used in the March_76N memory test.
Considering the relationships between data and address depending on the data background, as presented in Table 10, a solution for the data generation logic is shown in Figure 20.
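Since Table 10 is not reproduced here, the following Python sketch illustrates how a 3 × 3 background value can be derived purely from the one-hot residues R3(Ar) and R3(Ac); the diagonal pattern is an assumed example, not an actual March_76N background:

```python
# Sketch of address-based data generation for a 3x3 background,
# fed by the one-hot residues R3(Ar) and R3(Ac). The diagonal
# pattern below is an illustrative assumption, not one of the
# actual March_76N backgrounds from Table 10.

def one_hot_r3(x):
    """One-hot encoding (R2, R1, R0) of x mod 3."""
    r = x % 3
    return (int(r == 2), int(r == 1), int(r == 0))

def v_diagonal(ar, ac):
    """V = 1 on cells where (Ar + Ac) mod 3 == 0, computed only
    from the two one-hot residues, as the hardware would."""
    r2r, r1r, r0r = one_hot_r3(ar)
    r2c, r1c, r0c = one_hot_r3(ac)
    # residue pairs that sum to 0 mod 3: (0,0), (1,2), (2,1)
    return (r0r & r0c) | (r1r & r2c) | (r2r & r1c)

grid = [[v_diagonal(r, c) for c in range(6)] for r in range(6)]
for row in grid:
    print("".join(map(str, row)))
# The pattern repeats with period 3 in both directions.
```

The AND-OR terms mirror the CLC2-style decoding: each background value is a small sum of products over the six residue lines, which is exactly the gate network feeding the multiplexer in Figure 20.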
Figure 19. Additional SPS logics for address modulo 3 residue generation.
Figure 20. Data generation logic for the March_76N memory test.
Comparing the two address-based data generation logics, according to the background models, for the memory tests MT_NPSF_81N and March_76N, the following conclusions can be drawn:
(1)
Because the two memory tests use the same testing technique, the self-test logics that implement these tests are similar in structure.
(2)
The main difference is that when implementing the March_76N memory test, two additional logics are required to generate modulo 3 residues for the row and column addresses.
(3)
These two additional logics have high complexity and increase the propagation time through the combinational logic, which can affect the test time. For example, for two structures of the form shown in Figure 18, which correspond to 4 Gbit of memory, the additional logic complexity is 2 × 138.5 = 277 eqg. Regarding the influence of the propagation time through the additional logic on the memory test running speed, a quantitative assessment must be made with caution, because the memory access time, together with the microcoded control logic, must also be taken into account (see Figure 13).
(4)
In the March_76N memory test, because the initialization patterns are smaller in size, the network of logic gates that feeds the multiplexer is simpler. Specifically, comparing Figure 14 and Figure 20, it follows that the MT_NPSF_81N test uses eleven two-input XOR or XNOR gates, while the March_76N test uses only nine XOR or XNOR gates and two inverters. Since the complexity of a two-input XOR or XNOR gate is 2.5 eqg and that of an inverter is 0.5 eqg, the network of gates in the March_76N test is somewhat simpler, with a complexity reduced by 4 eqg. However, compared to the 277 eqg complexity of the additional logic for calculating the modulo 3 residues of the row and column addresses, this reduction is insignificant.
This comparative analysis highlights the major advantage of the proposed tests compared to other near-optimal tests that use 3 × 3 memory initialization patterns. For this reason, the authors argue that the new MT_NPSF_81N memory test, although slightly longer, is more suitable for implementation in self-testing RAM structures compared to the other two known near-optimal memory tests, CM-79N and March-76N.

6. Discussion

In relation to the new tests proposed in this paper, additional considerations covering several different aspects are made in this section.
In this research paper, only the optimal memory tests in terms of the ability to detect faults of the NPSF model were considered. Specifically, all three memory tests dedicated to the classical NPSF model, CM-79N, March-76N, and MT_NPSF_81N, fully cover this model with a minimum number of test operations. Additional write operations are used only for background changes. For this reason, the paper does not refer to “fault coverage” as the main performance indicator of memory tests. The other performance indicator regarding the number of memory operations or the length of the test is found in the name of the memory test itself. This paper targeted another aspect regarding memory tests, namely, the ease of implementation in self-testing structures of the BIST-RAM type, and the proposed memory tests, MT_NPSF_81N and MT_ENPSF, were designed specifically for this purpose.
The difference in length among the three near-optimal tests comes from the different share of memory cells that are updated on a background change, which is 33.3% for March_76N, 44.4% for CM_79N, and 50% for MT_NPSF_81N. This difference is reflected in the duration of the memory testing. However, in [32], it is pointed out that selectively updating memory cells for background changes raises some implementation difficulties without a significant time gain. Therefore, designers often prefer to scan the entire memory on a background change. In these situations, the small disadvantage of the proposed test in terms of length disappears, and the advantage of simplicity of implementation in BIST-RAM structures becomes even more important.
The main disadvantage of memory tests using 3 × 3 data background patterns comes from the fact that additional logic is required to generate modulo 3 residues for row and column addresses. Related to this, it should also be noted that if the word length of the row or column address is not a power of two (and most often this condition is not met), an asymmetry appears in the expression for calculating the modulo 3 residues, which further complicates the synthesis of additional logic.
In the case of tests using 2 × 2 or 4 × 4 background patterns, the address-based data generation logic does not depend on the size of the memory being tested. The same can be said about the control logic. Only the address counters must be sized according to the geometry of the memory area. However, in the case of tests using 3 × 3 background patterns, the two additional logics for calculating the modulo 3 residues of the row and column addresses (SPSr and SPSc) differ depending on the length of the address words and therefore differ from one memory to another. The propagation time through these additional logics is also variable. This is a major disadvantage of the March_76N and CM_79N tests compared to the proposed test, MT_NPSF_81N.
BIST structures are implemented especially in SRAM memories, which have a short response time (on the order of nanoseconds) and, as a result, can operate at high frequencies. To detect faults that manifest at high frequencies, the memory must be tested in its normal operating mode. For memories with self-test facilities, an important limitation on the speed of operation in memory test mode is related to how the control logic is implemented. If the frequency at which the memory needs to be tested is critical, then the control logic can be implemented hardwired, rather than microprogrammed, as is usually performed. In such a situation, the propagation time through the additional logic for generating the modulo 3 residues of the row and column addresses has a greater influence on the speed at which the memory test is performed. However, a quantitative assessment of this influence requires detailed knowledge of the structure of the control unit.
In BIST-RAMs, the share of additional self-test logic in the entire structure, also reflected in terms of area overhead or additional power consumption, depends largely on the size of the memory being tested. This is because, as previously mentioned, the complexity of the additional logic that provides self-test facilities is almost the same, regardless of the size of the memory being tested.
As mentioned in the previous section, more complex architectures are commonly found in memory circuits, such as the MAMB architecture in 3D memories. From a testing perspective, these architectures do not raise fundamentally different problems, since their structure is designed so that multiple areas can be tested in parallel. Therefore, the basis of any testing experiment is the fault models and tests designed for SASB architectures.
The proposed tests also cover the three-cell coupling model involving neighboring memory cells arranged in a corner or on a row or column [5,10,11,12,13,14] in the classical and extended version, respectively, since the three-cell configurations are included in the five-cell configuration (see Figure 1).
The background models used when approaching the NPSF model of the size 3 × 3 or 4 × 4 also influence to some extent the ability of memory tests to cover other simpler coupling fault models, such as two-cell coupling, three-cell coupling, or four-cell coupling. To highlight this point, the following section presents the experimental results obtained through a simulation of the ability of the March_76N and MT_NPSF_81N memory tests to cover these simpler fault models. Note that the two tests apply the same testing technique but use different background patterns of the size 3 × 3 and 4 × 4, respectively.

7. Simulation Results

To evaluate the ability of a memory test to cover a specific fault model, the simulation technique briefly described in Section 3 was used. The following fault models were considered in this study: (1) two-cell coupling, where the cells in a group can be any memory cells; (2) three-cell coupling; (3) four-cell coupling; and (4) the classical NPSF model. As presented in the Introduction, for the three-cell and four-cell coupling models, the evaluation is limited to cases where the cells in a group are physically adjacent.
Depending on the fault model considered and the memory test evaluated, a large number of cell groups were selected for monitoring to cover all write operations performed during memory testing that may sensitize faults within the studied model.
For this purpose, when evaluating the MT_NPSF_81N test, which uses 4 × 4 background patterns, the set of memory cells is divided into sixteen classes ( C 1 , C 2 , , C 16 ), as known from Section 3, while when evaluating the March_76N test, which uses 3 × 3 background patterns, the set of memory cells is divided into nine classes ( C 1 ,   C 2 ,   ,   C 9 ), depending on the modulo 3 residues of the row and column addresses, as shown in Table 11.
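The partition of the simulated memory into cell classes can be sketched as follows; the helper names are illustrative:

```python
# Partitioning the 16x16 simulated memory into cell classes:
# 16 classes by (Ar mod 4, Ac mod 4) for MT_NPSF_81N, and
# 9 classes by (Ar mod 3, Ac mod 3) for March_76N.

from collections import defaultdict

def classes(n, mod):
    """Group the cells of an n x n matrix by address residues."""
    groups = defaultdict(list)
    for r in range(n):
        for c in range(n):
            groups[(r % mod, c % mod)].append((r, c))
    return groups

c16 = classes(16, 4)   # MT_NPSF_81N: 4x4 backgrounds
c9 = classes(16, 3)    # March_76N:   3x3 backgrounds

print(len(c16), len(c9))        # 16 and 9 classes, respectively
# Within a class, the test performs identical operations on every
# cell, so one representative per class suffices for simulation.
print(len(c16[(0, 0)]))         # 16 cells share class (0, 0)
```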
Next, the fault models considered and how the cell groups for monitoring and fault injection were selected are briefly described.
(1)
Two-cell coupling
When the studied tests are applied, identical operations are performed on any two cells of the same class. Therefore, in order to evaluate the ability of the tests to detect faults that may affect pairs of memory cells, pairs of cells (i, j) were chosen in the binary matrix that simulates the tested memory, such that
$i, j \in \{C_1, C_2, \ldots, C_9\}$ for the March_76N memory test,
and
$i, j \in \{C_1, C_2, \ldots, C_{16}\}$ for the MT_NPSF_81N memory test.
Specifically, in the simulation experiment, for the March_76N memory test, 9 × 9 pairs of cells were checked, monitoring 9 × 9 × 8 = 648 different transitions, while in the case of the MT_NPSF_81N memory test, 16 × 16 pairs of cells were checked, monitoring 16 × 16 × 8 = 2048 different transitions. For verification, 10 different experiments were performed, each time obtaining the same result.
(2)
Three-cell coupling
For this coupling fault model, the following six configurations of three physically neighboring cells are usually considered, as shown in Figure 21. All these six neighboring cell configurations are part of the more complex NPSF model (see Figure 1). However, in this model, any of the three cells in the group can be a victim cell, not just the central cell, as is the case in the five-cell NPSF model.
To select groups of cells for simulation, the address of the central cell, denoted by b, is indicated, around which all six configurations are formed in turn. In a group of three cells, $3 \times 2^3 = 24$ transitions are possible. In the case of the March_76N memory test, a cell address is selected in the binary matrix for each of the nine classes, $b \in \{C_1, C_2, \ldots, C_9\}$, around which the six configurations are formed in turn. Thus, in the simulation experiment, 9 × 6 × 24 = 1296 distinct transitions are monitored.
In the case of the MT_NPSF_81N memory test, a cell address is selected for each of the sixteen classes, $b \in \{C_1, C_2, \ldots, C_{16}\}$, so that in a simulation experiment, 16 × 6 × 24 = 2304 distinct transitions are monitored. For verification, 10 different experiments were performed, each time obtaining the same result.
(3)
Four-cell coupling
Two distinct cases were studied for this complex fault model, as shown in Figure 22. The first model (Model 1) includes four configurations derived from the five-cell configuration of the NPSF model. The other model (Model 2) considers the configuration of four cells arranged in a square. In both cases, any cell in the group can play the role of victim, aggressor, or enabling cell.
In a group of four cells, $4 \times 2^4 = 64$ transitions are possible. In this case too, when forming the groups of cells to be monitored, only the address of the base cell is indicated: the central cell b in Model 1 and the cell in the upper left corner in Model 2. The group of cells is formed around this base cell in the desired configuration.
As for the previous model, one base cell from each class is chosen in the binary matrix. Thus, for the first model, in the simulation experiment, 9 × 4 × 64 = 2304 transitions are monitored when evaluating the March_76N memory test, and 16 × 4 × 64 = 4096 transitions are monitored when evaluating the MT_NPSF_81N memory test. For the second model, the number of monitored transitions is four times smaller.
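The transition counts used above, 24 for a three-cell group and 64 for a four-cell group, follow from choosing the transitioning cell, its direction, and the states of the remaining cells, as this small Python sketch confirms:

```python
# A k-cell group allows k * 2^k transitions: k choices for the
# transitioning cell, 2 directions (up/down), and 2^(k-1) state
# combinations of the remaining k-1 cells.

def transitions(k):
    count = 0
    for cell in range(k):                 # which cell transitions
        for direction in ("up", "down"):  # transition direction
            count += 2 ** (k - 1)         # states of the other cells
    return count

assert transitions(3) == 24               # three-cell coupling
assert transitions(4) == 64               # four-cell coupling

# Totals monitored in the simulation experiments:
print(9 * 6 * transitions(3))             # March_76N, 3-cell: 1296
print(16 * 6 * transitions(3))            # MT_NPSF_81N, 3-cell: 2304
print(9 * 4 * transitions(4))             # March_76N, Model 1: 2304
print(16 * 4 * transitions(4))            # MT_NPSF_81N, Model 1: 4096
```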
For all these fault models, the simulation results regarding fault coverage by the two memory tests are presented in Table 12. The fault coverage is expressed as the ratio between the number of detected faults and the total number of simulated faults. For verification, the simulation program also calculates the ratio between the number of transitions performed in all considered cell groups and the total number of monitored transitions. Both calculation relationships led to the same result.
The simulation results confirm that the two dedicated memory tests fully cover the NPSF model. Moreover, both memory tests fully cover the simpler three-cell and four-cell fault models derived from the five-cell configuration of the NPSF model.
However, the two memory tests only partially cover the four-cell coupling fault model with the cells arranged in a square (Model 2), as these tests were not designed for this purpose. Most surprising, however, is the fact that neither of the two tests completely covers the two-cell coupling fault model, the model most often used in practice. But this is a characteristic of all tests dedicated to the NPSF model. For example, the test proposed by Suk and Reddy [28], of length 165N, covers the two-cell coupling model at 76.04% and the four-cell coupling Model 2 at only 54.3%.
To detect all coupling faults between any two memory cells, these dedicated tests should be supplemented with other test sequences. For example, for both the March_76N and MT_NPSF_81N memory tests, a new initialization with ones everywhere (solid-1 data background), followed by a simple memory scan using the march element (r, w_t) before the final check, is sufficient to cover this class of faults as well.
Comparing the two memory tests, March_76N and MT_NPSF_81N, it follows that the new test proposed in this paper offers slightly better performance in terms of covering other fault models. Since the two memory tests apply the same testing technique, this difference in fault coverage is explained exclusively by the contribution of the 4 × 4 background patterns.

8. Conclusions

The paper proposes two near-optimal memory tests dedicated to the complex NPSF model, both in the classic version, in which a fault is sensitized only by a transition write operation, and in the extended version, in which faults sensitized by a non-transition write or a read operation are also taken into account. The proposed tests, MT_NPSF_81N and MT_ENPSF, are multirun march memory tests that use 4 × 4 data background patterns and are specifically designed to be easy to implement in self-testing RAM architectures. These are the first near-optimal memory tests dedicated to the NPSF model using 4 × 4 data initialization patterns reported to date.
Due to the difficulty of generating modulo 3 residues for row and column addresses, the authors argue that memory tests using 3 × 3 memory initialization patterns (such as the other two known near-optimal tests for the classical NPSF model, CM-79N and March-76N) are less suitable for implementation in self-testing RAM structures.
The evaluation of the optimality of the proposed tests is based on the fact that for any group of cells corresponding to the NPSF model, the state graph is completely covered, and each arc is traversed only once, which means that the graph is Eulerian. Some additional memory write operations are only required for background changes. All these aspects were also verified through simulation.
The memory initialization patterns were designed so that any group of five cells in the known configuration would be brought into each of the 32 possible states only once. A great deal of research effort was required to fulfill this condition. The authors identified this optimal solution intuitively, but in future research, they will try to identify other solutions through search methods specific to artificial intelligence.

Author Contributions

Conceptualization, P.C.; Methodology, P.C.; Software, D.C.; Validation, P.C. and D.C.; Formal analysis, P.C. and D.C.; Investigation, P.C.; Writing – original draft, P.C. and D.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Acknowledgments

The authors thank their colleague Florin Leon for useful and fruitful discussions. Also, they specially thank Viorel Onofrei for the useful suggestions that contributed to improving the readability of this paper.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Chakravarthi, V.S. A Practical Approach to VLSI System on Chip (SoC) Design: A Comprehensive Guide, 2nd ed.; Springer: Berlin/Heidelberg, Germany, 2022. [Google Scholar]
  2. Dean Adams, R. High Performance Memory Testing: Design Principles, Fault Modeling and Self-Test; Springer: New York, NY, USA, 2003. [Google Scholar]
  3. Mazumder, P.; Chakraborty, K. Testing and Testable Design of High-Density Random-Access Memories (Frontiers in Electronic Testing); Kluwer Academic: Boston, MA, USA, 1996. [Google Scholar]
  4. Mrozek, I. Multi-Run Memory Tests for Pattern-Sensitive Faults; Springer: Berlin/Heidelberg, Germany, 2019. [Google Scholar]
  5. Ramana Kumari, K.L.V.; Asha Rani, M.; Balaji, N. 3-Cell Neighborhood Pattern-Sensitive Fault Testing in Memories Using Hamiltonian and Gray Sequences. J. Comput. Theor. Nanosci. 2021, 18, 879–884. [Google Scholar]
  6. Hamdioui, S. Testing Static Random-Access Memories: Defects, Fault Models and Test Patterns; Kluwer Academic Publishers: Dordrecht, The Netherlands, 2004. [Google Scholar]
  7. van de Goor, A.J.; Al-Ars, Z. Functional Faults Models: A Formal Notation and a Taxonomy. In Proceedings of the 18th IEEE VLSI Test Symposium, Montreal, QC, Canada, 30 April–4 May 2000; pp. 281–289. [Google Scholar]
  8. Samanta, A.; Saha, M.; Mahato, A.K. BIST Design for Static Neighbourhood Pattern-Sensitive Fault Test. IJCA Int. Conf. Commun. Circuits Syst. 2012, iC3S, 23–28. [Google Scholar]
  9. Harutunyan, G.; Vardanian, V.A.; Zorian, Y. Minimal March Tests for Unlinked Static Faults in Random Access Memories. In Proceedings of the 23rd IEEE VLSI Symposium, Palm Springs, CA, USA, 1–5 May 2005; pp. 53–59. [Google Scholar]
  10. Caşcaval, P.; Bennett, S. Efficient March Test for 3-Coupling Faults in Random Access Memories. Microprocess. Microsyst. 2001, 24, 501–509. [Google Scholar]
  11. Caşcaval, P.; Bennett, S.; Huţanu, C. Efficient March Tests for a Reduced 3-Coupling and 4-Coupling Faults in RAMs. J. Electron. Test. Theory Appl. 2004, 20, 227–243. [Google Scholar] [CrossRef]
  12. Caşcaval, P.; Caşcaval, D. March SR3C: A Test for a Reduced Model of All Static Simple Three-Cell Coupling Faults in Random-Access Memories. Microelectron. J. 2010, 41, 212–218. [Google Scholar] [CrossRef]
  13. Caşcaval, P.; Caşcaval, D. March Test Algorithm for Unlinked Static Reduced Three-Cell Coupling Faults in Random-Access Memories. Microelectron. J. 2019, 93, 104619. [Google Scholar] [CrossRef]
  14. Caşcaval, P.; Cașcaval, D. Near-optimal March Tests for Three-Cell and Four-Cell Coupling Fault Models in Random-Access Memories. Rom. J. Inf. Sci. Technol. 2024, 27, 323–335. [Google Scholar] [CrossRef]
  15. Mrozek, I.; Shevchenko, N.A.; Yarmolik, V.N. Universal Address Sequence Generator for Memory Built-in Self-Test. Fundam. Informaticae 2022, 188, 41–61. [Google Scholar] [CrossRef]
  16. Jidin, A.Z.; Hussin, R.; Fook, L.W.; Mispan, M.S. A Review Paper on Memory Fault Models and Test Algorithms. Bull. Electr. Eng. Inform. 2021, 10, 3083–3093. [Google Scholar] [CrossRef]
  17. Hantos, G.; Flynn, D.; Desmulliez, M.P.Y. Built-In Self-Test (BIST) Methods for MEMS: A Review. Micromachines 2020, 12, 40. [Google Scholar] [CrossRef] [PubMed]
  18. Zhang, L.; Wang, Z.; Li, Y.; Mao, L. A Precise Design for Testing High-Speed Embedded Memory Using a BIST Circuit. IETE J. Res. 2017, 63, 473–481. [Google Scholar] [CrossRef]
  19. Thakur, R.S.; Awasthi, A. A Review Paper on Memory Testing Using BIST. Glob. Res. Dev. J. Eng. 2016, 1, 94–98. [Google Scholar]
  20. Hou, C.; Li, J.; Fu, T. A BIST Scheme with the Ability of Diagnostic Data Compression for RAMs. IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst. 2014, 33, 2020–2024. [Google Scholar]
  21. Nair, C.; Thatte, S.M.; Abraham, J.A. Efficient Algorithms for Testing Semiconductor Random-Access Memories. IEEE Trans. Comput. 1978, C-27, 572–576. [Google Scholar] [CrossRef]
  22. Papachristou, C.A.; Saghal, N.B. An Improved Method for Detecting Functional Faults in Random-Access Memories. IEEE Trans. Comput. 1985, C-34, 110–116. [Google Scholar] [CrossRef]
  23. Cockburn, B.F. Deterministic Tests for Detecting Single V-Coupling Faults in RAMs Journal of Electronic Testing: Theory and Applications. J. Electron. Test. 1994, 5, 91–113. [Google Scholar] [CrossRef]
  24. Wu, T.C.; Fan, W.K.; Gu, Y.F.; Fan, F.F.; Li, Q.H. A New Test Algorithm and Fault Simulator of Simplified Three-Cell Coupling Faults for Random Access Memories. IEEE ACCESS 2024, 12, 109218–109229. [Google Scholar] [CrossRef]
  25. Hayes, J.P. Detection of Pattern-Sensitive Faults in Random-Access Memories. IEEE Trans. Comput. 1975, C-24, 150–157. [Google Scholar] [CrossRef]
  26. Ramana Kumari, K.L.V.; Asha Rani, M.; Balaji, N. Testing of Neighborhood Pattern-Sensitive Faults for Memory. In Soft Computing and Signal Processing, Advances in Intelligent Systems and Computing; Reddy, V.S., Prasad, V.K., Wang, J., Reddy, K.T.V., Eds.; Springer: Singapore, 2022. [Google Scholar]
  27. Hayes, J.P. Testing Memories for Single-Cell Pattern-Sensitive Faults. IEEE Trans. Comput. 1980, 29, 249–254. [Google Scholar] [CrossRef]
  28. Suk, D.; Reddy, S. Test Procedures for a Class of Pattern-Sensitive Faults in Semiconductor Random-Access Memories. IEEE Trans. Comput. 1980, C-29, 419–429. [Google Scholar] [CrossRef]
  29. Karpovsky, M.G.; Yarmolik, V.N. Transparent Random-Acces Memory Testing for Pattern-Sensitive Faults, Journal of Electronic Testing. Theory Appl. 1996, 9, 860–969. [Google Scholar]
  30. Kang, D.C.; Cho, S.B. An Efficient Build–in Self–Test Algorithm for Neighborhood Pattern-Sensitive Faults in High–Density Memories, Proc. 4th Korea–Russia Int. Proceedings of the 4th Korea-Russia International Symposium On Science and Technology, Ulsan, Republic of Korea, 27 June–1 July 2000; Volume 2, pp. 218–223. [Google Scholar]
  31. Buslowska, E.; Yarmolik, V.N. Multi-Run March Tests for Pattern-Sensitive Faults in RAM. In Proceedings of the 2018 IEEE East-West Design & Test Symposium (EWDTS 2018), Kazan, Russia, 14–17 September 2018. [Google Scholar]
  32. Cheng, K.L.; Tsai, M.F.; Wu, C.W. Neighborhood Pattern-Sensitive Fault Testing and Diagnostics for Random-Access Memories. IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst. 2002, 21, 1328–1336. [Google Scholar] [CrossRef]
  33. Huzum, C.; Caşcaval, P. A Multibackground March Test for Static Neighborhood Pattern-Sensitive Faults in Random-Access Memories, Electronics and Electrical Engineering (Elektronika ir Elektrotechnika)—Section System Engineering. Comput. Technol. 2012, 119, 81–86. [Google Scholar]
  34. Wang, L.T.; Wu, C.W.; Wen, X. Memory Testing and Built-In Self-Test. In VLSI Test Principles and Architectures, 1st ed.; Design for Testability; Elsevier: Amsterdam, The Netherlands, 2006. [Google Scholar]
  35. Bernardi, P.; Grosso, M.; Sonza Reorda, M.; Zhang, Y. A Programmable BIST for DRAM testing and diagnosis. In Proceedings of the IEEE International Test Conference, Austin, TX, USA, 2–4 November 2010; pp. 1–10. [Google Scholar]
  36. Rusli, J.R.; Sidek, R.M.; Wan Zuha, W.H. Development of Automated Neighborhood Pattern Sensitive Fault Syndrome Generator for SRAM. In Proceedings of the 10th IEEE International Conference on Semiconductor Electronics (ICSE), Kuala Lumpur, Malaysia, 19–21 September 2012. [Google Scholar]
  37. Yarmolik, V.N.; Levantsevich, V.A.; Demenkovets, D.V.; Mrozek, I. Construction and Application of March Tests for Pattern-Sensitive Memory Faults Detection. Informatics 2021, 18, 25–42. [Google Scholar] [CrossRef]
  38. Mrozek, I. Analysis of Multibackground Memory Testing Techniques. Int. J. Appl. Math. Comput. Sci. 2010, 20, 191–205. [Google Scholar] [CrossRef]
  39. Parvathi, M.; Himasree, T.; Bhavyasree, T. Novel Test Methods for NPSF Faults in SRAM. In Proceedings of the 2018 International Conference on Computational and Characterization Techniques in Engineering & Sciences (CCTES), Lucknow, India, 14–15 September 2018. [Google Scholar]
  40. Ribenboim, P. Classical Theory of Algebraic Numbers; Springer Science: New York, NY, USA, 2001. [Google Scholar]
Figure 1. Cell configuration in the NPSF model: b is the base cell (with dark background, the cell affected by the fault); i, j, k, and m are the neighboring cells.
Figure 2. Example of a passive NPSF: in blue, the state of the aggressor cells and in red, the memory error affecting the victim cell.
Figure 3. Example of an active NPSF: in blue, the state of the enabling cells that allow the aggressor cell to also change the state of the victim cell (highlighted in red).
Figure 4. MT_NPSF_81N march memory test.
Figure 5. Memory initialization patterns used by the MT_NPSF_81N multirun march test.
Figure 6. Example of memory initialization (BG12 data background)—two groups of cells initialized in the same way are highlighted in gray and yellow, respectively.
Figure 7. Division of the memory cells and the g-groups of cells into 16 classes: several g-groups of cells from different classes are highlighted in different colors (the central cell is marked with a slightly darker color).
Figure 8. The initial and complementary states, expressed in hexadecimal as Ix/¬Ix, for two groups of cells of classes G1 (light yellow) and G11 (dark yellow), depending on the data background BG1, BG2, …, BG16.
Figure 9. Incomplete graph with the transitions performed in a g-group of cells of class G1 in the first 6 of the 16 iterations of the MT_NPSF_81N memory test. The six routes marked (1) to (6) are highlighted in different colors to make it easier to see that they have no arc in common.
Figure 10. The state evolution for a group of cells g = {i, j, b, k, m} by applying the march element (r, wt) starting from different initial states, Ix0 and Iy0.
Figure 11. The operations carried out on a group of cells g = {i, j, b, k, m} when applying the MT_NPSF_81N memory test: in blue, the write operations for memory fault sensitization; in red, the read operations for detecting memory errors.
Figure 12. MT_ENPSF march memory test.
Figure 13. Block diagram for a BIST-RAM architecture.
Figure 14. Data generation logic for the MT_NPSF_81N memory test.
Figure 15. CLC1 combinational logic.
Figure 16. Processing the values obtained from two basic CLC1 logics.
Figure 17. CLC2 combinational logic.
Figure 18. Example of a series–parallel structure for calculating the R3(W) value.
Figure 21. Six cell configurations for the three-cell coupling fault model, highlighted in gray.
Figure 22. Cell configurations for the four-cell coupling fault model, highlighted in gray.
Table 1. Notations used to describe memory operations or certain memory errors.

| Notation | Meaning of the Notation |
|---|---|
| r | Operation of reading a memory cell and checking the value read by comparing it with the expected value. |
| w0 (w1) | Operation of writing the logical value 0 (1) to the addressed cell. |
| 0w1 (1w0) | Write operation that changes the state of the addressed cell from 0 to 1 (from 1 to 0). |
| wt | Transition write operation, 0w1 or 1w0. |
| wnt | Write operation that does not change the state of the addressed cell (non-transition write operation). |
| ri, wnti, wti, w0i, w1i | Memory operations on cell i. |
| ↑ (↓) | A change of logical state in a memory cell from 0 to 1 (from 1 to 0) by sensitizing a memory fault (i.e., a memory error). |
| Ar (Ac) | Row (column) address in a memory operation. |
| vi | The logical value of memory cell i. |
Table 2. Example of primitive faults for the NPSF model: 1–32 active faults and 129–160 passive faults. The middle columns give the fault sensitization (a transition write operation) and the initial state of the other cells in the group; the last column gives the effect of fault sensitization on the victim cell b (the memory error).

| Fault Number | Cell i | Cell j | Cell k | Cell m | Cell b | Memory Error in b |
|---|---|---|---|---|---|---|
| 1 | 0w1 | 0 | 0 | 0 | 0 | ↑ |
| 2 | 1w0 | 0 | 0 | 0 | 0 | ↑ |
| 3 | 0w1 | 0 | 0 | 0 | 1 | ↓ |
| 4 | 1w0 | 0 | 0 | 0 | 1 | ↓ |
| 5 | 0w1 | 0 | 0 | 1 | 0 | ↑ |
| 6 | 1w0 | 0 | 0 | 1 | 0 | ↑ |
| 7 | 0w1 | 0 | 0 | 1 | 1 | ↓ |
| 8 | 1w0 | 0 | 0 | 1 | 1 | ↓ |
| … | … | … | … | … | … | … |
| 29 | 0w1 | 1 | 1 | 1 | 0 | ↑ |
| 30 | 1w0 | 1 | 1 | 1 | 0 | ↑ |
| 31 | 0w1 | 1 | 1 | 1 | 1 | ↓ |
| 32 | 1w0 | 1 | 1 | 1 | 1 | ↓ |
| 129 | 0 | 0 | 0 | 0 | 0w1 | 0 |
| 130 | 0 | 0 | 0 | 0 | 1w0 | 1 |
| 131 | 0 | 0 | 0 | 1 | 0w1 | 0 |
| 132 | 0 | 0 | 0 | 1 | 1w0 | 1 |
| 133 | 0 | 0 | 1 | 0 | 0w1 | 0 |
| 134 | 0 | 0 | 1 | 0 | 1w0 | 1 |
| 135 | 0 | 0 | 1 | 1 | 0w1 | 0 |
| 136 | 0 | 0 | 1 | 1 | 1w0 | 1 |
| … | … | … | … | … | … | … |
| 159 | 1 | 1 | 1 | 1 | 0w1 | 0 |
| 160 | 1 | 1 | 1 | 1 | 1w0 | 1 |
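The enumeration behind Table 2 can be checked mechanically. The following Python sketch is our own illustration (the labels and tuple layout are not from the paper): it generates all primitive faults of the classical NPSF model and confirms the counts implied by the fault numbering, namely 128 active faults (1–128) and 32 passive faults (129–160), 160 in total.

```python
from itertools import product

NEIGHBORS = ("i", "j", "k", "m")
TRANSITIONS = ("0w1", "1w0")

def enumerate_npsf_faults():
    faults = []
    # Active NPSFs: a transition write on one neighbor, combined with any
    # initial state of the remaining three neighbors and the base cell b.
    for aggressor in NEIGHBORS:
        others = [c for c in NEIGHBORS if c != aggressor] + ["b"]
        for wt in TRANSITIONS:
            for state in product("01", repeat=4):
                faults.append(("active", aggressor, wt, dict(zip(others, state))))
    # Passive NPSFs: a transition write on b itself fails, for any
    # initial state of the four neighbors.
    for wt in TRANSITIONS:
        for state in product("01", repeat=4):
            faults.append(("passive", "b", wt, dict(zip(NEIGHBORS, state))))
    return faults

faults = enumerate_npsf_faults()
active = [f for f in faults if f[0] == "active"]
passive = [f for f in faults if f[0] == "passive"]
print(len(active), len(passive), len(faults))  # 128 32 160
```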
Table 3. Dividing the cells into sixteen classes by row and column addresses.

| Ar mod 4 \ Ac mod 4 | 0 | 1 | 2 | 3 |
|---|---|---|---|---|
| 0 | C1 | C2 | C3 | C4 |
| 1 | C5 | C6 | C7 | C8 |
| 2 | C9 | C10 | C11 | C12 |
| 3 | C13 | C14 | C15 | C16 |
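Table 3 amounts to a simple arithmetic rule on the address bits. A minimal Python sketch (the function name is ours):

```python
def cell_class(ar: int, ac: int) -> int:
    """Return the class number C1..C16 for the cell at row ar, column ac,
    following the layout of Table 3: classes advance left to right, top to
    bottom over the (Ar mod 4, Ac mod 4) grid."""
    return 4 * (ar % 4) + (ac % 4) + 1

print(cell_class(0, 0))  # 1  -> C1
print(cell_class(2, 1))  # 10 -> C10
print(cell_class(7, 6))  # 15 -> C15 (Ar mod 4 = 3, Ac mod 4 = 2)
```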
Table 4. Initial and complementary states, presented as Ix/¬Ix, for a g-group of cells depending on the data background and the class to which the group belongs.

| | G1 | G2 | G3 | G4 | G5 | G6 | G7 | G8 | G9 | G10 | G11 | G12 | G13 | G14 | G15 | G16 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| BG1 | 00/1F | 00/1F | 00/1F | 00/1F | 00/1F | 00/1F | 00/1F | 00/1F | 00/1F | 00/1F | 00/1F | 00/1F | 00/1F | 00/1F | 00/1F | 00/1F |
| BG2 | 11/0E | 11/0E | 11/0E | 11/0E | 0E/11 | 0E/11 | 0E/11 | 0E/11 | 11/0E | 11/0E | 11/0E | 0E/11 | 0E/11 | 11/0E | 0E/11 | 0E/11 |
| BG3 | 0A/15 | 15/0A | 0A/15 | 15/0A | 0A/15 | 15/0A | 0A/15 | 15/0A | 0A/15 | 0A/15 | 0A/15 | 15/0A | 0A/15 | 15/0A | 0A/15 | 15/0A |
| BG4 | 1B/04 | 04/1B | 1B/04 | 04/1B | 04/1B | 1B/04 | 04/1B | 1B/04 | 1B/04 | 04/1B | 1B/04 | 04/1B | 04/1B | 1B/04 | 04/1B | 1B/04 |
| BG5 | 08/17 | 02/1D | 17/08 | 1D/02 | 08/17 | 02/1D | 17/08 | 1D/02 | 08/17 | 02/1D | 17/08 | 1D/02 | 08/17 | 02/1D | 17/08 | 1D/02 |
| BG6 | 02/1D | 17/08 | 1D/02 | 08/17 | 02/1D | 17/08 | 1D/02 | 08/17 | 02/1D | 17/08 | 1D/02 | 08/17 | 02/1D | 17/08 | 1D/02 | 08/17 |
| BG7 | 10/0F | 10/0F | 10/0F | 10/0F | 01/1E | 01/1E | 01/1E | 01/1E | 0F/10 | 0F/10 | 0F/10 | 0F/10 | 1E/01 | 1E/01 | 1E/01 | 1E/01 |
| BG8 | 01/1E | 01/1E | 01/1E | 01/1E | 0F/10 | 0F/10 | 0F/10 | 0F/10 | 1E/01 | 1E/01 | 01/1E | 1E/01 | 10/0F | 10/0F | 10/0F | 10/0F |
| BG9 | 0B/14 | 14/0B | 0B/14 | 14/0B | 05/1A | 1A/05 | 05/1A | 1A/05 | 14/0B | 0B/14 | 14/0B | 0B/14 | 1A/05 | 05/1A | 1A/05 | 05/1A |
| BG10 | 1A/05 | 05/1A | 1A/05 | 05/1A | 0B/14 | 14/0B | 0B/14 | 14/0B | 05/1A | 1A/05 | 05/1A | 1A/05 | 14/0B | 0B/14 | 14/0B | 0B/14 |
| BG11 | 19/06 | 13/0C | 06/19 | 0C/13 | 06/19 | 0C/13 | 19/06 | 13/0C | 19/06 | 13/0C | 06/19 | 0C/13 | 06/19 | 0C/13 | 19/06 | 13/0C |
| BG12 | 13/0C | 06/19 | 0C/13 | 19/06 | 0C/13 | 19/06 | 13/0C | 06/19 | 13/0C | 06/19 | 0C/13 | 19/06 | 0C/13 | 19/06 | 13/0C | 06/19 |
| BG13 | 03/1C | 16/09 | 1C/03 | 09/16 | 0D/12 | 18/07 | 12/0D | 07/18 | 1C/03 | 09/16 | 03/1C | 16/09 | 12/0D | 07/18 | 0D/12 | 18/07 |
| BG14 | 12/0D | 07/18 | 0D/12 | 18/07 | 03/1C | 16/09 | 1C/03 | 09/16 | 0D/12 | 18/07 | 12/0D | 07/18 | 1C/03 | 09/16 | 03/1C | 16/09 |
| BG15 | 16/09 | 1C/03 | 09/16 | 03/1C | 18/07 | 12/0D | 07/18 | 0D/12 | 09/16 | 03/1C | 16/09 | 1C/03 | 07/18 | 0D/12 | 18/07 | 12/0D |
| BG16 | 07/18 | 0D/12 | 18/07 | 12/0D | 16/09 | 1C/03 | 09/16 | 03/1C | 18/07 | 12/0D | 07/18 | 0D/12 | 09/16 | 03/1C | 16/09 | 1C/03 |

All data backgrounds fulfill Equation (5).
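Each entry of Table 4 can be validated automatically: the two values in a cell must be 5-bit complements, since a g-group contains five cells and the complementary run inverts the whole group. A Python sketch (row data transcribed from the BG2 and BG13 rows of the table; the helper name is ours):

```python
# Initial states Ix for classes G1..G16 under two data backgrounds.
BG2  = [0x11, 0x11, 0x11, 0x11, 0x0E, 0x0E, 0x0E, 0x0E,
        0x11, 0x11, 0x11, 0x0E, 0x0E, 0x11, 0x0E, 0x0E]
BG13 = [0x03, 0x16, 0x1C, 0x09, 0x0D, 0x18, 0x12, 0x07,
        0x1C, 0x09, 0x03, 0x16, 0x12, 0x07, 0x0D, 0x18]

def complementary_row(row):
    """Complementary background of a row of initial 5-bit group states."""
    return [ix ^ 0x1F for ix in row]  # invert all five bits of each state

print([f"{v:02X}" for v in complementary_row(BG2)][:4])   # ['0E', '0E', '0E', '0E']
print([f"{v:02X}" for v in complementary_row(BG13)][:4])  # ['1C', '09', '03', '16']
```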
Table 5. The intermediate states illustrated in Figure 10.

| Evolution from Ix0 | Evolution from Iy0 |
|---|---|
| Ix1 = Ix0 ⊕ 10H | Iy1 = Iy0 ⊕ 10H |
| Ix2 = Ix0 ⊕ 18H | Iy2 = Iy0 ⊕ 18H |
| Ix3 = Ix0 ⊕ 1CH | Iy3 = Iy0 ⊕ 1CH |
| Ix4 = Ix0 ⊕ 1EH | Iy4 = Iy0 ⊕ 1EH |
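Because each march step in Figure 10 inverts one more cell of the five-bit group state, the intermediate states of Table 5 are simply XOR masks applied to the initial state. A small Python sketch (the function name is ours):

```python
MASKS = [0x10, 0x18, 0x1C, 0x1E]  # cumulative masks for Ix1..Ix4 in Table 5

def evolution(i0: int) -> list[int]:
    """Intermediate states Ix1..Ix4 reached from the initial state Ix0."""
    return [i0 ^ m for m in MASKS]

print([f"{s:02X}" for s in evolution(0x00)])  # ['10', '18', '1C', '1E']
print([f"{s:02X}" for s in evolution(0x1F)])  # ['0F', '07', '03', '01']
```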
Table 6. The initialization value of a cell depending on the address in the MT_NPSF_81N memory test.

| Background | Pattern (4 × 4) | Value of a Cell (V) According to the Address |
|---|---|---|
| BG1 | 0000 / 0000 / 0000 / 0000 | V = 0 |
| BG2 | 0000 / 1111 / 0000 / 1111 | V = Ar[0] |
| BG3 | 0101 / 0101 / 0101 / 0101 | V = Ac[0] |
| BG4 | 0101 / 1010 / 0101 / 1010 | V = Ar[0] ⊕ Ac[0] |
| BG5 | 0011 / 0011 / 0011 / 0011 | V = Ac[1] |
| BG6 | 0110 / 0110 / 0110 / 0110 | V = Ac[0] ⊕ Ac[1] = X |
| BG7 | 0000 / 0000 / 1111 / 1111 | V = Ar[1] |
| BG8 | 0000 / 1111 / 1111 / 0000 | V = Ar[0] ⊕ Ar[1] = Y |
| BG9 | 0101 / 1010 / 1010 / 0101 | V = ¬(Ar[0] ⊕ Ar[1])·Ac[0] + (Ar[0] ⊕ Ar[1])·¬Ac[0] = Ar[0] ⊕ Ar[1] ⊕ Ac[0] = Y ⊕ Ac[0] |
| BG10 | 0101 / 0101 / 1010 / 1010 | V = ¬Ar[1]·Ac[0] + Ar[1]·¬Ac[0] = Ac[0] ⊕ Ar[1] |
| BG11 | 0011 / 1100 / 0011 / 1100 | V = Ar[0]·¬Ac[1] + ¬Ar[0]·Ac[1] = Ar[0] ⊕ Ac[1] |
| BG12 | 0110 / 1001 / 0110 / 1001 | V = ¬Ar[0]·(Ac[0] ⊕ Ac[1]) + Ar[0]·¬(Ac[0] ⊕ Ac[1]) = Ar[0] ⊕ Ac[0] ⊕ Ac[1] = X ⊕ Ar[0] |
| BG13 | 0110 / 1001 / 1001 / 0110 | V = ¬(Ar[0] ⊕ Ar[1])·(Ac[0] ⊕ Ac[1]) + (Ar[0] ⊕ Ar[1])·¬(Ac[0] ⊕ Ac[1]) = Ar[0] ⊕ Ar[1] ⊕ Ac[0] ⊕ Ac[1] = X ⊕ Y |
| BG14 | 0110 / 0110 / 1001 / 1001 | V = ¬Ar[1]·(Ac[0] ⊕ Ac[1]) + Ar[1]·¬(Ac[0] ⊕ Ac[1]) = Ar[1] ⊕ Ac[0] ⊕ Ac[1] = X ⊕ Ar[1] |
| BG15 | 1100 / 0011 / 0011 / 1100 | V = ¬(Ar[0] ⊕ Ar[1])·¬Ac[1] + (Ar[0] ⊕ Ar[1])·Ac[1] = ¬(Ar[0] ⊕ Ar[1] ⊕ Ac[1]) = ¬(Y ⊕ Ac[1]) |
| BG16 | 1100 / 1100 / 0011 / 0011 | V = ¬Ar[1]·¬Ac[1] + Ar[1]·Ac[1] = ¬(Ar[1] ⊕ Ac[1]) |
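All sixteen formulas in Table 6 reduce to XORs (possibly complemented) of the two least significant row and column address bits, which is what keeps the data generation logic cheap. A behavioral Python sketch of that logic (our own model; the names are illustrative, not from the paper):

```python
def background_value(bg: int, ar: int, ac: int) -> int:
    """Initialization value V of cell (ar, ac) under background BG1..BG16."""
    ar0, ar1 = ar & 1, (ar >> 1) & 1   # Ar[0], Ar[1]
    ac0, ac1 = ac & 1, (ac >> 1) & 1   # Ac[0], Ac[1]
    x = ac0 ^ ac1                      # X in Table 6
    y = ar0 ^ ar1                      # Y in Table 6
    v = [0, ar0, ac0, ar0 ^ ac0, ac1, x, ar1, y,
         y ^ ac0, ac0 ^ ar1, ar0 ^ ac1, x ^ ar0,
         x ^ y, x ^ ar1, 1 ^ y ^ ac1, 1 ^ ar1 ^ ac1]
    return v[bg - 1]

# Reproduce the 4x4 pattern of BG13 (0110 / 1001 / 1001 / 0110):
for ar in range(4):
    print("".join(str(background_value(13, ar, ac)) for ac in range(4)))
```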
Table 7. Values for R3(2^i).

| i | 0 | 1 | 2 | 3 | 4 | 5 | 6 |
|---|---|---|---|---|---|---|---|
| 2^i | 1 | 2 | 4 | 8 | 16 | 32 | 64 |
| R3(2^i) | 1 | 2 | 1 | 2 | 1 | 2 | 1 |
Table 8. Truth table for the logical variables R0, R1, R2.

| c2i | c2i+1 | R0 | R1 | R2 |
|---|---|---|---|---|
| 0 | 0 | 1 | 0 | 0 |
| 0 | 1 | 0 | 0 | 1 |
| 1 | 0 | 0 | 1 | 0 |
| 1 | 1 | 1 | 0 | 0 |
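Tables 7 and 8 together describe how an address can be reduced modulo 3 with simple logic: since 2^i mod 3 alternates 1, 2, 1, 2, …, each bit pair contributes (2·c2i+1 + c2i) mod 3, encoded one-hot as (R0, R1, R2), and the pair residues are then combined as in the series–parallel structure of Figure 18. A behavioral Python sketch of this computation (function names are ours):

```python
def pair_residue(c_lo: int, c_hi: int) -> tuple[int, int, int]:
    """One-hot residue indicators (R0, R1, R2) for one bit pair, as in Table 8."""
    r = (2 * c_hi + c_lo) % 3
    return (int(r == 0), int(r == 1), int(r == 2))

def address_mod3(addr: int, bits: int = 8) -> int:
    """Combine the pair residues to obtain addr mod 3 (software model of Figure 18)."""
    total = 0
    for i in range(0, bits, 2):
        pair = pair_residue((addr >> i) & 1, (addr >> (i + 1)) & 1)
        total += pair.index(1)  # residue value contributed by this pair
    return total % 3

print(pair_residue(1, 1))     # (1, 0, 0): 3 mod 3 = 0, matching the last row of Table 8
print(address_mod3(0b1011))   # 2, since 11 mod 3 = 2
```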
Table 9. Complexity of the CLC1 and CLC2 logics.

| Combinational Logic | Complexity (eqg) |
|---|---|
| CLC1 | C1 = 5.5 |
| CLC2 | C2 = 13.5 |
Table 10. Determining the value of a cell (V) based on the address in the March_76N memory test.

| Background | Pattern (3 × 3) | Value of a Cell (V) According to the Address |
|---|---|---|
| BG1 | 000 / 000 / 000 | V = 0 |
| BG2 | 000 / 111 / 000 | V = R1r |
| BG3 | 111 / 111 / 000 | V = ¬R2r |
| BG4 | 011 / 011 / 100 | V = ¬R2r·¬R0c + R2r·R0c = ¬(R2r ⊕ R0c) |
| BG5 | 011 / 100 / 100 | V = R0r·¬R0c + ¬R0r·R0c = R0r ⊕ R0c |
| BG6 | 100 / 100 / 100 | V = R0c |
| BG7 | 100 / 011 / 100 | V = R1r·¬R0c + ¬R1r·R0c = R1r ⊕ R0c |
| BG8 | 110 / 001 / 110 | V = R1r·R2c + ¬R1r·¬R2c = ¬(R1r ⊕ R2c) |
| BG9 | 001 / 001 / 110 | V = R2r·¬R2c + ¬R2r·R2c = R2r ⊕ R2c |
| BG10 | 101 / 101 / 010 | V = R2r·R1c + ¬R2r·¬R1c = ¬(R2r ⊕ R1c) |
| BG11 | 010 / 101 / 010 | V = R1r·¬R1c + ¬R1r·R1c = R1r ⊕ R1c |
| BG12 | 010 / 010 / 010 | V = R1c |
| BG13 | 110 / 110 / 110 | V = ¬R2c |
| BG14 | 001 / 110 / 110 | V = R0r·R2c + ¬R0r·¬R2c = ¬(R0r ⊕ R2c) |
| BG15 | 101 / 010 / 010 | V = R0r·¬R1c + ¬R0r·R1c = R0r ⊕ R1c |
| BG16 | 111 / 000 / 000 | V = R0r |
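The March_76N formulas of Table 10 can be modeled in the same way, with Rk_r and Rk_c as one-hot indicators of the row and column address modulo 3. A Python sketch (our own model; names are illustrative):

```python
def r(a: int, k: int) -> int:
    """Indicator Rk for an address a: 1 iff a mod 3 == k."""
    return int(a % 3 == k)

def march76n_value(bg: int, ar: int, ac: int) -> int:
    """Initialization value V of cell (ar, ac) under background BG1..BG16."""
    v = [0,
         r(ar, 1),                     # BG2
         1 - r(ar, 2),                 # BG3:  not R2r
         1 ^ r(ar, 2) ^ r(ac, 0),      # BG4:  XNOR(R2r, R0c)
         r(ar, 0) ^ r(ac, 0),          # BG5
         r(ac, 0),                     # BG6
         r(ar, 1) ^ r(ac, 0),          # BG7
         1 ^ r(ar, 1) ^ r(ac, 2),      # BG8:  XNOR(R1r, R2c)
         r(ar, 2) ^ r(ac, 2),          # BG9
         1 ^ r(ar, 2) ^ r(ac, 1),      # BG10: XNOR(R2r, R1c)
         r(ar, 1) ^ r(ac, 1),          # BG11
         r(ac, 1),                     # BG12
         1 - r(ac, 2),                 # BG13: not R2c
         1 ^ r(ar, 0) ^ r(ac, 2),      # BG14: XNOR(R0r, R2c)
         r(ar, 0) ^ r(ac, 1),          # BG15
         r(ar, 0)]                     # BG16
    return v[bg - 1]

# Reproduce the 3x3 pattern of BG8 (110 / 001 / 110):
for ar in range(3):
    print("".join(str(march76n_value(8, ar, ac)) for ac in range(3)))
```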
Table 11. Dividing cells into nine classes by row and column addresses.

| Ar mod 3 \ Ac mod 3 | 0 | 1 | 2 |
|---|---|---|---|
| 0 | C1 | C2 | C3 |
| 1 | C4 | C5 | C6 |
| 2 | C7 | C8 | C9 |
Table 12. Simulation results: fault coverage (expressed as %).

| Memory Test | Two-Cell Coupling | Three-Cell Coupling | Four-Cell Coupling Model 1 | Four-Cell Coupling Model 2 | NPSF |
|---|---|---|---|---|---|
| March_76N (3 × 3 patterns) | 94.44 | 100 | 100 | 75.69 | 100 |
| MT_NPSF_81N (4 × 4 patterns) | 96.88 | 100 | 100 | 78.03 | 100 |