Article

An Approach for Controlled Random Tests with a Given Hamming Distance Generation

by Ireneusz Mrozek 1,*, Marek Kopczewski 1 and Vyacheslav N. Yarmolik 2

1 Faculty of Computer Science, Bialystok University of Technology, 15-351 Białystok, Poland
2 Faculty of Computer Science, Belarusian State University of Informatics and Radioelectronics, 220013 Minsk, Belarus
* Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(18), 9951; https://doi.org/10.3390/app15189951
Submission received: 29 July 2025 / Revised: 3 September 2025 / Accepted: 9 September 2025 / Published: 11 September 2025
(This article belongs to the Section Computing and Artificial Intelligence)

Abstract

This paper addresses the challenges of testing computing systems and their hardware components, especially memory devices. It highlights the limitations of traditional random testing, which often fails to use available information about the system under test and previously generated test patterns. The potential of controlled random testing, which incorporates knowledge of prior patterns, is therefore explored. A class of controlled random tests with a limited number of test patterns is identified and analyzed, including existing standard approaches. The paper introduces a novel measure of dissimilarity between test patterns, based on calculating Hamming distances for binary patterns after mapping them into different numeral systems, including quaternary, octal, and hexadecimal. We propose a method for generating controlled random tests with a guaranteed minimum Hamming distance, based on representing binary patterns as symbols from non-binary numeral systems: ensuring a specific Hamming distance in the symbolic domain also guarantees at least the same distance in the binary representation. We evaluate the effectiveness of the proposed method through simulations, particularly in the context of memory testing and the detection of multicell faults, i.e., errors caused by interactions between multiple memory cells. This approach can enhance the efficiency and reliability of test procedures in embedded systems, memory diagnostics, and safety-critical applications.

1. Introduction

Testing is still the main method used to verify the quality of software, hardware, memory devices, and applications. Even though many design-for-test and analysis methods have been proposed, testing remains a process that requires considerable effort and time. Therefore, systematic and automated procedures for generating test data are very important. Since the 1960s, probabilistic approaches, usually called random testing, have been widely used because they are simple to understand and easy to apply [1,2,3]. However, the efficiency of purely random methods is low: they usually ignore information about the system under test and about earlier test patterns, which often results in weaker fault detection [4,5].
To overcome this limitation, different controlled versions of random testing have been developed. In this paper, we use the term controlled random testing (CRT) [6] for methods that select the next pattern based on its difference from the previous ones. The Hamming distance is most often used as a measure of this difference. A related idea, well known in software testing, is called adaptive random testing (ART). A more detailed overview of these approaches is given in Section 2.1, while the theoretical background relevant for CRT is presented in Section 2.2.
The main problem studied in this paper is the choice of a difference measure that (i) can properly describe the diversity between test patterns and (ii) can be computed with low complexity [5,7]. The classical Hamming distance is widely used, but it often treats different patterns as equally distant, even when their structures are sufficiently different. To address this, we propose to extend the idea of Hamming distance by interpreting binary patterns in other numeral systems and by defining a new vector-based dissimilarity measure. This allows us to construct controlled tests that remain efficient to compute but can better distinguish between candidate patterns.
Contributions. The paper offers the following contributions. First, we introduce a framework in which an n-bit binary pattern can be mapped to sequences over alphabets of higher radix while still allowing for distance calculations in a consistent way. Second, we define a new vector-based dissimilarity measure based on these representations and discuss its main properties. Third, we propose a method for generating controlled random tests with a given minimum Hamming distance that reduces the need for expensive candidate checks. Finally, we show in experiments with memory-oriented examples that our method provides a more effective selection of patterns at a reasonable computational cost.
Paper organization. Section 2 is divided into two parts. Section 2.1 reviews earlier work on controlled random testing. Section 2.2 presents the coding-theoretic background and shows bounds that limit the construction of CRT. Section 3 introduces the representation-based extensions of the Hamming distance and the new dissimilarity measure. Section 4 describes the method for generating tests with a given minimum Hamming distance. Section 5 presents the experimental results, and Section 6 gives the conclusions and future directions.

2. Controlled Random Tests Analysis

2.1. Related Work

The related work can be grouped into three major streams: (i) pseudo-random and pseudo-exhaustive testing, (ii) diversity-driven methods such as Antirandom Testing and its extensions, and (iii) adaptive random testing (ART) and its numerous variants. Below, we summarize key contributions within each stream.
Random testing has long been applied in hardware and memory verification due to its simplicity and low implementation cost. In practice, purely random vectors are seldom used directly; instead, pseudo-random sequences are commonly employed, often generated by linear feedback shift registers (LFSRs) within built-in self-test (BIST) schemes [8]. Exhaustive and pseudo-exhaustive approaches have also been investigated to guarantee high fault coverage [9]. Early seminal work by McCluskey [10] introduced a pseudo-exhaustive testing methodology that became a foundation for later approaches. Fujiwara [11] provided further theoretical and practical developments, placing pseudo-exhaustive methods within a broader framework of logic testing and design for testability. Karpovsky, Yarmolik, and van de Goor [12] subsequently applied pseudo-exhaustive techniques to RAM testing, demonstrating their potential for memory devices. While such methods ensure thorough fault coverage, they are impractical for large memories due to excessive computational requirements. Such drawbacks motivated the development of more advanced approaches that explicitly control the diversity of generated test patterns.
A significant breakthrough was the introduction of Antirandom Testing by Malaiya [13,14,15], who first defined and demonstrated the method as a distance-based black-box testing strategy. Yin [16] subsequently developed a practical tool for generating hardware test sequences based on the principle of maximizing dissimilarity between test vectors, providing one of the earliest implementations of the antirandom concept. The idea of maximizing dissimilarity was later refined through extensions, including, among others, Fast Antirandom Testing (FAR) [17], Scalable Antirandom Testing (SAT) [18], and Pseudo-Ring Testing (PRT) [19]. Notably, most of these extensions were developed in the context of hardware and memory testing.
In parallel, the concept of selecting tests according to their distance from previous ones was generalized into what became known as adaptive random testing (ART) [20,21,22]. Unlike the earlier hardware-oriented approaches, ART and its numerous variants were primarily investigated in software testing. These include Good Random Testing [23], Restricted Random Testing [24,25], Maximum Distance Testing [26], Mirror Random Testing [27], Orderly Random Testing [28], hybrid adaptive random testing [29], and Evolutionary Random Testing [30], among others. A variety of distance metrics have been explored in this context, including minimum, average, maximum, centroid-based distances, discrepancy, and membership grade, as well as Hamming distance [7]. None of these metrics can be regarded as predominant across all ART variants, but they share the common goal of enforcing diversity when selecting new test cases. Comprehensive surveys, such as Anand et al. [5] on automated test case generation, Grindal et al. [31] on combination and diversity-driven strategies, Chen et al. [32] and Huang et al. [7] on adaptive random testing, and Feldt [33] on quantifying test diversity, provide broader overviews of these approaches and underline the central role of diversity metrics in general.
Despite these contributions, most existing methods still rely on evaluating specific characteristics of previously generated test sets. The vast majority of the approaches presented share the common goal of maximizing diversity among test patterns. However, this goal is most often achieved at the expense of increased computational overhead [34]. While such costs may be acceptable in some software testing contexts, they become a serious limitation in hardware- and memory-oriented testing, where the efficiency of test generation is a critical requirement. Therefore, there is a clear need for methods that ensure sufficient diversity of test patterns while significantly reducing the computational burden.

2.2. Formal Analysis of Controlled Random Tests

Following the discussion of related work, this subsection provides a theoretical background on controlled random tests (CRTs). The analysis emphasizes the role of Hamming distance as the principal diversity metric and introduces fundamental bounds and constructions that shape the efficiency of CRT generation. The presented considerations are rooted in the classical results of coding theory, including Hamming’s seminal work on error-detecting and error-correcting codes [35], the comprehensive treatment by Peterson and Weldon [36], and the Plotkin bound [37], while also extending our earlier research on Multi-Run Memory Tests [38] and optimal controlled random tests [39].
In the following discussions, we consider a sequence $T_i$ of data $t_{i,0}, t_{i,1}, \ldots, t_{i,n-1}$ as a test pattern $T_i = t_{i,0} t_{i,1} \ldots t_{i,n-1}$ consisting of $n$ elements $t_{i,l}$, where $l \in \{0, 1, 2, \ldots, n-1\}$, generally represented in an arbitrary alphabet. As shown in [7], the next test pattern $T_i$ in a controlled random test is designed to differ as much as possible from the previously generated patterns $T_0, T_1, \ldots, T_{i-1}$. The hypothesis assumes that for two test patterns with the maximum difference, the number of faults (errors) detected by the second pattern will also be maximized. The Hamming distance $HD(T_i, T_j)$ for $j \in \{0, 1, \ldots, i-1\}$ is often used as a criterion to distinguish the test pattern $T_i$ from the previous patterns $T_0, T_1, \ldots, T_{i-1}$ [7,32].
For the general case, the Hamming distance is calculated by comparing two sequences of data, $T_i = t_{i,0} t_{i,1} \ldots t_{i,n-1}$ and $T_j = t_{j,0} t_{j,1} \ldots t_{j,n-1}$, each consisting of $n$ characters $t_{i,l}$ and $t_{j,l}$ from an arbitrary alphabet [35,36].
The Hamming distance $HD(T_i, T_j)$ between $T_i$ and $T_j$ is defined as the number of positions at which $t_{i,l}$ and $t_{j,l}$ differ, and it can be expressed as
$$HD(T_i, T_j) = \sum_{l=0}^{n-1} \delta(t_{i,l}, t_{j,l}), \qquad (1)$$
where
$$\delta(t_{i,l}, t_{j,l}) = \begin{cases} 1, & \text{if } t_{i,l} \neq t_{j,l}, \\ 0, & \text{if } t_{i,l} = t_{j,l}. \end{cases}$$
When comparing the $n$ characters in the patterns $T_i$ and $T_j$, the minimum value of the Hamming distance, $\min HD(T_i, T_j)$, is 0 if all characters match, and the maximum value, $\max HD(T_i, T_j)$, is $n$ if all $n$ characters differ. For example, in the case of the binary number system, the Hamming distance $HD(T_i, T_j)$ between $T_i = 0110$ and $T_j = 1100$ is 2, as they differ at two positions.
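For illustration, the following Python sketch (ours, not the authors' implementation) computes the Hamming distance of Equation (1) for two equal-length patterns over an arbitrary alphabet:

```python
def hamming_distance(ti, tj):
    """Number of positions at which two equal-length sequences differ (Equation (1))."""
    assert len(ti) == len(tj), "patterns must have the same length"
    return sum(1 for a, b in zip(ti, tj) if a != b)

print(hamming_distance("0110", "1100"))  # -> 2, as in the example above
```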
Most commonly, binary test patterns are considered, but they can also be interpreted as sets of characters from other alphabets corresponding to different numeral systems. For example, quaternary, octal, hexadecimal, and other alphabets can be used, where a fixed number of consecutive bits in the original binary pattern represents the binary code of a character in the corresponding alphabet. For instance, the binary pattern $T_i = 01011011_2$, when divided into groups of two consecutive bits, can be represented in the quaternary number system, which uses an alphabet of four characters (0, 1, 2, and 3), as $T_i = 1123_4$. In the hexadecimal system, the same pattern $T_i$ takes the form $T_i = 5B_{16}$.
The main idea behind most approaches to controlled random test generation is to select, from a given set of test candidates, the pattern $T_i$ that has the maximum Hamming distance with respect to the previously included patterns $T_0, T_1, \ldots, T_{i-1}$. Various criteria can be used for selecting $T_i$; however, the most common is to maximize the value of $\min HD(T_i, T_j)$, where $j \in \{0, 1, \ldots, i-1\}$. In this case, the generated test will be characterized by the minimum Hamming distance between any two test patterns included in the test [7]. As a result, the controlled random test is defined by the value of the minimum Hamming distance, as described in the following definition.
Definition 1.
The value $\min HD(T_i, T_j)$ for a controlled random test $T = \{T_0, T_1, \ldots, T_{q-1}\}$ is equal to the minimum Hamming distance between two arbitrary test patterns $T_i$ and $T_j$, where $i \neq j$ and $i, j \in \{0, 1, 2, \ldots, q-1\}$.
In terms of coding theory, the $\min HD(T_i, T_j)$ characteristic can be regarded as the code distance $d$ of the code $T = \{T_0, T_1, \ldots, T_{q-1}\}$, which represents the smallest Hamming distance between different pairs of code words $T_0, T_1, \ldots, T_{q-1}$. Therefore, based on the fundamental principles of coding theory, several useful conclusions can be drawn that must be considered when generating controlled random tests.
In particular, a significant feature of controlled random tests is their limited length. This follows from the fact that the larger the minimum distance $\min HD(T_i, T_j)$ used as a criterion for including $T_i$ in the test, the fewer patterns exist that satisfy this criterion. This relationship is described by the Hamming bound [35,36].
Hamming Bound. The estimation of the Hamming bound for $d = \min HD(T_i, T_j) = 2r + 1$, where $r$ is an integer, can be expressed as the inequality
$$q \leq \frac{b^n}{\sum_{k=0}^{r} \binom{n}{k} (b-1)^k}. \qquad (2)$$
Here, the Hamming bound denotes the maximum possible size $q$ of a $b$-ary block code $T$ of length $n$ and minimum Hamming distance $d$ between code words. In the context of controlled random tests, the value $d = \min HD(T_i, T_j)$ directly affects the test length. For example, in the case of binary patterns ($b = 2$) with $n = 8$ and $d = \min HD(T_i, T_j) = 7 = 2 \times 3 + 1$, the Hamming bound can be calculated as
$$q \leq \frac{2^8}{\sum_{k=0}^{3} \binom{8}{k} (2-1)^k}.$$
As shown in this example, increasing the Hamming distance $\min HD(T_i, T_j)$ to a value of 7 reduces the estimate of $q$ to 2. This means that the controlled random test $T$ for $n = 8$ and $d = \min HD(T_i, T_j) = 7$ will consist of no more than two patterns: $T_0$ and $T_1$. It is important to note that the pattern $T_0$ is generated randomly and can take any of $2^n = 2^8$ binary values, while the second pattern $T_1$ is selected to satisfy the criterion $\min HD(T_0, T_1) \geq 7$. Thus, there is a large variety of controlled random tests $T$ with $\min HD(T_0, T_1) \geq 7$, but each test consists of only two patterns: $T_0$ and $T_1$. In practice, this result shows that for short memory words, only a very limited number of maximally distant test patterns can be constructed, which restricts the applicability of such strict distance requirements.
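The bound of Equation (2) is straightforward to evaluate numerically. The following sketch (ours, for illustration only) returns the floored value of the right-hand side:

```python
from math import comb

def hamming_bound(n, b, r):
    """Floored upper bound on the number q of b-ary code words of length n with d = 2r + 1."""
    return b**n // sum(comb(n, k) * (b - 1)**k for k in range(r + 1))

print(hamming_bound(8, 2, 3))  # -> 2, matching the example with n = 8, d = 7
```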
Let us consider approaches for constructing controlled random tests consisting of a minimal number $q$ of test patterns, for which $\min HD(T_i, T_j)$ takes the maximum possible value.
For the synthesis of controlled random tests with a small number of patterns $q$, we first examine classic codes with $\min HD(T_i, T_j) \geq n/2$ [37]. It is known that the Plotkin theorem allows for determining the maximum possible number $q$ of code words in a binary code of length $n$ with $\min HD(T_i, T_j) \geq n/2$. The Plotkin bound provides an upper limit for this value [37,39].
Plotkin Bound. If $d = \min HD(T_i, T_j) \geq n/2$ and $n$ is even, the following inequality holds for $q$:
$$q \leq \begin{cases} \dfrac{2d}{2d - n}, & \text{for } 2d - n > 0; \\[4pt] 4d, & \text{for } 2d - n = 0. \end{cases} \qquad (3)$$
For odd values of $n$, the Plotkin bound is expressed as
$$q \leq \begin{cases} \dfrac{2d + 1}{2d + 1 - n}, & \text{for } 2d + 1 - n > 0; \\[4pt] 4d + 4, & \text{for } 2d + 1 - n = 0. \end{cases} \qquad (4)$$
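As a quick illustration (ours, based on Equations (3) and (4) exactly as stated above rather than on the original formulations in [37]), the bounds can be evaluated as follows:

```python
def plotkin_bound(n, d):
    """Upper bound on the number q of binary patterns of length n with minimum distance d >= n/2."""
    if n % 2 == 0:                                       # Equation (3), even n
        return 4 * d if 2 * d == n else 2 * d / (2 * d - n)
    else:                                                # Equation (4), odd n
        return 4 * d + 4 if 2 * d + 1 == n else (2 * d + 1) / (2 * d + 1 - n)

print(plotkin_bound(8, 8))    # -> 2.0: only two patterns, T0 and its inverse
print(plotkin_bound(12, 8))   # -> 4.0: at most four patterns when d = 2n/3
```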
Based on the application of the Plotkin bound, a formal algorithm for synthesizing controlled random tests MMHD($q$), characterized by a small number $q$ of patterns with the maximum–minimum Hamming distance $\max\_\min HD(T_i, T_j)$ between test patterns $T_i$ and $T_j$, is proposed in [38].
For $q = 2$, based on (3) and (4), the maximum possible value $\max\_\min HD(T_i, T_j)$ of the distance can be estimated. This result, $\max\_\min HD(T_i, T_j) = n$, and the corresponding test MMHD(2) $= \{T_0, T_1\} = \{T_0, \overline{T_0}\}$, is supported by previous findings for the optimal random test consisting of two inverse patterns, $T_0$ and $\overline{T_0}$ [35,36].
In the case of $q = 3$, according to the Plotkin bound, $d = \max\_\min HD(T_i, T_j) \leq 3n/4$. As shown in [38], the closest optimal solution MMHD(3) can be achieved only for $\max\_\min HD(T_i, T_j) = 2n/3$, where $2n/3 < 3n/4$. For $q = 4$, the test MMHD(4) can be constructed with $d = \max\_\min HD(T_i, T_j) \leq 2n/3$ [38].
By generalizing the heuristic procedure for constructing MMHD($q$) for small values of $q$, a formal algorithm for synthesizing the MMHD($q$) test for a given $q \geq 4$ was presented in [38]. According to this algorithm, the MMHD($q$) test consists of $q$ patterns with
$$\max\_\min HD(T_i, T_j) = \frac{2^{q-3} \cdot n}{2^{q-2} - 1}.$$
It should be noted that for any integer $q$, the distance $2^{q-3} \cdot n / (2^{q-2} - 1)$ is greater than $n/2$; however, as $q$ increases, it approaches $n/2$.
A very important remark concerns the size $n$ of the test patterns, which must be considered. In order to generate MMHD($q$) tests with $q \geq 4$, the value of $n$ must be divisible by $2^{q-2} - 1$, and its minimal value is $2^{q-2} - 1$. For example, in the case of $q = 4$, one variant of the MMHD(4) test is $T = \{000, 011, 101, 110\}$ with the minimal value $n = 2^{q-2} - 1 = 2^{4-2} - 1 = 3$.
Based on the Hamming distance $HD(T_i, T_j)$ for test patterns $T_i$ and $T_j$ and their Cartesian distance $CD(T_i, T_j)$, as described in [39], a method for synthesizing optimal controlled random tests (OCRTs) is considered. These tests are characterized by the conditions $HD(T_i, T_j) \geq n/2$ and $\min HD(T_i, T_j) = n/2$. In the general case, the number of OCRT patterns is defined as $q = 2\lceil \log_2 n + 1 \rceil$. A constructive algorithm for generating test patterns is presented in [39]. For the specific case when $n = 2^m$, where $m$ is an integer, the number $q$ of OCRT patterns $T_0, T_1, \ldots, T_{q-1}$ is given by $q = 2(m + 1)$. For example, when $n = 4$, the number of OCRT patterns is $q = 6$, and for $n = 8$, the number of patterns is $q = 8$.
The example of the MMHD(4) test with $HD(T_i, T_j) = 2$ for $n = 3$ presented in [38] and the OCRT for $n = 4$ are shown in Table 1.
These small examples demonstrate how theoretical bounds directly limit the number of feasible patterns in controlled random tests, especially for memory diagnostics, where compact but diverse test sets are needed. At the same time, the MMHD(4) test and OCRTs shown in Table 1 can be interpreted as templates for generating patterns of similar tests. A specific MMHD(4) test or OCRT can be defined by a randomly chosen initial test pattern $T_0$, based on which subsequent patterns are generated by inverting the bits of $T_0$ according to the given templates. For example, in the case of the MMHD(4) test shown in Table 1, if the random initial pattern is chosen as $T_0 = 101$, the corresponding new MMHD(4) test will consist of the patterns $\{101, 110, 011, 000\}$. Therefore, in the following text, the abbreviation MMHD($q$) is used to denote a family of tests with $q$ test patterns and the corresponding value of $\max\_\min HD(T_i, T_j)$.
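Instantiating a template in this way amounts to XOR-ing the random initial pattern with every template pattern, which inverts the bits of $T_0$ at the positions marked by the template and therefore preserves all pairwise Hamming distances. A minimal sketch (ours, not the authors' code) for the MMHD(4) template of Table 1:

```python
import random

TEMPLATE = ["000", "011", "101", "110"]   # MMHD(4) template for n = 3

def instantiate(template, t0=None):
    """Derive a new test from the template by XOR-ing every pattern with a (random) T0."""
    n = len(template[0])
    if t0 is None:
        t0 = "".join(random.choice("01") for _ in range(n))
    return ["".join("1" if a != b else "0" for a, b in zip(p, t0)) for p in template]

print(instantiate(TEMPLATE, t0="101"))  # -> ['101', '110', '000', '011']
```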
A common drawback of both approaches to generating controlled random tests, MMHD and OCRT, is the restriction on the size of their test patterns. Four main algorithms are known for constructing a set of test patterns (code words) with given properties based on an initial test (code) [36]. These algorithms utilize the following four properties, which we formulate for the case of MMHD($q$) [36].
Property 1.
The result of permuting the bits $t_{i,l}$ simultaneously in all $q$ test patterns $T_i = t_{i,0} t_{i,1} \ldots t_{i,n-1}$ of the test MMHD($q$) $= \{T_0, T_1, \ldots, T_{q-1}\}$ is again an MMHD($q$) test.
Property 2.
The result of inverting the bits $t_{i,l}$ in all $q$ test patterns $T_i = t_{i,0} t_{i,1} \ldots t_{i,n-1}$ of the test MMHD($q$) $= \{T_0, T_1, \ldots, T_{q-1}\}$ is also an MMHD($q$) test.
Property 3.
The test MMHD$'(q) = \{T'_0, T'_1, \ldots, T'_{q-1}\}$ with test patterns
$$T'_i = t_{i,0} t_{i,1} \ldots t_{i,u \cdot n - 1},$$
consisting of $u \times n$ bits, is obtained from the test pattern $T_i = t_{i,0} t_{i,1} \ldots t_{i,n-1}$ of the original test MMHD($q$) by concatenating it (repeating) $u$ times. For example, if $u = 2$, then
$$T'_i = t_{i,0} t_{i,1} \ldots t_{i,2n-1} = t_{i,0} t_{i,1} \ldots t_{i,n-1}\, t_{i,0} t_{i,1} \ldots t_{i,n-1}.$$
Property 4.
The result of scaling (increasing) by $s$ times the test MMHD($q$) $= \{T_0, T_1, \ldots, T_{q-1}\}$ is the test MMHD$'(q)$, consisting of test patterns $T'_i = (t_{i,0})^s (t_{i,1})^s \ldots (t_{i,n-1})^s$, where $T_i = t_{i,0} t_{i,1} \ldots t_{i,n-1}$ is the test pattern of the original test MMHD($q$) and $(t_{i,l})^s$ denotes the bit $t_{i,l}$ repeated $s$ times.
The results of applying the above properties to the MMHD(4) test shown in Table 1 are illustrated in Table 2.
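The four transformations are easy to apply to patterns given as binary strings. A minimal sketch (ours, for illustration):

```python
def permute_bits(test, perm):
    """Property 1: apply the same bit permutation to every pattern."""
    return ["".join(p[k] for k in perm) for p in test]

def invert_bits(test):
    """Property 2: invert every bit of every pattern."""
    return ["".join("1" if b == "0" else "0" for b in p) for p in test]

def concatenate(test, u):
    """Property 3: repeat every pattern u times (length becomes u * n)."""
    return [p * u for p in test]

def scale(test, s):
    """Property 4: repeat every bit s times (length becomes s * n)."""
    return ["".join(b * s for b in p) for p in test]

mmhd4 = ["000", "011", "101", "110"]
print(permute_bits(mmhd4, [2, 0, 1]))  # -> ['000', '101', '110', '011']
print(invert_bits(mmhd4))              # -> ['111', '100', '010', '001']
print(concatenate(mmhd4, 2))           # -> ['000000', '011011', '101101', '110110']
print(scale(mmhd4, 2))                 # -> ['000000', '001111', '110011', '111100']
```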
The presented analysis of controlled random tests with a small number of test patterns demonstrates the feasibility of generating such tests without significant computational costs. The examples provided in Table 2 illustrate the derivation of new controlled random tests using formal methods, which enable the generation of a new set of test patterns as well as the adjustment of the pattern size $n$. It should be noted that the given properties apply not only to the MMHD($q$) tests and OCRTs but also to any controlled random tests.
As noted above, the key characteristic of tests with a small number of test patterns is the relationship between the value of the $\max\_\min HD(T_i, T_j)$ Hamming distance and the number $q$ of test patterns. Increasing the required minimum Hamming distance $\min HD(T_i, T_j)$ (essentially maximizing it) reduces the number $q$ of patterns in the generated test. It is evident that both test parameters, namely, the Hamming distance $\min HD(T_i, T_j)$ and the number of test patterns, influence the efficiency and quality of the test. Intuitively, one might conclude that increasing both parameters improves the test properties. Indeed, the more test patterns that are maximally distant from each other, the more efficient the test becomes. However, as the analysis above has shown, it is impossible to increase both parameters simultaneously. Therefore, in the subsequent discussion, we will consider an approach that focuses on increasing the number of test patterns while maintaining the minimum Hamming distance $\min HD(T_i, T_j)$ at an acceptable level.
In summary, the theoretical analysis highlights a fundamental trade-off: increasing the minimum Hamming distance between test patterns inevitably reduces the number of patterns in the test. These coding-theoretic constraints motivate the search for alternative distance measures, which can better balance diversity and efficiency. Section 3 introduces representation-dependent interpretations of the Hamming distance, aimed at achieving this balance.

3. Modified Approach to Hamming Distance Calculation

The Hamming distance has significant limitations as a dissimilarity metric, as it only distinguishes fully matching patterns $T_i$ and $T_j$ with $HD(T_i, T_j) = 0$ while treating all other non-identical patterns equally. One argument that confirms the indistinguishability of non-matching sequences is the case of binary patterns $T_i$ and $\overline{T_i}$, for which the Hamming distance is always constant and equal to $n$. For example, $HD(10000000, 01111111) = HD(10101010, 01010101) = HD(00001111, 11110000) = 8$. As seen above, the Hamming distance $HD(T_i, \overline{T_i})$ in all the given examples is equal to $n = 8$, indicating the same maximum difference across all pairs of patterns. However, the structural differences between these pairs of sequences are significant. An even greater structural difference exists in the following pairs of character sequences: $(00000000, 11110000)$, $(11111111, 00001111)$, and $(01011010, 11010100)$, for which the Hamming distance is $HD(T_i, T_j) = 4$. These examples highlight the need for alternative dissimilarity measures capable of capturing not only the number of differing bits but also the spatial and structural relationships within the sequences.
Let us examine the potential for extending the use of the Hamming distance to the comparison of finite sequences of characters $T_i = t_{i,0} t_{i,1} \ldots t_{i,n-1}$ and $T_j = t_{j,0} t_{j,1} \ldots t_{j,n-1}$, which represent test patterns consisting of $n$ characters (elements) $t_{i,l}$ and $t_{j,l}$, where $l \in \{0, 1, \ldots, n-1\}$. The alphabet of the characters $t_{i,l}$ and $t_{j,l}$ can be arbitrary, as can the number $n$ of elements in the patterns $T_i$ and $T_j$. Without loss of generality, we assume that the test pattern $T_i$ is initially a binary pattern, meaning that the characters $t_{i,l} \in \{0, 1\}$.
The primary objective of the existing modifications to the Hamming distance calculation is to select, from among potential test pattern candidates, a test pattern $T_i$ that is most different from the previously included pattern $T_j$.
The first modification assumes that the length of a binary test pattern is restricted to $n = 2^w$, where $w$ is an integer. Such constraints frequently occur in practice when addressing diagnostic problems in computer systems. Under this condition, the original binary sequence
$$T_i = t_{i,0} t_{i,1} \ldots t_{i,n-1}$$
can be represented in $w + 1$ different ways, denoted as $T_i^{(2^v)}$, where $v \in \{0, 1, \ldots, w\}$. The index $2^v$ specifies the number of consecutive bits that form each character in the new alphabet. For $v = 0$ ($2^0 = 1$), we obtain the binary alphabet:
$$T_i^{(1)} = t_{i,0}^{(1)} t_{i,1}^{(1)} \ldots t_{i,n-1}^{(1)}.$$
For $v = 1$ ($2^1 = 2$), we obtain the quaternary alphabet:
$$T_i^{(2)} = t_{i,0}^{(2)} t_{i,1}^{(2)} \ldots t_{i,n/2-1}^{(2)},$$
where each character is formed from two consecutive bits of $T_i^{(1)}$. For larger values of $v$, the construction continues in the same manner, producing
$$T_i^{(4)}, T_i^{(8)}, \ldots, T_i^{(n/2)}, T_i^{(n)} = T_i.$$
In the general case, the sequence $T_i^{(2^v)}$ consists of $2^{w-v}$ characters. Each character of this alphabet is obtained by concatenating two neighboring characters of the previous representation $T_i^{(2^{v-1})}$. For instance, for $T_i^{(2)}$:
$$t_{i,0}^{(2)} = t_{i,0}^{(1)} t_{i,1}^{(1)}, \qquad t_{i,1}^{(2)} = t_{i,2}^{(1)} t_{i,3}^{(1)},$$
and, more generally,
$$t_{i,l}^{(2^v)} = t_{i,2l}^{(2^{v-1})} t_{i,2l+1}^{(2^{v-1})}, \qquad l = 0, 1, \ldots, n/2^v - 1.$$
Thus, each representation $T_i^{(1)}, T_i^{(2)}, T_i^{(4)}, \ldots, T_i^{(2^w)}$ defines a sequence over a different alphabet, offering multiple perspectives on the same original binary pattern.
The given interpretation of the original binary patterns does not prevent the determination of the Hamming distance between the patterns $T_i$ and $T_j$. Just as in the case of binary vectors, Equation (1) can also be applied here, provided that both patterns are expressed in the same chosen alphabet. Let us illustrate this with the following example for the case where $n = 2^3$.
Example 1. 
As an example of binary test patterns, consider $T_i = 01100011_2$ and $T_j = 01011011_2$, for which the condition $n = 2^w = 2^3$ is satisfied. For each of the binary patterns $T_i = 01100011_2$ and $T_j = 01011011_2$, in accordance with the above-described definitions, there are $w + 1 = 4$ representations in the form of sequences of characters belonging to different alphabets (see Table 3).
In Table 3, the Hamming distance for the original binary patterns $T_i^{(1)}$ and $T_j^{(1)}$, as well as for their representations in different alphabets with their respective characters, is presented. In this example, ASCII codes are used to represent $T_i^{(8)}$ and $T_j^{(8)}$. For all cases, the value of the Hamming distance has been calculated based on Equation (1). The resulting characteristic $HD(T_i, T_j)$, represented by the four components $\{3, 2, 2, 1\}$, provides a more accurate assessment of the differences between these test patterns.
The requirement that the dimension of a binary pattern $T_i$ be $n = 2^w$, where $w$ is an integer, may not always be satisfied in practice. Consequently, for cases where $n \neq 2^w$, when mapping the original pattern $T_i$ into the sequences $T_i^{(1)}, T_i^{(2)}, T_i^{(4)}, \ldots$, the required number of bits, equal to $2^v$, may be insufficient for the last character of the sequence $T_i^{(2^v)}$, where $v \in \{0, 1, 2, \ldots, w\}$. For example, considering the pattern $T_i = 0110001_2$, where $n = 7$, it can be represented as the sequences $T_i^{(1)}$, $T_i^{(2)}$, $T_i^{(4)}$, and $T_i^{(8)}$. However, in three cases, namely $T_i^{(2)}$, $T_i^{(4)}$, and $T_i^{(8)}$, the required number of bits is insufficient for the last character of the corresponding alphabet; specifically, one bit is missing for $T_i^{(2)}$, and one bit is missing in both $T_i^{(4)}$ and $T_i^{(8)}$. An obvious solution to overcome this limitation is a cyclic interpretation of the original pattern $T_i = t_{i,0} t_{i,1} \ldots t_{i,n-1}$. This interpretation assumes that the bit following the last bit $t_{i,n-1}$ is the first bit $t_{i,0}$, thereby using the initial bits of the pattern to obtain the required number of bits for the last character of $T_i^{(2^v)}$. For the pattern $T_i = 0110001_2$, such an interpretation allows us to obtain $T_i^{(1)} = 01100010_2$, $T_i^{(2)} = 01100010_2 = 1202_4$, $T_i^{(4)} = 01100010_2 = 62_{16}$, and $T_i^{(8)} = 01100010_2 = b_{256} = (98)_{256}$.
The notation $b_{256}$ above, as well as the symbols "c" and "[" in Table 3, represents values in the base-256 numeral system. In each case, a group of 8 consecutive bits is interpreted as a single element of a 256-ary alphabet. Thus, $01100010_2 = 98_{10}$ is represented by $b_{256}$ (the ASCII code of the letter "b"), while $01100011_2 = 99_{10}$ and $01011011_2 = 91_{10}$ correspond to the ASCII symbols "c" and "[", respectively. It should be emphasized that these ASCII representations are used only as illustrative examples, since the base-256 system also includes non-printable and control characters. The purpose of this notation is to demonstrate that every 8-bit block can be treated as one symbol of a base-256 alphabet.
Removing the restriction on the size $n$ of the binary pattern $T_i$ by extending it to the required number of bits allows for an expansion in the number of alphabets available for different mappings of the original pattern. Naturally, considering the possibility of extending the original binary pattern to the required number of bits, the number of alphabets can be increased up to $n$. These alphabets consist of characters specified by one bit, two bits, three bits, four bits, and so on, up to the alphabet in which each character is determined by $n$ consecutive bits. For example, considering the original pattern $T_i = 01100_2$ with $n = 5$ and its cyclic extensions, it can be represented in the form of sequences obtained for $n = 5$ different alphabets. The sequential representations are as follows: $T_i^{(1)} = T_i = 01100_2$, $T_i^{(2)} = 011000_2 = 120_4$, $T_i^{(3)} = 011000_2 = 30_8$, $T_i^{(4)} = 01100011_2 = 63_{16}$, and $T_i^{(5)} = 01100_2 = C_{32}$.
Another approach to representing the original test pattern in various numeral systems with different character sets is to expand the last character of the pattern by appending, for example, all-zero values. Consider the example of a test pattern $T_i = 01100$, which can be represented in five different numeral systems, each with its own alphabet. To avoid potential conflicts related to the absence of a complete set of characters (or their graphical representation) in alphabets containing a large number of symbols, each character in all numeral systems will be represented in binary form and separated by spaces. Thus, the test pattern $T_i = 01100$ can be represented in five different numeral systems as follows: $T_i^{(1)} = 0\,1\,1\,0\,0_2$, $T_i^{(2)} = 01\,10\,00_4$, $T_i^{(3)} = 011\,000_8$, $T_i^{(4)} = 0110\,0000_{16}$, and $T_i^{(5)} = 01100_{32}$.
Let us now define the binary $n$-bit test pattern $T_i$ as a pattern in a numeral system other than binary.
Definition 2.
The test pattern $T_i$, consisting of $n$ binary characters, can be interpreted in a base-$2^r$ numeral system with $2^r$ characters as the pattern $T_i^{(r)}$, where $r \in \{1, 2, \ldots, n\}$. This pattern consists of $\lceil n/r \rceil$ characters, where $T_i$ is expanded to a size of $\lceil n/r \rceil \times r$ bits by adding $\lceil n/r \rceil \times r - n$ zeros.
For example, the test pattern $T_i = 0110001$ with $n = 7$ can be represented in the octal ($2^3$) numeral system with $\lceil n/r \rceil = \lceil 7/3 \rceil = 3$ characters as $T_i^{(3)} = 011\,000\,100_8 = 304_8$. To achieve this representation, $\lceil n/r \rceil \times r - n = \lceil 7/3 \rceil \times 3 - 7 = 2$ zeros have been added.
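A minimal sketch (ours) of the mapping described in Definition 2: zero-pad an $n$-bit pattern on the right and read it as $\lceil n/r \rceil$ characters of a base-$2^r$ alphabet:

```python
from math import ceil

def to_radix(pattern, r):
    """Split a binary string into r-bit characters (values 0..2**r - 1), padding with zeros."""
    padded = pattern + "0" * (ceil(len(pattern) / r) * r - len(pattern))
    return [int(padded[k:k + r], 2) for k in range(0, len(padded), r)]

print(to_radix("0110001", 3))  # -> [3, 0, 4], i.e. 304 in octal, as in the example above
print(to_radix("01100", 4))    # -> [6, 0], i.e. 60 in hexadecimal
```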
Note that the above examples of interpreting the pattern $T_i$, together with Definition 2, allow us to consider binary test patterns in various number systems. Using the last example of representing the test pattern $T_i = 01100$ in $n = 5$ different number systems, let us illustrate the determination of the Hamming distance $HD(T_i, T_j)$ (Equation (1)) for each interpretation of two patterns: $T_i = 01100$ and $T_j = 01011$.
The example below (see Table 4) of determining the Hamming distance demonstrates the possibility of obtaining, based on Equation (1), several numerical assessments of the relationship between the original binary patterns $T_i$ and $T_j$.
Let us now define a new measure of dissimilarity between the binary test patterns $T_i$ and $T_j$, which consists of a set of numerical characteristics represented by Hamming distances.
Definition 3
(Dissimilarity Measure $MD(T_i, T_j)$). The dissimilarity measure $MD(T_i, T_j)$ between two binary test patterns $T_i = t_{i,0} t_{i,1} \ldots t_{i,n-1}$ and $T_j = t_{j,0} t_{j,1} \ldots t_{j,n-1}$, where $t_{i,l}, t_{j,l} \in \{0, 1\}$ and $l \in \{0, 1, \ldots, n-1\}$, is defined as an $n$-component vector composed of the Hamming distances
$$HD_1 = HD[T_i^{(1)}, T_j^{(1)}], \quad HD_2 = HD[T_i^{(2)}, T_j^{(2)}], \quad \ldots, \quad HD_n = HD[T_i^{(n)}, T_j^{(n)}],$$
calculated according to Equation (1).
The analyzed characters $t_{i,l}$ and $t_{j,l}$ of the test patterns $T_i^{(r)}$ and $T_j^{(r)}$, according to Definition 2, are represented by $r \in \{1, 2, \ldots, n\}$ binary bits. Accordingly, using Equation (1), the numerical values of the components $HD_1, HD_2, \ldots, HD_n$ of the dissimilarity measure $MD(T_i, T_j)$ are determined. Table 5 presents examples of calculating $MD(T_i, T_j)$ for various pairs of test patterns $T_i$ and $T_j$ in the case where $n = 5$.
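Under the zero-padding convention of Definition 2, the vector $MD(T_i, T_j)$ can be computed directly from the base-$2^r$ representations. A minimal sketch (ours), which reproduces the values listed in Table 5:

```python
from math import ceil

def to_radix(pattern, r):
    padded = pattern + "0" * (ceil(len(pattern) / r) * r - len(pattern))
    return [int(padded[k:k + r], 2) for k in range(0, len(padded), r)]

def md(ti, tj):
    """Dissimilarity vector (HD_1, ..., HD_n) of Definition 3 for two n-bit patterns."""
    n = len(ti)
    return [sum(a != b for a, b in zip(to_radix(ti, r), to_radix(tj, r)))
            for r in range(1, n + 1)]

print(md("01100", "01011"))  # -> [3, 2, 2, 2, 1]
print(md("01100", "10000"))  # -> [3, 2, 1, 1, 1]
print(md("01100", "11001"))  # -> [3, 3, 2, 2, 1]
```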
Note that in all three examples presented in Table 4 and Table 5, the same pattern $T_i = 01100$ was used as the test pattern $T_i$, while three different patterns $T_j$ were selected to determine the value of the measure $MD(T_i, T_j)$. Accordingly, for the three cases shown in Table 4 and Table 5, the measure of dissimilarity $MD(T_i, T_j)$ takes the following values: $MD(01100, 01011) = \{3, 2, 2, 2, 1\}$, $MD(01100, 10000) = \{3, 2, 1, 1, 1\}$, and $MD(01100, 11001) = \{3, 3, 2, 2, 1\}$.
The examples presented in Table 4 and Table 5 demonstrate the indistinguishability of all three patterns $T_j$ with respect to the reference pattern $T_i = 01100$ when using the classical measure of difference, the Hamming distance, since in all three cases $HD(T_i, T_j) = HD_1 = 3$. At the same time, applying the new measure of dissimilarity (see Definition 3) reveals different degrees of difference between the patterns $T_j$ and $T_i$, as expressed by the varying values of the components $HD_2$, $HD_3$, and $HD_4$ of the measure $MD(T_i, T_j)$.
The measure of dissimilarity $MD(T_i, T_j)$ for the binary test patterns $T_i$ and $T_j$ has the following obvious properties.
Property 1.
The minimum value of all components $HD_1, HD_2, \ldots, HD_n$ of the measure $MD(T_i, T_j)$ is zero, that is,
$$\min HD_1 = \min HD_2 = \cdots = \min HD_n = 0.$$
This condition occurs when the test patterns are identical, i.e., $T_i = T_j$.
Property 2.
If one component $HD_r$, where $r \in \{1, 2, \ldots, n\}$, equals zero, then all the others are also equal to zero. Conversely, if any component $HD_r > 0$, then all other components are greater than zero as well.
Property 3.
The maximum values of the components $HD_1, HD_2, \ldots, HD_n$ depend on the number of characters in the representations $T_i^{(r)}$ and $T_j^{(r)}$. Specifically,
$$\max HD_1 = n, \quad \max HD_2 = \lceil n/2 \rceil, \quad \max HD_3 = \lceil n/3 \rceil, \quad \ldots, \quad \max HD_{\lceil n/2 \rceil - 1} = \left\lceil \frac{n}{\lceil n/2 \rceil - 1} \right\rceil = 3, \quad \max HD_{\lceil n/2 \rceil} = \cdots = \max HD_{n-1} = 2, \quad \max HD_n = 1.$$
The maximum difference between test patterns $T_i$ and $T_j$ in terms of the new dissimilarity measure $MD(T_i, T_j)$ is achieved when $T_j$ is the bitwise inverse of $T_i$. In this case, all components $HD_1, HD_2, HD_3, \ldots, HD_n$ of the measure $MD(T_i, T_j)$ reach their maximum values.
For example, for $T_i = 01100$ and its inverse pattern $T_j = \overline{T_i} = 10011$, the corresponding component values are
$$HD_1 = \max HD_1 = n = 5, \quad HD_2 = \max HD_2 = \lceil 5/2 \rceil = 3, \quad HD_3 = \max HD_3 = \lceil 5/3 \rceil = 2, \quad HD_4 = \max HD_4 = \lceil 5/4 \rceil = 2, \quad HD_5 = \max HD_5 = 1.$$
Property 4.
The components of $MD(T_i, T_j)$ satisfy the following relation:
$$HD_1 \geq HD_2 \geq HD_3 \geq \cdots \geq HD_n.$$
The fulfillment of this property is explained by the fact that, when calculating $HD_{r+1}$, the number of characters included in the patterns $T_i^{(r+1)}$ and $T_j^{(r+1)}$ is less than or equal to the number of characters within the patterns $T_i^{(r)}$ and $T_j^{(r)}$. Therefore, the following inequality holds: $HD_r[T_i^{(r)}, T_j^{(r)}] \geq HD_{r+1}[T_i^{(r+1)}, T_j^{(r+1)}]$.
As noted in [7,13,32], the idea of controlled random tests is as follows: the next test pattern $T_i$ is generated to be as different (or distant) as possible from the previously generated patterns $T_0, T_1, \ldots, T_{i-1}$ in terms of predetermined measures of dissimilarity. For this purpose, at each step of forming the next test pattern, a candidate is selected from a set of potential test patterns [7,13,32]. The main operation of the selection procedure is to determine the numerical value of the chosen measure of dissimilarity between two patterns: $T_i$, which is one of the test patterns, and $T_j$, which is one of the candidate test patterns. As a result, the candidate test pattern for which the measure (or measures) of dissimilarity attains the maximum value is selected as the next test pattern.
Let us explain the procedure for generating a controlled random test using the examples presented in Table 4 and Table 5 for the case where the Hamming distance is applied as a measure of dissimilarity. Assume that the first pattern of the controlled random test is $T_i = 01100$, and three randomly generated candidates for the next test pattern are $T_j = 01011$, $T_j = 10000$, and $T_j = 11001$. For each candidate pattern $T_j$, the value of the dissimilarity measure, as defined in Equation (1), is calculated with respect to the test pattern $T_i$. As shown in Table 4 and Table 5, the value of $HD_1$ is equal to 3 in all three cases. The classical technique for generating controlled random tests assumes that any of the three candidate patterns ($T_j = 01011$, $T_j = 10000$, or $T_j = 11001$) can be selected as the next test pattern.
In cases where multiple test pattern candidates yield the maximum value of $HD_1$, the new measure of dissimilarity $MD(T_i, T_j)$ introduced by the authors (see Definition 3) provides a more comprehensive way to distinguish between test pattern candidates $T_j$ with respect to the test pattern $T_i$. To achieve this, it is necessary to analyze the values of the next component, $HD_2$, of the dissimilarity measure. As demonstrated in the given example, the maximum value $HD_2 = 3$ is obtained for the pattern $T_j = 11001$, which can then be selected as the next test pattern in the controlled random test.
Based on the above example and following the classical strategy for generating random tests, we will formulate one of the rules for applying the new dissimilarity measure.
$MD(T_i, T_j)$ Application Rule. The test pattern candidate $T_j$ is selected as the next test pattern $T_i$ if it is the only candidate, among the entire set of test pattern candidates, that has the maximum value $HD_r$ for the minimum value of $r \in \{1, 2, \ldots, n\}$ in the dissimilarity measure $MD(T_i, T_j)$, specifically among the components $HD_1, HD_2, \ldots, HD_n$. Otherwise, if multiple candidates have the same maximum value of $HD_r$, one of them is selected randomly.
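One way to read this rule is as a lexicographic comparison of the candidates' $MD$ vectors: compare on $HD_1$ first, break ties with $HD_2$, and so on, choosing randomly only if the vectors are identical. A minimal sketch (ours, under that reading, repeating the helper functions introduced above for self-containment):

```python
import random
from math import ceil

def to_radix(pattern, r):
    padded = pattern + "0" * (ceil(len(pattern) / r) * r - len(pattern))
    return [int(padded[k:k + r], 2) for k in range(0, len(padded), r)]

def md(ti, tj):
    n = len(ti)
    return [sum(a != b for a, b in zip(to_radix(ti, r), to_radix(tj, r)))
            for r in range(1, n + 1)]

def select_next(ti, candidates):
    """Pick the candidate with the lexicographically largest MD vector; ties broken randomly."""
    scores = {c: md(ti, c) for c in candidates}
    best = max(scores.values())
    return random.choice([c for c, s in scores.items() if s == best])

print(select_next("01100", ["01011", "10000", "11001"]))  # -> '11001'
```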
Other strategies for generating controlled random tests are possible, differing from the given $MD(T_i, T_j)$ application rule for the new dissimilarity measure. For example, instead of selecting the next test pattern based on a single component of the measure, one can use an integral measure of dissimilarity, $MD_{\mathrm{Total}}(T_i, T_j)$, defined as the arithmetic sum of its components $HD_1 + HD_2 + \cdots + HD_n$:
$$MD_{\mathrm{Total}}(T_i, T_j) = \sum_{r=1}^{n} HD_r(T_i, T_j). \qquad (5)$$
Table 6 presents the results of calculations, based on Equation (1), of the components $HD_1, HD_2, HD_3, \ldots, HD_8$ of the dissimilarity measure $MD(T_i, T_j)$ for the binary pattern $T_i = 00000000$ and for four test pattern candidates $T_j$: 11110000, 00110011, 11100010, and 10010101. The last column of Table 6 contains the value of the integral measure $MD_{\mathrm{Total}}(T_i, T_j)$ for all four candidate patterns $T_j$.
As can be seen from Table 6, according to both criteria, namely, the $MD(T_i, T_j)$ application rule and its integral value $MD_{\mathrm{Total}}(T_i, T_j)$, the pattern $T_j = 10010101$ will be selected as the next test pattern.
An analysis of the data presented in Table 6 shows that as $r$ increases, the significance of the $HD_r$ component decreases considerably. This can be explained by the fact that for $r \geq \lceil n/2 \rceil$ (see Property 3 of $MD(T_i, T_j)$), all $HD_r$ components, except for the last one, $HD_n$, take only three possible values: 0 if $T_i = T_j$, and either 1 or 2 if $T_i \neq T_j$.
The given measure of dissimilarity $MD(T_i, T_j)$ demonstrates its effectiveness in generating controlled random tests. It enables the selection of an optimal pattern $T_j$ from a set of candidates that share the same Hamming distance from the previously included test pattern $T_i$. However, its application is associated with the same drawbacks as classical approaches, requiring significant computational costs. Most notably, it necessitates the determination of dissimilarity measures between candidate test patterns and previously selected test patterns.

4. Controlled Random Test Generation with the Given Hamming Distance

The significant computational complexity of generating controlled random tests has led to the development of methods for constructing such tests that do not require selecting the next test pattern from a set of possible candidates. The core idea behind these methods is to use a small number of test patterns that are maximally distant from each other in terms of the Hamming distance while avoiding the computationally expensive process of candidate selection and enumeration.
As noted in previous sections, there are approaches for constructing controlled random tests with a small number of test patterns based on formal procedures that eliminate computational costs, such as MMHD($q$) tests and OCRTs [38]. The key characteristic of such tests is the relationship between the maximum–minimum Hamming distance, $\max\_\min HD(T_i, T_j)$, and the number of test patterns, $q$. Increasing the required minimum Hamming distance, $\min HD(T_i, T_j)$, effectively maximizing it for the generated test, results in a reduction in the number of test patterns, $q$. Unfortunately, a simultaneous increase in both parameters, namely the required $\min HD(T_i, T_j)$ and the number of test patterns $q$, is not possible.
As an alternative to existing approaches, we propose a method based on increasing the number of test patterns $q$ while maintaining the value of $\min HD(T_i, T_j)$ at a moderate level. The result of implementing the proposed approach is a controlled random test consisting of binary patterns $T_i = t_{i,0} t_{i,1} \ldots t_{i,n-1}$, where $t_{i,l} \in \{0, 1\}$ for $l \in \{0, 1, \ldots, n-1\}$, and where $\min HD(T_i, T_j)$ takes given values for all $i \neq j$, $i, j \in \{0, 1, \ldots, q-1\}$. The main feature of the proposed approach is the use of a new measure of dissimilarity, $MD(T_i, T_j)$ (see Definition 3), introduced by the authors, which is defined for an arbitrary alphabet of test patterns. This measure allows for the estimation of the $n$ components $HD_1, HD_2, \ldots, HD_n$ that quantify the dissimilarity between two arbitrary binary patterns $T_i = t_{i,0} t_{i,1} \ldots t_{i,n-1}$ and $T_j = t_{j,0} t_{j,1} \ldots t_{j,n-1}$. Property 4 of this measure states that the components are related according to the inequality $HD_1 \geq HD_2 \geq HD_3 \geq \cdots \geq HD_n$, where $HD_r = HD_r[T_i^{(r)}, T_j^{(r)}]$. According to Definition 2, the patterns $T_i^{(r)}$ and $T_j^{(r)}$ represent the binary patterns $T_i$ and $T_j$ in a base-$2^r$ numeral system consisting of $2^r$ distinct characters.
Based on Property 4 of the new measure of dissimilarity $MD(T_i, T_j)$, we formulate a statement that serves as the foundation for generating controlled random tests with a small number $q$ of test patterns while maintaining a given $\min HD(T_i, T_j)$ value.
Statement 1.
A controlled random test consisting of $q = 2^r$ binary patterns, where $r \in \{1, 2, \ldots, n\}$ is the minimum value of $r$ for which $HD_r[T_i^{(r)}, T_j^{(r)}] = \max HD_r[T_i^{(r)}, T_j^{(r)}]$ for all $i \neq j \in \{0, 1, \ldots, q-1\}$ and $n \bmod r = 0$, has $\min HD(T_i, T_j) = n/r$.
The limited number of test patterns, $q = 2^r$, is determined by the restricted number of characters in the alphabet, which is also equal to $2^r$, in which the test patterns $T_i^{(r)} = t_{i,0}^{(r)} t_{i,1}^{(r)} \ldots t_{i,n/r-1}^{(r)}$ and $T_j^{(r)} = t_{j,0}^{(r)} t_{j,1}^{(r)} \ldots t_{j,n/r-1}^{(r)}$ are represented. Only in this case can the characters at the same positions in all $q$ test patterns assume different values without repetition. This is the necessary condition for achieving the maximum value $\max HD_r[T_i^{(r)}, T_j^{(r)}]$ of the Hamming distance for all pairs of test patterns $T_i^{(r)}$ and $T_j^{(r)}$, where $i \neq j \in \{0, 1, \ldots, q-1\}$. To illustrate the meaning of this statement, let us consider the following example of a controlled random test.
Example 2.
In the case of $n = 6$, the controlled random test consisting of $q = 4$ patterns has the following form in the binary ($r = 1$), quaternary ($r = 2$), and octal ($r = 3$) number systems (see Table 7).
As can be seen from Table 7, there are no repeating characters in any digit position of the quaternary and octal representations of the test patterns. This indicates that, in both cases, the Hamming distance between the test patterns, according to Equation (1), takes its maximum values. Indeed, for any two patterns $T_i$ and $T_j$ in the test, $HD_2[T_i^{(2)}, T_j^{(2)}] = \max HD_2[T_i^{(2)}, T_j^{(2)}] = n/2 = 3$, as well as $HD_3[T_i^{(3)}, T_j^{(3)}] = \max HD_3[T_i^{(3)}, T_j^{(3)}] = n/3 = 2$. Moreover, in the quaternary case, all four characters (0, 1, 2, and 3) are used in each digit of the test patterns without repetition.
Following the above statement, we can conclude that a test consisting of $q = 2^2$ binary patterns with a minimum value of $r = 2$, for which $HD_2[T_i^{(2)}, T_j^{(2)}] = \max HD_2[T_i^{(2)}, T_j^{(2)}]$ for all $i \neq j \in \{0, 1, 2, 3\}$, satisfies the condition $HD(T_i, T_j) \geq \min HD(T_i, T_j) = n/r = 6/2 = 3$. Indeed, as can be observed, $HD_1[T_0^{(1)}, T_1^{(1)}] = HD_1[T_0^{(1)}, T_2^{(1)}] = HD_1[T_1^{(1)}, T_3^{(1)}] = HD_1[T_2^{(1)}, T_3^{(1)}] = 3$ and $HD_1[T_0^{(1)}, T_3^{(1)}] = HD_1[T_1^{(1)}, T_2^{(1)}] = 6$. All values of $HD(T_i, T_j)$ are greater than or equal to 3, which confirms that the condition of Statement 1 is fulfilled.
Based on the statement, we propose a formal procedure for constructing controlled random tests with $q = 2^r$ binary patterns and a given value of $\min HD(T_i, T_j) \geq \lfloor n/r \rfloor$. The possible values of $\min HD(T_i, T_j)$ depend on the number $n$ of bits in the binary test patterns $T_i$ and $T_j$. For example, for $n = 16$, the possible test configurations with a given value of $\min HD(T_i, T_j) \geq \lfloor n/r \rfloor$ and the number $q$ of test patterns are presented in Table 8.
As can be seen from Table 8, the fixed value $n$ of the test pattern bit length determines the possible values of $\min HD(T_i, T_j)$ for which a test can be constructed based on the statement. Naturally, the most interesting cases are those where $\min HD(T_i, T_j)$ attains acceptably large values, which correspond to the smallest values of $r$.
The algorithm for generating binary controlled random tests with a given Hamming distance consists of the steps outlined in Algorithm 1. An extension of this algorithm can involve selecting not necessarily consecutive $r$ bits of the patterns but any arbitrary $r$ out of $n$ bits to specify the binary code of the characters. The only limitation is the requirement to select non-overlapping blocks of $r$ bits.
Algorithm 1 Generation of Binary Controlled Random Tests with a Given Hamming Distance
Input data: the size $n$ of the test patterns (in bits) and the required value of $Rec\_\min HD(T_i, T_j)$, which denotes the minimum Hamming distance between any two test patterns.
1. From the inequality
   $$Rec\_\min HD(T_i, T_j) \leq \lfloor n/r \rfloor,$$
   determine the largest possible value of $r \in \{1, 2, \ldots, n\}$. Based on this condition, compute the number of test patterns as $q = 2^r$. The minimum Hamming distance between any two patterns will then satisfy
   $$\min HD(T_i, T_j) \geq \lfloor n/r \rfloor, \quad \text{for all } i \neq j, \; i, j \in \{0, 1, \ldots, 2^r - 1\}.$$
2. Assign the first $r$ bits of each test pattern $T_0, T_1, \ldots, T_{q-1}$ to distinct binary codes selected randomly from an alphabet of $2^r$ possible $r$-bit combinations. Each code is assigned without repetition, starting from $T_0$ to $T_{q-1}$. As a result, each pattern contains in its first $r$ bits a unique binary combination corresponding to one of the $2^r$ possible codes.
3. Repeat step 2 for the next $\lfloor n/r \rfloor - 1$ blocks of $r$ bits. In each iteration, assign the next $r$ bits of all test patterns (e.g., bits $r$ to $2r - 1$, $2r$ to $3r - 1$, etc.) to new sets of unique binary codes of length $r$, again selected randomly without repetition.
4. If the pattern length $n$ is not divisible by $r$, i.e., $n - \lfloor n/r \rfloor \cdot r > 0$, then assign the remaining bits randomly for all test patterns.
The described algorithm generates test patterns with a guaranteed minimum Hamming distance between any pair of test patterns. By partitioning each test pattern into independent r-bit blocks and ensuring that each block contains a unique binary code selected from a maximally distinct set, the method guarantees that the resulting test set is both compact and diverse. The final step introduces randomness in the unused bit positions, further enhancing the variability of the test without violating the distance constraint. It should be emphasized that the guaranteed minimum Hamming distance is determined solely by the disjoint allocation of unique codes in the complete r-bit blocks. When the pattern length n is not divisible by r, the remaining bits are filled by random padding. This step only affects the residual part of the patterns and does not reduce the guaranteed minimum Hamming distance between them. On the contrary, it adds additional variability to the generated tests while fully preserving the distance constraint.
The computational complexity of Algorithm 1 is $O(q \cdot n)$, where $q = 2^r$ denotes the number of generated patterns and $n$ is the pattern length. This is significantly more efficient than classical candidate-selection approaches, which usually require $O(q^2 \cdot n)$ operations due to pairwise comparisons.
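A minimal sketch of Algorithm 1 (ours, not the authors' reference implementation) is given below. Because every complete $r$-bit block receives all $2^r$ distinct codes, any two generated patterns differ in every block, which yields the guaranteed minimum distance of $\lfloor n/r \rfloor$:

```python
import random

def generate_crt(n, rec_min_hd):
    """Generate q = 2**r binary patterns of length n with pairwise HD >= rec_min_hd."""
    # Step 1: largest r such that rec_min_hd <= floor(n/r).
    r = max(rr for rr in range(1, n + 1) if rec_min_hd <= n // rr)
    q = 2 ** r
    patterns = [""] * q
    # Steps 2-3: for every complete r-bit block, distribute the 2**r distinct
    # codes over the q patterns in a random order (no repetition within a block).
    for _ in range(n // r):
        codes = [format(c, f"0{r}b") for c in range(q)]
        random.shuffle(codes)
        patterns = [p + c for p, c in zip(patterns, codes)]
    # Step 4: fill the remaining n mod r bits randomly.
    tail = n - (n // r) * r
    return [p + "".join(random.choice("01") for _ in range(tail)) for p in patterns]

def min_pairwise_hd(patterns):
    return min(sum(a != b for a, b in zip(p, t))
               for i, p in enumerate(patterns) for t in patterns[i + 1:])

test = generate_crt(7, 3)             # Example 3: n = 7, Rec_minHD = 3 -> r = 2, q = 4
print(test, min_pairwise_hd(test))    # the minimum pairwise distance is at least 3
```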
The following example demonstrates the operation of Algorithm 1 for a specific input configuration, highlighting the structure of the generated patterns and validating the achieved minimum Hamming distance.
Example 3.
Let the size $n$ of the test patterns be 7, and let the required value be $Rec\_\min HD(T_i, T_j) = 3$.
1.
Based on the inequality $Rec\_\min HD(T_i, T_j) \leq \lfloor n/r \rfloor$, we obtain $r = 2$. This is the largest value of $r$ for which the inequality holds: $3 \leq \lfloor 7/2 \rfloor = 3$. Therefore, the generated test $T$ will consist of $2^r = 4$ patterns, $T_0, T_1, T_2, T_3$, with a guaranteed minimum Hamming distance $\min HD(T_i, T_j) \geq 3$.
2.
The first two bits $t_{i,0}$ and $t_{i,1}$ of the test patterns are assigned binary values corresponding to four distinct characters of the quaternary alphabet: 00, 01, 10, and 11. These binary codes are assigned randomly, without repetition, starting from $T_0$ to $T_3$. As a result, each test pattern contains a unique 2-bit prefix: 10, 11, 00, and 01.
3.
Step 2 is repeated $\lfloor 7/2 \rfloor - 1 = 2$ times for the next two $r$-bit blocks, i.e., $(t_{i,2}, t_{i,3})$ and $(t_{i,4}, t_{i,5})$. For each block, values are assigned using random permutations of the quaternary alphabet.
4.
The remaining bit $t_{i,6}$, since $7 - \lfloor 7/2 \rfloor \times 2 = 1$, is assigned randomly for all patterns.
  • The resulting controlled random test is presented in Table 9.
  • All pairwise Hamming distances between patterns satisfy the required minimum value:
$$HD(T_0, T_1) = 5, \quad HD(T_0, T_2) = 3, \quad HD(T_0, T_3) = 6, \quad HD(T_1, T_2) = 6, \quad HD(T_1, T_3) = 3, \quad HD(T_2, T_3) = 5.$$
Since all values are greater than or equal to 3, the condition $\min HD(T_i, T_j) \geq Rec\_\min HD(T_i, T_j) = 3$ is fulfilled.
It should be noted that the proposed algorithm was intentionally formulated in the binary domain, since it directly corresponds to the digital world at the low level of hardware implementation, where the binary alphabet is natural and fundamental. Although the theoretical framework allows for the use of higher-radix alphabets and non-binary symbols, our focus on binary patterns reflects the practical context of memory testing and built-in self-test environments. Extending the method to real non-binary alphabets remains an interesting direction for future research.

5. Experimental Investigation

This section presents a comparative analysis of the effectiveness of two types of tests: controlled random tests with a given Hamming distance (CRTs), generated using the proposed algorithm, and standard random patterns. The comparison is conducted in the context of their ability to detect multicell faults, particularly Pattern-Sensitive Faults (PSFs) occurring in RAM. Due to the size of the test patterns and the vast number of their permutations, the comparisons are based on the average values obtained from the generated test collections.
The first test collection consists of patterns generated using the proposed algorithm, based on the controlled random test generation method described earlier. Using this approach, a controlled random test of length 1024 bits was generated with $\min HD(T_i, T_j) = 256$. For the input parameters $n = 1024$ and $Rec\_\min HD(T_i, T_j) = 256$, the value of $r$ was determined to be 4, resulting in the generation of $2^r = 16$ patterns per test. The average value of the metric $MD_{\mathrm{Total}}(T_i, T_j)$ (Equation (5)) for these patterns is 287,092, with a standard deviation of 34.54.
In contrast, the second test collection consists of 16 purely random patterns of the same size. The average value of $MD_{\mathrm{Total}}(T_i, T_j)$ for these test sets is 273,815, with a standard deviation of 871.
The basic statistical parameters of the generated test collections are summarized in Table 10.
The statistical parameters confirm the reliability of the generated test collections, as evidenced by the low relative errors (E_rel) and consistent coefficient of variation (CV) values.
Similar test collections with 1024-bit patterns and comparable statistical parameters were generated for r = 2, 3, and 5; they are used in the analyses that follow.
In Table 11, the detailed results for the individual values HD_1, HD_2, …, HD_8 and for MD_Total are compared for r = 4.
The results presented in Table 11 highlight the comparative performance of the analyzed CRT and standard random tests across the individual HD values (HD_1 to HD_8) and the overall metric MD_Total(T_i, T_j) for r = 4. On average, the CRT outperforms random tests across all tested HD values, with percentage differences ranging from 0.42% to 7.27%. The highest difference (7.27%) was observed for HD_4, which aligns with the parameter r = 4 used in generating the CRTs. This correlation underscores the effectiveness of the proposed algorithm in targeting specific test conditions based on the selected r parameter. Although the percentage differences in Table 11 may appear moderate, they are systematic across all evaluated parameters. More importantly, the subsequent experiments (Table 12 and Figure 1) confirm that these differences translate into noticeable improvements in memory fault coverage.
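For reference, the per-alphabet distances HD_w reported above are obtained by grouping the binary patterns into w-bit symbols and counting the differing symbols. The following minimal Python sketch (illustrative only; the function name is ours) reproduces the values of Table 3 for w = 1, 2, 4, 8.

def hd_w(a, b, w):
    # Hamming distance after mapping the binary strings a and b into
    # w-bit symbols: the number of symbol positions in which they differ.
    assert len(a) == len(b)
    return sum(a[i:i + w] != b[i:i + w] for i in range(0, len(a), w))

# Table 3 (n = 8): binary, quaternary, hexadecimal, and 256-ary views.
ti, tj = "01100011", "01011011"
print([hd_w(ti, tj, w) for w in (1, 2, 4, 8)])   # -> [3, 2, 2, 1]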
In the next set of experiments, conducted in a simulation environment, the focus was on evaluating the effectiveness of test patterns generated using the proposed algorithm in detecting multicell RAM faults. Multicell memory faults, such as Pattern-Sensitive Faults (PSFs), involve dependencies between any k out of N memory cells (N being the memory size). These faults are triggered when specific binary patterns are present in the related cells or when particular transitions occur based on predefined conditions. Consequently, effective detection of such faults requires generating the largest possible number of binary patterns during testing. These patterns activate the faults and enable their detection.
The simulations analyzed groups consisting of k memory cells for k = 2, 3, …, 10. For each group, up to 2^k distinct k-bit binary patterns (i.e., values ranging from 0 to 2^k − 1) could potentially appear. The objective was to determine the average number of unique k-bit patterns generated during a march test, with the memory being initialized in each iteration using test patterns from the CRT with a given Hamming distance. The obtained results were compared with the results for random tests presented in Table 8.12 in [38].
Each simulation-based test consisted of a specific number of iterations, determined by the value of r: 4 iterations for r = 2 , 8 iterations for r = 3 , and 16 iterations for r = 4 . During each iteration, the simulated memory was initialized with a given test pattern from the analyzed set, followed by the execution of a transparent version of the MATS+ memory test. Throughout the simulation, the memory model was monitored, and the number of unique k-bit binary patterns observed in individual groups of k-cells was recorded. The results are presented in Table 12.
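To make this procedure concrete, the following simplified Python model illustrates one such measurement for a single k-cell group. It is only an approximation of the simulator based on [38]: the memory size, the use of random backgrounds in the example call, and the reduction of the transparent MATS+ run to two complementing march elements are assumptions made for illustration.

import random

def observed_patterns(backgrounds, group):
    # Simplified model: for every background the memory is initialized
    # with the pattern, and a transparent MATS+-like test then complements
    # every cell once in ascending and once in descending address order;
    # after each write the current content of the monitored k-cell group
    # is recorded.
    n = len(backgrounds[0])
    seen = set()
    for bg in backgrounds:
        mem = list(bg)
        seen.add(tuple(mem[c] for c in group))
        for order in (range(n), reversed(range(n))):
            for addr in order:
                mem[addr] ^= 1                     # transparent write: complement
                seen.add(tuple(mem[c] for c in group))
    return seen

# Hypothetical sizes: four 64-bit backgrounds (as with r = 2) and one
# randomly chosen group of k = 3 cells.
n, k = 64, 3
backgrounds = [[random.randint(0, 1) for _ in range(n)] for _ in range(4)]
group = random.sample(range(n), k)
seen = observed_patterns(backgrounds, group)
print(f"{len(seen)} of {2 ** k} possible {k}-bit patterns observed")

Replacing the random backgrounds with patterns produced by the proposed generator roughly corresponds to the CRT rows reported in Table 12.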
Based on the results presented in Table 12, it can be concluded that CRTs consistently achieve better results than random tests in most cases. The difference is most noticeable for lower values of r and k, where the CRT outperforms random tests by several percentage points. For instance, for r = 2 and k = 3, the CRT achieves a fault coverage of 97.27%, while random tests reach 93.74%. Fault coverage decreases as the value of k increases. This is expected, since the number of possible binary combinations 2^k grows exponentially, making full coverage harder to achieve. However, the results in Table 12 indicate that CRTs perform slightly better for larger k compared to random tests, highlighting the greater ability of a CRT to generate diverse test patterns.
In the final experiment, the average number of unique k-bit test patterns generated in arbitrary groups of k out of N memory cells using the proposed algorithm was compared with that obtained using traditional CRT generation methods, including native antirandom tests [13], concatenated antirandom tests [13], and STPG [40]. The comparison was carried out for fault groups of size k = 3 , using tests generated for r = 3 (i.e., 8 iterations). During the simulation, the number of distinct k-bit patterns generated in each iteration was recorded to assess the performance of the proposed method relative to the standard techniques. The outcomes of this analysis are presented in Figure 1, which illustrates the differences in the number of generated k-bit patterns across the tested methods.
The results show that the CRT method with a given Hamming distance consistently outperforms other test generation methods in terms of fault coverage, with one exception in the second iteration, where it achieves a slightly lower result (76.13%) compared to native antirandom (77.77%). However, starting from the third iteration, the CRT surpasses all other methods, demonstrating a faster increase in fault coverage (e.g., between iterations 2 and 3, the CRT rises from 76.13% to 90.05%). In the later iterations (7 and 8), the CRT approaches near-complete fault coverage, reaching 99.92% and 99.99%, respectively. Although the differences between the CRT and other methods diminish with a higher number of iterations, the CRT consistently demonstrates superior effectiveness, confirming its ability to generate diverse and efficient test patterns even in advanced stages of testing.
In summary, the experimental evaluation demonstrates that the proposed method consistently provides superior results compared to both purely random tests and classical controlled random tests. The improvements are systematic across all examined cases, particularly in terms of fault coverage and test diversity, while being achieved with significantly reduced computational effort.

6. Conclusions

This paper presented a method for generating controlled random tests with a given Hamming distance, aimed at improving the diversity and effectiveness of test sets used in computing systems, particularly memory devices. A new dissimilarity measure was introduced, based on Hamming distances calculated for binary patterns represented in various numeral systems. This extended measure allows for a more detailed assessment of differences between patterns compared to the classical Hamming distance alone.
We proposed an algorithm that generates test sets with a predefined minimum Hamming distance, without selecting patterns from large pools of candidates. This approach reduces computational effort while ensuring sufficient diversity in the generated patterns.
The effectiveness of the proposed method was evaluated through a series of comparative experiments. The results showed that the generated tests outperform not only purely random test sets but also traditional controlled random tests (CRTs) in several aspects. Specifically, tests created using the proposed method achieved higher total dissimilarity values and better coverage of multicell memory faults, particularly for lower numbers of iterations and smaller fault group sizes. Although some improvements observed in the experiments may appear moderate, they are systematic across all evaluated cases. More importantly, the obtained results demonstrate that these differences translate into tangible practical benefits, as the proposed method consistently achieves higher fault coverage than random and classical controlled random testing, especially in scenarios with smaller fault groups and lower iteration counts.
These results suggest that the method may be a practical alternative in contexts where test diversity and efficiency are important. Future work may include extending the approach to more complex fault models or exploring its use in different types of systems.

Author Contributions

Conceptualization, V.N.Y.; methodology, V.N.Y.; software, I.M.; validation, V.N.Y., I.M. and M.K.; formal analysis, V.N.Y. and I.M.; investigation, V.N.Y. and I.M.; writing—original draft preparation, V.N.Y. and I.M.; writing—review and editing, V.N.Y., I.M. and M.K. All authors have read and agreed to the published version of the manuscript.

Funding

This paper was supported by grant WZ/WI-ITI/3/2023 from the Faculty of Computer Science at Bialystok University of Technology, Ministry of Science and Higher Education, Poland.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Malaiya, Y.K.; Yang, S. The Coverage Problem for Random Testing. In Proceedings of the IEEE International Test Conference (ITC ’84), Philadelphia, PA, USA, 16–18 October 1984; pp. 237–245.
2. Duran, J.W.; Ntafos, S.C. An Evaluation of Random Testing. IEEE Trans. Softw. Eng. 1984, 10, 438–444.
3. Arcuri, A.; Iqbal, M.Z.; Briand, L. Random Testing: Theoretical Results and Practical Implications. IEEE Trans. Softw. Eng. 2012, 38, 258–277.
4. Renfer, G. Automatic program testing. In Proceedings of the 3rd Conference of the Computing and Data Processing Society of Canada, Toronto, ON, USA, 2–3 June 1962.
5. Anand, S.; Burke, E.K.; Chen, T.Y.; Clark, J.; Cohen, M.B.; Grieskamp, W.; Harman, M.; Harrold, M.J.; Mcminn, P. An Orchestrated Survey of Methodologies for Automated Software Test Case Generation. J. Syst. Softw. 2013, 86, 1978–2001.
6. Yarmolik, S.V.; Yarmolik, V.N. Controlled random tests. Autom. Remote Control 2012, 73, 1704–1714.
7. Huang, R.; Sun, W.; Xu, Y.; Chen, H.; Towey, D.; Xia, X. A survey on adaptive random testing. IEEE Trans. Softw. Eng. 2019, 47, 2052–2083.
8. Bardell, P.H.; McAnney, W.H.; Savir, J. Built-In Test for VLSI: Pseudorandom Techniques; John Wiley & Sons: New York, NY, USA, 1987.
9. Das, D.; Karpovsky, M. Exhaustive and Near-Exhaustive Memory Testing Techniques and their BIST Implementations. J. Electron. Test. 1997, 10, 215–229.
10. McCluskey, E.J. Verification Testing—A Pseudoexhaustive Test Technique. IEEE Trans. Comput. 1984, C-33, 541–546.
11. Fujiwara, H. Logic Testing and Design for Testability; MIT Press: Cambridge, MA, USA, 1985.
12. Karpovsky, M.G.; van de Goor, A.J.; Yarmolik, V.N. Pseudo-Exhaustive Word-Oriented DRAM Testing. In Proceedings of the European Design and Test Conference (ED&TC), Paris, France, 6–9 March 1995; pp. 126–132.
13. Malaiya, Y.K. Antirandom Testing: Getting the Most out of Black-Box Testing. In Proceedings of the International Symposium on Software Reliability Engineering (ISSRE), Toulouse, France, 24–27 October 1995; pp. 86–95.
14. Wu, S.H.; Malaiya, Y.K.; Jayasumana, A.P. Antirandom vs. pseudorandom testing. In Proceedings of the IEEE International Conference on Computer Design: VLSI in Computers and Processors, Austin, TX, USA, 5–7 October 1998; p. 221.
15. Wu, S.H.; Jandhyala, S.; Malaiya, Y.K.; Jayasumana, A.P. Antirandom Testing: A Distance-Based Approach. VLSI Des. 2008, 2008, 1–9.
16. Yin, H. Antirandom Test Patterns Generation Tool; Technical Report CS-98-101; Computer Science Department, Colorado State University: Fort Collins, CO, USA, Fall 1996.
17. von Mayrhauser, A.; Bai, A.; Chen, T.; Anderson, C.; Hajjar, A. Fast Antirandom (FAR) Test Generation. In Proceedings of the 3rd IEEE International Symposium on High-Assurance Systems Engineering (HASE ’98), Washington, DC, USA, 13–14 November 1998; pp. 262–269.
18. Sahari, M.S.; A’ain, A.K.; Grout, I.A. Scalable Antirandom Testing (SAT). Int. J. Innov. Sci. Mod. Eng. 2015, 3, 33–35.
19. Bodean, G.; Bodean, D.; Labunetz, A. New Schemes for Self-Testing RAM. In Proceedings of the Design, Automation and Test in Europe (DATE), Munich, Germany, 7–11 March 2005; Volume 2, pp. 858–859.
20. Chen, T.Y.; Leung, H.; Mak, I.K. Adaptive Random Testing. In Proceedings of the 9th Asian Computing Science Conference, Chiang Mai, Thailand, 8–10 December 2004; pp. 320–329.
21. Zhou, Z.Q. Using Coverage Information to Guide Test Case Selection in Adaptive Random Testing. In Proceedings of the 34th Annual IEEE Computer Software and Applications Conference Workshops (COMPSACW), Seoul, Republic of Korea, 19–23 July 2010; pp. 208–213.
22. Jiang, B.; Zhang, Z.; Chan, W.K.; Tse, T.H. Adaptive Random Test Case Prioritization. In Proceedings of the 24th IEEE/ACM International Conference on Automated Software Engineering (ASE), Auckland, New Zealand, 16–20 November 2009; pp. 233–244.
23. Chan, K.P.; Chen, T.Y.; Towey, D. Good Random Testing. In Proceedings of the 9th Ada-Europe International Conference on Reliable Software Technologies, Palma de Mallorca, Spain, 14–18 June 2004; Llamosí, A., Strohmeier, A., Eds.; pp. 200–212.
24. Chan, K.P.; Chen, T.Y.; Towey, D. Restricted Random Testing. In Proceedings of the 7th European Conference on Software Quality, Helsinki, Finland, 9–13 June 2002; Kontio, J., Conradi, R., Eds.; pp. 321–330.
25. Chan, K.P.; Chen, T.Y.; Towey, D. Normalized Restricted Random Testing. In Proceedings of the 8th Ada-Europe International Conference on Reliable Software Technologies (Ada-Europe 2003), Toulouse, France, 16–20 June 2003; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2003; Volume 2655, pp. 368–381.
26. Xu, S.; Chen, J. Maximum Distance Testing. In Proceedings of the Asian Test Symposium, Guam, GU, USA, 18–20 November 2002; pp. 15–20.
27. Kuo, F. An Indepth Study of Mirror Adaptive Random Testing. In Proceedings of the Ninth International Conference on Quality Software (QSIC 2009), Jeju, Republic of Korea, 24–25 August 2009; Choi, B., Ed.; IEEE Computer Society: Washington, DC, USA, 2009; pp. 51–58.
28. Xu, S. Orderly Random Testing for Both Hardware and Software. In Proceedings of the 14th IEEE Pacific Rim International Symposium on Dependable Computing, Washington, DC, USA, 15–17 December 2008; pp. 160–167.
29. Nikravan, E.; Parsa, S. Hybrid adaptive random testing. Int. J. Comput. Sci. Math. 2020, 11, 209–221.
30. Tappenden, A.; Miller, J. A Novel Evolutionary Approach for Adaptive Random Testing. IEEE Trans. Reliab. 2009, 58, 619–633.
31. Grindal, M.; Offutt, J.; Andler, S.F. Combination Testing Strategies—A Survey; Technical Report ISE-TR-04-05; George Mason University: Fairfax, VA, USA, 2004.
32. Chen, T.Y.; Kuo, F.C.; Merkel, R.G.; Tse, T.H. Adaptive Random Testing: The ART of test case diversity. J. Syst. Softw. 2010, 83, 60–66.
33. Feldt, R.; Poulding, S.; Clark, D.; Yoo, S. Test Set Diameter: Quantifying the Diversity of Sets of Test Cases. In Proceedings of the IEEE International Conference on Software Testing, Verification and Validation (ICST), Chicago, IL, USA, 11–15 April 2016; pp. 223–233.
34. Arcuri, A.; Briand, L.C. Adaptive random testing: An illusion of effectiveness? In Proceedings of the 20th International Symposium on Software Testing and Analysis (ISSTA 2011), Toronto, ON, Canada, 17–21 July 2011; pp. 265–275.
35. Hamming, R.W. Error detecting and error correcting codes. Bell Syst. Tech. J. 1950, 29, 147–160.
36. Peterson, W.W.; Weldon, E.J. Error-Correcting Codes, 2nd ed.; MIT Press: Cambridge, MA, USA, 1972.
37. Plotkin, M. Binary codes with specified minimum distance. IRE Trans. Inf. Theory 1960, 6, 445–450.
38. Mrozek, I. Multi-Run Memory Tests for Pattern Sensitive Faults; Springer International Publishing: Cham, Switzerland, 2019.
39. Mrozek, I.; Yarmolik, V.N. Optimal Controlled Random Tests. In Proceedings of the Computer Information Systems and Industrial Management: 16th IFIP TC8 International Conference (CISIM 2017), Białystok, Poland, 16–18 June 2017; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2017; Volume 10244, pp. 27–38.
40. Yiunn, D.B.Y.; Bin A’ain, A.K.; Khor Ghee, J. Scalable test pattern generation (STPG). In Proceedings of the IEEE Symposium on Industrial Electronics Applications (ISIEA ’10), Penang, Malaysia, 3–5 October 2010; pp. 433–435.
Figure 1. Fault coverage comparison for different test generation methods for k = 3 and r = 3 (8 iterations).
Table 1. Examples of MMHD(4) for n = 3 and OCRT with n = 4 tests.

MMHD(4)                    OCRT
Pattern   1  2  3          Pattern   1  2  3  4
T_0       0  0  0          T_0       0  0  0  0
T_1       0  1  1          T_1       1  1  1  1
T_2       1  1  0          T_2       0  0  1  1
T_3       1  0  1          T_3       1  1  0  0
                           T_4       0  1  0  1
                           T_5       1  0  1  0
Table 2. MMHD(4) test extension examples.

Original MMHD(4)   Property 1 (Bit Permutation)   Property 2 (Bit Inversion)
0 0 0              0 0 0                          1 1 1
0 1 1              1 0 1                          0 1 0
1 1 0              1 1 0                          0 0 1
1 0 1              0 1 1                          1 0 0

Original MMHD(4)   Property 3 (Repetition, u = 3)   Property 4 (Bit Scaling, s = 2)
0 0 0              0 0 0 0 0 0 0 0 0                0 0 0 0 0 0
0 1 1              0 1 1 0 1 1 0 1 1                0 0 1 1 1 1
1 1 0              1 1 0 1 1 0 1 1 0                1 1 1 1 0 0
1 0 1              1 0 1 1 0 1 1 0 1                1 1 0 0 1 1
Table 3. Hamming distance computation in multiple alphabets for n = 8.

                 w = 0                  w = 1             w = 2           w = 3
T_i              T_i^(1) = 01100011     T_i^(2) = 1203    T_i^(4) = 63    T_i^(8) = c = (99)_256
T_j              T_j^(1) = 01011011     T_j^(2) = 1123    T_j^(4) = 5B    T_j^(8) = [ = (91)_256
HD(T_i, T_j)     3                      2                 2               1
Table 4. Example of the Hamming distance calculation.
T i ( 1 ) T i ( 2 ) T i ( 3 ) T i ( 4 ) T i ( 5 )
T i 0 1 1 0 001 10 00011 0000110 000001100
T j 0 1 0 1 101 01 10010 1100101 100001011
H D ( T i , T j ) 32221
Table 5. Example of the dissimilarity measure MD(T_i, T_j) calculation.
T i ( 1 ) T i ( 2 ) T i ( 3 ) T i ( 4 ) T i ( 5 )
T i 0 1 1 0 001 10 00011 0000110 000001100
T j 1 0 0 0 010 00 00100 0001000 000010000
H D ( T i , T j ) 32111
T i 0 1 1 0 001 10 00011 0000110 000001100
T j 1 1 0 0 111 00 10110 0101100 100011001
H D ( T i , T j ) 33221
Table 6. Numerical values for dissimilarity measure MD(T_i, T_j).

T_j        HD_1  HD_2  HD_3  HD_4  HD_5  HD_6  HD_7  HD_8  MD_Total
11110000   4     2     2     1     1     1     1     1     13
00110011   4     2     3     2     2     2     2     1     18
11100010   4     3     2     2     2     2     1     1     17
10010101   4     4     3     2     2     2     2     1     20
Table 7. Binary controlled random test for n = 6 and its representation in quaternary and octal notation.

T_i    Binary t_{i,0}^(1) … t_{i,5}^(1)    Quaternary t_{i,0}^(2) t_{i,1}^(2) t_{i,2}^(2)    Octal t_{i,0}^(3) t_{i,1}^(3)
T_0    0 0 1 1 0 1                         0 3 1                                             1 5
T_1    0 1 1 0 0 0                         1 2 0                                             3 0
T_2    1 0 0 1 1 1                         2 1 3                                             4 7
T_3    1 1 0 0 1 0                         3 0 2                                             6 2
Table 8. Dependence between the number of bits n = 16 of binary patterns and minHD(T_i, T_j).

r                            2   3   4   5   6   7   8   9   10    15      16
minHD(T_i, T_j) = ⌊16/r⌋     8   5   4   3   2   2   2   1   1     1       1
q                            4   8   16  32  64  128 256 512 1024  32,768  65,536
Table 9. Controlled random test with minHD(T_i, T_j) = 3.

T      t_{i,0}  t_{i,1}  t_{i,2}  t_{i,3}  t_{i,4}  t_{i,5}  t_{i,6}
T_0    1        0        0        0        0        1        1
T_1    1        1        1        1        0        0        0
T_2    0        0        0        1        1        1        1
T_3    0        1        1        0        1        0        0
Table 10. Comparison of statistical parameters between CRT and random tests.

Parameter                        CRT          Random Tests
Number of tests in collection    1000         2000
Patterns per test                16           16
Bits per pattern                 1024         1024
Average MD_Total(T_i, T_j)       261,052.62   248,361.75
Standard deviation               34.54        455.55
Coefficient of variation (CV)    0.0132%      0.1834%
Relative error (E_rel)           0.00082%     0.00804%
Confidence level                 95%          95%
Table 11. Average results for HD_1, HD_2, …, HD_8 and MD_Total(T_i, T_j) for r = 4.

Test     HD_1    HD_2    HD_3    HD_4    HD_5    HD_6    HD_7    HD_8    MD_Total(T_i, T_j)
CRT      65,536  49,152  37,711  30,720  24,436  20,520  17,616  15,360  287,092
Random   61,316  45,978  35,612  28,637  23,824  20,211  17,477  15,295  274,354
% Diff   6.88%   6.90%   5.87%   7.27%   2.57%   1.53%   0.80%   0.42%   4.64%
Table 12. Fault coverage [%] comparison for random tests and CRTs with a given Hamming distance for different memory fault sizes k and different numbers of iterations (2^r).

k                      3       4       5       6       7       8       9       10
r = 2 (4 iterations)
random [38]            93.74   77.67   56.42   37.07   22.75   13.34   7.59    4.31
CRT with given HD_r    97.27   81.49   58.42   37.79   22.93   13.35   7.58    4.22
r = 3 (8 iterations)
random [38]            99.69   95.01   81.03   60.41   40.33   24.91   14.61   8.28
CRT with given HD_r    99.98   97.52   84.58   62.89   41.54   25.38   14.77   8.33
r = 4 (16 iterations)
random [38]            100.00  99.75   96.47   84.39   64.44   43.64   27.18   15.88
CRT with given HD_r    100.00  99.95   97.85   86.78   66.36   44.64   27.55   16.03