Article

Adaptive BIST for Concurrent On-Line Testing on Combinational Circuits

by Vasileios Chioktour * and Athanasios Kakarountas *
Department of Computer Science and Biomedical Informatics, University of Thessaly, 35131 Lamia, Greece
* Authors to whom correspondence should be addressed.
Electronics 2022, 11(19), 3193; https://doi.org/10.3390/electronics11193193
Submission received: 24 August 2022 / Revised: 21 September 2022 / Accepted: 26 September 2022 / Published: 5 October 2022
(This article belongs to the Special Issue VLSI Circuits & Systems Design)

Abstract:
Safety-critical systems embedding concurrent on-line testing techniques are vulnerable to design issues that degrade the totally self-checking (TSC) property, which can prove fatal to further operation (e.g., in space electronics or medical devices). In addition to exploring the degradation of the TSC property over time, a concurrent on-line testing architecture is offered that adapts to the input activity, addressing the absence of input values or the low frequency of their appearance (e.g., during sleep mode). During concurrent on-line testing, the inputs of the circuit under test (CUT) serve, at the same time, as its test vectors. The proposed architecture tolerates possible degradation of the terms that contribute to the calculation of the totally self-checking goal (TSCG(t)). An adaptive built-in self-test (BIST) unit is proposed that dynamically applies test vector subsets when permitted, based on the frequency of appearance of the input values. The clustering of the inputs is based on the k-means algorithm and, in combination with the ordering of the test vectors to minimize the subsets, results in partitioning the test procedure in a significantly shorter time. The comparison to other solutions used for concurrent on-line testing showed that the proposed adaptive BIST has significant advantages. It can cope with rare occurrences, or even no occurrence, of input values by enabling the BIST mechanism appropriately. The results showed that it may increase the TSCG(t) by up to almost 90% when applied during a low-power mode and presents better concurrent test latency (CTL) when assumptions regarding the availability of all input values and the probability of occurrence are not realistic.

1. Introduction

Safety-critical systems are found nowadays in various applications, including avionics, satellite electronics, transportation, energy plants, medical electronics and more. Special consideration should be given to those applications demanding low-power dissipation, extended autonomy and availability in terms of safety. Although several technologies have been proposed over time, there is a gap in current research targeting low-power dissipation when it comes to applications that also demand safe operation. The safe operation of such systems in hostile environments is achieved either by using rad-hard materials or by introducing redundant hardware. Thus, with respect to the required level of safety, a system that is characterized as safe may even present three times the initial area cost and power dissipation of a non-safe system. Especially when targeting portable (or autonomous) devices for safety-critical applications, the design requirements include hard restrictions on power dissipation, integration area and safety features [1,2]. This may partially be addressed by adopting design-for-testability (DfT) design flows. A variety of techniques to achieve DfT have been proposed in the past.
The first and most commonly used approach is the utilization of off-line BIST units. These units are used either after manufacturing to verify functionality or before operation, after powering up the system. They are considered mainly diagnostic tools, able to detect permanent errors due to faults belonging to a targeted fault model (e.g., the stuck-on or stuck-open fault model) and to verify operation after manufacturing. Their functionality is summarized as the application of test vectors to the inputs of the circuit under test (CUT) and the comparison of its outputs with a pre-calculated set of output values. A deviation from the expected output indicates a fault occurrence, which may be revealed only after the application of several test vectors. In the case where the system is checked successfully without the detection of a fault, the BIST unit is deactivated, and the system starts its operation without being interrupted by the BIST. More information on off-line BIST units and techniques can be found in [3,4].
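As a minimal illustration of the off-line BIST flow described above, the following Python sketch applies a set of stored test vectors and compares the responses against pre-calculated golden values; the names cut, test_vectors and golden_responses are illustrative assumptions rather than a specific BIST implementation.

```python
# Minimal sketch of the off-line BIST principle described above. The names
# cut(), test_vectors and golden_responses are illustrative assumptions.
def offline_bist(cut, test_vectors, golden_responses):
    """Apply every stored test vector and compare against the expected output.

    Returns (passed, index_of_first_failing_vector_or_None).
    """
    for i, (vector, expected) in enumerate(zip(test_vectors, golden_responses)):
        if cut(vector) != expected:   # deviation from the expected output: fault detected
            return False, i
    # No fault detected: the BIST unit is deactivated and normal operation starts.
    return True, None
```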
The second approach is on-line BIST, which is achieved using dedicated hardware that allows error detection in due time. On-line BIST is performed either after power-up of the system or during a periodic interruption of the system. During the periodic on-line test, the system is set into a test mode. This is achieved by storing the state of the system in memory and then applying test vectors (stored in, or generated by, the BIST unit) to the inputs of the system or its subsystems. Following the operation of the typical BIST, the output of the system under test is compared to the expected one, which is also stored in (or generated by) the BIST unit. The test mode is terminated either when no erroneous output is detected after a short period of time or when an erroneous output is reported. In the case that no fault is detected, the system is recovered to its previous state, before the test mode, by fetching the stored values from memory. On the contrary, when an erroneous output is detected, the BIST reports a code corresponding to the erroneous output and its fault origin, and then countermeasures are activated (such as system recovery). Usually, a system enters the test mode during idle operation, that is, when the microprocessor performs no operation, or during a low-power mode (which in many applications is similar to a sleep or hibernate mode). On-line testing is a good approach to embedding the self-checking (SC) property in a system without significant changes at the circuit level of abstraction.
A special category of on-line testing is concurrent on-line testing, which is performed in parallel with the normal operation of the system. This is achieved by exploiting the inputs of the system as test vectors. The methods still used in this field were introduced several decades ago and are based on the use of multi-channeled architectures of unreliable components [5] to continuously monitor the operation of the system. This introduces a significant penalty in integration area and power dissipation, making them affordable only for high-cost safety-critical applications. The design complexity is confined to the top level of abstraction, since the circuits themselves need not be modified; only the inputs and the comparison of the outputs have to be managed. However, there is a more sophisticated type of circuit based on special encoding schemes to detect and correct errors. This design approach, although it introduces design complexity at the circuit level of abstraction, has enabled the reduction of the requirements in terms of area and power. In any case, the approach of concurrent testing introduces higher implementation costs than a system not embedding such special hardware. In the case that an encoding scheme is used to achieve the SC property, its selection should be made wisely. Since the reliability and self-testing properties of the system are of utmost importance, a metric to evaluate the new structures is required. Lo and Fujiwara introduced a probabilistic measure for SC circuits in [6].
Figure 1 shows the block diagram of an input-vector-monitoring concurrent BIST scheme, where the CUT has n inputs and m outputs. During the concurrent on-line mode (Tm = 0), the inputs A = A[1:n] are driven by the input vector V[1:n]. At the same time, A is compared to a set of active test vectors by the AGC (active test set generator and comparator). If A matches any one of the active test vectors, a hit is said to have occurred. In this case, a hit signal is generated by the AGC and driven to a response verifier (RV) unit. The RV is a sequential compactor with m inputs whose content is examined at the end of the test. If there is a difference between the examined content and the expected one, then an error has been detected. The concurrent test latency (CTL) is an evaluation measure of a concurrent BIST scheme, defined as the mean time required to complete the test while the circuit operates in normal mode. During the off-line mode (Tm = 1), the inputs are driven by the V[1:n] outputs of the AGC.
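The mechanism of Figure 1 can be summarized with the behavioural sketch below (a simplified software model under our own assumptions: the set-based comparison stands in for the AGC hardware and a simple XOR signature stands in for the RV compactor).

```python
# Behavioural sketch of an input-vector-monitoring concurrent BIST (Figure 1).
# The set-based comparison and XOR signature are simplifying assumptions, not
# the actual AGC/RV hardware.
class ConcurrentBistModel:
    def __init__(self, active_test_set, expected_signature):
        self.pending = set(active_test_set)     # active test vectors not yet hit
        self.signature = 0                      # response verifier (RV) state
        self.expected = expected_signature

    def observe(self, input_vector, cut_output):
        """Called every cycle of the concurrent on-line mode (Tm = 0)."""
        if input_vector in self.pending:        # a "hit": the input is also a test vector
            self.pending.discard(input_vector)
            self.signature ^= cut_output & 0xFFFF   # compact the CUT response
        return not self.pending                 # True once every active vector has appeared

    def verdict(self):
        """Compare the compacted content with the expected one at the end of the test."""
        return self.signature == self.expected
```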
We were motivated by the fact that the aforementioned works on concurrent on-line testing are based on the assumption that all test vectors required to test a combinational circuit are also available as input values. Our experience from many design and development projects is different, since the input values depend on the use of the system (or circuit), the environment of operation and the characteristics of the input itself. As an example, a well-being monitoring device will receive inputs from a range of sensors indicating good health. There is no certainty that a fault has not occurred in the meantime, with its appearance masked. As time passes, the reliability of the circuit decreases, as does the probability of successfully detecting such a masked fault. When an input value indicates a health problem, the effect of the masked fault may prove critical. Thus, there is a need for a solution that applies, in due time, those test vectors that do not appear as expected at the CUT inputs, so that a fault is successfully detected. In this work, an adaptive BIST is proposed that dynamically selects the least frequently appearing input values, mapping them to pre-selected test-vector subsets. The application of those test vectors exploits the modes of operation found in most microcontrollers (e.g., idle mode, power-down mode) without interrupting the normal operation. However, it may be adopted for special-purpose systems that embed on-line BIST modes. Furthermore, the probabilistic metric that was presented in [6] is adopted for evaluating the results.
The rest of this work is organized as follows. In Section 2, the fundamental knowledge on TSC circuits is presented. Additionally, the probabilistic measure for SC circuits that will be used throughout this work is analyzed and discussed. In Section 3, the impact of input activity on the TSCG(t) measure is presented. In Section 4, the typical approach of applying a BIST to a system achieving the SC property using duplication of units is offered. In Section 5, the proposed approach for the dynamic application of test vectors by a modified BIST is presented and discussed. In Section 6, the results of the proposed work are presented for three scenarios of input characteristics. Finally, this work concludes in Section 7.

2. Reliability of Self-Checking Circuits

The most frequently used fault model is the stuck-at (s-a) fault model. It is based on the occurrence of a single fault, where a line is permanently fixed at either logic-0, called stuck-at-0 (s-a-0), or logic-1, called stuck-at-1 (s-a-1). It is suitable for transient and non-permanent faults and can be used for simulating various other fault models under certain conditions and assumptions. According to this fault model, we assume that faults occur only on the input or output lines of logic gates or memory components and that, at a given time, there is only one fault in the circuit. There is also sufficient time between the occurrences of any two consecutive faults for all input code words to be applied to the circuit, so that the first fault can be detected. The initial data bits may be enriched with additional encoding bits, forming new, larger data words, which are called code words.
Having defined the fault model of interest, the terminology regarding safe-operation of a system follows.
  • A digital system is called fault-secure (FS) if during normal operation any modeled fault either does not affect the system’s output or its presence is indicated no later than when the first erroneous output appears.
  • Furthermore, a digital system is called self-testing (ST) if any modeled fault eventually results in a failure indication during normal system operation.
  • Finally, a digital system is called totally self-checking (TSC) if it is both self-testing and fault-secure.
The FS property is intended to guarantee that any results prior to a failure indication are correct, and the ST property is intended to expose all faults so that they do not accumulate and form a non-modeled fault.
In [6], a complete probabilistic measure is proposed for TSC circuits, which is analogous to the reliability of fault-tolerant systems. The probability of achieving the TSC goal in a circuit is defined as follows:
$$TSCG(t) = \mathrm{Prob}\{\text{TSC goal is guaranteed at cycle } t\} = R(t) + S(t)$$
where R(t) represents the conditional probability that no fault has occurred by cycle t, and S(t) is the conditional probability that faults occurring at or before cycle t still guarantee the TSC goal. The term R(t) is a qualitative representation of the reliability of a circuit and refers to the constant failure rate of the circuit components. The term R(t) is characterized by the construction process and tends to the value '0' (that is, no reliability) as time approaches the mean time between failures (MTBF). Thus, the term R(t) is associated with the degraded reliability of a netlist of logic gates over time, which is expected for every electronic component after an extensive operation period. The term S(t) is calculated by summing all the probabilities that the circuit is fault-secure and/or self-testing with respect to the first fault and the probabilities of detecting one fault before the next occurs. This means that S(t) depends on the number of test vectors applied to a system over a given period of time in order to ensure the TSC property. We observe that the contribution of S(t) to the TSCG is proportional to the number of test vectors applied over time. Due to the characteristic behavior of R(t) and S(t), TSCG(t) ranges within the bounds [0,1]. At this point, it can be said that if several test vectors are not applied (for any reason) within a given time, then S(t) contributes negatively to the circuit's ability to maintain the TSC property.
Considering the previous observation, combined with the fact that the most common method of achieving low power consumption is to reduce the activity of a unit's inputs (e.g., idle state or power-down mode), we may assume that the TSC property will surely degrade over time. A plethora of methods have been proposed for achieving low power dissipation, but the main approach is to introduce local control signals that manipulate the functionality of the circuit and introduce a low-power mode of operation. Global signals such as the clock or reset can be gated by a local control signal, and their propagation in the rest of the system can be dynamically controlled. Another method is to re-schedule the tasks to be performed (e.g., algorithm transformation) to reduce the activity on wide global buses. More information on the latest techniques can be found in [1,2].

3. The Impact of Input Activity on TSCG(t)

A typical structure of a TSC checker tree used to detect faults in Units A and B is shown in Figure 2. Units A and B form a duplication scheme of the same processing unit and are expected to produce the same output and its complement. Their outputs are then checked in a TSC checker for any deviation. This scheme introduces twice the area and power penalty but ensures low design complexity for achieving the FS property. Any technique that minimizes the output activity of the units reduces the overall power dissipation and, consequently, reduces the frequency of the test vectors' activity. As a result, a test vector is applied to the TSC checker inputs for a long time and can be considered constant. Thus, if the inputs of the units present frequent activity, the TSC property can be maintained, assuming that the checker tree is also TSC; otherwise, a fault can be masked until a second fault occurs. Using the TSCG equation and separating the terms R(t) and S(t), the effect of input activity can be better presented.
The term R(t) remains unaffected, since it refers to the fault-free state of the circuit. Thus, to keep the TSCG(t) of a circuit at high levels, its input bits must present high activity. In addition, a system that includes TSC checkers must maintain the TSCG(t) at high levels. Assuming an exponential failure law, the reliability (at time instance t) of a circuit composed of N logic gates in total (which can be extended to components for systems) is given by:
$$R(t) = \exp\left(-\sum_{i=1}^{N} \lambda_i\, t\right)$$
where λi is the constant failure rate of gate (component) i. The exponential failure law is realistic for electronic components, since they are affected by phenomena such as electromigration and thermal noise, which have a cumulative behavior that is better described by an exponential decrease. The S(t) term can be decomposed into two terms, S1(t) and S2(t):
$$S(t) = S_1(t) + S_2(t)$$
The term S1(t) represents the conditional probability of detecting a fault from the erroneous output using one test vector, while S2(t) represents the conditional probability of detecting it using a combination of two vectors. Because a fault must be detected in due time at its first occurrence, it is not necessary to consider faults requiring more than two test vectors in sequence. When the input vector is "locked", i.e., held at a constant value for a long period, the term S2(t) does not contribute to the TSCG(t), while the term S1(t) contributes to a smaller degree.
$$S_1(t) = \sum_{i=1}^{M}\sum_{j=1}^{t} \lambda_i\, \alpha^{t-j} b^{j} \left( \sum_{k=j}^{t} Q_i^{k-j} T_i\, \frac{b^{k-j}}{\alpha^{k-j}} + Q_i^{t-j+1}\, \frac{b^{t-j}}{\alpha^{t-j}} \right) = \sum_{i=1}^{M} \lambda_i \left( \frac{\alpha^{t+1} b\, T_i + b^{t+2}\left(Q_i - Q_i^{t+1}\right) - \alpha\, b^{t+1}\left(1 - Q_i^{t+1}\right)}{(\alpha - b)(\alpha - b\, Q_i)} + b^{t}\left(Q_i - Q_i^{t+1}\right) T_i \right)$$
where Ti is the probability that fault fi can be detected at each cycle, Qi^(k−j) Ti represents the probability that fi is detected at the k-th cycle, Qi = 1 − Ti, α = e^(−λi) and b = exp(−Σ_{r=1}^{N} λr). Furthermore, M represents the number of faults in a fault subset F1, provided that the circuit is TSC. The detection of each fault in F1 requires only one appropriate test vector.
$$S_2(t) = \sum_{i=1}^{P}\sum_{j=1}^{t} \lambda_i\, \alpha^{t-j} b^{j} \left( \sum_{k=j+1}^{t} (k-j)\, T_i^{2}\, Q_i^{k-j-1}\, \frac{b^{k-j}}{\alpha^{k-j}} + \frac{b^{t-j}\left(Q_i^{t-j+1} + (t-j+1)\, T_i\, Q_i^{t-j}\right)}{\alpha^{t-j}} \right) = \sum_{i=1}^{P} \lambda_i \left( \frac{\alpha^{t+1} b^{2} - \alpha\, b^{t+2} Q_i^{t} + t\, b^{t+3} Q_i^{t+1}}{(\alpha - b)(\alpha - b\, Q_i)^{2}} + \frac{t\, T_i\, b^{t+1} Q_i^{t} - b^{t+1}}{\alpha - b} + \frac{\alpha\, b^{t+1} Q_i^{t} - b^{t+2} Q_i^{t+2}}{(\alpha - b\, Q_i)^{2}} + b^{t}\left(1 + Q_i - Q_i^{t} - Q_i^{t+1} - t\, T_i\, Q_i^{t}\right) T_i \right)$$
Let P denote the number of faults in a fault subset F2, such that the circuit is also TSC. Each fault in F2 can be detected with two sequential test vectors. The above may be better understood with the following example.
EXAMPLE. Figure 3 shows the TSC two-rail checker. In Table 1, we summarize the Ti values of this circuit for all possible stuck-at-0 (s-a-0) and stuck-at-1 (s-a-1) faults. There are 28 possible faults in total: 20 of them are detected by one code word input, i.e., Ti = 0.25, and 8 of them are detected by two code word inputs, i.e., Ti = 0.5. Therefore, N = 6, M = 28 and P = 0. Assuming an identical failure rate, λi = λ for all i, the reliability R(t), S(t) and TSCG(t) of this gate-level TSC two-rail checker are plotted in Figure 4.
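The Ti values of the example can be reproduced with a small fault-simulation sketch. The script below models the two-rail checker at gate level (assuming the standard implementation with four AND gates and two OR gates, i.e., N = 6), injects all 28 single stuck-at faults on the 14 gate input and output lines, and counts how many of the four code-word inputs detect each fault; it reports 20 faults with Ti = 0.25 and 8 with Ti = 0.5, in agreement with Table 1. This is our own illustration, not the tool used by the authors.

```python
# Gate-level stuck-at fault simulation of the TSC two-rail checker (Figure 3).
# Assumes the standard 4-AND / 2-OR implementation; this is an illustrative
# sketch, not the authors' tool.
from itertools import product
from collections import Counter

# The 4 code-word inputs: (a1, a2) and (b1, b2) are complementary pairs.
CODE_WORDS = [(a1, 1 - a1, b1, 1 - b1) for a1, b1 in product((0, 1), repeat=2)]

# 14 fault sites: 8 AND-gate input branches, 4 AND outputs, 2 OR outputs.
LINES = ["and1.a1", "and1.b2", "and2.a2", "and2.b1",
         "and3.a1", "and3.b1", "and4.a2", "and4.b2",
         "w1", "w2", "w3", "w4", "c1", "c2"]

def simulate(inputs, fault=None):
    """Evaluate the checker; 'fault' is (line, stuck_value) or None."""
    a1, a2, b1, b2 = inputs
    f = dict([fault]) if fault else {}
    g = lambda name, value: f.get(name, value)          # apply stuck-at value if present
    w1 = g("w1", g("and1.a1", a1) & g("and1.b2", b2))
    w2 = g("w2", g("and2.a2", a2) & g("and2.b1", b1))
    w3 = g("w3", g("and3.a1", a1) & g("and3.b1", b1))
    w4 = g("w4", g("and4.a2", a2) & g("and4.b2", b2))
    return g("c1", w1 | w2), g("c2", w3 | w4)

ti = {}
for line, stuck in product(LINES, (0, 1)):              # 28 single stuck-at faults
    detected = sum(simulate(v, (line, stuck)) != simulate(v) for v in CODE_WORDS)
    ti[(line, stuck)] = detected / len(CODE_WORDS)      # Ti = fraction of detecting code words

print(Counter(ti.values()))                             # Counter({0.25: 20, 0.5: 8})
```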
Figure 4 depicts the degradation of reliability as time passes, in parallel with the effect of low input activity. The same behavior results if not all the test vectors are available at the inputs of the CUT. This behavior increases the need to explore solutions that dynamically force the missing test vectors onto the inputs to achieve successful testing of the CUT.

4. Static Confrontation of the TSC Property Degradation

In previous work [7,8], the authors proposed an architecture to preserve the level of the TSCG(t) of the TSC checker trees. The main target of this architecture was to stimulate the inputs of the TSC checker trees even when a low-power mode was applied to the processing units. This approach ensures a high level of TSCG(t) in the checker trees, avoiding the masking of a fault before a second one occurs. The input (T) of these units follows the same encoding as the units' output (O). The TSC checker tree requires a set of inputs that includes the output value O and its complement. Thus, by feeding the units' input T and its complement (instead of the units' outputs O and its complement) to the TSC checker tree during the low-power mode, the required activity at the TSC checker is achieved. In Figure 5, the proposed architecture is illustrated. The main target of the proposed architecture is to increase the term S2(t) when the input activity is decreased due to the application of a low-power technique. This is achieved by contributing an extra S2(t), which is derived from the by-pass technique. Thus, the new input T of the TSC circuit preserves the level of the TSCG(t), because the Ti that corresponds to the sum of the arrival rates of all second tests of fault fi is no longer equal to zero. This holds because Ti,1 and Ti,2, the sums of the arrival rates of all first and second tests, respectively, are equal when a low-power technique is applied.
An extra advantage of this architecture is the low area overhead it introduces. The area penalty due to the addition of a 2-to-1 multiplexer after each unit is not significant, although the critical path is slightly increased. This architecture can guarantee the level of the TSCG(t), assuming that the new input T of the TSC checker tree is not also subject to a low-power technique. Thus, every candidate input T must be explored before selecting it as an alternative input for the TSC checker tree. The EDA (electronic design automation) tools (e.g., Synopsys Design Compiler NXT [9], Formality Equivalence Checking [9,10], TestMAX DFT [11]) that are used to implement the targeted algorithm can be used to extract this information. An analytical investigation of the control data flow graph (CDFG) of the system and a comparison to the dynamic power management algorithm provide all the required information. In the case that no input T is appropriate for the proposed architecture, a hybrid of the proposed architecture and the BIST technique can be used. This architecture is illustrated in Figure 6. An on-line BIST mode is activated, and any fault will produce an erroneous output (error indication). In that case, the test vectors are generated by a logic generator, and the unit is then called an automatic test pattern generator (ATPG). A linear-feedback shift register (LFSR), such as the one presented in [12], can be used as the ATPG unit. This hybrid architecture presents an extra area overhead due to the insertion of the ATPG unit, while the TSC checker tree must actually be duplicated to additionally check the ATPG unit. This technique, however, is effective as proposed only for non-concurrent testing. Its application to concurrent on-line testing additionally requires extra hardware to store the current state of the units. If the units do not include memory elements, this requires no extra hardware other than the registers placed at the inputs of the units. If the units do include memory elements, then an alternative approach is required to store the current state of the unit before applying any test vector. Thus, when the system returns to normal operation (either from idle mode or test mode), the state of the units should have been restored. An appropriate mechanism to achieve the return to a previous state is the exploitation of the scan chain. However, this introduces significant latency to the whole process. Alternatively, the use of custom hardware inside the units would be beneficial, but it increases the design complexity, the integration area and the power dissipation and, in parallel, significantly limits the application of the technique to a wide range of circuits. Interesting work on this issue has been presented in [13].
The effect of the degradation of the TSCG(t) due to the application of low-power techniques was investigated for the two-rail checker (TRC), which is a commonly used TSC checker. In Figure 7, the TSCG(t) is illustrated before and after the application of this technique. The figure presents the S(t) and R(t) of this checker (curves 2 and 3, respectively), and their sum yields the TSCG(t) (curve 1), as calculated from Equations (1)–(3). When a low-power technique is applied, there is no effect on the R(t) term. However, S(t) degrades, as illustrated in Figure 7 (curve 4), due to the low activity rates, as described in Section 3. Thus, a new, degraded TSCG(t) results, as shown in curve 5. Using the proposed architecture, the level of the TSCG(t) is preserved at its initial value. The power dissipation of the unit under dynamic power management is kept low, while the power dissipation of the TSC checker is reduced to the lowest possible. Due to the nature of the technique, there is no qualitative measure available to show the order of magnitude of the introduced power dissipation penalty. In fact, it is highly dependent on the unit's function and the width of the TSC checker's inputs. Thus, this technique is advisable and efficient for application at high levels of the design flow.
In order to show the efficiency of this technique, it was applied to a processing core developed within the CoSafe design approach. The core was developed so that it can be embedded in systems targeting safety-critical applications. A significant design constraint of the core is the requirement for low power dissipation [14] without violating the safety levels of its operation. In Table 2, the characteristics of the core are offered, while in Table 3, the area and power penalty are shown, normalized to the initial design. Although the integration technology is old, it manages to depict the effect (increase) on the TSCG(t), as shown in Table 3. The benefit of the proposed implementation has been identified as the slowing of the degradation of the TSC property of the digital circuit. However, the drawback of the proposed work is the fact that the test vectors are applied sequentially, as stored in memory. This approach poses a considerable risk, since there is a significant probability of masking an error if the appropriate test vector is not applied in due time. Additionally, no statistical analysis is performed at design time on the applied test vectors, since the generation of the test vector set is independent of the actual inputs applied during normal operation. The latter comment poses two significant questions regarding the efficiency of the previous technique in concurrent on-line testing: is the input set in normal operation sufficient to reveal masked faults and, if not, can the test vector set be dynamically selected for the faults not covered by the normal operation?

5. Dynamic Confrontation of the TSC Property Degradation

The previous analysis of concurrent on-line testing, when used on safety-critical systems based on duplication of units, revealed that a high level of TSCG(t) depends on the values applied to the inputs. Although non-concurrent testing offers the benefit of pre-computing the required test vector sets for the ATPGs and applying them at power-up time, this is not the case for concurrent on-line testing. The inputs of the system serve as test vectors during its normal operation, and they are the main parameter that affects the S(t) term. Since the inputs cannot be predicted for each use case, application, user or data profile, there is a lack of solutions for adapting an embedded BIST to the characteristics of the dynamic input profile, specifically targeting concurrent on-line testing. Although the aforementioned works presented sufficient results, they did not consider the limitations of the actual data feeding the inputs and, additionally, they targeted the TSC parts of the system.
A novel architecture for the dynamic confrontation of TSC property degradation over time is proposed in this section, as depicted in Figure 8. It is based on the previously mentioned architecture, embedding a mechanism to monitor the inputs and to apply a BIST test by dynamically selecting the least frequent test vector subset. An analysis of the test vectors is performed, and they are divided into k clusters. The division into clusters is achieved using the k-means clustering algorithm, grouping test vectors with similar characteristics. During the operation of the system, the inputs are read, and a set of counters provides real-time information on the appearance of members of the clusters. The new mechanism comprises k BIST units, which generate k sets of test vectors. When the system enters the low-power mode or sleep mode, the i-th counter, having the minimum value of all counters, enables the appropriate BIST unit to test the combinational circuits. Thus, testing is adaptive to the appearance of the k test-vector sets, reducing the effect of low activity or a limited value set of inputs that was identified previously. The proposed architecture answers both questions that were identified in the previous section.
Specifically, as mentioned before, the k clusters are the result of the application of a k-means algorithm on the digital system during the design phase, at the generation of the minimized test-vector set that achieves the highest fault coverage. During the operation of the proposed implementation, the actual inputs are analyzed by the k-means component and assigned to one of the k clusters. For performance reasons, high-speed counters [15,16] are used to count the appearances of inputs belonging to the corresponding cluster. Then, a comparison of the counters' values is performed, and the counter with the minimum value is selected. Thus, the cluster of the least frequently appearing inputs is selected, and the appropriate BIST is enabled to stimulate the circuit under test during its idle state, as sketched below.
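The run-time behaviour just described can be summarized with the following sketch, a software stand-in for the hardware cluster detectors, counters, minimum selector and BIST units; all names are illustrative assumptions.

```python
# Software model of the adaptive run-time selection (Figure 8). The cluster
# membership test, counters and BIST callables are illustrative stand-ins for
# the corresponding hardware blocks.
class AdaptiveBistModel:
    def __init__(self, clusters, bist_units):
        # clusters: k test-vector subsets (from k-means); bist_units: k callables
        # that apply the corresponding subset to the CUT when enabled.
        self.clusters = [frozenset(c) for c in clusters]
        self.bist_units = bist_units
        self.counters = [0] * len(clusters)       # appearance counters, one per cluster

    def observe(self, input_vector):
        """Concurrent monitoring: the cluster detectors update the counters."""
        for i, cluster in enumerate(self.clusters):
            if input_vector in cluster:
                self.counters[i] += 1

    def on_low_power_entry(self, cut):
        """On idle/sleep entry, exercise the least frequently observed subset."""
        i_min = min(range(len(self.counters)), key=self.counters.__getitem__)
        self.bist_units[i_min](cut)               # enable the selected BIST unit
```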
This approach allows the dynamic selection of the appropriate test-vector subset, confronting not only the degradation of the TSC property over time but also the high probability of masking a single fault in due time. This approach successfully considers the dynamic nature of the inputs and addresses the drawback of the previously mentioned implementation. Although the benefits are significant, there is a collateral increase in the integration area, which, however, is affordable for safety-critical applications.

5.1. Design Methodology of the Adaptive BIST

The proposed work requires two phases to develop the adaptive BIST, as depicted in Figure 9. During the first phase (design time), the test vectors are classified into k clusters. The test vectors result from the appropriate tool that analyzes the circuit and generates a set of vectors. A k-means algorithm is then used to create the k clusters of vectors, creating custom sets of considerably shorter size. The work presented in [17] proposes an elegant methodology to compress the test vectors by reordering the initial test set and exploiting the k-means algorithm to create the appropriate subsets of test vectors. The targeted benefit of this approach is low power dissipation during test mode. Exploiting this approach, the resulting clusters are used as the subsets of test vectors to adapt testing to the activity of the inputs of the CUT. At this phase, (a) the cluster detectors are designed based on the subsets of test vectors and output a one-hot indication of the identified cluster; (b) the cluster counters are selected, exploiting the fast operation of counters as presented in [16]; and (c) the selector of the minimum counter value is designed. Finally, the BIST units for each test vector subset are developed using the commonly used EDA tools. During the second phase (run time), the inputs of the system are read by the cluster detectors and classified to the corresponding subset, enabling the appropriate counter. The counter increases its value, and all the values are compared by the minimum selector unit. The output of the selector then drives the select inputs of a multiplexer, allowing the appropriate BIST unit to output the respective test vector subset. The latter testing (output from the BIST unit) is performed when the system is put into a low-power mode by the power management unit.

5.2. Design Time Phase

5.2.1. k-Means Algorithm

Clustering is one of the most common techniques in data analysis for understanding the structure of data and identifying subgroups in it. In this way, data in the same subgroup (cluster) have similar characteristics, while data in different clusters are very different. The decision of which similarity measure to use depends on the application and the designer. K-means clustering is used to categorize N data points into k groups or clusters. This algorithm is a basic unsupervised machine learning (ML) algorithm that allows a set of values to be categorized into k clusters. It is a simple and efficient process, which can be applied even on small processors with low capabilities. In addition, its small memory footprint allows it to be embedded in low-cost and low-power microcontrollers. This results in stable training of the system through the creation of different clusters and low user interaction. The training algorithm (k-means clustering algorithm) is an iterative algorithm that separates a dataset into k pre-defined, distinct, non-overlapping subgroups (clusters), where each data point belongs to only one group. It requires two inputs:
  • The training set (cluster initialization) contains the data entered by the designer in training mode, such as test vectors.
  • The value k, where k is the number of clusters that the algorithm is going to create and is defined by the designer.
In the beginning, the k-means clustering algorithm randomly chooses k data points, or takes the first k data points, as the initial centroids of the clusters. It then repeats the following three steps until the assignment of points to groups stabilizes:
  • Determines the Euclidean distance of each data point to the centroids;
  • Groups the data points based on minimum distance;
  • Updates the centroids in each cluster by taking means of data points.
The algorithm may have to be executed for several hundred iterations to create the appropriate clusters. If there is no change to the centroids between two iterations, the process stops and the clusters are final. Another method is to set the centroids of the training set manually at the first iteration of the k-means clustering algorithm. More specifically, one centroid is placed at the beginning, one at the end and the rest at positions with step s after the first centroid, where:
$$s = \frac{ts_{last} - ts_{first}}{k - 1}$$
since the dataset is one-dimensional (where ts_last and ts_first are the last and first values of the training set, respectively). The output of the k-means clustering algorithm is the k created clusters, and the system stores the boundaries of each one (the minimum and maximum of each cluster).
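A minimal one-dimensional sketch of the procedure described above is given below, using the manual centroid initialization of Equation (5); the dataset and the iteration bound are illustrative assumptions.

```python
# Minimal 1-D k-means sketch with the manual centroid initialization of
# Equation (5). Dataset and iteration bound are illustrative assumptions.
def kmeans_1d(values, k, max_iter=300):
    values = sorted(values)
    step = (values[-1] - values[0]) / (k - 1)             # s = (ts_last - ts_first) / (k - 1)
    centroids = [values[0] + i * step for i in range(k)]  # first, last and step-spaced centroids
    clusters = [[] for _ in range(k)]
    for _ in range(max_iter):
        clusters = [[] for _ in range(k)]
        for v in values:                                  # assign each point to the nearest centroid
            clusters[min(range(k), key=lambda i: abs(v - centroids[i]))].append(v)
        new_centroids = [sum(c) / len(c) if c else centroids[i]
                         for i, c in enumerate(clusters)]
        if new_centroids == centroids:                    # centroids unchanged: stop
            break
        centroids = new_centroids
    # Store the boundaries (min, max) of every non-empty cluster, as described above.
    return [(min(c), max(c)) for c in clusters if c]
```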

5.2.2. Clustering the Test Vectors

The reordering technique has been considered in other works [18,19] targeting low-power test application via test set compression. The proposed solutions use the Hamming distance to select the appropriate test vectors to be reordered. Initially, one of the test vectors is selected as a reference test vector, and the remaining vector with the minimum Hamming distance to it follows. This process is repeated until all vectors have been reordered. In this case, consecutive input vectors (test vectors) present a small difference in bit activity, resulting in low power dissipation on the data bus and the scan chain. However, the results presented in these works show that the similarity grouping is lost. In contrast, the recent work in [17] proposed the use of a k-means clustering algorithm for grouping similar test vectors into clusters. This approach is adopted in this work for the formation of the k test vector subsets. The resulting clusters are easier to manage, and the information derived from them is used to design the cluster detectors (CD), as described in the next subsection.
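For reference, the greedy Hamming-distance reordering used in [18,19] can be sketched as follows; the test vectors are given as equal-length bit strings, and taking the first vector as the reference is an assumption of the sketch.

```python
# Sketch of the greedy Hamming-distance reordering described above. Vectors are
# equal-length bit strings; taking the first vector as the reference is an
# assumption of this sketch.
def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def reorder_by_hamming(test_vectors):
    """Repeatedly append the remaining vector closest (in Hamming distance)
    to the last reordered one."""
    remaining = list(test_vectors)
    ordered = [remaining.pop(0)]                          # reference test vector
    while remaining:
        nxt = min(remaining, key=lambda v: hamming(ordered[-1], v))
        remaining.remove(nxt)
        ordered.append(nxt)
    return ordered

# Example: reorder_by_hamming(["0000", "1111", "0001", "1110"])
# returns ["0000", "0001", "1111", "1110"] (bit activity 1, 3, 1 between neighbours).
```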

5.2.3. Mapping an Input to a Cluster

The inputs of the CUT are monitored concurrently as they are applied to the CUT. The CD is responsible for identifying, on the fly, the respective cluster they may belong to, in the case that they are part of the subsets of the test vectors. A CD reads the input and outputs a '0' in case the input does not match a test vector belonging to the subset (cluster) under consideration, or a '1' in case it is part of the subset of the test vectors. The approaches to designing a CD include, but are not limited to, the following techniques:
  • Look-up tables: They are suitable for FPGA implementations and the synthesis tool produces compact results for the targeted FPGA device.
  • Content-addressable memory—CAM: This approach uses a small CAM for each subset of test vectors and reports a hit in case the input is stored in it.
  • MICSET: The approach presented in [20] (MICSET—monitoring input vectors for concurrent testing based on a pre-computed test SET) is also appropriate for the presented work. Specifically, the hit signal is the one of interest to this work, and it triggers the corresponding counter.
The first approach is suitable for small sets of test vectors, exploiting the benefits of the synthesis tool offered by the FPGA vendor. The main concept is the calculation of a minimized look-up table (LUT), considering don't-care bits that allow optimization of the generated logic functions. The main drawback of this approach is the increase in design complexity and the non-deterministic result when a large test-vector set is associated with the LUT.
The second approach is appropriate for minimizing the design complexity of the system. The benefit of using CAMs is that only a matchline activation is needed for the purpose of the CAMs' use, and not the actual stored line. The main drawback of this approach is the significant increase in the hardware resources required by the proposed system.
The third approach is suitable for deterministic results considering the hardware requirements and the performance penalty introduced by its adoption in the proposed system. The rest of this work considers MICSET as the appropriate approach for keeping a balanced design complexity with a deterministic area and performance penalty.
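Functionally, each CD behaves as in the following sketch, a software stand-in for the LUT, CAM or MICSET realizations; the set-based lookup is an assumption of the sketch, not the MICSET hardware.

```python
# Behavioural stand-in for a cluster detector (CD). The set-based lookup is an
# assumption of this sketch, not the LUT/CAM/MICSET hardware realization.
class ClusterDetector:
    def __init__(self, test_vector_subset):
        self.subset = frozenset(test_vector_subset)

    def hit(self, input_vector):
        """Return 1 if the monitored input matches a test vector of this cluster, else 0."""
        return 1 if input_vector in self.subset else 0

def one_hot(detectors, input_vector):
    """Combine the k detectors into the one-hot output that drives the cluster counters."""
    return [d.hit(input_vector) for d in detectors]
```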

5.2.4. Configuration of the BIST Units

The BIST units, as illustrated in Figure 8, include the test-vector subsets as they result from the application of the k-means algorithm. The division of the initial test-vector set into smaller subsets allows the TSCG(t) to be maintained at high levels over time. Furthermore, considering the non-deterministic operation profile of any embedded computing system, it seems to be the only adaptive approach that copes with concurrent on-line testing when the inputs are limited to a subset of the expected test-vector set.
The BIST units may be implemented either as logic-based BIST (LBIST) or memory-based BIST (MBIST) units. The LBIST units are actually ATPG units generating the targeted test-vector subset. Their operation is similar to that of an LBIST unit [21,22], which typically employs scan as its operational baseline to achieve a high-quality test while using a limited test-vector set. Implementing the targeted BIST as an LBIST allows compaction in terms of hardware requirements. Various approaches may be considered, such as an LFSR-based ATPG, sketched below. The benefit of this approach is the low area penalty it introduces, although its drawback is the increased design complexity for large data sets.
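For illustration, an LFSR-based generator can be sketched as below; the 8-bit width, seed and tap positions are assumptions made for the example, not the NLFSR of [12].

```python
# Sketch of an LFSR-based pattern generator. The 8-bit width, seed and tap
# positions are illustrative assumptions, not the NLFSR of [12].
def lfsr_patterns(seed=0b10101100, taps=(7, 5, 4, 3), width=8, count=16):
    """Fibonacci-style LFSR: the feedback bit is the XOR of the tapped bits."""
    state = seed
    for _ in range(count):
        yield state                                   # current pseudo-random test vector
        feedback = 0
        for t in taps:
            feedback ^= (state >> t) & 1
        state = ((state << 1) | feedback) & ((1 << width) - 1)

# Example: list(lfsr_patterns(count=4)) yields four 8-bit pseudo-random test vectors.
```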
In contrast, an MBIST imitates a sequential access memory (SAM), storing the test-vector subset and outputting one test vector per access. The benefit of this approach is the low design complexity, and its drawback is the high area penalty it introduces.
For the proposed system, an MBIST is considered in order to simplify the design. However, there is no obvious indication of which choice is the best. A thorough exploration of the various BIST implementation configurations may prove valuable in the future for deciding how to implement this part of the proposed system.

6. Results

An important evaluation measure for the performance of MICSET is the time required to complete the test while operating only in on-line mode, defined as the number of normal input vectors that must be applied to the CUT inputs, while the CUT operates normally, in order for the concurrent test to be expected to complete. At this point, two critical assumptions must be made, as reported in [23,24,25], to compute the CTL of MICSET: the probability of occurrence of any one pattern is independent of the occurrence of any other pattern, and the circuit input patterns are equally likely to occur during the normal operation of the circuit. Although it has not been proven that these two assumptions hold in practical circuits, they are (to the best of our knowledge) the only assumptions that have been introduced in the literature for the appearance probabilities of the input vectors.
The CTL calculation may be made either analytically or through simulation. From Table 4, it is derived that, for circuits having more than 40 inputs, i.e., c499, c880, c1355 and c3540, the value of the CTL becomes unrealistically high. In fact, such circuits are typically tested using different techniques, such as partitioning and pseudoexhaustive testing schemes. For circuits with fewer inputs, i.e., c432, c1908 and c6288, the application of MICSET may result in acceptable values of the CTL. However, the previously mentioned results are valid only in the case that the two assumptions are satisfied.
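Under the two assumptions above (independent, equally likely input patterns), the CTL can also be estimated by a simple simulation, as in the sketch below; this coupon-collector-style estimate is our own simplification, not the analytical computation of [23,24,25].

```python
# Simulation-based CTL estimate under the two assumptions above (independent,
# equally likely input patterns). A simplified coupon-collector-style sketch,
# not the analytical MICSET computation of [23,24,25].
import random

def estimate_ctl(n_inputs, test_set, runs=100):
    """Mean number of random n-bit input vectors until every vector of the
    pre-computed test set has appeared at the CUT inputs."""
    total = 0
    for _ in range(runs):
        pending, cycles = set(test_set), 0
        while pending:
            cycles += 1
            pending.discard(random.getrandbits(n_inputs))
        total += cycles
    return total / runs

# Example (illustrative): a CUT with 16 inputs and a 100-vector pre-computed test set.
# ctl = estimate_ctl(16, random.sample(range(2 ** 16), 100))
# For CUTs with more than about 40 inputs the expected value explodes, matching Table 4.
```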
In the following, an exploration of the effects of several scenarios is offered. Specifically, the scenarios include the following:
  • Not all inputs are available, due to the nature of the application that affects the inputs of the system, but the inputs have the same probability of occurrence.
  • All inputs are available, but they do not have the same probability of occurrence.
  • Not all inputs are available, and they do not have the same probability of occurrence.

6.1. Scenario 1

In Figure 10, the resulting TSCG(t) is explored when part of the inputs is not available, although the inputs have the same probability of occurrence. This is a realistic scenario, especially for control-dominant applications, where the operation of a circuit is limited to a narrow set of expected input values corresponding to safe operation (e.g., temperature measurements in a biology laboratory may range from −30 °C to 40 °C; however, if everything is safe and operational in the laboratory, then the inputs are limited to the range of 12 °C to 24 °C). Although this scenario is realistic and applicable to safety-critical infrastructures and applications, it also gives a perspective on the effects of the absence of some values from the inputs. Specifically, in Figure 10, the degraded TSCG(t) is calculated for c6288, according to the reduction (in percentage) of the input values. As seen, the SC feature no longer holds as the number of input values is decreased. Moreover, the CTL becomes infinite and useless as an evaluation measure, since the test is never completed while the circuit operates in normal mode, due to the absence of input vectors.

6.2. Scenario 2

The second scenario concerns the availability of all input values; however, they do not have the same probability of occurrence. Again, this scenario is realistic, especially for data-dominant applications, such as the processing of satellite images (e.g., space applications) or road traffic cameras, which are also applications related to safety-critical systems. In this case, the input values are susceptible to lighting, the environment and population behavior. Although each pixel can obtain any value (and eventually will), the probability is not the same for all values (e.g., darker shades on earth are not expected under a shiny sky). The effect on TSCG(t) is similar to that presented in Figure 10; however, what is noticeable is that the CTL is no longer deterministic but rather stochastic. Considering a small subset of four test vectors to be rare (probability of appearance near 0.0001), the CTL was nearly equal to that of c432.

6.3. Scenario 3

In Figure 11, the resulting TSCG(t) is explored when part of the inputs is not available and the inputs do not have the same probability of occurrence. This would not be a realistic scenario if the designer had limited the input value set accordingly. However, during the last decade, novel products have been based on formal design flows for embedded systems, overlooking the fact that they target a special-purpose application. This is the case for numerous internet of things (IoT) devices, which limit the inputs from connected sensors to 10-bit values and process them using 32-bit registers and an ALU. Furthermore, since the values are susceptible to conditions external to the system, the probability of appearance is not deterministic. Specifically, in Figure 11, the degraded TSCG(t) is calculated for c6288, according to the reduction (in percentage) of the input values and considering a small subset of four test vectors to be rare (probability of appearance near 0.0001). As seen, the SC feature no longer holds as the number of input values is decreased. The TSCG(t) is degraded significantly, making the circuit vulnerable to the occurrence of faults. Moreover, the CTL becomes infinite and useless as an evaluation measure, since the test is never completed while the circuit operates in normal mode, due to the absence of input vectors.

6.4. Applying the Adaptive BIST

The proposed adaptive BIST was tested on c6288. The proposed solution is not suitable for the application of an exhaustive test; however, it performed excellently when the aforementioned scenarios were examined. Initially, the test vector set was extracted by the circuit analyzer (Figure 9), and after the application of the k-means algorithm, four clusters were created. The circuits were then configured appropriately to match the clusters derived from the algorithm.
The proposed adaptive BIST was tested under the scenarios presented in this section. An on-line test mode was applied every 1,000,000,000 clock cycles, allowing the BIST to test the CUT. The selected time interval for applying the test mode is equal to the time that most microcontrollers report for their power-down mode of operation, that is, nearly 5–6 s (the calculation was made assuming a 2000 MHz clock, to be fairly comparable to MICSET). The results provided are derived from simulation on the premises of the ParICT_CENG supercomputing node.
In Table 5, the results from the simulation of the CTL calculation are depicted. As seen, the results of the proposed work are better than those presented by MICSET for on-line concurrent testing. As also seen, for Scenarios 1 and 3, MICSET cannot conclude testing (infinite time to finish testing) using all the required test vectors, since some of them never appear among the input values. This was expected, due to the assumption of MICSET (and all the other works) that all test vectors are available and have the same probability of appearing. In the case of Scenario 2, where all test vectors are available but do not have the same probability of appearing, MICSET concludes testing in a time that depends on the characteristics of the inputs. In contrast, the proposed adaptive BIST presents a mean time to conclude all tests for the three scenarios that is competitive with MICSET (which is one of the best of its kind), and it copes with realistic input scenarios that cannot be addressed by MICSET and similar works.
Finally, if we consider that the CUT operates frequently under a low-power mode, then the application of the proposed adaptive BIST may be emulated by keeping the input value unchanged for the duration of the low-power mode. In Figure 12, the increase in TSCG(t) is presented when the low-power mode is activated for 10% of the overall operation time. In contrast to Scenario 3, the TSCG(t) is significantly increased, since the proposed BIST exploits the opportunity to apply the test vector subsets of the least frequent occurrence.

7. Conclusions

The effect of low activity at the inputs of a CUT was considered for concurrent on-line testing. It has been illustrated that the application of low-power techniques on safety-critical systems that embed concurrent on-line BIST techniques degrades testability. The same is observed when either the line activity is kept low or a limited set of values is applied to the circuit's inputs. The terms of TSCG(t) have proved to be sensitive to the minimization of input activity. Previous works were explored, and a common assumption was reported among them: all test vectors are available as input values during the normal operation of a CUT and have the same probability of appearing. With many real-world applications in mind, this assumption does not seem realistic, and a new testing mechanism providing dynamic application of test vectors of rare or no appearance is proposed.
An architecture for on-line concurrent testing that automatically adapts to the profile of the inputs is proposed in this work, confronting TSCG(t) degradation due to a number of reasons. The main contribution of this work is the employment of an ML-based algorithm, namely k-means, in order to divide a large test-vector set into smaller subsets. The continuous monitoring of their appearance at the input of the system identifies potential degradation of the TSCG(t) metric and automatically activates selective testing (using the least used test-vector subset) in the system. The adaptive testing sequence may be applied either during low-power modes imposed on the system or during a special test mode that is applied frequently (if allowed by the application).
The proposed application has been considered only for combinational circuits of the system that achieve the TSC property using multiple (at least two) instances of the same circuit. Concurrent on-line testing is achieved by comparing their outputs through a two-rail checker tree. Although the proposed architecture targets the aforementioned systems, it may also be used in concurrent on-line testing based on TSC circuits employing special encoding schemes.
The results indicated that, in comparison to other solutions for concurrent on-line testing, the proposed adaptive BIST is favored. Specifically, it was proved that the dynamic application of the least frequently used inputs significantly improves the metrics used to evaluate the reliability and the TSC property of the system. Among the significant advantages of the proposed adaptive BIST is the ability to cope with the rare occurrence of input values and low input activity. The results proved that the proposed adaptive BIST increases the TSCG(t) by up to almost 90% under a low-power mode.

Author Contributions

Conceptualization, V.C. and A.K.; methodology, V.C.; software, V.C.; validation, A.K.; formal analysis, A.K.; investigation, V.C.; resources, A.K. and V.C.; writing—original draft preparation, A.K. and V.C.; writing—review and editing, A.K. and V.C.; visualization, V.C. and A.K.; supervision, A.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

We acknowledge support of this work by the project “Par-ICT CENG: Enhancing ICT research infrastructure in Central Greece to enable processing of Big data from sensor stream, multimedia content, and complex mathematical modeling and simulations” (MIS 5047244), which is implemented under the Action “Reinforcement of the Research and Innovation Infrastructure”, funded by the Operational Programme “Competitiveness, Entrepreneurship and Innovation” (NSRF 2014-2020) and co-financed by Greece and the European Union (European Regional Development Fund).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Raghunathan, A.; Jha, N.K.; Dey, S. High-Level Power Analysis and Optimization; Springer Science & Business Media: New York, NY, USA, 2012.
  2. Chandrakasan, A.; Brodersen, R.W. Low-Power CMOS Design; John Wiley & Sons: Hoboken, NJ, USA, 1998.
  3. Abramovici, M.; Breuer, M.A.; Friedman, A.D. Digital Systems Testing and Testable Design; Computer Science Press: New York, NY, USA, 1990; Volume 2, pp. 203–208.
  4. Bushnell, M.; Agrawal, V. Essentials of Electronic Testing for Digital, Memory and Mixed-Signal VLSI Circuits; Springer Science & Business Media: New York, NY, USA, 2004; Volume 17.
  5. Pradhan, D.K. Fault-Tolerant Computer System Design; Prentice-Hall, Inc.: Boston, MA, USA, 1996.
  6. Lo, J.C.; Fujiwara, E. Probability to achieve TSC goal. IEEE Trans. Comput. 1996, 45, 450–460.
  7. Kakarountas, A.P.; Papadomanolakis, K.S.; Goutis, C.E. Low-Power Design for Safety-Critical Applications. In Designing CMOS Circuits for Low Power; Soudris, D., Piguet, C., Goutis, C.E., Eds.; Springer Science & Business Media: New York, NY, USA, 2002; pp. 205–234.
  8. Kakarountas, A.P.; Papadomanolakis, K.S.; Nikolaidis, S.; Soudris, D.; Goutis, C.E. Confronting Violations of the TSCG(t) in Low-Power Design. In Proceedings of the IEEE 2002 International Symposium on Circuits and Systems (ISCAS'02), Scottsdale, AZ, USA, 26–29 May 2002; pp. 2606–2609.
  9. Sutherland, S. Modeling with SystemVerilog in a Synopsys Synthesis Design Flow Using Leda, VCS, Design Compiler and Formality. SNUG Eur. 2006.
  10. Synopsys. Formality Equivalence Checking. 2021. Available online: https://www.synopsys.com/implementation-and-signoff/signoff/formality-equivalence-checking.html (accessed on 16 June 2022).
  11. Zorian, A.; Shanyour, B.; Vaseekar, M. Machine Learning-Based DFT Recommendation System for ATPG QOR. In Proceedings of the IEEE 2019 International Test Conference (ITC), Washington, DC, USA, 9–15 November 2019; pp. 1–7.
  12. Teja, N.V.; Prabhu, E. Test Pattern Generation using NLFSR for Detecting Single Stuck-at Faults. In Proceedings of the 2019 International Conference on Communication and Signal Processing (ICCSP), Melmaruvathur, India, 4–6 April 2019; pp. 0716–0720.
  13. Floridia, A.; Mongano, G.; Piumatti, D.; Sanchez, E. Hybrid on-line self-test architecture for computational units on embedded processor cores. In Proceedings of the IEEE 22nd International Symposium on Design and Diagnostics of Electronic Circuits & Systems (DDECS), Cluj-Napoca, Romania, 24–26 April 2019; pp. 1–6.
  14. Kakarountas, A.P.; Papadomanolakis, K.S.; Spiliotopoulos, V.; Nikolaidis, S.; Goutis, C.E. Designing a Low Power Fault-Tolerant Microcontroller for Medicine Infusion Devices. In Proceedings of the DATE2002, Paris, France, 4–8 March 2002.
  15. Kakarountas, A.P.; Theodoridis, G.; Papadomanolakis, K.S.; Goutis, C.E. A novel high-speed counter with counting rate independent of the counter's length. In Proceedings of the 10th IEEE International Conference on Electronics, Circuits and Systems, ICECS 2003, Sharjah, United Arab Emirates, 14–17 December 2003; Volume 3, pp. 1164–1167.
  16. Chioktour, V.; Kakarountas, A. Constant delay systolic binary counter with variable size cellular automaton based prescaler. Comput. Electr. Eng. 2021, 93, 107291.
  17. Jegannathan, P.; Rajaguru, H. An analysis of low power testing using K-means clustering with reordering approach. Electron. Lett. 2021, 57, 633–635.
  18. Yuan, H.; Guo, K.; Sun, X.; Ju, Z. A Power Efficient Test Data Compression Method for SoC using Alternating Statistical Run-Length Coding. J. Electron. Test. 2016, 32, 59–68.
  19. Zhang, M.; Kuang, J.; Huang, J. Double Hamming distance-based 2D reordering method for scan-in power reduction and test pattern compression. Electron. Lett. 2020, 56, 352–354.
  20. Voyiatzis, I.; Paschalis, A.; Gizopoulos, D.; Halatsis, C.; Makri, F.S.; Hatzimihail, M. An input vector monitoring concurrent BIST architecture based on a precomputed test set. IEEE Trans. Comput. 2008, 57, 1012–1022.
  21. Moghaddam, E.; Mukherjee, N.; Rajski, J.; Solecki, J.; Tyszer, J.; Zawada, J. Logic BIST with capture-per-clock hybrid test points. IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst. 2018, 38, 1028–1041.
  22. Adithya, K.R.; Gayathri, S. Study on LBIST and comparisons with ATPG. In Proceedings of the 2018 International Conference on Advances in Computing, Communications and Informatics (ICACCI), Bangalore, India, 19–22 September 2018; pp. 2131–2135.
  23. Voyiatzis, I.; Halatsis, C. A low-cost concurrent BIST scheme for increased dependability. IEEE Trans. Dependable Secur. Comput. 2005, 2, 150–156.
  24. Voyiatzis, I.; Paschalis, A.; Gizopoulos, D.; Kranitis, N.; Halatsis, C. A concurrent built-in self-test architecture based on a self-testing RAM. IEEE Trans. Reliab. 2005, 54, 69–78.
  25. Sharma, R.; Saluja, K.K. Theory, analysis and implementation of an on-line BIST technique. VLSI Des. 1993, 1, 9–22.
Figure 1. Input vector monitoring concurrent BIST.
Figure 2. Formal architecture of an on-line testable system with a dynamic power management unit.
Figure 3. A TSC two-rail checker.
Figure 4. The calculation of TSCG(t), S(t) and R(t) in time for the given example.
Figure 5. The proposed architecture with by-pass and input re-use technique.
Figure 6. The proposed architecture with the use of the BIST unit.
Figure 7. The degrading effect of the activity reduction on the input of a two-rail checker.
Figure 8. Dynamic selection of the test vector set to confront the degrading effect due to late appearance of the appropriate test vector.
Figure 9. Methodology for designing the adaptive BIST and dynamic application of the appropriate test vector subset.
Figure 10. TSCG(t) when inputs have the same probability of occurrence but part of the inputs are not available: (1) all input values are available, (5) a case of 90% input values available, and (7) a case of 80% input values available.
Figure 11. TSCG(t) when inputs do not have the same probability of occurrence and part of the inputs are not available: (1) all input values are available, (5) a case of 90% input values available, and (7) a case of 80% input values available.
Figure 12. TSCG(t) under low-power mode (active for 10% of the total operating time) and when inputs do not have the same probability of occurrence and part of the inputs are not available: (1) all input values are available, (5) a case of 90% input values available, and (7) a case of 80% input values available.
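As background for the fault table in Table 1, the behaviour depicted in Figure 3 can be modelled with the textbook two-rail checker equations: two complementary input pairs produce an output pair that is complementary exactly when both inputs are codewords. The Python sketch below is an illustrative behavioural model only; it does not reproduce the gate-level structure of Figure 3.

```python
# Behavioural model of a two-rail checker (textbook equations;
# an illustration, not the gate-level netlist of Figure 3).
def two_rail_checker(x0, x0n, x1, x1n):
    """Return the output pair (z, zn) for input pairs (x0, x0n) and (x1, x1n)."""
    z  = (x0 & x1) | (x0n & x1n)
    zn = (x0 & x1n) | (x0n & x1)
    return z, zn

# The four codeword inputs (cf. Table 1) yield complementary outputs.
for x0, x1 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    z, zn = two_rail_checker(x0, 1 - x0, x1, 1 - x1)
    assert z != zn

# A non-codeword input (here x0 = x0n = 1) is flagged by equal outputs.
print(two_rail_checker(1, 1, 1, 0))  # -> (1, 1)
```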
Table 1. The Ti values of the TSC two-rail checker’s nodes.

i    fi        (x0, x0*, x1, x1*)    Ti
1    a s-a-0   1001                  0.25
2    a s-a-1   0101                  0.25
3    b s-a-0   1001                  0.25
4    b s-a-1   1010                  0.25
5    c s-a-0   0110                  0.25
6    c s-a-1   1010                  0.25
7    d s-a-0   0110                  0.25
8    d s-a-1   0101                  0.25
9    e s-a-0   1010                  0.25
10   e s-a-1   0110                  0.25
11   f s-a-0   1010                  0.25
12   f s-a-1   1001                  0.25
13   g s-a-0   0101                  0.25
14   g s-a-1   1001                  0.25
15   h s-a-0   0101                  0.25
16   h s-a-1   0110                  0.25
17   i s-a-0   1001                  0.25
18   i s-a-1   0101, 1010            0.50
19   j s-a-0   0110                  0.25
20   j s-a-1   0101, 1010            0.50
21   k s-a-0   1010                  0.25
22   k s-a-1   0110, 1001            0.50
23   l s-a-0   0101                  0.25
24   l s-a-1   0110, 1001            0.50
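Read as data, every Ti in Table 1 appears consistent with the fraction of the checker’s four codeword inputs that detect the corresponding stuck-at fault (one detecting codeword gives 0.25, two give 0.50). A minimal Python sketch of that bookkeeping, using two entries copied from the table, is shown below.

```python
# Ti as the fraction of the four two-rail codewords (x0, x0*, x1, x1*)
# that expose a given stuck-at fault; detecting vectors taken from Table 1.
CODEWORDS = {"0101", "0110", "1001", "1010"}

def t_i(detecting_vectors):
    """Fraction of codeword inputs that detect the fault."""
    return len(set(detecting_vectors) & CODEWORDS) / len(CODEWORDS)

print(t_i(["1001"]))          # entry i = 1  (a s-a-0) -> 0.25
print(t_i(["0101", "1010"]))  # entry i = 18 (i s-a-1) -> 0.5
```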
Table 2. The main characteristics of the CoSafe processing core.

Feature              Measure
Core                 8-bit Data Bus, 16-bit Address Bus, 256 Direct Address Peripherals, Built-In Signature Analysis, Watchdog, Wallace Multiplier, Barrel Shifter
Area                 38,000 eq. gates (27 mm², AMS 0.6 µm)
Power consumption    7.5 mW/MHz (upper bound)
Table 3. Effects of the technique’s application (power and area normalized to the initial design).

                                 Power     Area    TSCG(t)
In low-power mode                100%      100%    0.55
After technique’s application    102.8%    101%    0.98
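Expressed as relative figures, Table 3 amounts to a small overhead for a large gain in the TSC goal probability; the trivial arithmetic below (numbers copied from the table) makes this explicit.

```python
# Relative overheads and TSCG(t) gain implied by Table 3
# (power and area are normalized to the initial design).
power_overhead = 102.8 / 100.0 - 1.0                  # +2.8% power
area_overhead = 101.0 / 100.0 - 1.0                   # +1.0% area
tscg_before, tscg_after = 0.55, 0.98
tscg_gain = (tscg_after - tscg_before) / tscg_before  # ~78% relative increase
print(f"power +{power_overhead:.1%}, area +{area_overhead:.1%}, "
      f"TSCG(t) +{tscg_gain:.0%} (from {tscg_before} to {tscg_after})")
```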
Table 4. Concurrent test latency for some of the ISCAS’85 benchmarks [20].

Circuit    Calculated CTL (Clock Cycles)    CTL from Simulation Experiments (Clock Cycles)
c432       302,018,534,783                  301,103,573,926
c499       9,936,975,273,540                9,849,891,000,171
c880       5,253,761,695,546,079,200        5,377,187,101,228,100,552
c1355      11,077,284,205,448               11,007,534,645,164
c1908      45,828,246,009                   46,053,645,106
c3540      6,257,069,071,261,484            6,333,072,161,372,774
c6288      16,867,071,178                   16,772,121,199
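As a quick sanity check on Table 4, the calculated CTL tracks the simulated CTL within a few percent for every benchmark; the short Python computation below (values copied from the table) prints the relative deviation per circuit.

```python
# Relative deviation between calculated and simulated CTL (Table 4).
ctl = {
    "c432":  (302_018_534_783, 301_103_573_926),
    "c499":  (9_936_975_273_540, 9_849_891_000_171),
    "c880":  (5_253_761_695_546_079_200, 5_377_187_101_228_100_552),
    "c1355": (11_077_284_205_448, 11_007_534_645_164),
    "c1908": (45_828_246_009, 46_053_645_106),
    "c3540": (6_257_069_071_261_484, 6_333_072_161_372_774),
    "c6288": (16_867_071_178, 16_772_121_199),
}
for circuit, (calculated, simulated) in ctl.items():
    deviation = 100.0 * (simulated - calculated) / calculated
    print(f"{circuit}: {deviation:+.2f}%")  # all values within roughly +/-2.5%
```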
Table 5. CTL under three scenarios using the MICSET and after the application of the proposed adaptive BIST.

Circuit    MICSET Scenarios 1–3 (Clock Cycles)    This Work (Clock Cycles)
c432       302,995,319,924                        302,021,554,998
c499       9,994,020,475,032                      9,937,074,643,293
c1355      11,121,012,475,289                     11,077,394,978,290
c1908      46,023,094,881                         45,828,704,291
c6288      16,879,799,668                         16,867,239,849
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
