Article

Aggressive Exclusion of Scan Flip-Flops from Compression Architecture for Better Coverage and Reduced TDV: A Hybrid Approach

by
Pralhadrao V. Shantagiri
* and
Rohit Kapur
Department of Computer Science, Jain University, #1/1-1, Atria Towers Palace Road, Bangalore, Karnataka 560 001, India
*
Author to whom correspondence should be addressed.
J. Low Power Electron. Appl. 2019, 9(2), 18; https://doi.org/10.3390/jlpea9020018
Submission received: 11 April 2019 / Revised: 6 May 2019 / Accepted: 15 May 2019 / Published: 29 May 2019

Abstract

Scan-based structural testing methods have seen numerous inventions in scan compression techniques to reduce TDV (test data volume) and TAT (test application time). Compression techniques lead to test coverage (TC) loss and test pattern count (TPC) inflation when a higher compression ratio is targeted. This happens because of the correlation issues introduced by these techniques. To overcome this issue, we propose a new hybrid scan compression technique, the aggressive exclusion (AE) of scan cells from compression, to increase overall TC and reduce TPC. This is achieved by excluding scan cells which contribute 12% to 43% of the overall care bits from the compression architecture and placing them in multiple scan chains with dedicated scan-data-in and scan-data-out ports. The selection of scan cells to be excluded from the compression technique is based on a detailed analysis of the last 95% of the patterns in a pattern set to reduce correlations. Results show improvements in TC of up to 1.33% and reductions in TPC of up to 77.13%.

1. Introduction

Scan compression technology is improving as technology nodes shrink from the µm scale to 5 nm and continue to shrink, allowing more logic to be packed into a design. This has led to the introduction of new faults which are specific to the process technology, and to the need for methodologies to detect them. The cost of IC (integrated circuit) structural testing is increasing; it is the most significant part of overall manufacturing costs. Over the last few decades, many DFT (design for test) techniques, including scan compression schemes, have been developed. These are used in industry to run structural tests that detect manufacturing defects and verify the structural correctness of the DUT (design under test). The basic idea behind structural tests is to ensure that the combinational and sequential logic present in the design functions as expected, i.e., without any defects. Scan-based techniques are required to reduce TAT, TDV, and the cost of IC testing. The reduction in TDV helps to lower the number of test data bits stored on the ATE (automatic test equipment).
In scan mode, all scan cells of the chains need to have input test patterns loaded into them before the patterns are applied to the circuit to detect the targeted faults. TAT increases in scan mode because of the serial shifting of test data into the scan chain. Scan mode also increases power consumption, as many scan cells toggle simultaneously. The scan architecture is shown in Figure 1a. To overcome the drawbacks of scan mode testing, scan chain partitioning has been proposed; one such method is described in [1]. This is a framework based on multiple scan paths [2,3], each having a unique scan-data-input and scan-data-output. This architecture is shown in Figure 1b. The disadvantage of this method is its need for many scan-data-input and scan-data-output ports, which has led to further developments in scan-based testing and to numerous scan compression techniques of different types. The various scan compression techniques researched over the last few decades are reviewed in Section 2. Today, scan compression is the most popular structural fault testing method in IC testing. The general scan compression architecture [2,3] is shown in Figure 1c.
Scan compression techniques reduce TDV and TAT but lack the ability to detect certain faults which are detectable in scan mode. This is because of the ATPG constraint arising from the correlation of scan cell values. The correlation across scan cells is introduced by the scan compression architecture when the same ATE channel broadcasts test data into multiple internal scan chains. Because of this correlation, the overall TC is reduced. A few of the TC reduction cases introduced by the compression scheme are described in detail in Section 3. Maintaining high TC is very important in IC testing: the higher the TC, the more defect-free the IC and the safer it is for use in automotive, medical, aeronautical, and other safety-critical devices.
Both scan mode and scan compression mode have advantages and disadvantages. Consider scan mode, in which all scan cells are controllable and observable. Scan mode introduces no correlation and therefore achieves better test coverage. However, more TAT is needed, as the scan chain in scan mode is longer and requires more shift cycles per pattern; the number of shift cycles equals the total number of scan cells present in the scan chain. In contrast, in scan compression mode, the scan chain is partitioned into many internal scan chains and test data is fed into these internal scan chains through a few scan-data-in ports. This reduces the number of shift cycles per pattern and helps to reduce the overall TAT. These factors help to reduce the cost of IC testing, but scan compression introduces correlation across scan cells in the internal scan chains which is not present in full scan mode. This affects the TC and increases the TPC when the compression ratio is increased. Hence, to take advantage of both scan mode and scan compression mode, a hybrid approach, i.e., a mixture of both, is needed to reduce TPC at the same test coverage. This motivated the research into the proposed method.
The method proposed in this paper is referred to as ‘aggressive exclusion of scan cells from compression’, subsequently ‘AE’. This method helps to reduce the correlation among free variables and the level of spread (fan-out cone) of the scan cell output. We analyzed scan load test patterns: the initial 5% of patterns achieve 56–86% of the overall TC, and the last 95% of patterns achieve the remaining 18–39.17%. Hence, we considered the last 95% of the test patterns for our analysis. Our research focuses on these patterns to analyze correlation and TC loss issues in order to reduce TPC and increase TC. There are scan cells which need to have a specified value in most of the scan load test patterns. The main focus of the AE method is to increase TC and reduce TPC by detecting faults which are not detectable using the compression technique. This is essential in automotive, medical, avionics, and safety-critical devices which use ICs. The goal of the AE method is to increase TC and reduce TPC by excluding the scan cells which contribute to higher correlations in the compression architecture and stitching them into separate scan chains with dedicated scan-data-input and scan-data-output ports. The AE method is a hybrid compression architecture which combines the advantages of both scan and scan compression. The AE method architecture is shown in Figure 1d.
This paper is structured as follows: Section 2 details the background of scan compression technology including different types of compression technologies. Section 3 provides different TC loss and TPC inflation cases introduced by the scan compression technique. Section 4 depicts the proposed ‘AE Method’ in detail and presents a flow chart describing pre- and post-AE method results. Section 5 details the algorithm developed in the AE method to exclude scan cells from the compression technique. Section 6 presents the procedure of the AE Method. Section 7 presents the results and Section 8 presents a conclusion of the AE method.

2. Background of Scan Compression Technology

Various scan-based techniques have been proposed in the literature by different researchers over the past few decades. Structural scan-based testing methods can be classified into the following categories:

2.1. Partial Scan

In partial scan design, some percentage of flip-flops are converted into scan flip-flops, with the rest remaining as non-scan cells. This methodology uses sequential ATPG, as many non-scan cells need one or more capture cycles to detect the faults. Numerous techniques have been proposed for selecting the subset of flip-flops to be converted into scan cells. The merits of partial scan are reduced area overhead and improved performance; however, it also reduces the achievable fault coverage because of the limited controllability and observability available with only a subset of scan cells. The increased sequential depth in partial scan contributes to increased TAT. Various categories of partial scan techniques have been developed, namely testability analysis [2], test pattern generation [2,3,4,5], structural analysis [2,3,4,5], hybrid approaches [2,3,4,5,6,7], layout driven [8], timing and retiming driven [9,10], and scan cell ordering or reset sequence based [11].

2.2. Fully Scan-Based Techniques

2.2.1. Random Access

In a random access scan [2], each scan cell is directly controlled and observed, as a unique address is assigned to it through a decoder circuit. This is similar to the way in which a memory location is accessed by its address. Every scan cell can be controlled and observed directly, and there is no need to shift the test load data into all the scan cells of the chain. Hence, random access scan leads to reduced TDV and TAT. This method suffers from increased area overhead due to the large address decoder and the routing of scan data input to each scan cell. Additional logic and routing are also needed to observe the actual output of each scan cell. Hence, it has not been adopted by industry.

2.2.2. Code-Based Techniques

Code-based techniques [2] partition the scan load test pattern into subsets and assign a code to each partition, so that the overall TDV is reduced. These techniques achieve good scan compression. The decompressor receives the coded words from the ATE, decodes them, and supplies them to the internal scan chains. These techniques are not employed by industry, as they require increased TAT, area overhead, and complex control logic. They are also weak at exploiting correlations in test patterns. Fixed-to-fixed, fixed-to-variable, variable-to-fixed, and variable-to-variable are the types of code-based compression techniques [2].
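As a minimal illustration of the code-based idea, the Python sketch below applies a simple run-length code to a scan load vector, replacing runs of identical symbols with (symbol, run-length) pairs; it is a simplified stand-in for the schemes surveyed in [2], not an exact reproduction of any of them, and the vector used is hypothetical.

# Illustrative run-length coding of a scan load vector ('0'/'1'/'X').
# A simplified stand-in for the code-based schemes surveyed in [2].

def rle_encode(test_vector):
    """Return (symbol, run_length) pairs for the given vector."""
    encoded = []
    run_sym, run_len = test_vector[0], 1
    for sym in test_vector[1:]:
        if sym == run_sym:
            run_len += 1
        else:
            encoded.append((run_sym, run_len))
            run_sym, run_len = sym, 1
    encoded.append((run_sym, run_len))
    return encoded

def rle_decode(encoded):
    return "".join(sym * run_len for sym, run_len in encoded)

vector = "0000XXXXXXXX1111XXXX"          # long runs of don't-care bits compress well
code = rle_encode(vector)                # [('0', 4), ('X', 8), ('1', 4), ('X', 4)]
assert rle_decode(code) == vector

Vectors with long runs of identical or don't-care bits compress well under such codes, which is why the achievable compression depends heavily on the structure of the test patterns.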

2.2.3. Linear Decompressor Based Techniques

In this technique, test data is supplied by the ATE to combinational circuits, sequential circuits, or a combination of both, which in turn transfer the data into the internal scan chains. These networks act as expanding networks. These techniques are good at exploiting unknown values in scan load test patterns. The decompressors can be expressed in the form of a set of linear equations, and the computation can be done rapidly. The disadvantage of these techniques is the need for changes in the ATPG process. For example, suppose that the tester supplies input test data through m-bit chains to a linear decompressor. The linear decompressor expands these data bits into an n-bit internal scan channel (the output sub-space of the decompressor), such that n ≥ m.
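For illustration, a combinational linear decompressor can be modeled as a binary matrix multiplication over GF(2): each internal chain input is an XOR combination of the m ATE channel bits. The Python sketch below uses an arbitrary example matrix (an assumption for illustration, not a decompressor taken from [2,3,12]) to expand m = 2 tester bits into n = 4 internal chain bits.

# Sketch: a combinational linear (XOR-network) decompressor modeled as a
# GF(2) matrix. The expansion matrix is an arbitrary illustrative example.

M_EXPAND = [        # n x m binary matrix: row k lists which ATE bits feed chain k
    [1, 0],         # chain 0 = ate[0]
    [0, 1],         # chain 1 = ate[1]
    [1, 1],         # chain 2 = ate[0] XOR ate[1]
    [1, 0],         # chain 3 = ate[0]  (always equal to chain 0: a correlation)
]

def decompress_slice(ate_bits):
    """Expand one m-bit ATE slice into an n-bit internal scan slice over GF(2)."""
    return [sum(a * b for a, b in zip(row, ate_bits)) % 2 for row in M_EXPAND]

print(decompress_slice([1, 0]))   # [1, 0, 1, 1] -> chains 0 and 3 can never differ

Because chains 0 and 3 share the same linear combination, their scan cells can never be loaded with different values; this is the kind of correlation discussed further in Section 3.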
There are two types of linear decompressor, namely:
(a)
Linear combinational decompressor: The linear combinational decompressor is made up of combinational logic, which spreads the input test data received from the ATE into the output space of the decompressor. The output space of the decompressor is connected to many internal scan chains. An XOR network [2] is a commonly used circuit in linear combinational decompressors [2,3,12].
(b)
Linear sequential decompressor: These decompressors are constructed using LFSRs, ring generators, or cellular automata [2,3]. There are two types of sequential decompressors.
(1)
Fixed length sequential decompressors: Fixed length sequential decompressor techniques make use of the previous state value of the scan slice to calculate new values in the current clock cycle. Linear combinational decompressors struggle to handle scan load test patterns with many specified values in the worst-case scenarios; this is handled better by sequential decompressors, which achieve better compression. Sequential decompressor performance increases with an increased number of sequential cells. This compression technique feeds test load patterns from the ATE to the sequential logic (an LFSR, cellular automata, or ring generator) and then distributes the data to internal scan chains through combinational logic such as an XOR network. This compression technique suffers from encoding efficiency issues when more care bits are specified in the scan load test set [2,3].
(2)
Variable length sequential decompressor: This scan compression technique makes it possible to vary the number of free variables used for each test cube. The method provides improved encoding efficiency but comes at an increased cost. The disadvantage of this technique is the need for a gating channel. Various variable length linear sequential decompressors have been proposed [2,3].
(3)
Hybrid (both linear and non-linear) decompressor: These scan compression techniques exploit the don’t-care bits and the correlation in scan load test patterns. Hence, this technique achieves better compression compared to linear sequential and combinational decompressors [2,3].

2.2.4. Broadcasting Based Techniques

This popular class of scan compression techniques broadcasts the same load test data into many internal scan chains through a decompressor. These techniques are independent of patterns and insert internal scan chains between the decompressor and the compressor. They reduce TAT and TDV to a large extent compared to other compression techniques, have low routing and area overhead, and have been adopted by industry. However, they introduce structural dependencies in the internal scan chains, leading to a loss of TC. A great deal of research has been carried out in this area. The following are the categories of broadcast-based scan compression techniques.
(c)
General (without reconfiguration) broadcasting-based techniques: The scan compression technique proposed in [13] supplies scan load test patterns through a single dedicated scan-data-input to multiple connected scan chains. Compared to scan mode, this technique reduces TDV and TAT, but suffers from the correlation of specified values in scan cells. The technique proposed in [14] shares scan-data-in among multiple circuits to supply scan load test patterns. Various other scan compression techniques in this category are proposed in [2,3,4,15,16,17].
(d)
Broadcasting techniques with static reconfiguration: Static reconfiguration-based broadcasting decompressor techniques use sequential and combinational logic in the decompressor with a reconfiguration capability to handle correlation among scan cells in the scan chains. These techniques have been adopted and are currently used in industry. In this approach, the reconfiguration of scan chains takes place when a new scan load test pattern is applied. The Illinois scan dual-mode architecture reduces the length of the scan chain in shared scan-data-in mode and reduces TAT [18,19].
(e)
Broadcasting techniques with dynamic reconfiguration: In this scan compression technique, the selection of different configurations of scan chains happens while loading scan load test patterns. This feature of broadcast scan compression leads to better compression and reduced TAT and TDV. The disadvantage of these techniques is a greater need for control information.
(1)
Broadcasting techniques with streaming dynamic reconfiguration: The streaming decompressor-based broadcasting scan compression [20] supports reconfiguration of the scan compression for a better compression ratio and reduced TAT and TDV. In this technique, data is fed from the ATE to the decompressor on each clock cycle, and the same data is streamed into the internal scan chains in a diagonal fashion; Figure 2a shows the architecture. Figure 3a shows the diagonal correlation introduced in the streaming compression technique. Our proposed AE method is validated using a scan synthesis technique; details are provided in the subsequent sections.
(2)
Broadcasting techniques with non-streaming dynamic reconfiguration: In these scan compression techniques, scan load test data is loaded from the ATE into the shift register of the decompressor. The data is then shifted into each scan slice of the scan chain on a per-clock-cycle basis. These compression techniques have a horizontal dependency, as shown in Figure 3b. They have been adopted by industry and provide better compression, TAT, and TDV, but suffer from compression-induced correlation of scan cells. See [21,22,23,24,25] for more details.
(f)
Broadcasting techniques with pattern overlapping: The idea behind pattern overlapping is to identify the overlap between the current scan load test pattern, already generated by the ATPG, and the next one. The overlap is identified by shifting out the non-overlapping beginning bits and finding bits at the end of the current scan load test set that overlap with the next scan load test set; in this way, a new pattern is created. This technique is good at achieving higher compression ratios and TDV reduction. A statistical analysis-based pattern overlapping method for scan architecture was proposed in [26]. A pattern overlapping-based technique to reduce TDV and TAT was proposed in [27]; dynamic structures are used to store the test data by encoding sparse test vectors. A pattern overlapping technique which works by exploiting the unknown values in scan load test patterns is proposed in [28]. A deterministic pattern slice overlapping technique based on LFSR reseeding is proposed in [29] to reduce TDV.
(g)
Broadcasting techniques with hybrid approach
(1)
Broadcasting and pattern overlapping: This hybrid technique takes advantage of both the broadcasting and the pattern overlapping scan compression techniques. These techniques achieve better compression ratios and TDV reduction. Their TAT is large, as they need to examine the next pattern against the current one to identify the overlapping portion. A hybrid approach combining broadcast-based scan architecture with pattern overlapping is proposed in [30]. A broadcast-based pattern overlapping technique was proposed in [31]; it is claimed to reduce TAT at the cost of increased TDV.
(2)
Broadcasting techniques with a mixture of both compression and scan mode: A hybrid approach combining scan and compression to improve pattern count in the presence of unknowns in scan unload patterns is proposed in [32]. This significantly reduces the pattern count. The proposed AE method, in contrast, improves TC and reduces TPC based on an analysis of scan load patterns.
(h)
Broadcasting techniques with circular scan architecture: In this architecture [33,34,35], the first scan load test pattern is loaded into all internal scan chains. Each scan channel output is connected back to the input of the same channel. These chains have the option of getting scan load data either from the ATE or by shifting in the capture response of the current pattern as the next scan load test pattern for the chain. The disadvantage of these techniques is that the pattern being circulated is non-deterministic.
(i)
Broadcasting techniques with tree-based architecture: Scan tree-based compression techniques are based on compatible scan cells, considering the ATPG-generated test patterns. The success of scan tree-based techniques depends on the presence of compatible sets of scan cells; these techniques are not feasible for highly compacted scan test patterns. A scan tree-based compression algorithm which is able to handle scan cells having little or no compatibility in the given scan test patterns is proposed in [36]. A dynamically configurable dual-mode scan tree-based compression architecture, which works in both scan mode and scan tree mode, is proposed in [37]. A tree-based LFSR which exploits the merits of both input scan-data-in sharing and re-use methodology is presented in [38] to test both sequential and combinational circuits.

3. Coverage Reduction Cases Because of Scan Compression

The proposed AE method is validated using the scan synthesis tool-based scan compression technique. TC reduction happens in the scan compression technique because each data bit of the decompressor register distributes the same test data into multiple internal scan chains connected between the codec. Figure 3a,b show the streaming and non-streaming decompressors with data being shifted into the internal scan chains through the decompressor register; the free variables show the correlation in the scan cells. Correlation among the scan cells of internal scan chains exists because many internal scan chains are driven by the same data bit. Different compression techniques have different correlation issues. In the streaming compression technique, the diagonal correlation introduced by the compression scheme causes TC loss and TPC inflation when a high compression ratio is targeted. In the case of the sequential (non-streaming) decompressor, scan slice correlation (horizontal correlation) causes TC loss, as the scan cells of the slice are structurally dependent. Both scenarios are depicted in Figure 3a,b.
The limited load modes help to reduce the correlation issues to some extent, but not completely. The hardware which is part of the decompressor decides the decoding of the mode bits and the choice of the internal scan chain group; each mode represents a different group of scan chains chosen to shift in data from the decompressor serial register. This still leads to loss of TC and TPC inflation.
The reduced TC in scan compression has a direct influence on DPM (defects per million), i.e., test escapes that are delivered to users, and thus impacts yield. Hence, to overcome this issue, a compression technique is needed which increases TC by detecting additional faults and reduces TPC.
ATPG generates test patterns for all detectable faults in the DUT; this set of patterns is called the pattern set. Each pattern generated by the ATPG contains values of logic-0, logic-1, and logic-X. A logic-0 or logic-1 value is called a ‘care bit’, and the number of such bits present in the pattern set is the number of care bits. The percentage of care bits present in the pattern set is called the ‘care bit density’. The calculation of the number of care bits and the percentage of care bits is shown in Equations (1) and (3), respectively. The care bit to be loaded into each scan cell is decided by the ATPG tool to detect the targeted faults from the fault list.
Test sets generated for scan mode comprise bits having logic-0, logic-1, and logic-X values. The number of care bits present in the scan load test pattern set is calculated as
cb = Σ_{i=1..r} Σ_{j=1..c} b_ij,  where b_ij = 1 if a_ij = 1 or a_ij = 0, and b_ij = 0 otherwise  (1)
where ‘a’ is the decompressor output space matrix of size ‘r × c’ holding the whole test set. The total number of bits present in the whole test set is calculated as
tb = r × c  (2)
where ‘r’ is the number of test patterns present in the test set and ‘c’ is the length of a pattern. The percentage of care bits present in the test set generated for the DUT is calculated as
pcb = (100 × cb) ÷ tb  (3)
where ‘cb’ and ’tb’ are shown in Equations (1) and (2) respectively.
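As a small sketch of Equations (1)–(3), the following Python fragment counts the care bits in a pattern set represented as strings of '0', '1', and 'X'; the patterns themselves are hypothetical.

# Sketch of Equations (1)-(3): care bits (cb), total bits (tb) and
# care bit density (pcb) for a hypothetical pattern set of '0'/'1'/'X' strings.

patterns = ["01XX1X", "X10XXX", "XXXX0X"]              # r = 3 patterns, c = 6 bits each

cb = sum(bit in "01" for p in patterns for bit in p)   # Equation (1)
tb = len(patterns) * len(patterns[0])                  # Equation (2): tb = r x c
pcb = (100 * cb) / tb                                  # Equation (3)

print(cb, tb, round(pcb, 2))                           # 6 18 33.33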
The number of care bits generated by the ATPG tool to detect faults varies from scan cell to scan cell. Some scan cells need to have specified values most of the time, others only a few times or not at all; that is, some scan cells always carry logic-X. Scan cells that drive a wide fan-out combinational logic cone, or shared wide combinational logic, need to be specified in many patterns. If such scan cells are driven by the same ATE channel, it becomes difficult for the ATPG tool to load different desired values when required to detect the faults. Figure 4 shows the fan-out cone which depicts this scenario. The number of load modes is limited, and it is not possible to load the desired values into the necessary scan cells of a chain because of this dependency.
As shown in Figure 4a, FF1, FF2, FF3, FF4, FF5, and FF6 drive the combinational fan-out cones. These are overlapping cones, and the cone areas have been numbered from 1 to 15. Areas 1 to 6 have no correlation, and it is easy to detect the faults present in these areas, whereas the combinational logic cones numbered 7 to 15 have scan cell correlation. To detect the faults present in area 13, the scan cells FF4, FF5, and FF6 need to have the desired specified values. Therefore, these cells will have specified values in most of the patterns, depending on the number of faults that exist in area 13. Here, FF5 has a wide fan-out cone and needs to have a desired value most of the time to detect the faults in it. If those scan cells fan out from the same data bit of the serial register of the decompressor, then most of the faults present in areas 1 to 15 could not be detected, which leads to loss of TC and pattern inflation. Figure 4b shows the fan-out of data bits D0 and D1. FF1, FF2, and FF3 will have the same ATPG desired value in a load mode. FF4, FF5, and FF6 are connected to D1 as its fan-out, so it is not possible to have different desired values in these scan cells; FF4, FF5, and FF6 have the same value in a load mode. This creates structural correlation and leads to pattern inflation and loss of TC.
The conflict introduced by the decompressor has two important properties which lead to TC loss and TPC inflation. These are:
(1)
The fault being detected must have structural correlation on two or more scan cells.
(2)
These two or more scan cells must be present in different scan chains and fan-out from the same data bit/ATE channel.
The diagonal scan slice in the streaming decompressor and the horizontal scan slice in the non-streaming decompressor will have two or more scan chains correlated to each other, as the free variables are shifted from the same data bit/ATE channel.
The number of diagonal correlations, ‘Ndc’ (streaming decompressor), depends on the length of the longest internal scan chain connected between the codec and on the number of internal scan chains. Let ‘L’ be the length of the longest internal scan chain and ‘M’ be the number of internal scan chains.
Ndc = L + M − 3  (4)
Nff = (L × M) − 2  (5)
where ‘Nff’ is the number of scan cells in correlation.
The number of horizontal correlations present in the non-streaming decompressor based architecture is calculated as
Nhc = L  (6)
Nffh = L × M  (7)
where ‘Nhc’ is horizontal correlation and ‘Nffh’ is the number of scan cells in horizontal correlation.
In the diagonal correlation of each mode, at least two scan cells are not in correlation, whereas in the horizontal correlation all scan cells are in correlation.
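A short Python sketch of Equations (4)–(7), computing the number of correlated slices and the number of scan cells involved for given values of ‘L’ and ‘M’, is given below; the numeric values used are illustrative only.

# Sketch of Equations (4)-(7): correlation counts for streaming (diagonal)
# and non-streaming (horizontal) broadcast decompressors.

def diagonal_correlations(L, M):
    ndc = L + M - 3        # Equation (4): number of diagonal correlations
    nff = (L * M) - 2      # Equation (5): scan cells involved in correlation
    return ndc, nff

def horizontal_correlations(L, M):
    nhc = L                # Equation (6): one correlated slice per shift cycle
    nffh = L * M           # Equation (7): all scan cells are correlated
    return nhc, nffh

L, M = 10, 8               # illustrative chain length and chain count
print(diagonal_correlations(L, M))    # (15, 78)
print(horizontal_correlations(L, M))  # (10, 80)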
Figure 5 shows a coverage loss case in both the streaming and non-streaming decompressor broadcasting architectures. Here, the correlated scan cells SFF31, SFF22, and SFF13 will always hold the same value, either logic-0 or logic-1, because of the broadcast correlation. Hence, the ‘G2’ XOR gate will always produce logic-0 as its output, and a stuck-at-0 fault on the output of ‘G2’ cannot be detected. The fault on ‘G1’ is also difficult to detect, as the gate always produces logic-0; hence, a stuck-at-0 fault on gate ‘G1’ cannot be detected. The reason is the structural correlation introduced by the compression technique. Such faults can be detected in scan mode, as there is no such structural correlation. Similarly, faults in the fan-out cone are also difficult to detect because of structural correlation. Figure 5 shows the fan-out cone correlation and the coverage loss issues that occur in scan compression; the coverage loss is seen because of the fan-out cone correlation. The AE method is proposed to overcome such coverage loss issues.
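This effect can be checked with a short enumeration. The Python sketch below assumes a two-input XOR fed by two scan cells that the broadcast forces to the same value; it is a simplified illustration of the mechanism, not the exact netlist of Figure 5.

# Sketch: a stuck-at-0 fault on an XOR gate fed by two broadcast-correlated
# scan cells cannot be excited, because the cells always hold the same value.
# The two-input XOR is an assumed simplification of gate G2 in Figure 5.

from itertools import product

def reachable_xor_outputs(correlated):
    outputs = set()
    for a, b in product([0, 1], repeat=2):
        if correlated and a != b:
            continue              # broadcast forces both scan cells to the same value
        outputs.add(a ^ b)        # fault-free output of the XOR gate
    return outputs

print(reachable_xor_outputs(correlated=False))   # {0, 1}: stuck-at-0 can be excited
print(reachable_xor_outputs(correlated=True))    # {0}: stuck-at-0 is undetectable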

Sparseness in Output Space of the Decompressor

When the load test data is loaded into the internal scan chains, the content of the scan cells of the internal scan chains looks as shown in Figure 6. These scan chains and cells represent a matrix having logic-0, logic-1, and logic-X values before the logic-X values are replaced with care bits. The sparseness of this matrix is calculated based on the specified values in each scan cell. This decompressor output space (matrix) can be sparse or dense in specified values; a dense output space indicates the possibility of pattern inflation and loss of TC. This can be linked to the care bit density. Figure 6 shows the output space of the decompressor with sample values loaded into all the scan cells; the diagonal correlation of values for one pattern can be seen. The AE method increases the sparseness of the care bits in this space by pulling out the scan cells that need to be specified in most of the test patterns. The representative AE method is shown in Figure 7. The sparseness of the care bits, ‘scb’, in the output space of the decompressor and the sparse density, ‘sd’, are defined as follows:
scb = cb ÷ tb  (8)
sd = 100 − scb  (9)
where ‘cb’ and ‘tb’ are from Equations (1) and (2) respectively.
The AE method reduces the sparseness and sparse density of the output space of the decompressor in the compression technique.

4. Proposed AE Method

4.1. Scan Cells Exclusion and Care Bits Density

The AE method excludes a small percentage of scan cells from the compression architecture and places them in multiple external chains. Figure 8 shows the percentage of cells being moved out of the compression technique. Figure 9 shows that these scan cells contribute 12% to 43% of the overall care bit density of the DUT; the AE method performs better in this range. If the care bit density of the excluded scan cells increases above 43% of the overall care bit density, it takes away the advantage achieved by the AE method.
The number of scan cells to be excluded ‘Ce’ from the compression technique is calculated as
Ce = L × N  (10)
where ‘L’ is the length of external chain and ‘N’ is the number of external chains being created by the AE method.
The ‘L’ is calculated as
L = Lich × Sr  (11)
where ‘Lich’ is the length of the longest internal scan chain between the codec and ‘Sr’ is the length of the decompressor serial register.
The AE method with the sparseness of specified bits is shown in Figure 7, and Figure 2b shows the general AE method architecture. The number of external chains is calculated as
Nech = Chcs / 2  (12)
where ‘Nech’ represents the number of external scan chains created in the AE method and ‘Chcs’ represents the number of ATE channels allocated to the compression technique. The number of external chains created is half of the total scan-data-in budget of the compression technique. This increases the sparseness of care bits in the output space of the decompressor and leads to improved TC and reduced TPC. The remaining ATE channels are allotted to the compression technique in the AE method, so the overall scan-data-in and scan-data-out port budget remains the same.
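A brief Python sketch of Equations (10) and (12) with illustrative numbers is given below; the external chain length is assumed to have already been computed per Equation (11).

# Sketch of Equations (10) and (12): how many external chains the AE method
# creates and how many scan cells it excludes. All numbers are illustrative.

def ae_sizing(ext_chain_len, ate_channels_cs):
    n_ext = ate_channels_cs // 2        # Equation (12): half the scan-data-in budget
    ce = ext_chain_len * n_ext          # Equation (10): total excluded scan cells
    return n_ext, ce

n_ext, ce = ae_sizing(ext_chain_len=124, ate_channels_cs=16)
print(n_ext, ce)                        # 8 external chains, 992 excluded scan cells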

4.2. AE Method

In this work, we analyzed the scan load test patterns generated for different circuits with different scan synthesis configurations. Generally, the first 5% of the test patterns detect most of the easy-to-detect faults, including faults that can be detected using random test patterns as well as deterministic patterns. Hence, we notice that the first 5% of patterns contain more specified care bits. This first 5% of the patterns detects around 56% to 86% of the total detectable fault set generated by the ATPG tool, which means the remaining 18% to 39.17% of faults are detected by the last 95% of the patterns. Table 1 shows the percentage of TC achieved with the first 5% of the TPC and with the last 95% of the TPC. Hence, we consider the remaining 95% of the patterns for our analysis to increase the overall TC and reduce the TPC. We identify the issues which cause coverage loss, namely the correlation introduced by the compression technique and the fan-out cones of the data bits of the serial register, and the AE method is designed to overcome them.
Compression techniques reduce TC to some extent: they do not detect certain faults because of the structural correlation they introduce, even though these faults are detectable in scan mode. Therefore, in the AE method, we identify such scan cells and exclude them from the compression architecture. The excluded scan cells are placed in external scan chains; the placement of these scan cells within the external chains is left to the scan synthesis tool, so it does not adversely affect the scan routing. Moreover, the excluded scan cells are few in number compared to the total number of scan cells present in the DUT. The excluded scan cells in external scan chains with dedicated scan-data-in and scan-data-out ports provide more controllability and observability without structural correlation. This helps to detect faults which are not detectable with the compression technique, and it provides better compressibility, as the care bits in the output space of the decompressor become sparser.
Once the TC reaches 90%, the ATPG needs to produce many patterns to achieve each additional fraction of a percent of test coverage. To detect the last few percent of faults, the ATPG produces most of the patterns in the pattern set, as shown in Table 1.
The proposed AE method not only improves the overall TC but also helps to reduce the TPC. The detailed procedure to select the scan cells to be excluded from the compression architecture is given in Section 5. The AE method shows that excluding scan cells which contribute 12% to 43% of the total care bit density leads to improved TC and reduced TPC.
The range of care bits contributed by the scan cells being excluded from the compression architecture is
min_cb = (cb × 12) ÷ 100,  max_cb = (cb × 43) ÷ 100  (13)
The scan cells which have more specified values in the first 5% of patterns may reside inside or outside the compression technique, as these patterns detect a large percentage of the faults; hence, such cells are retained in the compression technique. If the number of external chains increases beyond 50% of the scan-data-in/scan-data-out budget, it increases pattern inflation, reduces TC, and takes away the benefit achieved by the AE method. If the DUT has more scan cells with fan-out cones as shown in Figure 4, the percentage of care bits of the scan cells being excluded increases; our study shows that up to 43% is acceptable in most cases to take advantage of both scan mode and the scan compression technique. The detailed procedure to decide the number of scan cells to be excluded and to select them is described in Section 5, and the calculation of the length of each external scan chain is shown in Equation (11). The detailed steps to implement the AE method are given in Section 6.

4.3. Flow Chart Showing Execution Flow of the AE Method

The detailed flow of execution of scan compression and the AE method is shown in Figure 10. The flow of execution of scan synthesis is the same for both scan compression and the AE method; the only difference is the specification of the external scan chains in the AE method. The detailed procedure is depicted in Section 6, which has ‘PhaseI_FlowOfExecution()’ and ‘PhaseII_FlowOfExecution()’. The numbers written to the left of each box in Figure 10 indicate the step number. In the flow diagram, Step 1 and Step 2 are part of ‘PhaseI_FlowOfExecution()’, and Step 4 and Step 5 are part of ‘PhaseII_FlowOfExecution()’. The algorithm to extract the scan cells and include them in external scan chains is depicted in Section 5 and is shown as Step 3 of the flow diagram.

5. Algorithm for Aggressive Exclusion of Scan Cells from the Compression Architecture

Algorithm 1, ‘GetExtScanChains()’, takes a set of scan mode test patterns as input and analyzes the last 95% of the patterns, which contribute to the majority of the hard-to-detect targeted faults; the first 5% of patterns are therefore not considered in the analysis. Certain scan cells in these patterns have specified values of logic-0 and logic-1. The algorithm extracts, from the large number of scan cells present in the circuit, those scan cells which need specified values in most of the last 95% of the patterns and which contribute to pattern inflation. To reduce the correlation introduced by scan compression, we put such scan cells into multiple external scan chains, the number of which is equal to half of the total scan-data-in port budget. The algorithm returns a ‘Chains[]’ array which includes the scan cells for all the external scan chains to be created. The scan cells excluded from the compression architecture in the AE method contribute 12% to 43% of the total care bit density. The detailed steps to extract the scan cells are presented in the algorithm. This analysis helps to improve the TC and to reduce the TPC at the same coverage level. Therefore, the proposed method has both external chains and the remaining scan cells in the compression architecture, taking advantage of both scan and compression modes. The time complexity of this algorithm is around O(n) + O(n log n). The run time of this algorithm is small and varies with the design complexity and the size of the pattern set.
Algorithm 1: GetExtScanChains()
Inputs:
Ts—Set of scan load test stimulus
Si—Number of Scan-Data-In Ports assigned to scan compression technique
Output:
Chains[]—Array of external chains holding relevant scan cells excluded from compression technique
Let Np = SizeOf(Ts) // Number of scan load test stimulus
Let Skip5Per = Np × (5/100) // Skipping first 5% Patterns
Let C = 1
Let CB = 0 // Total care bit counter
Let SFF[1..Length[Ts[1]]] = 0 // Per-scan-cell care bit counters, initialized to zero
While (C ≤ Skip5Per) // Skip the first 5% of patterns
 Let C = C + 1
EndWhile
While (C ≤ Np)
 Let Tp = Ts[C]
 Let Len = Length[Tp]
 Let N = Len
 While (Len > 0)
  If (Tp[Len] == ‘0’ OR Tp[Len] == ‘1’) // considering care bit 0 or 1
   SFF[Len] = SFF[Len] + 1
   CB = CB + 1
  EndIF
  Let Len = Len − 1
 EndWhile
 Let C = C + 1
EndWhile
O_SFF[] = ORDER_IN_DESCENDING(SFF, N) // sorting scan cells in descending order based on specified value ranking of scan cell
Let LenExt = FindLenCh() // Finding length of the external scan chains
Let N_Ext = Si / 2 // Number of external scan chains equal to 50% of scan-data-in ports
Let Max = CB × (43/100) // maximum up to 43% of total cbd
Let Min = CB × (12 /100) // minimum 12% of total cbd
Let Chains[] = CreateExtChains(N_Ext, O_SFF, LenExt, Max, Min) // set of external scan chains
Return Chains[] // Set of external scan chains
End
Algorithm 1 calls ‘FindLenCh()’ (Algorithm 2) to find the length of an external chain, which is given by Equation (11). The ‘FindLenCh()’ algorithm takes two inputs, namely the longest internal scan chain present in the codec and the length of the decompressor serial register, to arrive at the length of an external scan chain.
Algorithm 2: FindLenCh()
Input:
Test Protocol file of compression technique
Output:
eChLen—length of an external chain
Read Test protocol file of compression technique
Find length of longest internal scan chain as ‘L’
Find serial register length as ‘Srl’
eChLen = L + Srl // Length of an external scan chain calculation
Return eChLen
End
Finally, ‘GetExtScanChains()’ calls ‘CreateExtChains()’ (Algorithm 3) to create the external scan chains, and the resulting ‘Chains[]’ array is returned to the ‘PhaseII_FlowOfExecution()’ algorithm depicted in Section 6. The criterion for creating the external scan chains is the range of care bit density contributed by the scan cells in the circuit: if this care bit density is less than the minimum threshold or more than the maximum limit, the chains are discarded.
Algorithm 3: Function: CreateExtChains()
Inputs:
N_Ext—Number of external chains to be formed
O_SFF—Scan cells ordered by the number of scan load test stimuli in which they have a specified value, in descending order
LenExt—Length of each external chain being formed
Max—43% value of care bits density
Min—12% value of the care bits density
 
Output:
Chains[]—To hold scan cells of the external chains
 
Let Cnt = 1
Let CB = 0
Let N = SizeOf(O_SFF)
Let X = 1
While (Cnt ≤ N AND N_Ext > 0)
 Let CB = CB + O_SFF[Cnt] // accumulate care bits density
 If (CB ≤ Max) // Check whether the accumulated care bits density is within the maximum limit
  Let Chains[N_Ext][X] = O_SFF[Cnt]
  If (X < LenExt)
   Let X = X + 1
  Else
   Let X = 1
   Let N_Ext = N_Ext − 1
  EndIf
 Else
  Break
 EndIf
 Let Cnt = Cnt + 1
EndWhile
If (CB < Min) // if care bits density is less than the minimum threshold ignore such chains
delete Chains[]
EndIf
Return Chains[]
End
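For clarity, a compact Python sketch of the overall selection flow of Algorithms 1–3 is given below. It is a simplified reconstruction under stated assumptions (patterns are strings of '0'/'1'/'X', the external chain length is passed in directly, and a leftover partially filled chain is dropped); it is not the actual tool flow used in the experiments.

# Simplified Python sketch of Algorithms 1-3: rank scan cells by how often they
# carry a care bit in the last 95% of the patterns, then fill external chains
# subject to the 12%-43% care bit density window. Illustrative only.

def get_ext_scan_chains(patterns, sdi_ports, ext_chain_len):
    skip = int(len(patterns) * 0.05)           # skip the first 5% of patterns
    tail = patterns[skip:]
    n_cells = len(tail[0])

    # Count specified values (care bits) per scan cell position (SFF[] in Algorithm 1).
    sff = [0] * n_cells
    total_cb = 0
    for pattern in tail:
        for i, bit in enumerate(pattern):
            if bit in "01":
                sff[i] += 1
                total_cb += 1

    # Rank scan cells by descending care bit count (ORDER_IN_DESCENDING).
    ranked = sorted(range(n_cells), key=lambda i: sff[i], reverse=True)

    n_ext = sdi_ports // 2                     # half the scan-data-in budget
    max_cb = total_cb * 0.43                   # upper care bit density limit
    min_cb = total_cb * 0.12                   # lower care bit density limit

    chains, current, picked_cb = [], [], 0
    for cell in ranked:
        if len(chains) == n_ext or picked_cb + sff[cell] > max_cb:
            break
        current.append(cell)
        picked_cb += sff[cell]
        if len(current) == ext_chain_len:      # current external chain is full
            chains.append(current)
            current = []

    if picked_cb < min_cb:                     # below the minimum threshold: discard
        return []
    return chains                              # scan cell indices per external chain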

6. Procedure of Aggressive Exclusion Method

The complete execution procedure of the proposed AE method is depicted in two phases. Phase-I is used to generate the ATPG patterns for the scan compression architecture, considering the input DFT configuration provided. The total TPC and the TC are measured and recorded. The Phase-I flow of execution is shown in Algorithm 4, ‘PhaseI_FlowOfExecution()’, given below.
Algorithm 4: PhaseI_FlowOfExecution()
Inputs:
Verilog_netlist—Verilog netlist which is DUT
Verilog_libraries—Verilog libraries for lib cells
DFT configuration—ATE channels, internal chains, etc.
 
Outputs:
TC—Test coverage
TPC—Test Patterns Count
 
Read verilog_netlist
Read verilog_libraries
Input scan synthesis configuration, including chain count, ATE channels, etc.
Invoke scan synthesis and insertion engine
Write out scan synthesized output netlist
Write out scan protocol file
Invoke ATPG engine to generate test patterns for all fault models, including stuck-at, transition, etc.
Measure percentage of TC and TPC
End
Before invoking Phase-II of the execution (the AE method), we invoke the algorithm named ‘GetExtScanChains()’, which is depicted in Section 5 along with its detailed steps. The algorithm returns the ‘Chains[]’ array containing the external scan chains created based on the AE method’s scan cell exclusion algorithm.
‘PhaseII_FlowOfExecution()’ is then invoked with the ‘Chains[]’ array, which holds the external scan chain specification used to create the external scan chains. Once Phase-II is completed, we measure the TPC and TC for the full run; the TPC at the same TC level as Phase-I is also measured and recorded.
The percentage of TC improvement is calculated as shown in Equation (16), and the TPC reduction at the same TC as the compression technique is calculated as shown in Equation (15).
The percentage of test coverage is calculated as
TC = Ndf / Ntdf  (14)
where Ndf is the number of detected faults and Ntdf is the number of detectable faults.
The improvement in TPC, ‘TPCimpr’, when the compression technique and the AE method are compared, is calculated as
TPCimpr = 100 × TPCae / TPCcs  (15)
where TPCcs and TPCae are the test pattern counts achieved by the compression technique and the AE method, respectively, at the same TC as the compression technique.
The percentage of improvement in TC, ‘TCimp’, when the compression technique is compared with the AE method, is given below:
TCimp = (100 × TCae) ÷ TCcs  (16)
where TCae and TCcs are the TC achieved by the AE method and the compression technique respectively.
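A short Python sketch of the comparison metrics in Equations (14)–(16) follows; Equation (14) is interpreted here as a percentage, and all counts are illustrative.

# Sketch of Equations (14)-(16): test coverage and the comparison metrics used
# to evaluate the AE method against the compression technique. Values are illustrative.

def test_coverage(n_detected, n_detectable):
    return 100.0 * n_detected / n_detectable     # Equation (14), as a percentage

def tpc_improvement(tpc_ae, tpc_cs):
    return 100.0 * tpc_ae / tpc_cs               # Equation (15)

def tc_improvement(tc_ae, tc_cs):
    return 100.0 * tc_ae / tc_cs                 # Equation (16)

print(round(test_coverage(96500, 100000), 2))    # 96.5
print(round(tpc_improvement(2300, 10000), 2))    # 23.0 (AE needs 23% of the patterns)
print(round(tc_improvement(97.8, 96.5), 2))      # 101.35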
Algorithm 5 below gives the Phase-II flow of execution.
Algorithm 5: PhaseII_FlowOfExecution()
Inputs:
Verilog _netlist—Verilog netlist which is DUT
Verilog_libraries—Verilog libraries for lib cells
DFT configuration—ATE Channels, Internal scan chains and External scan chains specification
 
Outputs:
TC—Test Coverage
TPC—Test Patterns Count
TCF—Test Coverage at full run of AE method
TPCF—Test Patterns Count at full run of AE method
 
Read verilog_netlist
Read verilog_libraries
Input scan synthesis configuration, including internal scan chains, number of ATE channels, etc.
Allot 50% of the scan-data-in ports to the external chains
Allot the remaining 50% of the scan-data-in ports to the compression technique
Provide the specification for external scan chain creation
Invoke scan synthesis and insertion engine
Write out scan synthesized output netlist
Write out scan protocol file
Invoke ATPG engine to generate test patterns for all fault models, including stuck-at, transition, etc.
Measure percentage of TC and TPC at the same coverage as produced in Step (8) of Phase I
Measure percentage of TCF and TPCF at the end of the full run and compare with Step (8) of Phase I
End

7. Experimental Results

We observed both TPC reduction and TC improvement with the AE method. A summary of the percentage of TPC reduction for each circuit, compared at the same TC as the compression technique, is shown in Table 2.
A summary of the percentage of TC improvement for each circuit, compared to the scan compression technique, is shown in Table 3. Each fraction of TC improvement improves the QoR (quality of results) of IC production and helps to reduce the DPM, which is significant. It shows that more targeted faults are detected with the AE method than with the scan compression method, which is significant in terms of TC. Once the TC crosses 90%, it is hard to improve the coverage further, and the ATPG generates many patterns for each additional fraction of TC. Significant improvements in TC are achieved in C1 and C2, and good improvements in the other circuits.
The experiments were carried out on different circuits of varying sizes, ranging from 28 K to 530 K scan cells. Results were generated both with and without the AE method: ‘without AE method’ refers to the scan compression technique alone, whereas ‘with AE method’ refers to the blend of external scan chains and the scan compression technique. The AE method excludes certain scan cells based on the procedure depicted in Section 5 and puts them in multiple external scan chains. Column 1, Column 2, Column 3, and Column 4 represent the name of the circuit, the number of scan cells present in the circuit, the total number of ATE channels allocated in each configuration for scan synthesis, and the total number of internal scan chains used, respectively.
Each external scan chain is assigned a scan-data-in and a scan-data-out port. The total scan-data-in and scan-data-out budget of the compression scheme remains the same: in the AE method, 50% of the scan-data-in and scan-data-out ports are assigned to the external scan chains and the remaining 50% to the compression scheme. The results generated for the compression scheme are shown in Column 5 and Column 6: Column 5 shows the TPC generated to achieve the TC shown in Column 6. The results generated for the AE method and measured at the same coverage as shown in Column 6 are presented in Column 7 and Column 8: Column 7 represents the TPC generated by the AE method to achieve the TC shown in Column 8. The results for the complete run of the AE method are shown in Column 9 and Column 10: Column 9 has the TPC needed to achieve the TC shown in Column 10 by detecting faults which are not detectable with the compression technique. The columns with headings ‘#TPC’ and ‘%TC’ represent the total pattern count and the test coverage achieved by the respective configuration of the circuit. Column 11 shows the percentage of overall test coverage improvement for each configuration, and Column 12 represents the percentage of TPC reduction when compared with the scan compression technique at the same TC level. Column 13 represents the percentage of the circuit’s care bit density contributed by the scan cells which are part of the external scan chains of the respective configuration of the AE method.
Both sets of results have been compared and are shown in Table 4. The ‘AE method’ column indicates the results generated for the proposed method. The ‘scan compression’ method has 16 inputs, and these inputs have been distributed between the multiple external chains and the compression architecture of the AE method; the overall scan-data-input and scan-data-output port budget remains the same. We compared both the ‘#TPC’ and the ‘%TC’ of the scan compression technique with those of the AE method. The AE method shows good improvement in TC and also in TPC at the same TC level. Both TC and TPC are very important in structural testing: better TC helps to achieve improved yield and reduced DPM, whereas a lower pattern count helps to reduce the overall TAT and test cost of the IC. The AE method has no area overhead and needs no change in ATPG to generate test patterns. The proposed AE method improves both the TPC and the TC significantly in the DUT. Our experiments on the six circuits show improvements in both TPC and TC. The results are shown in Table 4, which has 13 columns in total.
Circuit C1 achieved the highest TPC reduction of up to 77.13%, and C2 achieved a TPC reduction of up to 76.13%. These are significant improvements in TPC reduction: the same TC as the compression scheme is achieved with a lower TPC, which reduces TAT and shift cycles and proportionately reduces the overall cost of IC testing.
In Table 4, Column 12, we have not shown the percentage of TPC reduction for some configurations, as those configurations produce better TC rather than TPC reduction. Hence, they are compared on the percentage of TC improvement, as TC is not negotiable.
The proposed AE method performs better for both TPC reduction and TC for the designs having a fan-out cone correlation. Otherwise, it produces better TC compared to the scan compression.
The AE method achieved significant TPC reduction in C1 and C2: in C1 it achieved up to 77.13% TPC reduction and 1.33% TC improvement, whereas in C2 it achieved up to 76.13% TPC reduction and 1.22% TC improvement. The TPC reduction is compared at the same TC level as the compression technique, and the TC is compared at the full runs of both methods.

8. Conclusions

The proposed AE method significantly increases fault detection and improves the TC. The TC is critical to enhancing the yield of the product and to reducing the DPM. The AE method combines the merits of scan compression and scan, and hence improves the controllability and observability of the scan cells that need to be specified most of the time in scan load test patterns. This is achieved by excluding a very low percentage of scan cells from the compression architecture and placing them in multiple scan chains outside the compression technique. Which scan cells are excluded is determined by a correlation analysis that considers the free variables and the correlation introduced by the scan compression technique. The number of external scan chains is always equal to half of the scan-data-inputs assigned to the compression technique; hence, in the AE method, the compression technique and the external scan chains each use 50% of the total scan-data-input and scan-data-output port budget, and the overall port budget remains the same. The AE method increases TC when the total care bit density of the scan cells excluded from the compression technique is in the range of 12% to 43%.
The proposed AE method does not introduce a restriction on placing scan cells in external scan chains and this is decided by the scan synthesis tool used. The proposed AE method has no area overhead. The scan synthesis in the AE method is carried out using DFTMax-Ultra [39] and test patterns generation using TetraMax [40]. The AE method increased TC in all the cases and reduced TPC for the same TC for most of the circuits used in our experiment.

Author Contributions

The research was carried out by P.V.S. under the guidance and supervision of R.K. Both authors contributed to the manuscript and the research.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Kobayashi, S.; Edahiro, M.; Kubo, M. Scan-chain optimization algorithms for multiple scan-paths. In Proceedings of the 1998 Asia and South Pacific Design Automation Conference, Yokohama, Japan, 13 February 1998; IEEE: Piscataway, NJ, USA, 1998; pp. 301–306. [Google Scholar]
  2. Wang, L.T.; Wu, C.W.; Wen, X. VLSI Test Principles and Architectures: Design for Testability; Elsevier: Amsterdam, The Netherlands, 2006. [Google Scholar]
  3. Touba, N.A. Survey of test vector compression techniques. IEEE Des. Test Comput. 2006, 23, 294–303. [Google Scholar] [CrossRef]
  4. Kapur, R.; Mitra, S.; Williams, T.W. Historical perspective on scan compression. IEEE Des. Test Comput. 2008, 25, 114–120. [Google Scholar] [CrossRef]
5. Agrawal, V.D.; Cheng, T.K.T.; Johnson, D.D.; Lin, T. A complete solution to the partial scan problem. In Proceedings of the International Test Conference 1987, Washington, DC, USA, 30 August–3 September 1987; pp. 44–51.
6. Park, S. A partial scan design unifying structural analysis and testabilities. Int. J. Electron. 2001, 88, 1237–1245.
7. Sharma, S.; Hsiao, M.S. Combination of structural and state analysis for partial scan. In Proceedings of the Fourteenth International Conference on VLSI Design (VLSI Design 2001), Bangalore, India, 7 January 2001; IEEE: Piscataway, NJ, USA, 2001; pp. 134–139.
8. Chickermane, V.; Patel, J.H. An optimization based approach to the partial scan design problem. In Proceedings of the International Test Conference 1990, Washington, DC, USA, 10–14 September 1990; IEEE: Piscataway, NJ, USA, 1990; pp. 377–386.
9. Jou, J.Y.; Cheng, K.T. Timing-driven partial scan. IEEE Des. Test Comput. 1995, 12, 52–59.
10. Kagaris, D.; Tragoudas, S. Retiming-based partial scan. IEEE Trans. Comput. 1996, 45, 74–87.
11. Narayanan, S.; Gupta, R.; Breuer, M.A. Optimal configuring of multiple scan chains. IEEE Trans. Comput. 1993, 42, 1121–1131.
12. Balakrishnan, K.J.; Touba, N.A. Improving linear test data compression. IEEE Trans. Very Large Scale Integr. (VLSI) Syst. 2006, 14, 1227–1237.
13. Lee, K.J.; Chen, J.J.; Huang, C.H. Using a single input to support multiple scan chains. In Proceedings of the 1998 IEEE/ACM International Conference on Computer-Aided Design, San Jose, CA, USA, 8–12 November 1998; IEEE: Piscataway, NJ, USA, 1998; pp. 74–78.
14. Hsu, F.F.; Butler, K.M.; Patel, J.H. A case study on the implementation of the Illinois scan architecture. In Proceedings of the International Test Conference 2001, Baltimore, MD, USA, 1 November 2001; IEEE: Piscataway, NJ, USA, 2001; p. 538.
15. Hamzaoglu, I.; Patel, J.H. Reducing test application time for full scan embedded cores. In Proceedings of the Twenty-Ninth Annual International Symposium on Fault-Tolerant Computing, Madison, WI, USA, 15–18 June 1999; IEEE: Piscataway, NJ, USA, 1999; pp. 260–267.
16. Shah, M.A.; Patel, J.H. Enhancement of the Illinois scan architecture for use with multiple scan inputs. In Proceedings of the IEEE Computer Society Annual Symposium on VLSI, Lafayette, LA, USA, 19–20 February 2004; IEEE: Piscataway, NJ, USA, 2004; pp. 167–172.
17. Lee, K.J.; Chen, J.J.; Huang, C.H. Broadcasting test patterns to multiple circuits. IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst. 1999, 18, 1793–1802.
18. Pandey, A.R.; Patel, J.H. Reconfiguration technique for reducing test time and test data volume in Illinois scan architecture based designs. In Proceedings of the 20th IEEE VLSI Test Symposium (VTS 2002), Monterey, CA, USA, 28 April–2 May 2002; IEEE: Piscataway, NJ, USA, 2002; pp. 9–15.
19. Jas, A.; Pouya, B.; Touba, N.A. Virtual scan chains: A means for reducing scan length in cores. In Proceedings of the 18th IEEE VLSI Test Symposium, Montreal, QC, Canada, 30 April–4 May 2000; IEEE: Piscataway, NJ, USA, 2000; pp. 73–78.
20. Chandra, A.; Kapur, R.; Kanzawa, Y. Scalable adaptive scan (SAS). In Proceedings of the Conference on Design, Automation and Test in Europe, Nice, France, 20–24 April 2009; European Design and Automation Association: Leuven, Belgium, 2009; pp. 1476–1481.
21. Samaranayake, S.; Sitchinava, N.; Kapur, R.; Amin, M.B.; Williams, T.W. Dynamic scan: Driving down the cost of test. Computer 2002, 10, 63–68.
22. Li, L.; Chakrabarty, K. Test set embedding for deterministic BIST using a reconfigurable interconnection network. IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst. 2004, 23, 1289–1305.
23. Sitchinava, N.; Gizdarski, E.; Samaranayake, S.; Neuveux, F.; Kapur, R.; Williams, T.W. Changing the scan enable during shift. In Proceedings of the 22nd IEEE VLSI Test Symposium, Napa Valley, CA, USA, 25–29 April 2004; IEEE: Piscataway, NJ, USA, 2004; pp. 73–78.
24. Wang, L.T.; Wen, X.; Furukawa, H.; Hsu, F.S.; Lin, S.H.; Tsai, S.W.; Abdel-Hafez, K.S.; Wu, S. VirtualScan: A new compressed scan technology for test cost reduction. In Proceedings of the 2004 International Conference on Test, Charlotte, NC, USA, 26–28 October 2004; IEEE: Piscataway, NJ, USA, 2004; pp. 916–925.
25. Han, Y.; Li, X.; Swaminathan, S.; Hu, Y.; Chandra, A. Scan data volume reduction using periodically alterable MUXs decompressor. In Proceedings of the 14th Asian Test Symposium (ATS'05), Calcutta, India, 18–21 December 2005; IEEE: Piscataway, NJ, USA, 2005; pp. 372–377.
26. Su, C.; Hwang, K. A serial scan test vector compression methodology. In Proceedings of the IEEE International Test Conference (ITC), Baltimore, MD, USA, 17–21 October 1993; IEEE: Piscataway, NJ, USA, 1993; pp. 981–988.
27. Jenicek, J.; Novak, O. Test pattern compression based on pattern overlapping. In Proceedings of the 2007 IEEE Design and Diagnostics of Electronic Circuits and Systems (DDECS'07), Krakow, Poland, 11–13 April 2007; IEEE: Piscataway, NJ, USA, 2007; pp. 1–6.
28. Rao, W.; Bayraktaroglu, I.; Orailoglu, A. Test application time and volume compression through seed overlapping. In Proceedings of the 40th Annual Design Automation Conference, Anaheim, CA, USA, 2–3 June 2003; ACM: New York, NY, USA, 2003; pp. 732–737.
29. Li, J.; Han, Y.; Li, X. Deterministic and low power BIST based on scan slice overlapping. In Proceedings of the 2005 IEEE International Symposium on Circuits and Systems (ISCAS 2005), Kobe, Japan, 23–26 May 2005; IEEE: Piscataway, NJ, USA, 2005; pp. 5670–5673.
30. Chloupek, M.; Novak, O. Test pattern compression based on pattern overlapping and broadcasting. In Proceedings of the 2011 10th International Workshop on Electronics, Control, Measurement and Signals (ECMS), Liberec, Czech Republic, 1–3 June 2011; IEEE: Piscataway, NJ, USA, 2011; pp. 1–5.
31. Chloupek, M.; Novak, O.; Jenicek, J. On test time reduction using pattern overlapping, broadcasting and on-chip decompression. In Proceedings of the 2012 IEEE 15th International Symposium on Design and Diagnostics of Electronic Circuits & Systems (DDECS), Tallinn, Estonia, 18–20 April 2012; IEEE: Piscataway, NJ, USA, 2012; pp. 300–305.
32. Shantagiri, P.V.; Kapur, R. Handling Unknown with Blend of Scan and Scan Compression. J. Electron. Test. 2018, 34, 135–146.
33. Arslan, B.; Orailoglu, A. CircularScan: A scan architecture for test cost reduction. In Proceedings of the Design, Automation and Test in Europe Conference and Exhibition, Paris, France, 16–20 February 2004; IEEE: Piscataway, NJ, USA, 2004; pp. 1290–1295.
34. Azimipour, M.; Eshghi, M.; Khademzahed, A. A modification to circular-scan architecture to improve test data compression. In Proceedings of the 15th International Conference on Advanced Computing and Communications (ADCOM 2007), Guwahati, India, 18–21 December 2007; IEEE: Piscataway, NJ, USA, 2007; pp. 27–33.
35. Azimipour, M.; Fathiyan, A.; Eshghi, M. A parallel circular-scan architecture using multiple-hot decoder. In Proceedings of the 2008 15th International Conference on Mixed Design of Integrated Circuits and Systems, Poznan, Poland, 19–21 June 2008; IEEE: Piscataway, NJ, USA, 2008; pp. 475–480.
36. Banerjee, S.; Chowdhury, D.R.; Bhattacharya, B.B. An efficient scan tree design for compact test pattern set. IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst. 2007, 26, 1331–1339.
37. Bonhomme, Y.; Yoneda, T.; Fujiwara, H.; Girard, P. An efficient scan tree design for test time reduction. In Proceedings of the 9th IEEE European Test Symposium (ETS'04), Corsica, France, 23–26 May 2004; IEEE: Piscataway, NJ, USA, 2004; pp. 174–179.
38. Rau, J.C.; Jone, W.B.; Chang, S.C.; Wu, Y.L. Tree-structured LFSR synthesis scheme for pseudo-exhaustive testing of VLSI circuits. IEE Proc.-Comput. Digit. Technol. 2000, 147, 343–348.
39. DFTMAX Ultra: New Technology to Address Key Test Challenges. Available online: https://www.synopsys.com/content/dam/synopsys/implementation&signoff/white-papers/dftmax-ultra-wp.pdf (accessed on 5 May 2019).
40. TetraMAX: Synopsys ATPG Solution. Available online: http://www.synopsys.com/products/test/tetramax_ds.pdf (accessed on 1 June 2018).
Figure 1. Stages of scan methodology evolution: (a) scan chain with a single scan-in/scan-out; (b) scan chains with multiple scan-ins/scan-outs; (c) scan compression architecture; (d) hybrid: blend of scan and scan compression.
Figure 2. (a) General streaming scan compression architecture; (b) hybrid compression architecture of the proposed AE method.
Figure 3. Correlation in the free variables when scan-load test data are shifted in: (a) streaming compression; (b) non-streaming compression.
Figure 4. Fan-out cone correlation: (a) correlation of the fan-out cone; (b) correlation of the fan-out cone introduced by the scan compression architectures.
Figure 5. Coverage loss in (a) streaming compression and (b) non-streaming compression.
Figure 6. Sparseness of the care bits in the compression technique, along with correlation in the shifted-in scan-load test patterns.
Figure 7. Sparseness of the care bits in the AE method compression architecture; the external chains are packed with more care bits.
Figure 8. Percentage of scan cells excluded in the AE method.
Figure 9. Percentage of the care-bit contribution of the scan cells excluded from the compression technique.
Figure 10. Top-level flow of execution.
Table 1. Percentage of TC achieved with the first 5% and the last 95% of TPC (a sketch of the care-bit analysis over the last 95% of the patterns follows the table).

| Test Patterns | Percentage of TC Achieved | Type of Faults |
|---|---|---|
| First 5% of TPC | 56% to 86% | Easy-to-detect faults, including random and deterministic. |
| Last 95% of TPC | 18% to 39.17% | Hard-to-detect faults. |
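Table 1 motivates why only the last 95% of the pattern set is analyzed when choosing which flip-flops to exclude from the compressor: those patterns target the hard-to-detect faults and therefore carry the care bits that suffer most from correlation. The snippet below is a minimal illustrative sketch of such a care-bit tally, not the authors' tool flow; the pattern representation (per-cell values with 'X' as don't-care), the `exclusion_budget` parameter, and all identifier names are assumptions introduced here purely for illustration.

```python
# Sketch (assumed data model): rank scan cells by the care bits they receive
# in the last 95% of the pattern set, then pick exclusion candidates.
from collections import Counter

def rank_cells_by_care_bits(patterns, skip_fraction=0.05):
    """patterns: list of dicts {cell_name: '0'/'1'/'X'} for each scan-load pattern.
    Only the last (1 - skip_fraction) of the patterns are analysed, mirroring the
    first-5% / last-95% split of Table 1 (assumed representation, not an ATPG format)."""
    start = int(len(patterns) * skip_fraction)
    care_counts = Counter()
    for pat in patterns[start:]:
        for cell, value in pat.items():
            if value != 'X':              # 'X' = don't-care; anything else is a care bit
                care_counts[cell] += 1
    return care_counts.most_common()      # cells carrying the most care bits first

def pick_exclusion_candidates(ranked, exclusion_budget):
    """Return the top 'exclusion_budget' cells; these would be stitched into
    external (uncompressed) chains with dedicated scan-in/scan-out ports."""
    return [cell for cell, _ in ranked[:exclusion_budget]]

if __name__ == "__main__":
    # Toy example with three scan cells and four patterns.
    toy_patterns = [
        {"ff_a": "1", "ff_b": "X", "ff_c": "X"},
        {"ff_a": "0", "ff_b": "1", "ff_c": "X"},
        {"ff_a": "1", "ff_b": "X", "ff_c": "0"},
        {"ff_a": "X", "ff_b": "1", "ff_c": "X"},
    ]
    ranked = rank_cells_by_care_bits(toy_patterns)
    print(pick_exclusion_candidates(ranked, exclusion_budget=1))  # ['ff_a']
```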
Table 2. Overall percentage of TPC reduction in the circuits.

| Name of the Circuit | Overall TPC Reduction |
|---|---|
| C1 | Up to 77.13% |
| C2 | Up to 76.13% |
| C3 | Up to 24.91% |
| C5 | Up to 22.68% |
| C6 | Up to 17.55% |
Table 3. Overall TC improvement in the circuits.

| Name of the Circuit | Percentage of TC Improvement |
|---|---|
| C1 | Up to 1.33% |
| C2 | Up to 1.22% |
| C3 | Up to 0.09% |
| C4 | Up to 0.08% |
| C5 | Up to 0.16% |
| C6 | Up to 0.16% |
Table 4. Results comparison of the scan compression technique with the proposed AE method (the arithmetic behind the two improvement columns is sketched after the table). "--" indicates no reduction/entry for that configuration.

| Circuit | #Scan Cells | #SI/SO | #Chains | Scan Compression #TPC | Scan Compression %TC | AE at Same Coverage #TPC | AE at Same Coverage %TC | Full AE Run #TPC | Full AE Run %TC | %TC Improvement | %TPC Improvement | External-Chain %Care Bits |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| C1 | 28 K | 6 | 200 | 2207 | 74.71 | 1058 | 74.74 | 2791 | 75.04 | 0.33 | 52.06 | 30% |
| | | 8 | 200 | 2339 | 74.66 | 1577 | 74.68 | 2797 | 74.89 | 0.23 | 32.58 | 29% |
| | | 8 | 400 | 2765 | 75.61 | 888 | 76.20 | 3111 | 76.83 | 1.22 | 67.88 | 26% |
| | | 10 | 500 | 2851 | 75.95 | 652 | 76.43 | 3494 | 77.45 | 0.50 | 77.13 | 26% |
| | | 16 | 400 | 1782 | 75.42 | 2306 | 75.42 | 2820 | 75.50 | 0.08 | -- | 34% |
| | | 16 | 800 | 2139 | 76.95 | 568 | 77.76 | 2650 | 79.09 | 1.33 | 73.45 | 26% |
| C2 | 147 K | 8 | 800 | 6889 | 94.42 | 4117 | 94.42 | 7337 | 94.66 | 0.24 | 40.24 | 24% |
| | | 8 | 1000 | 6151 | 94.74 | 4925 | 94.74 | 7427 | 94.90 | 0.16 | 19.93 | 20% |
| | | 8 | 2000 | 8266 | 92.43 | 4150 | 92.80 | 9253 | 93.76 | 0.33 | 49.79 | 12% |
| | | 10 | 500 | 9658 | 84.56 | 13,644 | 84.56 | 16,539 | 84.60 | 0.04 | -- | 36% |
| | | 10 | 1000 | 5657 | 94.37 | 5481 | 94.37 | 7317 | 94.48 | 0.11 | 3 | 28% |
| | | 12 | 600 | 9784 | 95.17 | -- | -- | 6478 | 95.00 | 0.17 | 33.79 | 43% |
| | | 12 | 1200 | 13,326 | 93.50 | 3181 | 93.53 | 8040 | 94.61 | 1.11 | 76.13 | 24% |
| | | 16 | 1000 | 5380 | 94.51 | 3272 | 94.5 | 6411 | 94.78 | 0.27 | 39.18 | 36% |
| | | 16 | 1500 | 12,310 | 92.94 | 3586 | 92.94 | 7563 | 93.31 | 0.37 | 70.87 | 31% |
| | | 16 | 1600 | 6950 | 93.02 | 2333 | 93.03 | 8408 | 94.24 | 1.22 | 66.43 | 25% |
| | | 16 | 2000 | 7550 | 92.28 | 5922 | 92.28 | 8484 | 92.43 | 0.15 | 21.56 | 21% |
| | | 24 | 2000 | 6378 | 93.76 | 5365 | 93.76 | 7735 | 93.90 | 0.14 | 15.88 | 29% |
| C3 | | 8 | 400 | 12,080 | 84.57 | 15,787 | 84.57 | 17,528 | 84.59 | 0.02 | -- | 37% |
| | | 36 | 1800 | 11,979 | 84.63 | 8995 | 84.63 | 12,683 | 84.72 | 0.09 | 24.91 | 38% |
| | | 40 | 2000 | 9615 | 84.64 | 10,557 | 84.64 | 13,499 | 84.70 | 0.06 | -- | 37% |
| C4 | | 8 | 1000 | 8198 | 95.90 | 9052 | 95.90 | 9651 | 95.95 | 0.05 | -- | |
| | | 20 (9+11) | 2000 | 8596 | 95.90 | 10,396 | 95.90 | 11,137 | 95.93 | 0.03 | -- | 27% |
| | | 8 | 1600 | 8654 | 95.88 | 12,445 | 95.88 | 13,245 | 95.94 | 0.06 | -- | 22% |
| | | 24 | 1200 | 6906 | 95.90 | 7676 | 95.90 | 8252 | 95.93 | 0.03 | -- | 37% |
| | | 12 | 2000 | 9770 | 95.89 | 10,374 | 95.89 | 11,951 | 95.97 | 0.08 | -- | 25% |
| | | 36 | 1800 | 6444 | 95.88 | 7020 | 95.88 | 7402 | 95.93 | 0.05 | -- | 37% |
| C5 | 530 K | 20 (9+11) | 3000 | 13,024 | 92.90 | 11,925 | 92.90 | 13,746 | 93.06 | 0.16 | 8.84 | 36% |
| | | 12 | 600 | 6625 | 92.91 | 7182 | 92.92 | 7361 | 92.94 | 0.03 | -- | 40% |
| | | 12 | 1200 | 6773 | 92.91 | 7786 | 92.91 | 8233 | 92.97 | 0.06 | -- | 38% |
| | | 20 | 1000 | 6461 | 92.88 | 8658 | 92.88 | 9226 | 92.91 | 0.03 | -- | 40% |
| | | 16 | 800 | 8351 | 92.87 | 7491 | 92.87 | 8197 | 92.93 | 0.06 | 10.30 | 40% |
| | | 16 | 1600 | 7352 | 92.92 | 9696 | 92.93 | 9947 | 92.95 | 0.03 | -- | 38% |
| | | 10 | 800 | 9650 | 92.88 | 7461 | 92.88 | 7674 | 92.90 | 0.02 | 22.68 | 40% |
| | | 10 | 1600 | 11,054 | 92.89 | 15,392 | 92.89 | 16,452 | 92.96 | 0.07 | -- | 36% |
| | | 20 (9+11) | 2000 | 8309 | 92.93 | 10,182 | 92.95 | 10,329 | 92.96 | 0.03 | -- | 38% |
| | | 10 | 3000 | 14,389 | 92.95 | 15,945 | 92.95 | 17,594 | 93.03 | 0.08 | -- | 32% |
| C6 | | 16 | 800 | 6027 | 93.29 | 6653 | 93.29 | 7812 | 93.32 | 0.03 | -- | 29% |
| | | 20 | 1000 | 6033 | 93.29 | 6453 | 93.29 | 7474 | 93.31 | 0.02 | -- | 29% |
| | | 8 | 400 | 6199 | 93.29 | 6871 | 93.29 | 7803 | 93.31 | 0.02 | -- | 28% |
| | | 36 | 1800 | 6305 | 93.27 | 5169 | 93.27 | 7822 | 93.36 | 0.09 | 18 | 29% |
| | | 36 | 3600 | 7270 | 93.29 | 5994 | 93.29 | 10,100 | 93.45 | 0.16 | 17.55 | 22% |
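For readers reproducing Table 4, most rows are consistent with the improvement columns being derived directly from the raw counts: the %TPC improvement compares the AE pattern count at matched coverage against the baseline scan compression pattern count, and the %TC improvement is the coverage gained by the full AE run over the baseline (e.g., for the first C1 row, 1 − 1058/2207 ≈ 52.06% and 75.04 − 74.71 = 0.33). The helper below is only a small sketch of that arithmetic under this assumed reporting convention, not part of the authors' flow; the dataclass and function names are introduced here for illustration.

```python
# Sketch of the arithmetic assumed to underlie the "Improvements" columns of Table 4.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Row:
    tpc_scan_comp: int               # #TPC, plain scan compression
    tc_scan_comp: float              # %TC, plain scan compression
    tpc_ae_same_cov: Optional[int]   # #TPC of AE run truncated at matched coverage (None if not reached)
    tc_full_ae: float                # %TC of the full AE run

def tpc_reduction_percent(row: Row) -> Optional[float]:
    """Pattern-count reduction at matched coverage; None when AE needed more patterns."""
    if row.tpc_ae_same_cov is None or row.tpc_ae_same_cov >= row.tpc_scan_comp:
        return None
    return round(100.0 * (1.0 - row.tpc_ae_same_cov / row.tpc_scan_comp), 2)

def tc_improvement_percent(row: Row) -> float:
    """Coverage gain of the full AE run over plain scan compression."""
    return round(row.tc_full_ae - row.tc_scan_comp, 2)

if __name__ == "__main__":
    c1_first = Row(tpc_scan_comp=2207, tc_scan_comp=74.71,
                   tpc_ae_same_cov=1058, tc_full_ae=75.04)
    print(tpc_reduction_percent(c1_first))   # 52.06, matching the first C1 row
    print(tc_improvement_percent(c1_first))  # 0.33, matching the first C1 row
```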
