1. Introduction
Power consumption remains a critical design constraint in modern electronic systems. To address this challenge, Dynamic Voltage and Frequency Scaling (DVFS) and voltage islands have become key techniques for enhancing energy efficiency in high-performance, low-power SoCs, particularly in mobile and battery-powered devices [1,2]. DVFS is a power management approach that dynamically adjusts the supply voltage and operating frequency according to the system's workload, reducing power consumption during periods of low performance demand. Voltage islands, on the other hand, partition the SoC into multiple regions (islands), each operating on an independent power supply rail and potentially at a different voltage level. This design flexibility allows each island to function at the voltage and frequency best suited to its performance and power requirements, enabling localized power optimization across the SoC.
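To first order, the benefit of DVFS follows from the standard CMOS dynamic power model (a textbook relation, not taken from the cited works):

```latex
P_{\mathrm{dyn}} = \alpha \, C_{L} \, V_{dd}^{2} \, f
```

where \(\alpha\) is the switching activity factor, \(C_{L}\) the switched capacitance, \(V_{dd}\) the supply voltage, and \(f\) the clock frequency. Because lowering \(V_{dd}\) typically permits (and requires) lowering \(f\) as well, power falls roughly cubically with voltage, while energy per operation falls roughly quadratically.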
The integration of these low-power techniques in SoCs substantially impacts the manufacturing test process and, consequently, test costs. The multi-voltage testing requirement leads to a substantial increase in test data volume and prolonged test application times [3], ultimately driving up test costs. The challenge is compounded by the need to adapt scan frequencies and TAM configurations to the different voltage levels, which introduces additional test scheduling and power constraints during the test process [4,5]. Consequently, specialized design-for-testability (DFT) solutions and test optimization techniques are needed to address the unique challenges posed by DVFS and multi-voltage SoC architectures while minimizing test time and cost.
This work presents a novel integrated framework, implemented as a software toolkit [6], that effectively utilizes the Time-Division Multiplexing (TDM) methodology [4] for testing DVFS-based SoCs with multiple voltage islands. The proposed toolkit generates cost-effective and power-aware test processes tailored for the final production stages of the VLSI industry. Overall, the toolkit supports five (5) major functionalities:
Efficient representation and verification of test processes targeting SoCs that consist of IP cores placed in multiple voltage islands operating at different voltage or frequency settings. A specialized, easy-to-use high-level language (HLL) has been developed to specify the modular SoC under test, the test environment, and its constraints. A verification tool performs syntactical and logical checks on HLL programs.
Compilation and execution of test processes written in HLL. The results are optimized test schedules annotated with power- and thermal-related data. It should be noted that thermal data are acquired from the HotSpot tool [7], which can be integrated into the proposed toolkit via a specialized interface.
Test Access Mechanism (TAM) optimization for multi-Vdd SoCs. In addition, special mechanisms have been developed to address the high complexity of large SoC designs.
Visualization of power- and thermal-annotated test schedules generated by the toolkit's solvers. Animation on the SoC's floorplan is also available.
Generation of test processes for artificial multi-Vdd SoCs that may consist of benchmark cores. These are intended mainly for educational and research activities. A floorplan of the targeted artificial SoC can be created with the hotfloorplan tool [7] according to the user's needs.
This introduction is followed by Section 2, which provides (a) the motivation underlying the Time-Division Multiplexing (TDM) methodology for testing multi-Vdd SoCs and (b) an overview of the basic computational techniques integrated into the toolkit. The rationale for developing the proposed toolkit is further elaborated in Section 3. Section 4 provides a detailed overview of the toolkit's components, while Section 5 outlines its core workflows. In Section 6, the toolkit's capabilities and results are demonstrated through three exemplary use cases. Finally, Section 7 presents concluding remarks and an outlook on future developments.
2. Background
Testing multi-Vdd SoCs is time-intensive due to the need for tests across multiple voltage levels [8,9]. Constraints such as the restricted concurrent testing of cores within the same voltage island at different voltages, and the dependencies introduced by TAM structures, further complicate test scheduling. Additionally, lower voltage levels require reduced scan frequencies to prevent timing violations in scan chains, wrappers, and decompression logic, leading to significantly longer test times compared with single-Vdd SoCs.
Example 1. Figure 1a illustrates a SoC comprising two cores within one voltage island and a third core within a second voltage island. One core from each island is connected to TAM Bus 1, while the remaining core of the first island is connected to TAM Bus 2. Both voltage islands support two voltage levels, each with a corresponding nominal scan frequency, and the tester supplies test data at a fixed frequency F for SoC testing. Note that the two cores sharing TAM Bus 1 cannot be tested simultaneously; furthermore, this shared bus introduces a dependency between the two voltage islands, which may impose constraints on test scheduling. Although the two cores of the first island do not share a bus, they cannot be tested concurrently at different voltage levels because they utilize the same power grid. Figure 1b graphically illustrates the increased complexity of testing a multi-Vdd SoC compared with a single-Vdd SoC. It highlights the additional constraints introduced by multiple voltage domains, such as test scheduling dependencies, power grid limitations, and varying scan frequencies, which collectively increase the challenges of efficient test execution. As demonstrated in [4], conventional test scheduling methods are ineffective in reducing test time for multi-Vdd SoCs, as they fail to address the unique challenges posed by multiple voltage domains. On the other hand, TDM optimizes test time by enabling the parallel testing of cores that share TAM resources [4]. With TDM, test data are transmitted by the tester at a high frequency and then shifted through the wrapper's parallel port (WPP) into the scan chains of multiple cores at lower frequencies, determined by each core's voltage level. Hence, TDM exploits the gap between (a) the tester frequency and the shift frequencies of the cores, (b) the different shift frequencies of different cores, and (c) the shift frequencies of the cores at different voltage settings.
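A simple way to see the bandwidth arithmetic behind TDM: a core shifting at F/k occupies a 1/k share of the ATE channel feeding its bus, so cores can share a bus as long as their shares sum to at most one. The sketch below is an illustrative model only, not code from the toolkit:

```python
# Hedged sketch: TDM bandwidth accounting on one TAM bus.
# A core shifting at tester_freq / divider consumes 1/divider of the
# ATE channel bandwidth; cores fit on the bus while the sum of their
# bandwidth fractions stays <= 1.

def bus_utilization(dividers):
    """Fraction of the ATE channel consumed by cores with the given dividers."""
    return sum(1.0 / d for d in dividers)

def fits_on_bus(dividers):
    """True if cores with the given clock dividers can share one bus."""
    return bus_utilization(dividers) <= 1.0

# Two cores at F/2 exactly fill the channel; adding a third at F/4 does not fit.
print(fits_on_bus([2, 2]))      # True  (0.5 + 0.5 = 1.0)
print(fits_on_bus([2, 2, 4]))   # False (1.25 > 1.0)
```

This is precisely the gap TDM exploits: slower shift frequencies free channel bandwidth that other cores on the same bus can use concurrently.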
Example 2. Consider a tester supplying test data at frequency F for testing the SoC depicted in Figure 1a. A clock divider, driven by the tester's clock, provides each core with a shift frequency derived from F. In one possible test scenario, the core on TAM Bus 2 fully utilizes that bus at one of its supported voltage levels, while the two cores sharing the other bus are tested in parallel at their respective voltage levels using reduced shift frequencies. Note that the clock divider's signals ensure the non-overlapping loading of test data onto the shared bus. TDM requires specialized techniques to ensure that each test operates at the most efficient shift frequency, thereby minimizing overall test time. To address this complex optimization problem, a Rectangle Packing-Simulated Annealing (RP-SA) method was proposed in [4]. In the RP-SA approach, each test at a given frequency is represented as a rectangle, where the width corresponds to the frequency and the height represents the test duration at that frequency. Each TAM bus is modeled as a virtual bin with unlimited height and a width equal to the maximum frequency supported by the ATE channel. The objective is to pack these rectangles into their respective bins while minimizing the total height, effectively reducing test time.
The simulated annealing (SA) algorithm iterates through different test configurations by randomly selecting shift frequencies before applying the rectangle packing strategy. Its computational cost remains manageable for medium to very large SoCs. However, in scenarios where TAM optimization is required, or when computational resources and time are highly constrained, a rapid estimation method becomes essential. To address such cases, the Greedy Rectangle Packing (GRP) approach was developed. Unlike RP-SA, GRP executes rectangle packing only once, providing a significantly faster and more resource-efficient, but sometimes inferior, test scheduling solution compared with RP-SA.
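The greedy variant can be illustrated with a shelf-style packing heuristic. In the toy model below, each test is a (width, height) rectangle with width = shift frequency and height = test time, and the bin width is the ATE channel bandwidth. This is a simplified stand-in for GRP under stated assumptions, not the toolkit's actual implementation:

```python
# Hedged sketch of greedy, shelf-style rectangle packing for one TAM bus.
# width  = shift frequency of a test (MHz); height = test time at that frequency.
# Tests placed on the same shelf run in parallel; total height ~ total test time.

def greedy_shelf_pack(rects, bin_width):
    """rects: list of (width, height). Returns the total packed height."""
    rects = sorted(rects, key=lambda r: r[1], reverse=True)  # largest loads first
    shelves = []  # each shelf: [remaining_width, shelf_height]
    for w, h in rects:
        if w > bin_width:
            raise ValueError("test frequency exceeds channel bandwidth")
        for shelf in shelves:
            if shelf[0] >= w:            # fits beside tests already on this shelf
                shelf[0] -= w
                break
        else:                            # no room: open a new shelf
            shelves.append([bin_width - w, h])
    return sum(h for _, h in shelves)

# Three tests on a 400 MHz channel: the two 200 MHz tests run in parallel,
# while the 300 MHz test needs its own shelf.
print(greedy_shelf_pack([(200, 10.0), (200, 8.0), (300, 5.0)], 400))  # 15.0
```

Sorting the tallest rectangles first mirrors the "prioritize large test loads" enhancement mentioned later for the toolkit's solvers.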
Example 3. Consider again the artificial SoC depicted in Figure 1a, where the cores correspond to IWLS benchmark cores [10,11]. Each voltage island supports three voltage levels, each with its own nominal scan frequency, and the tester provides a 400 MHz clock signal. The test time of each core at each voltage level, when using the maximum rated shift frequency per voltage level, can be found in [6], along with the data required for replicating the test process using the proposed tool. Now, consider a set of tests covering every core at every voltage level, where each test operates at the maximum rated frequency for its voltage level. This set represents a complete test process for the given SoC. Each test is modeled as a rectangle, as explained above, with a width equal to its shift frequency and a height equal to its test time. One bin corresponds to TAM Bus 1 of Figure 1a, and a second bin corresponds to TAM Bus 2. Figure 2a illustrates the test schedule generated by the Greedy Rectangle Packing (GRP) method. The x-axis of each bin represents the ATE channel bandwidth, 400 MHz in this case, while the y-axis represents the total test time, which is 1462 µs. In this test schedule, each test is applied using its maximum rated shift frequency, an intuitive and optimal choice for this simple SoC. However, in more complex cases, GRP and RP-SA can explore lower shift frequencies for individual tests to achieve better ATE channel bandwidth utilization, leading to more efficient test scheduling. This improvement is demonstrated in [4] and also in Section 6. Additionally, when the test process of Example 3 is re-evaluated using the RP-SA method, a more optimized solution is obtained, as shown in Figure 2b (the total test time is 1412 µs in this case), highlighting the advantages of this technique. However, this improvement comes at the cost of a 116-fold increase in computational time.
The work presented in [4] clearly demonstrates the significant advantages of the above-mentioned TDM-based scheduling methods over conventional approaches such as Shortest-Job-First (SJF) [12] and Session-Based Scheduling (SBS) [3]. Specifically, the SJF method, originally effective for single-Vdd designs, was adapted for multi-Vdd scenarios by incorporating the relevant constraints and was evaluated in both best-case (BC) and worst-case (WC) scenarios, while the SBS method was explicitly developed for multi-Vdd SoCs. Two industrial systems-on-chip, SoC-A and SoC-B, were used as benchmarks for this comparison. For SoC-A, the TDM-based methods consistently achieved substantial reductions in test times across various bus configurations, outperforming SJF-WC by up to 5.68×, SJF-BC by up to 2.21×, and SBS by up to 2.07×. Likewise, for SoC-B, the TDM approaches again showed significant improvements, achieving near-optimal or optimal test schedules and outperforming SJF-WC by up to 9.94×, SJF-BC by up to 2.34×, and SBS by up to 2.27×. These results clearly illustrate that the TDM-based scheduling methodologies significantly enhance test-time efficiency through improved parallelization, better resource utilization, and the effective handling of multi-voltage and multi-frequency constraints compared with traditional scheduling methods. It should be noted that the above comparison includes an ILP-based TDM approach, which is not incorporated into the proposed toolkit due to scalability limitations. Instead, the toolkit utilizes the RP-SA method, as it achieves near-optimal or optimal solutions matching those of the TDM-ILP approach but with substantially lower computational cost, making it significantly more practical for real-world applications.
To fully exploit the benefits of the TDM methodology for testing multi-Vdd SoCs, the TAM architecture must be carefully designed to align with TDM's requirements. To demonstrate the significant impact of the TAM configuration on test scheduling efficiency when employing TDM, a case study was conducted on a virtual SoC comprising thirteen (13) cores distributed across three (3) voltage islands, each supporting up to three (3) voltage levels (further details are available in [6]). A total of 10,970 random TAM configurations were generated, all assuming the same total number of ATE channels. These configurations varied in terms of the number of buses, the number of test data lines per bus (bus width), and the core-to-bus assignments. A TDM-based test scheduling method was applied to each configuration to minimize the overall test time. The resulting test times ranged from 16K to 254K time units, with a large spread around the mean. These results, illustrated in Figure 3a, highlight the substantial variation in test time depending on the TAM configuration, underscoring the critical role of TAM design in achieving efficient test scheduling when employing TDM.
Identifying an optimal TAM configuration among the millions of possible alternatives is a computationally complex and challenging problem. To address this, a Branch-and-Bound TAM optimization method has been developed [5], specifically aimed at minimizing test time when TDM is employed for test scheduling. This technique exploits a computationally simple and accurate TDM bounding criterion [5] to prune a vast number of suboptimal TAM configurations early in the search process, thereby significantly reducing computational effort. Furthermore, the fast GRP test-scheduling approach is integrated to evaluate and select promising TAM configurations during the search.
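The branch-and-bound idea can be illustrated with a deliberately simplified model. The sketch below is a generic stand-in under stated assumptions, not the method or the bounding criterion of [5]: cores carry an abstract test "volume", a bus's time is approximated by the sum of volumes assigned to it, and any partial assignment whose busiest bus already matches the best complete schedule found so far is pruned.

```python
# Hedged sketch of branch-and-bound over core-to-bus assignments.
# vol[i]: abstract test volume of core i; schedule length = busiest bus.

def bb_assign(vol, n_buses):
    best = [sum(vol)]                  # worst case: every core on one bus

    def recurse(i, loads):
        if max(loads) >= best[0]:      # bound: this subtree cannot improve
            return                     # prune it entirely
        if i == len(vol):
            best[0] = max(loads)       # complete assignment, new incumbent
            return
        for b in range(n_buses):       # branch: try core i on each bus
            loads[b] += vol[i]
            recurse(i + 1, loads)
            loads[b] -= vol[i]

    recurse(0, [0] * n_buses)
    return best[0]

print(bb_assign([7, 5, 4, 4, 3], 2))   # 12 (best 2-way split: 12 vs 11)
```

Because loads only grow along a branch, the prune is safe: no completion of a pruned partial assignment can beat the incumbent, which is exactly how early bounding cuts the search space.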
The adoption of GRP is justified by its demonstrated high correlation with both the highly effective RP-SA method [5] and the lower bound of each TAM configuration, as depicted in Figure 3b. This correlation ensures that the selected greedy evaluation method serves as a reliable and efficient means of assessing the quality of TAM configurations during the optimization process.
For very large SoCs, the complexity of the TAM optimization problem grows significantly. In such cases, exhaustively enumerating and evaluating all possible TAM configurations becomes impractical. However, the search space can often be significantly reduced by considering area and layout constraints. For instance, even if the TAM configuration yielding the minimum test time involves connecting two cores located at opposite corners of the die to the same TAM bus, the routing overhead associated with such a solution would likely render it impractical. Physical design constraints of this nature enable the early elimination of many infeasible TAM configurations, thereby reducing the complexity of the TAM optimization process.
A practical design approach for large SoCs is to partition the floorplan into M adjacent areas based on design constraints and to generate a separate TAM configuration for each of them. Besides the low routing overhead, this approach also offers low TAM optimization complexity because (a) a smaller problem instance needs to be solved for each area, and (b) the problem is inherently parallel, since each area can be optimized independently of the others. The only remaining question is how to distribute the available tester channels among the M areas so as to minimize the test time for the whole SoC. In [5], one exact and one heuristic optimization method are presented for distributing the TAM lines among the areas effectively.
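The channel-distribution step can be sketched as follows. Assuming, purely for illustration, that an area's test time scales as load/channels, a simple greedy heuristic grants each successive channel to the currently slowest area. This is a generic stand-in, not the exact or heuristic method of [5]:

```python
import heapq

# Hedged illustration: distribute tester channels among M floorplan areas,
# assuming (simplistically) that each area's test time ~ load / channels.
# Each remaining channel goes to whichever area is currently slowest.

def distribute_channels(loads, total_channels):
    m = len(loads)
    assert total_channels >= m, "need at least one channel per area"
    alloc = [1] * m
    # max-heap on current area time (values negated for heapq's min-heap)
    heap = [(-loads[i] / alloc[i], i) for i in range(m)]
    heapq.heapify(heap)
    for _ in range(total_channels - m):
        _, i = heapq.heappop(heap)     # slowest area right now
        alloc[i] += 1
        heapq.heappush(heap, (-loads[i] / alloc[i], i))
    soc_time = max(loads[i] / alloc[i] for i in range(m))
    return alloc, soc_time

alloc, t = distribute_channels([60.0, 30.0, 10.0], 10)
print(alloc, t)                        # [6, 3, 1] 10.0
```

In this toy instance, the heuristic balances all three areas to the same per-area test time, which is the goal of the channel-distribution step: the SoC test time equals the slowest area's time.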
3. Motivation for Toolkit
In contemporary semiconductor manufacturing, the testing process for Systems-on-Chip (SoCs) is predominantly governed by proprietary commercial tools. These tools offer robustness, extensive functionality, and high-quality results. However, they are also characterized by significant cost implications, proprietary encryption of intermediate data, and a lack of interfaces for custom extensions. Consequently, the validation and qualification of emerging methodologies and techniques become highly challenging, if not infeasible. As a result, despite their demonstrated efficacy, such tools are unsuitable as foundational platforms for exploratory research and the proof-of-concept validation of novel SoC test techniques and methodologies.
The primary motivation behind this work is to address these limitations and enhance the Technology Readiness Level (TRL) of the Time-Division Multiplexing (TDM)-based test scheduling methodology for multicore, multi-Vdd SoCs with multiple voltage islands. The goal is to validate the technical feasibility of this methodology while increasing its investment readiness. To achieve this, a user-friendly and efficient integrated environment for creating, executing, and studying TDM-based test processes has been developed. This environment facilitates the definition of test processes through high-level language (HLL) descriptions, provides structured and accessible procedures for executing customized test processes, and incorporates advanced test scheduling and TAM optimization solvers capable of handling the inherent complexity of modern SoCs. Furthermore, it includes a sophisticated visualization tool that enables strategic analysis and optimized decision-making, offering a single-entry graphical interface for efficiently managing TDM-based test tasks.
The framework of the TDM Test Scheduling and TAM Optimization toolkit may constitute a significant effort towards standardizing the formulation of complex manufacturing test problems, particularly in the context of dynamic voltage and frequency scaling (DVFS)-based SoCs. The complexity of the test process increases with the number of cores in an SoC, the number of voltage levels and voltage islands, the structure of test access mechanisms (TAMs), and the power and thermal constraints imposed on the test procedure.
To the best of the author's knowledge, the TDM Test Scheduling and TAM Optimization toolkit is the first freely available tool capable of handling TDM-based test planning for multicore, multi-Vdd SoCs with voltage islands. Its development aims to inspire further advancements in free tools for manufacturing tests, encouraging their adoption by large research teams and organizations to improve the TRL of innovative test methodologies. In support of this objective, the presented toolkit follows a modular architecture, allowing for the seamless integration of alternative solvers.
The proposed toolkit holds immediate relevance to various user groups, classified as follows:
Professional Users
Test engineers: The TDM Test Scheduling and TAM Optimization toolkit provides test engineers with access to cost-efficient TDM-based test scheduling methodologies within a user-friendly, integrated framework. Its tailored visualization capabilities facilitate informed decision-making throughout the test planning process.
Researchers: The toolkit enables researchers to create artificial SoCs using intellectual property (IP) or benchmark cores, apply their methodologies, and compare outcomes against established solutions. Additionally, its visualization features aid in identifying potential issues, strengths, and weaknesses of newly developed methodologies.
Instructors: The toolkit serves as an educational aid by providing an intuitive environment for students to define and solve test scheduling and TAM optimization problems that consider power and thermal constraints. The visualization tools further support the exploration and presentation of test scheduling results.
Institutes and Organizations
Public and private research institutes: These entities can leverage the TDM Test Scheduling and TAM Optimization toolkit as an efficient computational framework for modeling complex test processes. The toolkit’s workbench supports the development of novel experiments, models, and methodologies.
Industrial Applications
VLSI companies: The proposed toolkit offers value to VLSI companies by providing reusable modules for test planning. Its underlying software architecture ensures seamless integration into existing systems, facilitating potential commercialization as a CAD tool or its adaptation for alternative applications.
In conclusion, the TDM Test Scheduling and TAM Optimization toolkit, by providing an accessible, modular, and efficient framework, aims to bridge the gap between academic research and industrial application, fostering the continued evolution of test scheduling and TAM optimization methodologies in semiconductor design and manufacturing.
4. Description of Toolkit
The TDM Test Scheduling and TAM Optimization toolkit consists of six (6) distinct modules that can function independently or collaboratively to achieve its principal objective: the formulation and execution of an optimal test process for a multicore, multi-Vdd SoC with multiple voltage islands. The individual modules are described in the following paragraphs.
The core description module provides a structured approach for recording the technological attributes of intellectual property (IP) or benchmark cores embedded within a targeted SoC. These attributes include the technology process, required area, supported voltage and frequency levels, test time per frequency per voltage, average dynamic power per frequency per voltage, static power per voltage, state retention requirements between tests, and wrapper parallel port (WPP) size. A specialized parameter facilitates technology process shifts in core data, adjusting area, power, and test time accordingly. The textual representation of a core is stored in a designated core description file. Additionally, the module employs dedicated files to define a core's power consumption during testing at fixed time intervals, allowing the dynamic substitution of average power values. While this enhances power-constrained test processes, it also increases computational overhead.
The HLL module supports an efficient and comprehensible textual representation of the SoC architecture and the formulation of TDM-based test processes. It is assumed that SoCs allow for modular testing, wherein each embedded core is treated as a testable unit through the implementation of a core test wrapper compliant with standards such as IEEE 1500 [13]. Through HLL, users can specify SoC-related parameters, including the embedded cores, the voltage island configuration, and the supported voltage levels per island. Additionally, test process parameters such as the maximum scan-shift frequency supported by the Automatic Test Equipment (ATE), the supported frequency levels for TDM-based test processes, the TAM description, and the power constraints during testing can be defined. All these specifications are stored in a dedicated initialization file.
Given the core definition files and the initialization file of the test process, the toolkit automatically generates TDM-based test process description files, significantly streamlining the workflow. Specifically, two output files are generated: a TAM description file and a test process file. The former describes a proposed test access mechanism for the targeted SoC, balancing the test load across the available test buses. The latter contains all the information necessary for testing the targeted SoC based on the proposed TAM. Since both files are formatted in HLL, users can modify them to evaluate alternative test strategies.
Given a test process file as input, the toolkit can also generate an optimal TAM configuration tailored to the targeted SoC. This configuration is stored in a specialized TAM configuration file within the framework. Users can combine such a file with an initialization file to automatically generate a new TDM-based test process aligned with the described TAM configuration.
Table 1 presents a comparison between HLL, the Standard Test Interface Language (STIL) [14], and the Automatic Test Markup Language (ATML) [15], focusing on their purpose, structure, and suitability for SoC test process modeling and automation.
Overall, HLL is a research-oriented, high-level language specifically designed for power/thermal-aware test process modeling and TAM optimization in multicore SoCs. It prioritizes usability, flexibility, and integration with experimental solvers. STIL is a low-level, production-grade standard used primarily for describing test vectors, timing, and patterns. It is suited for post-ATPG and ATE interfacing but lacks expressiveness for architectural modeling or test planning. ATML, on the other hand, is not a test modeling language per se but rather a metadata and configuration framework used in complex automatic test systems, particularly in the aerospace and defense industries. While STIL and ATML are indispensable in production flows for pattern exchange and test logistics, HLL fills a unique gap by offering a lightweight, modular, and researcher-friendly approach to test planning, TAM design, and test process optimization.
The editor module offers advanced functionalities for processing files written in HLL. Supporting multiple tabs for concurrent file editing, it includes a comprehensive suite of text utilities [16]. This enables the efficient management of test process data within the toolkit without requiring external applications. A notable feature of the editor is its smart HLL code verification tool, which analyzes HLL code for syntactical and logical errors, providing real-time feedback to streamline test process implementation.
The console module operates as a standalone application that also interfaces with external tools. It is responsible for generating the following:
Test processes using initialization or TAM configuration files.
An optimized TAM configuration for a given SoC, incorporating or ignoring power constraints.
A high-quality test schedule for a given test process, ensuring compliance with power constraints.
Detailed reports on power and thermal effects across the SoC during testing.
Log files documenting execution processes and results.
The console module accepts a configuration file as input, along with a verification flag. There are three types of configuration files within the framework, which guide the console application in creating a test process, optimizing the TAM for a given SoC, or executing a test process. These files include paths to HLL program files, solver parameters, power constraints, and external tool configurations, such as those of the HotSpot thermal analysis tool [7]. Additionally, they define parameters affecting the output data representation, such as time intervals, intermediate result derivation, and metric units. Configuration files can be generated automatically by the toolkit or edited manually. In verification mode, a syntax analyzer scans the input parameters, while a logical analyzer assesses interdependencies within the data, flagging errors or warnings in the console to prevent computational interruptions.
The solver module integrates three main solvers: a Greedy (GRP) solver and an RP-SA solver, each with specific advantages and limitations [4], along with a cluster-targeting solver for balancing tasks. The solvers incorporate novel enhancements, prioritizing large test loads and mitigating infinite loops in the rectangle packing process, and consistently produce high-quality results, even for highly variable test loads. Furthermore, the module interfaces with HotSpot to estimate the thermal impact of the generated test schedules, automating HotSpot configuration, input derivation, and execution to ensure the seamless integration of thermal analysis. A log file is generated for each solver execution, documenting the solver's actions and results.
The visualization module is a user-friendly component that generates detailed and interactive graphical representations using an external plotting library [17]. It comprises four submodules supporting the following:
Visualization of test schedules, with optional annotations for power and thermal data.
Visualization of power graphs at the SoC, voltage island, and core levels.
Visualization of thermal graphs at the SoC, voltage island, and core levels.
Visualization of the SoC floorplan, annotated with power and thermal data.
The workbench module consolidates the toolkit's functionalities into an integrated user interface comprising five submodules. The first enables users to define credentials, select and configure solvers, set test process parameters, and integrate the external tools. The second integrates the editor, solver, and visualization components into a unified workspace. The third provides references to research materials on TDM-based test processes. The fourth supports user profile management, database functionality, remote notifications, and data encryption for security. The fifth automates the creation of HLL program files for test processes.
Finally, this module acts as the glue logic of the TDM Test Scheduling and TAM Optimization toolkit, serving as the single point from which every utility and solving function can be accessed.
Table 2 presents a comparative analysis highlighting the added value and distinctive features of the TDM Test Scheduling and TAM Optimization Toolkit compared with existing toolkits and software frameworks commonly employed in the test scheduling and TAM optimization domain.
The proposed TDM Test Scheduling and TAM Optimization Toolkit is specifically designed to address the unique complexities associated with testing modern multicore, DVFS-based Systems-on-Chip (SoCs) featuring multiple voltage islands. In contrast to existing commercial tools such as Mentor Tessent and Synopsys DFTMAX, which primarily focus on general Design-for-Test (DfT) tasks and ATPG pattern generation, the TDM Toolkit explicitly integrates advanced optimization methodologies that consider power and thermal constraints. Additionally, commercial solutions are typically limited in their support for multi-voltage/frequency scaling and thermal-aware strategies, and their proprietary nature restricts customization and experimentation. Similarly, while academic and research tools like the ITC’02 Benchmarks and Fault provide valuable benchmarks, DfT, and basic validation capabilities, they typically lack comprehensive support for multi-voltage scenarios and direct integration of power- and thermal-aware optimization. Consequently, the proposed toolkit significantly advances the state-of-the-art by integrating TDM-based test scheduling methods, robust optimization strategies, extensive visualization capabilities, and a user-friendly, modular design. This makes it a highly valuable tool for both academic research and practical applications in industrial test environments.
5. Toolkit’s Workflows
The flowchart in Figure 4a delineates the sequential stages involved in generating a test process, including loading cores, initializing the SoC, creating a floorplan, and finalizing the test process. Specifically, the process begins with the user determining whether to load intellectual property (IP) cores or benchmark cores. If IP cores are selected, the user must load them from core description files, with the option to select multiple files simultaneously. If benchmark cores are chosen, the flow proceeds without additional file input, as the corresponding core description files are pre-embedded within the toolkit's resources. Subsequently, the user must select a SoC initialization file to define the SoC architecture and form the test process. If a floorplan is required for the targeted SoC, the hotfloorplan tool [7] is employed to generate it. Finally, the test process is executed, producing the TAM description and test process files, thereby concluding the workflow.
The flowchart in
Figure 4b illustrates the workflow for generating optimal test schedules within the TDM Test Scheduling and TAM Optimization Toolkit. The process begins with the user providing a
file, which contains structural and functional data of the targeted System-on-Chip.
Following this, a decision is made regarding whether additional test scheduling specifications are required. If additional specifications are needed, they are incorporated into the workflow; otherwise, the process proceeds directly to the Test Access Mechanism (TAM) selection stage. At this stage, the user must select a file that defines the TAM configuration for the SoC. Subsequently, the test scheduling algorithm selection takes place. The user can choose between two distinct scheduling algorithms.
First, a Greedy Algorithm: a heuristic-based approach that efficiently schedules test processes while minimizing computational resource utilization. Second, a Simulated Annealing with Rectangle Packing approach: an advanced optimization-based method designed to iteratively refine the test allocation for enhanced scheduling efficiency. Upon selecting the algorithm, the system processes the input files to generate an optimized test schedule. The outputs of this stage include a file containing the generated test schedule and metadata files providing additional information related to the test schedule, including power consumption, thermal analysis results from HOTSPOT, execution time, and resource utilization.
At this point, the workflow concludes, marking the finalization of the test scheduling process. This methodology ensures an efficient, systematic, and scalable approach to test schedule generation, incorporating modularity and adaptability to accommodate diverse SoC architectures.
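To make the greedy scheduler's operating principle concrete, it can be sketched as a shelf-based rectangle packing, where each core test is a rectangle of TAM width by test time packed into a strip whose width is the total TAM width. This is an illustrative sketch, not the toolkit's actual implementation; the function name `greedy_schedule` and the tuple layout are assumptions:

```python
def greedy_schedule(tests, tam_width):
    """Greedy shelf (level) packing sketch: each test is a (name, width, time)
    rectangle packed into a strip of width `tam_width`; returns per-test
    (start_time, tam_offset) placements and the resulting makespan."""
    # Sort by decreasing test time so the first test on each shelf
    # defines the shelf height; later tests on it can never overflow.
    tests = sorted(tests, key=lambda t: -t[2])
    shelves = []   # each shelf: [start_time, shelf_height, used_width]
    schedule = {}  # test name -> (start time, TAM offset)
    for name, width, time in tests:
        if width > tam_width:
            raise ValueError(f"test {name} is wider than the TAM")
        for shelf in shelves:
            if shelf[2] + width <= tam_width:   # fits on an existing shelf
                schedule[name] = (shelf[0], shelf[2])
                shelf[2] += width
                break
        else:                                   # open a new shelf on top
            start = shelves[-1][0] + shelves[-1][1] if shelves else 0
            shelves.append([start, time, width])
            schedule[name] = (start, 0)
    makespan = shelves[-1][0] + shelves[-1][1] if shelves else 0
    return schedule, makespan
```

Sorting by decreasing test time ensures that the first rectangle on each shelf sets the shelf height, so subsequently placed tests on that shelf never exceed it.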
The flowchart in Figure 4c represents the workflow for the optimization of the TAM of a targeted SoC test process. It commences with the loading of SoC data via an input file. At this stage, the system determines whether the given SoC is classified as a large-scale design. This distinction is crucial, as it dictates subsequent steps related to SoC partitioning and ATE channel allocation.
If the SoC is classified as large, the workflow proceeds with the implementation of the selected partitioning strategy. There are three distinct approaches:
Simple SoC Partitioning: A fundamental approach where the SoC is divided into smaller units without incorporating additional constraints.
Island- and Area-Based Partitioning: A more refined clustering technique that partitions the SoC based on a predefined number of voltage islands per partition, taking into account proximity constraints among islands to enhance overall test process efficiency.
Island-, Area-, and Core-Based Partitioning: This approach clusters voltage islands to form partitions not only based on their number and proximity but also by considering the total number of cores per partition for computational reasons.
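To illustrate the island- and proximity-based idea behind the latter two strategies, a minimal greedy clustering can be sketched as follows. The function `partition_islands`, the coordinate-based island tuples, and the single `max_per_partition` constraint are simplifying assumptions for illustration; the toolkit additionally considers area and per-partition core counts:

```python
import math

def partition_islands(islands, max_per_partition):
    """Greedy proximity clustering sketch: repeatedly seed a partition with an
    unassigned voltage island and fill it with the seed's nearest unassigned
    neighbours, up to `max_per_partition` islands. Islands are (name, x, y)."""
    remaining = list(islands)
    partitions = []
    while remaining:
        seed = remaining.pop(0)
        part = [seed]
        # Sort remaining islands by distance to the seed and take the closest.
        remaining.sort(key=lambda isl: math.dist((seed[1], seed[2]),
                                                 (isl[1], isl[2])))
        while remaining and len(part) < max_per_partition:
            part.append(remaining.pop(0))
        partitions.append(part)
    return partitions
```

Grouping nearby islands keeps each partition physically compact, which mirrors the proximity constraint described above.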
Once the partitioning takes place, the available ATE channels are optimally allocated across each partition, following the methodology outlined in [5]. Let c_i represent the number of ATE channels assigned to partition i. To ensure robust TAM optimization, the toolkit systematically evaluates configurations within the range [c_i − 0.25c_i, c_i + 0.25c_i] for each partition i, allowing for adaptive resource distribution that may enhance test efficiency.
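A minimal sketch of this candidate-range generation follows, assuming each partition's nominal channel count c_i is an integer and that the ±25% bounds are rounded inward to integers; the exact rounding used by the toolkit is not specified here:

```python
import math

def channel_candidates(c_i):
    """Candidate ATE-channel counts for a partition nominally assigned c_i
    channels: every integer in [c_i - 0.25*c_i, c_i + 0.25*c_i], with the
    bounds rounded inward and at least one channel kept (an assumption)."""
    lo = max(1, math.ceil(0.75 * c_i))
    hi = math.floor(1.25 * c_i)
    return list(range(lo, hi + 1))
```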
The toolkit then optimizes the TAM for each partition, repeating the optimization process for every possible number of ATE channels within the specified range. Users can choose between two evaluation algorithms for the TAM configurations that pass the bounding criterion, one computationally efficient and one computationally intensive. At the end of this process, each partition is assigned a set of optimized TAM configurations. The toolkit then systematically explores all feasible sets of partition-level TAM configurations that satisfy the ATE constraint, i.e., the total number of ATE channels used across the SoC's partitions in a set does not exceed the number of available ATE channels. Finally, it selects the most efficient set of partition-level TAM configurations, which is then used to derive the optimized test process for the entire SoC.
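This final selection step can be sketched as an exhaustive combination search. The sketch assumes each partition-level TAM configuration is summarized as a (channels, test_time) pair and that partitions are tested in parallel, so the overall test time is the maximum over partitions; both simplifications are assumptions made for illustration:

```python
from itertools import product

def best_configuration(partition_options, total_channels):
    """Exhaustive selection sketch: combine one (channels, test_time) option
    per partition, keep combinations whose channel sum fits the ATE budget,
    and return the one minimising the overall (parallel) test time."""
    best = None
    for combo in product(*partition_options):
        if sum(c for c, _ in combo) > total_channels:
            continue  # violates the ATE channel constraint
        overall = max(t for _, t in combo)  # partitions tested in parallel
        if best is None or overall < best[0]:
            best = (overall, combo)
    return best  # (overall_test_time, chosen options), or None if infeasible
```

The search space is the Cartesian product of the per-partition option sets, which is why pruning infeasible channel sums early matters for larger SoCs.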
If the SoC is not classified as large, the workflow branches into three alternative TAM optimization techniques:
Branch and bound optimization: a highly reliable and exhaustive search-based method that explores feasible TAM solutions using a bounding criterion to derive optimal TAM configurations (see Section 2).
Test load balancing: a method focused on distributing core test loads efficiently across the available TAM resources, ensuring an even allocation. Users can specify an acceptable deviation among the test loads received per TAM bus, thereby influencing both the execution speed and quality of the final results.
Heuristic: a computationally efficient alternative that rapidly explores the solution space of TAM configurations, deriving an effective, though potentially suboptimal, TAM configuration.
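For illustration, the test load balancing idea can be sketched as a longest-processing-time (LPT) assignment of core test times to TAM buses, followed by a check against the user-specified deviation bound; the function name and the relative-spread metric are assumptions:

```python
def balance_test_loads(core_times, num_buses, max_deviation):
    """LPT-style balancing sketch: assign each core (longest test first) to
    the currently least-loaded TAM bus, then verify that the relative spread
    of bus loads stays within `max_deviation` (e.g. 0.1 for 10%)."""
    buses = [[] for _ in range(num_buses)]
    loads = [0.0] * num_buses
    for name, t in sorted(core_times, key=lambda c: -c[1]):
        i = loads.index(min(loads))  # least-loaded bus so far
        buses[i].append(name)
        loads[i] += t
    spread = (max(loads) - min(loads)) / max(loads) if max(loads) else 0.0
    return buses, loads, spread <= max_deviation
```

Tightening `max_deviation` forces a more even allocation at the cost of a longer search in the toolkit's actual method, which is the speed/quality trade-off mentioned above.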
Following the selection of the TAM optimization method, the toolkit produces two essential outputs: a file encapsulating the generated optimal TAM structure and metadata files with statistical data about the optimization procedure. Finally, the proposed TAM configuration is used to create the optimized test process for the targeted SoC, and the workflow concludes.