Article

TDM Test Scheduler and TAM Optimization Toolkit: An Integrated Framework for Test Processes of DVFS-Based SoCs with Multiple Voltage Islands

by
Fotios Vartziotis
Department of Informatics and Telecommunications, University of Ioannina, 471 00 Arta, Greece
Chips 2025, 4(2), 17; https://doi.org/10.3390/chips4020017
Submission received: 14 February 2025 / Revised: 30 March 2025 / Accepted: 7 April 2025 / Published: 11 April 2025

Abstract

The TDM Test Scheduler and TAM Optimization Toolkit is a novel, integrated, and user-friendly solution designed for engineers, researchers, and instructors working in the field of manufacturing tests. It effectively supports test planning for multicore, DVFS-based SoCs with multiple voltage islands, offering optimized solutions that minimize test costs while ensuring compliance with power and thermal constraints. The toolkit provides (a) a high-level language (HLL) for the intuitive representation of test processes, along with a smart syntax and logic checker for verification; (b) an advanced compilation and execution environment featuring two computationally efficient Time-Division Multiplexing (TDM)-specialized solvers; (c) a sophisticated Test Access Mechanism (TAM) optimization framework; (d) a customized visualization environment capable of depicting and animating power- and thermal-annotated test schedules; (e) a versatile testbed for educational and research activities.

1. Introduction

Power consumption remains a critical design constraint in modern electronic systems. To address this challenge, Dynamic Voltage and Frequency Scaling (DVFS) and voltage islands have become key techniques for enhancing energy efficiency in high-performance, low-power SoCs, particularly in mobile and battery-powered devices [1,2]. DVFS is a power management approach that dynamically adjusts the supply voltage and operating frequency based on the system’s workload, reducing power consumption during periods of low-performance demand. Voltage islands, on the other hand, partition the SoC into multiple regions (islands), with each region operating under an independent power supply rail and potentially at a different voltage level. This design flexibility allows each island to function at the voltage and frequency best suited to its performance and power requirements, enabling localized power optimization across the SoC.
The integration of these low-power techniques in SoCs substantially impacts the manufacturing test process and, consequently, test costs. The multi-voltage testing requirement leads to a substantial increase in test data volume and prolonged test application times [3], ultimately driving up test costs. The challenge is compounded by the need to adapt scan frequencies and TAM configurations to accommodate different voltage levels, introducing additional test scheduling and power constraints during the test process [4,5]. Consequently, specialized design-for-testability (DFT) solutions and test optimization techniques should be used to address the unique challenges posed by DVFS and multi-voltage SoC architectures while minimizing test time and cost.
This work presents a novel integrated framework, implemented as a software toolkit [6], that effectively utilizes the Time-Division Multiplexing (TDM) methodology [4] for testing DVFS-based SoCs with multiple voltage islands. The proposed toolkit generates cost-effective and power-aware test processes tailored for application in the final production stages of the VLSI industry. Overall, the proposed toolkit supports five (5) major functionalities:
  • Efficient representation and verification of test processes targeting SoCs that consist of IP cores placed in multiple voltage islands operating at different voltage or frequency settings. A specialized, easy-to-use high-level language (HLL) has been developed to specify the modular SoC under test, the test environment, and its constraints. A verification tool is offered to perform syntactical and logical checks in HLL programs.
  • Compilation and execution of test processes written in HLL. The results are optimal test schedules annotated with power- and thermal-related data. Thermal data are acquired from the HotSpot tool [7], which can be integrated into the proposed toolkit via a specialized interface.
  • Test Access Mechanism (TAM) optimization for multi-Vdd SoCs. In addition, special mechanisms have been developed to address the high complexity of large SoC designs.
  • Visualization of power- and thermal-annotated test schedules generated by the toolkit’s solvers. Animation on the SoC’s floorplan is also available.
  • Generation of test processes for artificial multi-Vdd SoCs that may consist of benchmark cores. They can be used mainly for educational and research activities. A floorplan of the targeted artificial SoC can be created using the hotfloorplan tool [7] according to the user’s needs.
This introduction is followed by Section 2, which provides (a) the motivation underlying the Time-Division Multiplexing (TDM) methodology for testing multi-Vdd SoCs and (b) an overview of basic computational techniques integrated into the toolkit. The rationale for developing the proposed toolkit is further elaborated in Section 3. Section 4 provides a detailed overview of the toolkit’s components, while Section 5 outlines its core workflows. In Section 6, the toolkit’s capabilities and results are demonstrated through three exemplary use cases. Finally, Section 7 presents concluding remarks and an outlook on future developments.

2. Background

Testing multi-Vdd SoCs is time-intensive due to the need for tests across multiple voltage levels [8,9]. Constraints such as restricted concurrent testing of cores within the same voltage island at different voltages and dependencies introduced by TAM structures further complicate test scheduling. Additionally, lower voltage levels require reduced scan frequencies to prevent timing violations in scan chains, wrappers, and decompression logic, leading to significantly longer test times compared with single-Vdd SoCs.
Example 1.
Figure 1a illustrates a SoC comprising cores C1 and C2 within voltage island L1 and core C3 within voltage island L2. Cores C2 and C3 are connected to TAM Bus 1, while core C1 is connected to TAM Bus 2. Both voltage islands support two voltage levels, V1 and V2, with corresponding nominal scan frequencies of F and F/2, respectively. The tester supplies test data at frequency F for SoC testing. Note that cores C2 and C3 cannot be tested simultaneously, as they share TAM Bus 1. Furthermore, this shared bus introduces a dependency between voltage islands L1 and L2, which may impose constraints on test scheduling. Although cores C1 and C2 do not share a bus, they cannot be tested concurrently at different voltage levels because they utilize the same power grid. Figure 1b graphically illustrates the increased complexity of testing a multi-Vdd SoC compared with a single-Vdd SoC. It highlights the additional constraints introduced by multiple voltage domains, such as test scheduling dependencies, power grid limitations, and varying scan frequencies, which collectively increase the challenges of efficient test execution.
As demonstrated in [4], conventional test scheduling methods are ineffective in reducing test time for multi-Vdd SoCs, as they fail to address the unique challenges posed by multiple voltage domains. On the other hand, TDM optimizes test time by enabling parallel testing of cores that share TAM resources [4]. With TDM, test data are transmitted by the tester at a high frequency and then shifted through the wrapper’s parallel port (WPP) into the scan chains of multiple cores at lower frequencies, determined by each core’s voltage level. Hence, TDM manages to exploit the gap between (a) the tester frequency and the shift frequencies of the cores, (b) the different shift frequencies of different cores, and (c) the shift frequencies of the cores at different voltage settings.
Example 2.
Consider a tester supplying test data at frequency F for testing the SoC depicted in Figure 1a. A clock divider, driven by the tester’s clock, provides a shift frequency of F to core C1 and F/2 to cores C2 and C3. In one possible test scenario, core C1 fully utilizes TAM Bus 2 to be tested at voltage V1, while cores C2 and C3 are tested in parallel at V1 and V2, respectively, using a shift frequency of F/2. Note that the clock divider’s signals ensure non-overlapping loading of test data onto the shared TAM Bus 1 at scan frequency F/2.
TDM requires specialized techniques to ensure that each test operates at the most efficient shift frequency, thereby minimizing overall test time. To address this complex optimization problem, a Rectangle Packing-Simulated Annealing (RP-SA) method was proposed in [4]. In the RP-SA approach, each test τ at a given frequency F is represented as a rectangle, where the width corresponds to the frequency and the height represents the test duration at that frequency. Each TAM bus is modeled as a virtual bin with unlimited height and a width equal to the maximum frequency supported by the ATE channel. The objective is to pack these rectangles into their respective bins while minimizing the total height, effectively reducing test time.
The simulated annealing (SA) algorithm iterates through different test configurations by randomly selecting shift frequencies before applying the rectangle packing strategy. Its computational cost remains manageable for medium to very large SoCs. However, in scenarios where TAM optimization is required or when computational resources and time are highly constrained, a rapid estimation method becomes essential. To address such cases, the Greedy Rectangle Packing (GRP) approach was developed. Unlike RP-SA, the GRP technique executes rectangle packing only once, providing a significantly faster and more resource-efficient, but sometimes inferior, test scheduling solution compared with RP-SA.
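To make the packing step concrete, the following minimal sketch implements a greedy packer in the spirit of GRP under simplifying assumptions: every test is pre-assigned to a bus, runs at a fixed shift frequency (rectangle width) for a fixed duration (rectangle height), and the per-bus frequency budget equals the ATE channel bandwidth. All names are illustrative; this is not the toolkit’s actual implementation.

```python
from dataclasses import dataclass

# frozen=True makes Test hashable, so tests can later serve as dict keys
@dataclass(frozen=True)
class Test:
    name: str
    freq: float   # shift frequency (MHz) -> rectangle width
    time: float   # test duration (us)    -> rectangle height

def usage_at(placed, x):
    """Sum of shift frequencies of the tests active at instant x."""
    return sum(t.freq for t, ps, pe in placed if ps <= x < pe)

def pack_bus(tests, ate_freq):
    """Greedily place tests on one TAM bus so that, at every instant, the
    sum of active shift frequencies stays within the ATE channel
    bandwidth. Returns ([(test, start_time)], total_test_time)."""
    placed = []  # (Test, start, end); intervals are half-open [start, end)
    # Largest test loads first: a common greedy priority rule.
    for t in sorted(tests, key=lambda u: u.time, reverse=True):
        for s in sorted({0.0} | {end for _, _, end in placed}):
            # Usage only rises at start events, so checking capacity at s
            # and at each start inside (s, s + t.time) is sufficient.
            checks = [s] + [ps for _, ps, _ in placed if s < ps < s + t.time]
            if all(usage_at(placed, x) + t.freq <= ate_freq for x in checks):
                placed.append((t, s, s + t.time))
                break
    makespan = max((end for _, _, end in placed), default=0.0)
    return [(t, s) for t, s, _ in placed], makespan
```

Applied to the tests of Example 3 below, a packer of this kind yields schedules of the flavor shown in Figure 2a; the RP-SA solver wraps such an inner packing step in a simulated-annealing loop that perturbs the chosen shift frequencies before repacking.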
Example 3.
Consider again the artificial SoC depicted in Figure 1a. In this example, cores C1, C2, C3 correspond to the IWLS [10,11] benchmark cores usb_func, wb_conmax, and des_perf. Each voltage island supports three voltage levels, V1, V2, and V3, with nominal scan frequencies of F1 = 400 MHz, F2 = 200 MHz, and F3 = 100 MHz, respectively. The tester provides a 400 MHz clock signal. Test times for each core at V1, V2, and V3, when using the maximum rated shift frequency per voltage level, can be found in [6], along with the data required for replicating the test process using the proposed tool. Now, consider the following set of tests, where each test operates at the maximum rated frequency for its voltage level: $S_a = \{\tau_{C_1}^{V_1F_1}, \tau_{C_1}^{V_2F_2}, \tau_{C_1}^{V_3F_3}, \tau_{C_2}^{V_1F_1}, \tau_{C_2}^{V_2F_2}, \tau_{C_2}^{V_3F_3}, \tau_{C_3}^{V_1F_1}, \tau_{C_3}^{V_2F_2}, \tau_{C_3}^{V_3F_3}\}$. This set represents a complete test process for the given SoC. Each test is modeled as a rectangle, as explained above. For example, the candidate test $\tau_{C_1}^{V_1F_1}$ is modeled as a rectangle with a width of F1 = 400 MHz and a height equal to its test time of 95.35 µs. Bin VB1 corresponds to TAM Bus 1 of Figure 1a, and bin VB2 corresponds to TAM Bus 2. Figure 2a illustrates the test schedule generated by the Greedy Rectangle Packing (GRP) method. The x-axis of each bin represents the ATE channel bandwidth, 400 MHz in this case, while the y-axis represents the total test time, which is 1462 µs.
In this test schedule, each test is applied using its maximum rated shift frequency, an intuitive and optimal choice for this simple SoC. However, in more complex cases, GRP and RP-SA can explore lower shift frequencies for individual tests to achieve better ATE channel bandwidth utilization, leading to more efficient test scheduling. This improvement is demonstrated in [4] and also in Section 6. Additionally, when the test process of Example 3 is re-evaluated using the RP-SA method, a more optimized solution is obtained, as shown in Figure 2b (the total test time is 1412 µs in this case), highlighting the advantages of this technique. However, this improvement comes at the cost of a 116-fold increase in computational time.
The work presented in [4] clearly demonstrates the significant advantages of the above-mentioned TDM-based scheduling methods over conventional approaches such as Shortest-Job-First (SJF) [12] and Session-Based Scheduling (SBS) [3]. Specifically, the SJF method, originally effective for single-Vdd designs, was adapted for multi-Vdd scenarios by incorporating relevant constraints and evaluated in both best-case (BC) and worst-case (WC) scenarios. The SBS method was explicitly developed for multi-Vdd SoCs. Two industrial systems-on-chip, SoC-A and SoC-B, were used as benchmarks for this comparison. For SoC-A, the TDM-based methods consistently achieved substantial reductions in test times across various bus configurations, outperforming SJF-WC by up to 5.68x, SJF-BC by up to 2.21x, and SBS by up to 2.07x. Likewise, for SoC-B, the TDM approaches again showed significant improvements, achieving near-optimal or optimal test schedules and outperforming SJF-WC by up to 9.94x, SJF-BC by up to 2.34x, and SBS by up to 2.27x. These results clearly illustrate that the TDM-based scheduling methodologies significantly enhance test-time efficiency through improved parallelization, better resource utilization, and effective handling of multi-voltage and frequency constraints compared with traditional scheduling methods. It should be noted that the above comparison includes an ILP-based TDM approach, which is not incorporated into the proposed toolkit due to scalability limitations. Instead, the toolkit utilizes the RP-SA method, as it achieves near-optimal or optimal solutions matching those of the TDM-ILP approach but with substantially lower computational cost, making it significantly more practical for real-world applications.
To fully exploit the benefits of the TDM methodology for testing multi-Vdd SoCs, the TAM architecture must be carefully designed to align with TDM’s requirements. To demonstrate the significant impact of the TAM configuration on test scheduling efficiency when employing TDM, a case study was conducted on a virtual SoC comprising thirteen (13) cores distributed across three (3) voltage islands, each supporting up to three (3) voltage levels (further details are available in [6]). A total of 10,970 random TAM configurations were generated, all assuming the same total number of ATE channels. These configurations varied in terms of the number of buses, test data lines per bus (bus width), and core-to-bus assignments. The GRP test scheduling method was applied to each configuration to minimize the overall test time.
The resulting test times ranged from 16K to 254K, with an average test time μ = 61K and a standard deviation σ = 57K (1K = 10³). These results, illustrated in Figure 3a, highlight the substantial variation in test time depending on the TAM configuration, underscoring the critical role of TAM design in achieving efficient test scheduling when employing TDM.
Identifying an optimal TAM configuration among the millions of possible alternatives is a computationally complex and challenging problem. To address this, a Branch-and-Bound TAM optimization method has been developed [5], specifically aimed at minimizing test time when TDM is employed for test scheduling. This technique exploits a computationally simple and accurate bounding criterion of TDM [5] to prune a vast number of suboptimal TAM configurations early in the search process, thereby significantly reducing computational effort. Furthermore, the fast G R P test-scheduling approach is integrated to evaluate and select promising TAM configurations during the search.
The adoption of GRP is justified by its demonstrated high correlation with both the highly effective RP-SA method [5] and the lower bound of each TAM configuration, as depicted in Figure 3b. This correlation ensures that the selected greedy evaluation method serves as a reliable and efficient means for assessing the quality of TAM configurations during the optimization process.
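The following skeleton sketches how such a branch-and-bound search can be organized, assuming a fixed number of buses and using a simple volume-based bound (total test data volume per bus divided by channel bandwidth) as a stand-in for the exact TDM bounding criterion of [5]; `pack_bus` is the greedy packer sketched earlier, and everything else is illustrative rather than the toolkit’s implementation.

```python
def tam_branch_and_bound(per_core_tests, n_buses, ate_freq):
    """per_core_tests: one list of Test objects per core; all tests of a
    core share a bus, since they reach the core through one wrapper."""
    best = {"time": float("inf"), "assign": None}

    def lower_bound(assign):
        # Test data volume (freq x time) per bus, divided by the channel
        # bandwidth, is a floor that no packing can beat.
        load = [0.0] * n_buses
        for t, b in assign.items():
            load[b] += t.freq * t.time / ate_freq
        return max(load)

    def branch(cores, assign):
        if lower_bound(assign) >= best["time"]:
            return  # bound: this subtree cannot beat the incumbent
        if not cores:
            # Complete configuration: score it with the fast GRP packer.
            makespan = max(
                pack_bus([t for t, b in assign.items() if b == bus],
                         ate_freq)[1]
                for bus in range(n_buses)
            )
            if makespan < best["time"]:
                best.update(time=makespan, assign=dict(assign))
            return
        head, tail = cores[0], cores[1:]
        for bus in range(n_buses):
            for t in head:
                assign[t] = bus  # tentatively place this core's tests
            branch(tail, assign)
        for t in head:
            del assign[t]  # undo before returning to the parent node

    branch(per_core_tests, {})
    return best
```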
For very large SoCs, the complexity of the TAM optimization problem grows significantly. In such cases, exhaustively enumerating and evaluating all possible TAM configurations becomes impractical. However, the search space can often be significantly reduced by considering area and layout constraints. For instance, even if the TAM configuration yielding the minimum test time involves connecting two cores located at opposite corners of the die to the same TAM bus, the routing overhead associated with such a solution would likely render it impractical. Physical design constraints of this nature enable the early elimination of many infeasible TAM configurations, thereby reducing the complexity of the TAM optimization process.
A practical design approach for large SoCs is to partition the floorplan into M adjacent areas A1, A2, …, AM based on design constraints and generate a separate TAM configuration for each of them. Besides the low routing overhead, this approach also offers low TAM optimization complexity because (a) a smaller problem instance needs to be solved for each area and (b) the problem is inherently parallel, as each area can be optimized independently of the others. The only remaining question is how to distribute the available tester channels among the M areas in order to minimize the test time for the whole SoC. In [5], one exact and one heuristic optimization method are presented for distributing the TAM lines to the various areas Ai effectively.
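As one illustration of the heuristic channel-distribution idea, the sketch below greedily grants channels to whichever partition currently dominates the overall test time; `time_of(area, n)` stands for any quick estimator of an area’s test time with n channels (e.g., a GRP run). This mirrors the intent, not necessarily the exact algorithm, of [5].

```python
def distribute_channels(areas, total_channels, time_of, min_per_area=1):
    """Greedy heuristic: repeatedly grant one more tester channel to the
    partition whose estimated test time currently dominates the SoC."""
    alloc = {a: min_per_area for a in areas}
    spare = total_channels - min_per_area * len(areas)
    assert spare >= 0, "not enough ATE channels for every partition"
    for _ in range(spare):
        # Partitions are tested in parallel, so the SoC test time is the
        # max over partitions; relieve the current bottleneck first.
        bottleneck = max(areas, key=lambda a: time_of(a, alloc[a]))
        alloc[bottleneck] += 1
    return alloc
```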

3. Motivation for Toolkit

In contemporary semiconductor manufacturing, the testing process for Systems-on-Chip (SoCs) is predominantly governed by proprietary commercial tools. These tools offer robustness, extensive functionality, and high-quality results. However, they are also characterized by significant cost implications, proprietary encryption of intermediate data, and a lack of interfaces for custom extensions. Consequently, the validation and qualification of emerging methodologies and techniques become highly challenging, if not infeasible. As a result, despite their demonstrated efficacy, such tools are unsuitable as foundational platforms for exploratory research and the proof-of-concept validation of novel SoC test techniques and methodologies.
The primary motivation behind this work is to address these limitations and enhance the Technology Readiness Level (TRL) of the Time-Division Multiplexing (TDM)-based test scheduling methodology for multicore, multi-Vdd SoCs with multiple voltage islands. The goal is to validate the technical feasibility of this methodology while increasing its investment readiness. To achieve this, a user-friendly and efficient integrated environment for creating, executing, and studying TDM-based test processes has been developed. This environment facilitates the definition of test processes through high-level language (HLL) descriptions, provides structured and accessible procedures for executing customized test processes, and incorporates advanced test scheduling and TAM optimization solvers capable of handling the inherent complexity of modern SoCs. Furthermore, it includes a sophisticated visualization tool that enables strategic analysis and optimized decision-making, offering a single-entry graphical interface for efficiently managing TDM-based test tasks.
The framework of the TDM Test Scheduling and TAM Optimization toolkit may constitute a significant effort towards standardizing the formulation of complex manufacturing test problems, particularly in the context of dynamic voltage and frequency scaling (DVFS)-based SoCs. The complexity of the test process increases with the number of cores in an SoC, the number of voltage levels and voltage islands, the structure of test access mechanisms (TAMs), and the power and thermal constraints imposed on the test procedure.
To the best of the author’s knowledge, the TDM Test Scheduling and TAM Optimization toolkit is the first freely available tool capable of handling TDM-based test planning for multicore, multi-Vdd SoCs with voltage islands. Its development aims to inspire further advancements in free tools for manufacturing tests, encouraging their adoption by large research teams and organizations to improve the TRL of innovative test methodologies. In support of this objective, the presented toolkit follows a modular architecture, allowing for the seamless integration of alternative solvers.
The proposed toolkit holds immediate relevance to various user groups, classified as follows:
Professional Users
  • Test engineers: The TDM Test Scheduling and TAM Optimization toolkit provides test engineers with access to cost-efficient TDM-based test scheduling methodologies within a user-friendly, integrated framework. Its tailored visualization capabilities facilitate informed decision-making throughout the test planning process.
  • Researchers: The toolkit enables researchers to create artificial SoCs using intellectual property (IP) or benchmark cores, apply their methodologies, and compare outcomes against established solutions. Additionally, its visualization features aid in identifying potential issues, strengths, and weaknesses of newly developed methodologies.
  • Instructors: The toolkit serves as an educational aid by providing an intuitive environment for students to define and solve test scheduling and TAM optimization problems that consider power and thermal constraints. The visualization tools further support the exploration and presentation of test scheduling results.
Institutes and Organizations
  • Public and private research institutes: These entities can leverage the TDM Test Scheduling and TAM Optimization toolkit as an efficient computational framework for modeling complex test processes. The toolkit’s workbench supports the development of novel experiments, models, and methodologies.
Industrial Applications
  • VLSI companies: The proposed toolkit offers value to VLSI companies by providing reusable modules for test planning. Its underlying software architecture ensures seamless integration into existing systems, facilitating potential commercialization as a CAD tool or its adaptation for alternative applications.
In conclusion, the TDM Test Scheduling and TAM Optimization toolkit, by providing an accessible, modular, and efficient framework, aims to bridge the gap between academic research and industrial application, fostering the continued evolution of test scheduling and TAM optimization methodologies in semiconductor design and manufacturing.

4. Description of Toolkit

The TDM Test Scheduling and TAM Optimization toolkit consists of six (6) distinct modules that can function independently or collaboratively to achieve its principal objective: the formulation and execution of an optimal test process for a multicore, multi-Vdd SoC with multiple voltage islands. These modules are the High-Level Language (HLL), the Editor, the Console, the Visualizer, the Graphical User Interface (GUI), and the Library.
The HLL module provides a structured approach for recording the technological attributes of intellectual property (IP) or benchmark cores embedded within a targeted SoC. These attributes include the technology process, required area, supported voltage and frequency levels, test time per frequency per voltage, average dynamic power per frequency per voltage, static power per voltage, state retention requirements between tests, and wrapper parallel port (WPP) size. A specialized factor parameter facilitates technology process shifts in core data, adjusting area, power, and test time. The textual representation of a core is stored in a designated core definition file (.cdf). Additionally, HLL employs .ptrace files to define a core’s power consumption during testing at fixed time intervals, allowing dynamic substitution of average power values. While this enhances power-constrained test processes, it also increases computational overhead.
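For illustration only, the snippet below shows the kind of information a parsed .cdf might carry, expressed as plain Python data; the actual HLL syntax is shown in Figure 6a and documented in [6], and all field names and values here are assumptions (the test time entry follows Example 3).

```python
# Hypothetical content of a parsed core definition file (.cdf); the real
# HLL syntax appears in Figure 6a and in [6]. All fields are illustrative.
usb_func_cdf = {
    "core": "usb_func",                    # IWLS benchmark core (C1)
    "process": "45nm",                     # technology process node
    "factor": 1.0,                         # technology-shift scaling knob
    "area_mm2": None,                      # required area (not shown here)
    "voltages": ["V1", "V2", "V3"],        # supported voltage levels
    "nominal_freq_mhz": {"V1": 400, "V2": 200, "V3": 100},
    "test_time_us": {("V1", 400): 95.35},  # one entry per (Vdd, freq) pair
    "avg_dynamic_power": {},               # per frequency per voltage
    "static_power": {},                    # per voltage
    "state_retention": False,              # retention required between tests
    "wpp_width": 16,                       # wrapper parallel port size
}
```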
The HLL module supports an efficient and comprehensible textual representation of the SoC architecture and the formulation of TDM-based test processes. It is assumed that SoCs allow for modular testing, wherein each embedded core is treated as a testable unit through the implementation of a core test wrapper, compliant with standards such as IEEE 1500 [13]. Through HLL, users can specify SoC-related parameters, including embedded cores, voltage island configurations, and supported voltage levels per island. Additionally, test process parameters such as the maximum scan-shift frequency supported by Automatic Test Equipment (ATE), supported frequency levels for TDM-based test processes, TAM description, and power constraints during testing can be defined. All these specifications are stored in an initialization file of type .init2D.
Given the core definition files and the initialization file of the test process, the toolkit automatically generates TDM-based test process description files, significantly streamlining workflow efficiency. Specifically, two output files are generated: a TAM data file (.tam) and a SoC data file (.soc). The TAM data file describes a proposed test access mechanism for the targeted SoC, balancing test load distribution across the available test buses. The SoC data file contains all the necessary information for testing the targeted SoC based on the proposed TAM. Since these files are formatted in HLL, users can modify them to evaluate alternative test strategies.
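The toolkit’s cluster-based balancing step is not detailed here, but a minimal longest-processing-time-first sketch conveys the idea of distributing core test loads evenly across the buses; function and parameter names are illustrative assumptions.

```python
def balance_cores(core_loads, n_buses):
    """Longest-processing-time-first: assign each core, heaviest first,
    to the TAM bus with the smallest accumulated test load.
    core_loads: {core_name: total test time (or test data volume)}."""
    buses = [{"load": 0.0, "cores": []} for _ in range(n_buses)]
    for core, load in sorted(core_loads.items(), key=lambda kv: -kv[1]):
        lightest = min(buses, key=lambda b: b["load"])
        lightest["cores"].append(core)
        lightest["load"] += load
    return [b["cores"] for b in buses]
```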
Given a .soc file as input, the toolkit can generate an optimal TAM configuration tailored for testing the targeted SoC. This configuration is stored in a specialized file of type .t2d within the HLL framework. Users can combine such files with a .soc file to automatically generate a new TDM-based test process aligned with the described TAM configuration.
Table 1 presents a comparison between HLL, Standard Test Interface Language (STIL) [14], and Automatic Test Markup Language (ATML) [15], focusing on their purpose, structure, and suitability for SoC test process modeling and automation.
Overall, HLL is a research-oriented, high-level language specifically designed for power/thermal-aware test process modeling and TAM optimization in multicore SoCs. It prioritizes usability, flexibility, and integration with experimental solvers. STIL is a low-level, production-grade standard used primarily for describing test vectors, timing, and patterns. It is suited for post-ATPG and ATE interfacing but lacks expressiveness for architectural modeling or test planning. ATML, on the other hand, is not a test modeling language per se but rather a metadata and configuration framework used in complex automatic test systems, particularly in the aerospace and defense industries. While STIL and ATML are indispensable in production flows for pattern exchange and test logistics, HLL fills a unique gap by offering a lightweight, modular, and researcher-friendly approach to test planning, TAM design, and test process optimization.
The Editor module offers advanced functionalities for processing files written in HLL format. Supporting multiple tabs for concurrent file editing, it includes a comprehensive suite of text utilities [16]. This feature enables efficient management of test process data within the toolkit without requiring external applications. A notable feature of the Editor is its smart HLL code verification tool, which analyzes HLL code for syntactical and logical errors, providing real-time feedback to optimize test process implementation.
The Console module operates as a standalone console application that also interfaces with external tools. It is responsible for generating the following:
  • Test processes using initialization or TAM configuration files.
  • An optimized TAM configuration for a given SoC, incorporating or ignoring power constraints.
  • A high-quality test schedule for a given test process, ensuring compliance with power constraints.
  • Detailed reports on power and thermal effects across the SoC during testing.
  • Log files documenting execution processes and results.
The Console module accepts a configuration file as input, along with a verification flag. There are three types of configuration files within the HLL framework, guiding the console application in creating a test process, optimizing the TAM for a given SoC, or executing a test process. These files include paths to HLL program files, solver parameters, power constraints, and external tool configurations, such as the HotSpot thermal analysis tool [7]. Additionally, they define parameters affecting output data representation, such as time intervals, intermediate result derivation, and metric units. Configuration files can be automatically generated using the GUI module or manually edited. In verification mode, a syntax analyzer scans input parameters, while a logical analyzer assesses interdependencies within the data, flagging errors or warnings in the console to prevent computational interruptions.
The Console module integrates three main solvers: a Greedy (GRP) solver and an RP-SA solver, each with specific advantages and limitations [4], along with a cluster-targeting solver for balancing test loads. The solvers in the toolkit incorporate novel solving enhancements, prioritizing large test loads and mitigating infinite loops in the rectangle packing process, and consistently produce near-optimal or optimal results, even for highly variable test loads. Furthermore, the module interfaces with HotSpot to estimate the thermal impact of generated test schedules. The Console module automates HotSpot configuration, input derivation, and execution, ensuring seamless integration of thermal analysis. A log file is generated for each solver execution, documenting solver actions and results.
The Visualizer module is a user-friendly component that generates detailed and interactive graphical representations using the OxyPlot library [17]. It comprises four submodules supporting the following:
  • Visualization of test schedules, with optional annotations for power and thermal data.
  • Visualization of power graphs at the SoC, voltage island, and core levels.
  • Visualization of thermal graphs at the SoC, voltage island, and core levels.
  • Visualization of the SoC floorplan, annotated with power and thermal data.
The GUI module consolidates the toolkit’s functionalities into an integrated user interface. It consists of five submodules: Configuration, Host, Guide, Utility, and Generator. The Configuration submodule enables users to define credentials, select and configure solvers, set test process parameters, and integrate the Console and HotSpot tools. The Host submodule integrates the Editor, Console, and Visualizer into a unified workspace. The Guide submodule provides references to research materials on TDM-based test processes. The Utility submodule supports user profile management, database functionality, remote notifications, and data encryption for security. The Generator submodule automates the creation of HLL program files for test processes.
Finally, the Library module acts as the glue logic of the TDM Test Scheduling and TAM Optimization toolkit: a shared repository from which every utility and solving function can be acquired.
Table 2 presents a comparative analysis highlighting the added value and distinctive features of the TDM Test Scheduling and TAM Optimization Toolkit compared with existing toolkits and software frameworks commonly employed in the test scheduling and TAM optimization domain.
The proposed TDM Test Scheduling and TAM Optimization Toolkit is specifically designed to address the unique complexities associated with testing modern multicore, DVFS-based Systems-on-Chip (SoCs) featuring multiple voltage islands. In contrast to existing commercial tools such as Mentor Tessent and Synopsys DFTMAX, which primarily focus on general Design-for-Test (DfT) tasks and ATPG pattern generation, the TDM Toolkit explicitly integrates advanced optimization methodologies that consider power and thermal constraints. Additionally, commercial solutions are typically limited in their support for multi-voltage/frequency scaling and thermal-aware strategies, and their proprietary nature restricts customization and experimentation. Similarly, while academic and research tools like the ITC’02 Benchmarks and Fault provide valuable benchmarks, DfT, and basic validation capabilities, they typically lack comprehensive support for multi-voltage scenarios and direct integration of power- and thermal-aware optimization. Consequently, the proposed toolkit significantly advances the state-of-the-art by integrating TDM-based test scheduling methods, robust optimization strategies, extensive visualization capabilities, and a user-friendly, modular design. This makes it a highly valuable tool for both academic research and practical applications in industrial test environments.

5. Toolkit’s Workflows

The flowchart in Figure 4a delineates the sequential stages involved in generating a test process, including loading cores, initializing the SoC, creating a floorplan, and finalizing the test process. Specifically, the process begins with the user determining whether to load Intellectual Property (IP) cores or benchmark cores. If IP cores are selected, the user must load them from .cdf files, with the option to select multiple files simultaneously. If benchmark cores are chosen, the flow proceeds without additional file input, as the corresponding .cdf files are pre-embedded within the toolkit’s resources. Subsequently, the user must select a SoC initialization (.init2D) file to define the SoC architecture and form the test process. If a floorplan is required for the targeted SoC, the HotSpot tool is employed to generate it. Finally, the test process is executed, producing the .soc and .tam files, thereby concluding the workflow.
The flowchart in Figure 4b illustrates the workflow for generating optimal test schedules within the TDM Test Scheduling and TAM Optimization Toolkit. The process begins with the user providing a .soc file, which contains structural and functional data of the targeted System-on-Chip.
Following this, a decision is made regarding whether additional test scheduling specifications are required. If additional specifications are needed, they are incorporated into the workflow; otherwise, the process proceeds directly to the Test Access Mechanism (TAM) selection stage. At this stage, the user must select a .tam file that defines the TAM configuration for the SoC. Subsequently, the test scheduling algorithm selection takes place. The user can choose between two distinct scheduling algorithms.
The first is the Greedy Algorithm (GRP), a heuristic-based approach that efficiently schedules test processes while minimizing computational resource utilization. The second is the Simulated Annealing with Rectangle Packing (RP-SA) approach, an advanced optimization-based method designed to iteratively refine the test allocation for enhanced scheduling efficiency. Upon selecting the algorithm, the system processes the .soc and .tam files to generate an optimized test schedule. The outputs of this stage include a .r2d file containing the generated test schedule and metadata files providing additional information related to the test schedule, including power consumption, thermal analysis results from HotSpot, execution time, and resource utilization.
At this point, the workflow concludes, marking the finalization of the test scheduling process. This methodology ensures an efficient, systematic, and scalable approach to test schedule generation, incorporating modularity and adaptability to accommodate diverse SoC architectures.
The flowchart in Figure 4c represents the workflow for the optimization of the TAM of a targeted SoC test process. It commences with the loading of SoC data via a .soc file. At this stage, the system determines whether the given SoC is classified as a large-scale design. This distinction is crucial, as it dictates subsequent steps related to SoC partitioning and ATE channel allocation.
If the SoC is classified as large, the workflow proceeds with the implementation of the selected partitioning strategy. There are three distinct approaches:
  • Simple SoC Partitioning: A fundamental approach where the SoC is divided into smaller units without incorporating additional constraints.
  • Island- and Area-Based Partitioning: A more refined clustering technique that partitions the SoC based on a predefined number of voltage islands per partition, taking into account proximity constraints among islands to enhance overall test process efficiency.
  • Island-, Area-, and Core-Based Partitioning: This approach clusters voltage islands to form partitions not only based on their number and proximity but also by considering the total number of cores per partition for computational reasons.
Once the partitioning takes place, the available ATE channels are optimally allocated across the partitions, following the methodology outlined in [5]. Let Ni represent the number of ATE channels assigned to partition i. To ensure robust TAM optimization, the toolkit systematically evaluates configurations within the range [Ni − 0.25Ni, Ni + 0.25Ni] for each partition i, allowing for an adaptive resource distribution that may enhance test efficiency.
The toolkit employs the Branch-and-Bound technique to optimize the TAM for each partition. The optimization process is repeated for every possible number of ATE channels within the specified range for each partition. Users can choose between two evaluation algorithms for TAM configurations that pass the bounding criterion, namely the computationally efficient GRP and the computationally intensive RP-SA. At the end of this process, each partition is assigned a set of optimized TAM configurations. The toolkit then systematically explores all feasible sets of partition-level TAM configurations that satisfy the ATE constraint, i.e., the total number of ATE channels used across all partitions in a set does not exceed the number of available ATE channels. Finally, it selects the most efficient set of partition-level TAM configurations, which is then used to derive the optimized test process for the entire SoC.
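A compact way to picture this final combination step is sketched below, under the stated assumptions: for each partition, the branch-and-bound phase has produced a table mapping each explored channel count (within ±25% of Ni) to the best achievable test time, and one entry per partition is chosen so that the channel budget holds and the SoC test time, the maximum over partitions, is minimized. The exhaustive enumeration here is illustrative; a real implementation would prune and memoize.

```python
from itertools import product

def combine_partitions(best_time, total_channels):
    """best_time: one dict per partition, {n_channels: best test time
    found by branch-and-bound with n channels}. Picks one entry per
    partition so the channel budget holds and the SoC test time (max
    over partitions, tested in parallel) is minimal."""
    best = (float("inf"), None)
    for choice in product(*(bt.items() for bt in best_time)):
        channels = sum(n for n, _ in choice)
        if channels > total_channels:
            continue  # violates the ATE channel budget
        soc_time = max(t for _, t in choice)
        if soc_time < best[0]:
            best = (soc_time, [n for n, _ in choice])
    return best  # (SoC test time, channels granted per partition)
```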
If the SoC is not classified as large, the workflow branches into three alternative TAM optimization techniques:
  • Branch-and-bound optimization: a highly reliable and exhaustive search-based method that explores feasible TAM solutions using a bounding criterion to derive optimal TAM configurations (see Section 2).
  • Test load balancing: a method focused on distributing core test loads efficiently across the available TAM resources, ensuring an even allocation. Users can specify an acceptable deviation among the test loads received per TAM bus, thereby influencing both the execution speed and quality of the final results.
  • Heuristic: a computationally efficient alternative that applies the Simulated Annealing method to explore the solution space of TAM configurations, rapidly deriving an effective, though potentially suboptimal, TAM configuration (see the sketch after this list).
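A minimal simulated-annealing skeleton for this heuristic option might look as follows; the neighborhood move (relocating one random core to another bus) and all parameter values are illustrative assumptions, with candidate configurations scored by the fast GRP packer or any other cost function.

```python
import math
import random

def sa_tam(cores, n_buses, cost, iters=5000, t0=1000.0, alpha=0.999):
    """cores: list of core names; cost(config) -> SoC test time for a
    {core: bus} mapping, e.g., the GRP makespan over all buses."""
    config = {c: random.randrange(n_buses) for c in cores}
    cur = best = cost(config)
    best_cfg, temp = dict(config), t0
    for _ in range(iters):
        c = random.choice(cores)
        old = config[c]
        config[c] = random.randrange(n_buses)  # neighbor: move one core
        cand = cost(config)
        # Metropolis rule: always accept improvements; occasionally
        # accept worse moves to escape local minima.
        if cand <= cur or random.random() < math.exp((cur - cand) / temp):
            cur = cand
            if cur < best:
                best, best_cfg = cur, dict(config)
        else:
            config[c] = old  # reject: undo the move
        temp *= alpha  # geometric cooling schedule
    return best_cfg, best
```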
Following the selection of the TAM optimization method, the toolkit produces two essential outputs: a .t2d file encapsulating the generated optimal TAM structure and metadata files with statistical data about the optimization procedure. Finally, the proposed TAM configuration is used to create the optimized test process for the targeted SoC, and the workflow concludes.

6. Scenarios

6.1. Test Engineer’s Scenario

In a VLSI company, the SoC design department is developing a DVFS-based SoC with multiple voltage islands. Collaborating with the test planning department, they are evaluating various Design-for-Testability (DfT) strategies. Their analysis considers RTL implementation, area impact, and overall cost efficiency of the test process. The test planning department strongly advocates for testing the new SoC across multiple voltage levels and proposes a Time-Division Multiplexing (TDM)-based DfT approach. They argue that this solution has minimal area overhead and can reduce test time to nearly the levels of single-Vdd testing.
A test engineer (TE) informs the team about a new, freely available toolkit—the TDM Test Scheduler and TAM Optimization toolkit. After experimenting with it, the TE finds that it provides a user-friendly and efficient environment for making strategic decisions in TDM-based test processes. Both departments agree to further explore the feasibility of the TDM-based solution. Consequently, they assign the test engineer to evaluate the toolkit and present findings on its applicability to the new SoC. The test engineer requests information from the design department regarding the structure of the new SoC and its embedded cores. After receiving the necessary data, TE launches the toolkit on a workstation. Upon entering the required user credentials, the GUI appears (Figure 5a), and TE navigates to the Editor tab.
In the Editor, TE, using the HLL language offered by the toolkit, writes the required programs that integrate various data sources, including SoC and core-related specifications, test data per core, and information related to the TDM-based test scheme. Examples of such programs are shown in Figure 6a,b. Next, TE selects the first menu button, C, to invoke the test generator, which automatically creates HLL files specifying the test process. Sample output files are shown in Figure 6c,d. Figure 6c shows the program for the SoC TDM-based test process, and Figure 6d corresponds to the TAM program for the targeted SoC. The toolkit creates a dedicated folder to store the results of the test process. Upon reviewing the TAM program, TE notes that the test generator has automatically assigned the embedded cores to the TAM buses. Consulting the toolkit’s manual, TE discovers that the tool employs a cluster-based technique that optimally balances the overall test load across the TAM buses.
TE carefully reviews the generated HLL programs within the Editor module. Due to their well-defined structure and the clear syntax of the HLL language, these programs are easy to read, understand, and modify, enabling TE to efficiently adjust them according to the department’s specific testing requirements. TE then chooses to perform syntactical and logical verification before proceeding to compile the HLL programs via the R menu button. As shown in Figure 5b, the Console displays messages indicating the status of the compiled HLL programs, along with a prompt to visit the Configuration tab. In response, TE selects a solver, configures its parameters, enables power constraints, and requests a thermal emulation of the test process (Figure 5a). Finally, TE executes the test process using the R menu button once again. Once the solver completes its task, the toolkit generates a notification.
TE opens the folder and examines the generated files. Among them is a .r2d file containing the optimal test schedule for the SoC, along with files providing statistical, power, thermal, and logging information related to the test process. Next, TE selects the S menu button and clicks to visualize the generated test schedule. The toolkit activates the Schedule tab, displaying the required graph, as shown in Figure 7a (Figure 7c for a larger SoC). TE then chooses to annotate the schedule with power (Figure 8a) and thermal data (Figure 8b) to analyze the impact of the derived test schedule on the SoC.
Next, TE seeks a clearer view of the power and thermal state of the voltage islands and embedded cores during the test process. Using the P menu button, TE first chooses to visualize the power consumption per voltage island. This activates the Power tab, as shown in Figure 9a. To focus on a specific voltage island, TE deactivates the graphs for the others. Afterward, TE proceeds to visualize the power consumption per core. Again, TE selectively deactivates graphs for irrelevant cores and takes notes on the relevant ones (Figure 9b). Finally, to analyze temperature variations during the test process, TE uses the T menu button, which activates the Thermal tab. This allows them to examine temperature fluctuations per island and per core, as depicted in Figure 9c,d.
To finalize the analysis, TE examines the impact of the test process from the SoC’s area perspective. TE knows that the toolkit provides an animated visualization where power consumption and temperature distribution across the SoC floorplan are represented using color gradients. To access this feature, TE selects the F menu button. First, the toolkit displays a static image of the SoC floorplan (Figure 10a). Next, it animates power consumption across each core during testing (Figure 10b), followed by an animation of temperature variations per core throughout the test process (Figure 10c). TE carefully observes the results, takes additional notes, and prepares to incorporate these findings into the final report.
Having thoroughly analyzed the test process, TE is nearly ready to finalize the report. However, TE seeks an alternative perspective on the test process by exploring the toolkit’s TAM optimization capabilities. To access this feature, TE selects the O menu button. TE then loads the previously defined program for the SoC’s TDM-based test process. The Console displays messages indicating the status of the compiled HLL program, along with a prompt to visit the Configuration and TAM tabs (Figure 11). In response, TE selects the branch-and-bound TAM optimization solver, configures the TAM and ATE parameters in the TAM tab, and then, in the Configuration tab, enables power constraints and requests a thermal emulation of the test process (Figure 5a). TE executes the TAM optimization process using the O menu button once again. The toolkit generates an optimized TAM configuration, and TE redefines the overall test process for the targeted SoC using the automated test process generation feature in the O menu. TE uses the toolkit to derive the test schedule for the optimized test process and observes a significant reduction in test time (approximately 50% for the virtual SoC, as shown in Figure 7b, and 32% for the large virtual SoC, as shown in Figure 7d). TE documents the results and finalizes the report accordingly. Before closing the toolkit, TE chooses to encrypt all input and output data in the folder for security purposes.
Finally, the test engineer presents the report, stressing the gained advantages from the followed procedure. Both heads of department finalize their decision to adopt the TDM-based test process. The test planning department is responsible for designing the TDM scheme and the TAM configuration of the targeted SoC. Meanwhile, the Test Engineer (TE) prepares the STIL-based test scripts according to the test scheduling data generated by the toolkit. To obtain accurate timing information for configuring the TDM clock divider, the TE activates the Shift Register Vector option available in the toolkit’s Configuration tab. Additionally, the TE plans to contact the toolkit’s developer to discuss potential collaboration for automating the generation of STIL-based test programs.

6.2. Researcher’s Scenario

A researcher (RES) aims to experiment with a novel test scheduling solver for SoCs. To do so, RES requires an artificial SoC as input, relevant test data, and a tool to visualize experimental results. RES learns about a free toolkit that generates artificial SoCs based on benchmark cores, complete with reliable test data and the corresponding floorplan.
After downloading the toolkit, RES quickly discovers that it can effortlessly generate artificial SoCs, along with their test data and floorplans, using the C menu button (Figure 5a). RES also finds that, with minimal modifications, the available solvers can be replaced with an experimental one via the Configuration tab (Figure 5a). Additionally, the toolkit provides built-in visualization tools for analyzing output results.
Recognizing this capability, RES considers comparing a novel solver against existing ones to validate its effectiveness and demonstrate its advantages.

6.3. Instructor’s Scenario

An instructor is preparing a lecture on manufacturing testing and seeks a well-structured, user-friendly tool to introduce and explain fundamental concepts, definitions, and parameters involved in test planning. The instructor learns about a free toolkit that provides such functionality.
After downloading and exploring the toolkit, the instructor finds it useful for both the lecture and an accompanying lab session. To enhance student understanding, the instructor decides to demonstrate an artificial test scheduling process using the toolkit’s integrated environment.
Additionally, the instructor considers assigning a lab exercise based on the toolkit, allowing students to gain hands-on experience with test scheduling and planning.

7. Conclusions

This work introduces the TDM Test Scheduler and TAM Optimization Toolkit, an integrated framework that fully supports TDM-based test processes for multicore, multi-Vdd SoCs. The toolkit features a high-level language (HLL) and a verification tool, enabling a user-friendly and efficient representation of test processes. It incorporates powerful solvers and a comprehensive visualization environment, facilitating informed decision-making in test planning. The toolkit serves as a valuable testbed for both research on test methodologies and educational activities. It can generate test processes for artificial SoCs, leveraging well-known benchmark cores. Furthermore, its modular architecture allows researchers to integrate their own experimental solvers, utilizing the graphical user interface and HLL to define and execute custom test processes. The results can then be visualized using the built-in graph tool.
Significant effort has been dedicated to developing a robust, integrated, well-structured, user-friendly, and computationally efficient environment capable of producing optimal test processes for multi-Vdd SoCs. Test engineers and researchers are encouraged to propose or contribute new functionalities. The TDM Test Scheduler and TAM Optimization Toolkit is free software and is available at [6]. Related information is also available in [22].
Future work includes expanding the computational and visualization environment to effectively support test processes for 3D multi-Vdd SoCs.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The proposed toolkit, including input/output data and results, can be accessed at: https://gitlab.com/tdms-fork-group/2DTDMScheduler (accessed on 14 February 2025).

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. ARM Cortex-a725 Documentation. 2024. Available online: https://www.arm.com/products/silicon-ip-cpu/cortex-a/cortex-a725 (accessed on 14 February 2025).
  2. Intel Technical Paper. 2024. Available online: https://networkbuilders.intel.com/solutionslibrary/empowering-mixed-criticality-industrial-real-time-computing-with-intel-s-dvfs-evolution (accessed on 14 February 2025).
  3. Kavousianos, X.; Chakrabarty, K.; Jain, A.; Parekhji, R. Test Schedule Optimization for Multicore SoCs: Handling Dynamic Voltage Scaling and Multiple Voltage Islands. IEEE Trans. Comput. Aided Des. Integr. Circuits Syst. 2012, 31, 1754–1766. [Google Scholar] [CrossRef]
  4. Vartziotis, F.; Kavousianos, X.; Chakrabarty, K.; Jain, A.; Parekhji, R. Time-Division Multiplexing for Testing DVFS-based SoCs. IEEE Trans. Comput. Aided Des. Integr. Circuits Syst. 2015, 34, 668–681. [Google Scholar] [CrossRef]
  5. Vartziotis, F.; Kavousianos, X.; Georgiou, P.; Chakrabarty, K. A Branch-&-Bound Test-Access-Mechanism Optimization Method for Multi-Vdd SoCs. IEEE Trans. Comput. Aided Des. Integr. Circuits Syst. 2017, 36, 1911–1924. [Google Scholar]
  6. Vartziotis, F. 2DTDMSTO. Available online: https://gitlab.com/tdms-fork-group/2DTDMScheduler (accessed on 30 March 2025).
  7. HotSpot Tool Suite. Available online: http://lava.cs.virginia.edu/HotSpot/index.html (accessed on 14 February 2025).
  8. Khursheed, S.; Ingelsson, U.; Rosinger, P.; Al-Hashimi, B.M.; Harrod, P. Bridging fault test method with adaptive power management awareness. IEEE Trans. Comput. Aided Des. Integr. Circuits Syst. 2008, 27, 1117–1127. [Google Scholar] [CrossRef]
  9. Kruseman, B.; Heiligers, M. On test conditions for the detection of open defects. In Proceedings of the Design, Automation and Test in Europe Conference, Munich, Germany, 6 March 2006; pp. 896–901. [Google Scholar]
  10. IWLS. Available online: https://iwls.org/iwls2005/benchmarks.html (accessed on 14 February 2025).
  11. Nangate 45nm. Available online: https://github.com/mflowgen/freepdk-45nm (accessed on 1 April 2025).
  12. Chakrabarty, K. Test Scheduling for Core-Based Systems Using Mixed-Integer Linear Programming. IEEE Trans. Comput. Aided Des. Integr. Circuits Syst. 2000, 19, 1163–1174. [Google Scholar] [CrossRef]
  13. IEEE std 1500—Standard for Embedded Core Test. 2005. Available online: https://standards.globalspec.com/std/14566474/1500 (accessed on 30 March 2025).
  14. IEEE std 1450—IEEE Standard for Standard Test Interface Language (STIL). 1999. Available online: https://grouper.ieee.org/groups/1450/index.html (accessed on 30 March 2025).
  15. IEEE std 1671—IEEE Standard for Automatic Test Markup Language (ATML) for Exchanging Automatic Test Equipment and Test Information via XML. 2010. Available online: https://standards.ieee.org/downloads/ (accessed on 30 March 2025).
  16. AvalonEdit WPF-Based Text Editor. Available online: http://avalonedit.net/ (accessed on 14 February 2025).
  17. OXYPLOT Library. Available online: https://oxyplot.github.io/ (accessed on 14 February 2025).
  18. Tessent Software. Available online: https://eda.sw.siemens.com/en-US/ic/tessent/products/ (accessed on 30 March 2025).
  19. DFTMAX Software. Available online: https://news.synopsys.com/ (accessed on 30 March 2025).
  20. ITC’02 Benchmark Suite. Available online: https://www.hitech-projects.com/itc02socbenchm/ (accessed on 30 March 2025).
  21. AUCOHL/Fault. Available online: https://github.com/AUCOHL/Fault (accessed on 30 March 2025).
  22. Vartziotis, F. TDMS Test Scheduler: An Integrated Framework for Test Scheduling of DVFS-based SoCs with Multiple Voltage Islands. In Proceedings of the IEEE European Test Symposium (ETS), Bruges, Belgium, 24–28 May 2021; pp. 1–2. [Google Scholar]
Figure 1. (a) The TDM scheme. (b) Single-Vdd vs. multi-Vdd test challenges.
Figure 2. (a) TDM proposed test schedule using GRP. (b) TDM proposed test schedule using RP-SA.
Figure 3. (a) Test time per random TAM configuration of virtual SoC. (b) Correlation between test time and lower bound per TAM configuration using TDM for virtual SoC.
Figure 4. (a) Test process flow. (b) Test schedule flow. (c) TAM optimization flow.
Figure 5. (a) Toolkit's GUI. (b) Toolkit's console.
Figure 6. (a) Annotated core definition HLL file (cdf). (b) Annotated test process initialization HLL file (init2D). (c) SoC HLL file for TDM-based test process. (d) Test Access Mechanism (TAM) definition HLL file.
Figure 7. (a) Test schedule for a virtual SoC with 13 cores, three islands and three voltage levels. (b) Test schedule of virtual SoC with optimized TAM. (c) Test schedule for a large virtual SoC with 57 cores, nine islands and three voltage levels. (d) Test schedule of large virtual SoC with optimized TAM.
Figure 8. (a) Power-annotated test schedule for the virtual SoC. (b) Thermally annotated test schedule for the virtual SoC.
Figure 9. (a) Power graph per island and for the total SoC during the test process. (b) Power graph per core during the test process. (c) Thermal graph per island and for the total SoC during the test process. (d) Thermal graph per core during the test process.
Figure 10. (a) Virtual SoC's floorplan as derived from hotfloorplan [7]. (b) Power-annotated floorplan movie for the virtual SoC. (c) Thermally annotated floorplan movie for the virtual SoC [7]. (d) Thermally annotated floorplan for the virtual SoC [7].
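The power- and thermal-annotated views of Figures 8–10 rely on the external HotSpot suite [7], whose hotfloorplan utility also produces the floorplan of Figure 10a. As a minimal, hedged sketch of driving HotSpot from a script: the file names below are hypothetical, and the -c/-f/-p/-o options follow the command line documented in the HotSpot distribution, which may differ between releases.

```python
import subprocess

# Hypothetical file names. The flags (-c config, -f floorplan,
# -p power trace, -o temperature trace) follow the command line
# documented in the HotSpot distribution [7]; verify them against
# the version you install, as options can change between releases.
subprocess.run(
    ["hotspot",
     "-c", "hotspot.config",     # simulator configuration
     "-f", "soc.flp",            # floorplan, e.g., produced by hotfloorplan
     "-p", "test_sched.ptrace",  # per-block power trace of the test schedule
     "-o", "test_sched.ttrace"], # resulting per-block temperature trace
    check=True,
)
```

The resulting temperature trace can then be mapped back onto the test schedule and floorplan, which is essentially what the thermally annotated views above visualize.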
Figure 11. Configuration tab for TAM optimization flow.
Table 1. Comparison of HLL, STIL, and ATML.

Feature/Aspect | HLL | STIL | ATML
Standardization | Custom-defined (within proposed toolkit) | IEEE 1450 | IEEE 1671 family
Primary Purpose | Test planning, TAM optimization, power-aware test process modeling | Test vector, timing, and waveform description | Metadata exchange for test systems
Target Use Case | TDM-based scheduling in multicore SoCs with DVFS and voltage islands | Post-ATPG pattern transfer to ATE | Interoperability in automatic test systems
Support for Modular Testing (e.g., IEEE 1500 cores) | Full and explicit support | Limited; focused on signal and pattern details | Limited; metadata-oriented rather than process definition
User-Friendliness and Readability | Highly readable and user-friendly | Complex and verbose; limited manual readability | Complex XML; primarily machine-readable
TAM/Test Scheduling Support | Fully supported | Not supported | Not supported
Power/Thermal Constraint Modeling | Supported | Not supported | Not supported
Customization and Extensibility | Highly extensible; designed for experimental research | Limited extensibility; standardized | Schema-based extensibility possible, but practically complex
Integration and Tool Support | Integrated within the proposed toolkit | Widely supported in commercial ATPG and ATE tools | Primarily supported by specialized military and aerospace tools
Ease of Use for Research | High; designed for fast prototyping, modification, and experimentation | Moderate to low; steep learning curve and tight industry coupling | Low; mostly used for documentation and logistics
Table 2. TDM Test Scheduling and TAM Optimization Toolkit against existing related toolkits.

Feature/Aspect | Proposed Toolkit | Existing Commercial Tools (e.g., Tessent (Siemens) [18], Synopsys DFTMAX [19]) | Academic/Research Tools (e.g., ITC'02 Benchmark Suite [20], Fault [21])
Primary Objective | TDM-based test scheduling, optimized TAM configuration, and power/thermal-aware scheduling | General DfT and ATPG pattern generation; limited test scheduling optimization | Benchmarking, DfT, basic test scheduling, and TAM evaluation; typically not power/thermal-aware
Targeted SoC Complexity | Multicore, multi-Vdd, DVFS-enabled SoCs with multiple voltage islands | Yes | Mostly simplified models and standard benchmark circuits; limited multi-Vdd support
Scheduling Algorithms Supported | Simulated annealing, heuristic (greedy), and TDM-specific algorithms | Typically proprietary heuristic algorithms, not extensively customizable | Simple heuristics or manual scheduling; ILP in limited academic contexts
Support for Multi-Voltage and Frequency Scaling (DVFS) | Explicitly designed for full support | Yes | Generally minimal or no direct support
Power and Thermal Constraints Integration | Explicit and detailed integration (via external tools like HotSpot [7]) | Typically limited or indirect consideration (mostly power constraints) | Usually minimal or no direct power/thermal constraint modeling
Visualization and Analysis Capabilities | Comprehensive visualization (test schedules, power/thermal graphs, floorplans) | Limited visualization of test structures, primarily timing and scan-chain layouts | Minimal visualization; typically textual or basic graphical summaries
Extensibility and Customizability | Highly modular; supports experimentation with new algorithms, constraints, and interfaces | Limited extensibility; tools generally closed-source and proprietary | Moderate extensibility; sometimes open-source but not always easily customizable
User-Friendliness | High; HLL, integrated GUI, guided workflow, and automated generation | Moderate; often complex interfaces designed for professional engineers | Low to moderate; frequently command-line or basic interfaces
Availability and Cost | Freely available for academic and research use | Commercial licensing; high cost | Typically free or open-source; primarily academic use
Educational Applicability | High; designed explicitly for education, research, and industry-relevant experimentation | Low to moderate; steep learning curve, specialized training required | Moderate; designed primarily for benchmarking and basic academic research
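Table 2 notes that the toolkit's schedulers build on greedy and simulated-annealing heuristics specialized for TDM. As a purely illustrative sketch of the kind of power-constrained greedy scheduling such tools perform, consider the Python fragment below. It is not the toolkit's GRP or SA-RP algorithm (see [4,5] for those); the names, the flat average-power model, and the longest-test-first ordering are all hypothetical simplifications.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class CoreTest:
    name: str
    test_time: int  # test length in cycles (hypothetical unit)
    power: float    # flat average test power (hypothetical model)

def greedy_schedule(cores: List[CoreTest], power_cap: float) -> Dict[str, Tuple[int, int]]:
    """Start each test at the earliest time at which the total power of all
    concurrently running tests stays within power_cap. Assumes each
    individual core's power is below the cap."""
    placed: List[Tuple[int, int, float]] = []  # (start, end, power) of scheduled tests
    schedule: Dict[str, Tuple[int, int]] = {}

    def fits(start: int, end: int, pwr: float) -> bool:
        # Total power is piecewise constant and only rises at test starts,
        # so checking 'start' and every placed start inside the interval
        # suffices to bound the peak over [start, end).
        for t in [start] + [s for s, _, _ in placed if start < s < end]:
            if sum(p for s, e, p in placed if s <= t < e) + pwr > power_cap:
                return False
        return True

    # Longest tests first: a common greedy ordering to shorten the makespan.
    for core in sorted(cores, key=lambda c: c.test_time, reverse=True):
        # Candidate start times: time zero and the end of each placed test.
        for start in sorted({0} | {e for _, e, _ in placed}):
            if fits(start, start + core.test_time, core.power):
                schedule[core.name] = (start, start + core.test_time)
                placed.append((start, start + core.test_time, core.power))
                break
    return schedule

if __name__ == "__main__":
    soc = [CoreTest("c1", 500, 1.2), CoreTest("c2", 300, 0.8),
           CoreTest("c3", 300, 0.7), CoreTest("c4", 200, 0.9)]
    for name, (s, e) in greedy_schedule(soc, power_cap=2.0).items():
        print(f"{name}: [{s}, {e})")
```

The sketch deliberately omits what makes the real problem hard: TAM widths and TDM slot assignment, per-voltage-level scan frequencies, island compatibility, and thermal limits, all of which the toolkit's solvers account for.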