Article

Research on Low-Latency TTP–TSN Cross-Domain Network Planning Problem

School of Information and Communication Engineering, University of Electronic Science & Technology of China, Chengdu 610054, China
* Author to whom correspondence should be addressed.
Electronics 2025, 14(1), 203; https://doi.org/10.3390/electronics14010203
Submission received: 18 November 2024 / Revised: 29 December 2024 / Accepted: 29 December 2024 / Published: 6 January 2025

Abstract

With services becoming increasingly complex and network scales expanding, hybrid network architectures that combine bus and switch networks while supporting deterministic transmission will play a crucial role in future networks. Time-Triggered Protocol (TTP) and Time-Sensitive Networking (TSN), as the key protocols currently ensuring time determinism in bus and switch networks, respectively, are of significant importance for research on hybrid network architectures and for ensuring deterministic communication across protocols. In this paper, we first analyzed the causes of latency in TTP–TSN hybrid networks. To reduce latency, we designed a cross-protocol time-synchronization algorithm. Then, based on network-wide time synchronization, we constructed a joint TTP–TSN low-latency scheduling model using Integer Linear Programming (ILP), which was then solved by an ILP solver. Building on this scheduling model, we proposed a heuristic fast scheduling algorithm and proved its schedulability and approximation ratio. Finally, we designed simulation and prototype systems for verification. Our experimental results demonstrate that the time-synchronization algorithm proposed in this paper achieves a synchronization error of no more than 1 µs. Compared to the case without the joint scheduling model, the fast heuristic algorithm can reduce end-to-end latency by at least 50% and shorten the solving time by a factor of thousands compared to an ILP solver, all while ensuring schedulability.

1. Introduction and Motivation

With the increasing demand for Quality of Service (QoS) in mission-critical fields such as automotive, aerospace, and industrial applications, the requirements for high bandwidth and deterministic communication have become particularly urgent. The current network mostly uses a switched network as the backbone, while bus networks are used for communication within each subdomain and are interconnected via gateways [1]. Ensuring efficient, reliable, and deterministic communication in such a hybrid network architecture has become a critical problem that needs to be addressed.
In the early days, due to low communication demands, the requirements for communication speed were relatively modest. Vehicle or airborne networks typically adopted a bus-based architecture, such as the Controller Area Network (CAN) bus for in-vehicle networks or ARINC 429 for airborne networks [2]. As the complexity of onboard systems increased, the demand for higher communication rates and determinism grew. Consequently, network architectures evolved into hybrid networks comprising both bus-based and switch-based components. Examples include avionics networks composed of MIL-STD-1553 and AFDX [3] or CAN and AFDX [4].
To further enhance the quality of communication services, switch-based networks have gradually evolved into deterministic networks with time-triggered capabilities. NASA’s spacecraft communication network architecture [5] utilizes a Time-Triggered Ethernet (TTE) backbone network, with the MIL-STD-1553 bus serving as the subnetwork for the Remote IO Unit (RIU). Additionally, the avionics system communication architecture presented by X. Zhou et al. [6], known as Distributed Integrated Modular Avionics (DIMA), leverages a TTE backbone network. Meanwhile, G. Xie et al. [7] have investigated an in-vehicle network, employing TSN as the backbone network and CAN as the domain network.
By employing a switched network topology (e.g., TSN, TTE) as the backbone network and a bus network topology (e.g., CAN, 1553) as subnetworks interconnected through gateways, it is feasible to meet the requirements for deterministic communication. To further enhance determinism, some researchers have begun exploring hybrid network architectures composed of bus networks and switch networks with time-triggered capabilities. L. Zexiong et al. [8] proposed a time-synchronization method for the TTP–TTE hybrid network. TTP and TSN, as key protocols currently ensuring time determinism in bus and switch networks, respectively, are of significant importance for research on hybrid network architectures and ensuring deterministic communication across protocols.
TTP, a deterministic real-time protocol, ensures conflict-free communication and has been widely adopted in critical systems such as automotive and avionics [9,10]. It achieves safety-critical and hard real-time capabilities through mechanisms like time-division multiplexing, time synchronization, membership acknowledgment, fault detection, and redundancy design. TTP has found extensive applications in aerospace systems, including the digital engine control systems of the F-16 and M-346, the cabin pressure control system of the A380, and the environmental control system of the Boeing 787.
The IEEE 802.1 working group formed the TSN task group to develop a protocol suite for reliable, low-latency, low-jitter, and interoperable network communication. The TSN protocol includes technologies like time-aware scheduling (802.1Qbv), credit-based shaper (802.1Qav), and flow filtering (802.1Qci) to ensure deterministic services, with 802.1Qbv minimizing end-to-end latency through time synchronization [11].

Motivation

Figure 1 shows a simplified view of the current hybrid network, which mostly uses a switched network as the backbone, while bus networks are used for communication within each subdomain and are interconnected via gateways. The significance of studying the TTP–TSN gateway, as in Figure 1, lies in its ability to combine the high determinism of TTP and TSN, providing a more flexible architecture for future mission-critical networks.
The significance of joint planning entails the following:
① What is joint planning and why is it needed? The process of specifying the path and transmission time for each traffic flow is called joint scheduling. The traffic includes communication flows within the TSN domain and between TTP and TSN. As shown in Figure 2, without joint scheduling, conflicts between any two traffic flows in the network are likely to occur, which will increase communication delay and jitter.
② How can joint planning reduce delays and why is time synchronization needed? In the absence of joint scheduling, TTP–TSN traffic may experience additional delays, as shown in Figure 3, highlighting the necessity of joint scheduling methods. This figure illustrates the communication process under the topology shown in Figure 1, where Qbv ensures deterministic delay in the TSN network. For simplicity, multiple TSN switches are represented by a single switch (SW), with orange and red rectangles indicating the duration of the traffic. Due to lower bandwidth in the TTP domain, the duration of traffic for the same service is longer in the TTP domain compared to the TSN domain. The colored dashed lines show the delays, while the segments between the black dashed lines mark the TTP communication slots. Based on the Time Slot (TS) scenarios shown in Figure 3a,b, it is evident that the lack of time synchronization and joint design between TTP and TSN leads to the occurrence of delays ② and ④. The differences in power-on time points between TTP and TSN devices result in these unpredictable delays. However, as illustrated in Figure 3c,d, by adopting a joint design approach, delay ② can be eliminated through proper Time Slot alignment. And delay ④ can be mitigated by arranging the gateway Time Slots during joint scheduling. As shown in Figure 3d, by reasonably allocating routes to distribute the TSN->TTP traffic and assigning gateway Time Slots near the TSN->TTP traffic reception times, the impact of delay ④ can be reduced. It is important to note that delays ① and ③, caused by protocol conversion, cannot be eliminated.
③ Why do we need a fast-solving algorithm? In engineering solutions, margin design is typically used instead of fully utilizing the bandwidth. While the ILP algorithm can provide exact solutions, its solving speed becomes slow when the number of variables increases. Therefore, we propose a fast-solving algorithm based on a greedy strategy and analyze its schedulability lower bound and approximation ratio. Our experiments show that when the planning problem satisfies Theorem 1, the fast algorithm can quickly solve the problem while ensuring schedulability.
The primary contributions of this paper are: (1) A practical TTP–TSN cross-protocol time-synchronization algorithm is proposed; (2) Based on network-wide time synchronization, an integer programming model for the TTP–TSN scheduling problem is established, which can be solved using an ILP solver; (3) To improve solving speed in practice, a fast-solving algorithm is proposed, and its schedulability and approximation ratio are theoretically derived; (4) Simulation experiments and FPGA prototype validation are conducted for the above algorithms.

2. Literature Review

A TTP network differs from CAN or MIL-STD-1553 networks, as it is a time-triggered communication network. This study essentially addresses the problem of matching Time Slots between two time-triggered networks. If this problem is treated as a simple gateway scheduling issue and approached using methods similar to those described in [12], it cannot minimize latency to the greatest extent.
(1) Cross-protocol time synchronization: Berisa et al. [13] studied CAN–TSN gateways and proposed frame encapsulation strategies to reduce protocol-conversion latency, while Xie et al. [7] introduced a scheduling strategy to address congestion in TSN-to-CAN transmissions. Herber et al. [12] focused on message priority in TSN–CAN gateways, improving cross-protocol communication. However, since CAN is not a time-triggered bus, these works did not need to address time synchronization between CAN and TSN, nor did they propose synchronization methods for other buses. Thus, existing research does not assist in solving cross-protocol time synchronization between TTP and TSN, which remains our focus.
(2) Low-latency network planning: In this paper, ’planning’ refers to both routing and scheduling, where routing selects the sequence of links for a flow and scheduling generates the GCL or MEDL configuration. While scheduling in the TTP–TSN hybrid network remains under-researched, insights from TSN and CAN–TSN networks, such as the reviews by Y. Peng et al. [14] and H. Chahed and A. Kassler [15], are valuable. Researchers typically use ILP models for TSN scheduling [16,17,18], with hardware enhancements suggested by M. Vlk et al. [19]. However, due to TTP’s need for time-synchronized, time-slotted communication, existing TSN or CAN–TSN algorithms cannot be directly applied to the TTP–TSN hybrid network.
More importantly, there may already exist dozens of algorithms for solving TSN network planning problems [15,20,21]. These algorithms encompass various approaches, including ILP solvers, Constraint Programming (CP) solvers, machine learning, evolutionary algorithms, and swarm intelligence. However, to date, no theoretical schedulability bounds have been established. Additionally, some of these algorithms require minutes or even hours to solve large-scale networks with a significant number of flows, which is intolerable in engineering applications, especially during the verification phase.
To address this, this paper proposes a heuristic fast-solving algorithm based on a greedy strategy. Inspired by [22,23], the TTP–TSN scheduling model was formulated as an Edge-Disjoint Path (EDP) problem (as shown in Section 4.3.2), and the schedulability and approximation ratio of the algorithm were evaluated. Experimental results demonstrate that when the characteristics of the flows to be solved satisfy Theorem 1 (as shown in Section 4.3.1), the simple fast-solving algorithm proposed in this paper can complete scheduling within an extremely short period. This is highly valuable for addressing engineering problems, as time-triggered networks are typically applied in mission-critical systems, which usually reserve a certain amount of bandwidth margin for safety considerations. The literature [24] reports that time-triggered traffic in TTE networks only accounts for 27% of the total bandwidth.

3. System Model

Figure 1 shows an example of a TTP–TSN hybrid network. The TTP network is used to carry low-rate sensor traffic and drive-actuator actions. As shown in the figure, TTP Domain 1 is a sensor network that periodically collects sensor data, which are transmitted to the TSN network through the gateway and eventually processed by the central computer (TSN ES1). TTP Domain 2 is an actuator network, where the central computer sends control commands through the TTP–TSN gateway to the specific controlled devices in TTP Domain 2. Actuator networks also typically include some monitoring devices that periodically monitor and report the status of the actuators. To ensure reliability, both the central control computer and the TTP–TSN gateway typically incorporate redundancy (as shown by TSN ES1, Gate3, and Gate4 in the diagram). When multiple gateways are adopted in a TTP domain, our planning algorithm ensures that they can receive redundant copies at the same time. Below, we first provide a basic description of the TSN and TTP protocols.

3.1. Time-Triggered Protocol

As shown in Figure 1 (TTP example area), the network demonstrates the operation process of the TTP network. After system initialization, the TTP network enters the S-phase, during which the gateway sends a start frame, and the other TTP nodes adjust their local clock values based on the start time contained in the frame content to achieve time synchronization. Once time synchronization is complete, the S-phase ends, and the network transitions to the C-phase. During this phase, each node in the TTP domain transmits data according to the Time Slots allocated by the scheduling algorithm and stored in the Message Description List (MEDL), ensuring collision-free transmission.
For example, during the gateway’s Time Slot (such as the G1 Slot), traffic from the TSN is broadcast within the TTP domain, and the specific TTP node that receives this information is controlled by an upper-layer protocol (which is beyond the scope of this article). During the node’s Time Slot (such as the N1 Slot), TTP N1 broadcasts sensor data within TTP Domain 1, and the gateway forwards these data to the TSN network.

3.2. Time-Sensitive Network

In the network depicted in Figure 1, the device with the highest time precision, TSN ES1, acts as the Grandmaster (GM) clock, establishing a time-synchronization tree using the 802.1AS protocol for the slave clocks (devices such as Gate1, Gate2, and TSN ES3). Then, based on global time synchronization, IEEE 802.1Qbv assigns transmission windows (GCL) to devices for minimal latency. At time T1, the queue gate in TSN ES3 opens, allowing priority 6 data frames to be transmitted through the ES3-SW3 link, with time windows scheduled for SW3-SW2 and SW2-ES1. IEEE 802.1Qbv ensures the lowest end-to-end delay, which is why only the Qbv protocol is considered in this paper [11].
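For context, an 802.1Qbv gate control list (GCL) is an ordered set of (gate-state, time-interval) entries repeated every cycle. The toy Python sketch below shows one possible GCL for the ES3→SW3 egress port described above; the cycle time and interval values are chosen purely for illustration and are not taken from the paper.

```python
# Toy 802.1Qbv gate control list for one egress port (all values illustrative).
# Each entry: (8-bit gate-state mask over queues 7..0, interval in nanoseconds).
CYCLE_TIME_NS = 500_000          # Qbv cycle, repeated indefinitely
GCL = [
    (0b0100_0000, 20_000),       # only the priority-6 queue open: protected TT window
    (0b1011_1111, 480_000),      # remaining queues open for other traffic
]
assert sum(interval for _, interval in GCL) == CYCLE_TIME_NS
```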

3.3. TTP–TSN Gateway

As shown in Figure 4, the TTP–TSN gateway architecture consists of five parts: the TSN protocol layer, the Ethernet MAC layer, the protocol-conversion part, the TTP protocol layer, and the TTP MAC layer; together, these components implement the TTP–TSN gateway.
When the GatewayEnable signal is activated, the TTP–TSN gateway IP functions as a gateway, facilitating various operations within the TSN domain. Specifically, the AS module is responsible for completing time synchronization within the TSN domain and subsequently setting the clock value in the TTP protocol layer. Furthermore, the QBV and QAV modules are engaged in traffic-shaping activities. At the MAC layer, the transmission and reception of Ethernet frames are managed, while the TTP–TSN interface enables data exchange and protocol conversion.
Within the protocol-conversion component, a method similar to [13] is used to achieve protocol conversion. TTP-to-TSN messages are encapsulated in the payload field of TSN frames, and TSN-to-TTP frames are decapsulated accordingly. Additionally, QBV GCL entries are computed based on the joint scheduling algorithm detailed in Section 4.2. The TTP protocol layer configures parameters through the MEDL configuration file. Lastly, the TTP MAC layer is responsible for the transmission and reception of TTP frames. The flow directions of messages from TSN to TTP and from TTP to TSN are visually represented by purple and yellow arrows in the accompanying figure, respectively.

3.4. TTP–TSN Network Model

3.4.1. Base Model

Traffic model: The flow set is defined as F. For a flow $s_i \in F$, we describe it by using a 6-tuple $\langle src, dst, T, P, L, D \rangle$ corresponding to the source node, destination node, period, priority, size, and end-to-end delay constraint, respectively.
Topology model: The network topology is modeled by using an undirected graph $G(V, E)$, where $v_i \in V$ represents a switch or an end system. Assuming there is a full-duplex link between nodes $v_i$ and $v_j$, then $[v_i, v_j], [v_j, v_i] \in E$.
Communication model: ① TSN Domain: This paper adopted the No-Wait Packet Scheduling Problem (NW-PSP) formulation for the TSN network introduced by [25,26], based on the No-Wait Job-Shop Problem (NW-JSP). ② TTP Domain: Refers to the communication between devices within the TTP subdomain, where nodes transmit data during their assigned Time Slots. ③ Between TTP and TSN Domain: Must adhere to the communication constraints of both TSN and TTP, to ensure data transmission between the two domains. Typically, the TTP bus is used to connect sensors or actuators. The data received by the sensors are first passed through the gateway to the central controller for processing, after which the central controller sends commands through the gateway to the actuators for execution. Therefore, this paper did not consider the case of direct communication between one TTP domain and another.
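To make the traffic and topology models concrete, the following minimal Python sketch shows one possible in-memory representation of the flow 6-tuple and the undirected graph; the class and field names are our own illustrations and are not taken from the paper's implementation.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Flow:
    """One flow s_i described by the 6-tuple <src, dst, T, P, L, D>."""
    src: str          # source node
    dst: str          # destination node
    period_us: int    # period T
    priority: int     # priority P
    size_bytes: int   # size L
    deadline_us: int  # end-to-end delay constraint D

@dataclass
class Topology:
    """Undirected graph G(V, E); a full-duplex link contributes both directions."""
    nodes: List[str] = field(default_factory=list)
    edges: List[Tuple[str, str]] = field(default_factory=list)

    def add_full_duplex_link(self, vi: str, vj: str) -> None:
        # [vi, vj] and [vj, vi] are both members of E
        self.edges.append((vi, vj))
        self.edges.append((vj, vi))
```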

3.4.2. Low-Latency TTP–TSN Scheduling Model

The scheduling problem of TSN is quite complex. The planning algorithm in this paper adopts a no-wait scheduling approach [17,27,28] for TT flows, meaning that frames of TT flows can be transmitted immediately upon arrival at the gate-controlled queue of each node’s corresponding output port. Situations where frames must wait, due to other frames in the queue, do not occur. This approach ensures extremely low latency and jitter for TT flows, greatly enhancing reliability, which is crucial for applications with high time-sensitivity requirements. The following sections detail the scheduling strategy used in this paper. To achieve no-wait scheduling, certain constraints need to be established, to filter out the results that meet the requirements. The constraints are as follows:
(1) Frame constraint: The transmission time of each frame instance must be non-negative and completed within the cycle:
$\phi_i^j[v_a, v_b] \geq 0$  (1)
$\phi_i^j[v_a, v_b] \leq T_i - l_i^j[v_a, v_b]$  (2)
Here, $\phi_i^j[v_a, v_b]$ represents the transmission start time of the j-th frame of flow i on link $[v_a, v_b]$, and $l_i^j[v_a, v_b]$ denotes the size of the j-th frame of flow i.
(2) Maximum delay constraint: This constraint specifies the end-to-end delay for real-time traffic. The end-to-end delay is defined as the time interval between the moment the last frame of the message arrives at the receiver and the moment the first frame begins transmission at the sender. This delay must not exceed the maximum end-to-end delay D i that flow s i can tolerate, as shown in Equation (3). For s i F , we have
$\phi_i^{C_i}[v_{(end_i-1)}, v_{end_i}] + l_i^{C_i}[v_{(end_i-1)}, v_{end_i}] + L_h - \phi_i^1[v_1^i, v_2^i] \leq D_i$  (3)
$C_i$ represents the number of frames contained in each transmission of flow $s_i$; $v_n^i$ represents the n-th node of flow i, and $v_{n-1}^i$ represents the previous node of the n-th node of flow i; $v_{end_i}$ represents the last node of flow i; $L_h$ represents the delay of the switch.
(3) Flow transmission constraint: This constraint specifies the timing for frames across each link along the path. In other words, the transmission start time of the same frame on a subsequent link must be no earlier than the end of its transmission on the preceding link, which is the sum of the frame’s transmission start time on the preceding link, the transmission delay, and the propagation delay, as shown in Equation (4). For $s_i \in S$, $\{[v_a, v_x], [v_x, v_b]\} \subseteq Path_i$, we have
$\phi_i^j[v_x, v_b] \geq \phi_i^j[v_a, v_x] + L_h$  (4)
(4) No-conflict constraint: Additionally, for the no-conflict constraint, we converted it into a schedulable model in task scheduling theory [29] by mapping tasks to flows, task periods to flow periods, and task durations to flow sizes, and providing new constraints, as in Equation (5):
$\forall [v_i, v_j] \in E,\ \forall f_m, f_n \in F,\ m \neq n:$
$t_{dur,n} - M\left(2 - r_{[v_i,v_j],f_m} - r_{[v_i,v_j],f_n}\right) \leq \left(\phi_m[v_i,v_j] - \phi_n[v_i,v_j]\right) \bmod \gcd(T_m, T_n) \leq \gcd(T_m, T_n) - t_{dur,m} + M\left(2 - r_{[v_i,v_j],f_m} - r_{[v_i,v_j],f_n}\right)$  (5)
Here, $\gcd(T_m, T_n)$ is the greatest common divisor of the periods of flows m and n; $t_{dur,m}$ refers to the transmission duration; and $r_{[v_i,v_j],f_m}$ indicates whether flow m is deployed on edge $[v_i, v_j]$. M is the penalty factor. A small numerical check of this condition is sketched after the optimization objective below.
(5) TTP–TSN constraints: In addition to satisfying the basic constraints of the NW–PSP model, including no-conflict link constraints, flow transmission constraints, and maximum delay constraints, we introduced three additional constraints:
① The arrival time of flows from the TSN domain to the TTP domain must be before the TTP gateway Time Slot; $p_a$ represents the delay of protocol conversion on the gateway $GW$:
$AT_{TTP}^{GW} \geq \phi_i^j[v_x, v_{GW}] + l_i^j[v_x, v_{GW}] + p_a$  (6)
② Flows from the TTP domain to the TSN domain should arrive at the gateway device before the Qbv window opens; $p_b$ represents the transmission time of the flow on the TTP bus; $v_{N_i}$ represents the i-th communication point of TTP:
$\phi_i^j[v_{N_i}, v_x] \geq AT_{TTP}^{N_i} + p_a + p_b$  (7)
③ According to TTP protocol requirements, the total messages received before each TTP Time Slot must not exceed 256 bytes.
(6) Optimization objective:
In addition, the end-to-end delay for TSN→TTP traffic is given by $\delta_{TSN \to TTP, i} = AT_{TTP}^{GW} - \Phi_i[v_x, v_{GW}]$, and the end-to-end delay for TTP→TSN traffic is $\delta_{TTP \to TSN, i} = \Phi_i[v_{N_i}, v_x] - AT_{TTP}^{N_i}$. For TSN→TSN traffic, the end-to-end delay is $\delta_{TSN \to TSN, i} = \Phi_i[v_x, v_E] - \Phi_i[v_S, v_x]$, where $v_S$ represents the source node and $v_E$ represents the destination node.
Therefore, the optimization objective of this paper can be modeled as
$\text{minimize} \sum_{i \in F} \left( \delta_{TSN \to TSN, i} + \delta_{TTP \to TSN, i} + \delta_{TSN \to TTP, i} \right)$  (8)
It should be noted that $\delta_{TSN \to TSN, i}$, $\delta_{TTP \to TSN, i}$, and $\delta_{TSN \to TTP, i}$ are not always all valid. For example, if flow i is TSN→TTP traffic, then $\delta_{TSN \to TSN, i} = 0$ and $\delta_{TTP \to TSN, i} = 0$.
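For intuition, the hedged Python sketch below checks the gcd-based no-conflict condition of Equation (5) for two flows that share a link (with both deployment indicators equal to 1, so the big-M terms vanish); the function name, the microsecond units, and the example values are our own assumptions.

```python
from math import gcd

def conflict_free(phi_m_us, dur_m_us, phi_n_us, dur_n_us, T_m_us, T_n_us):
    """Equation (5) for two flows deployed on the same edge (r = 1 for both).

    The condition reduces to:
        dur_n <= (phi_m - phi_n) mod gcd(T_m, T_n) <= gcd(T_m, T_n) - dur_m
    which guarantees that the two periodic transmissions never overlap on the link.
    """
    g = gcd(T_m_us, T_n_us)
    offset = (phi_m_us - phi_n_us) % g
    return dur_n_us <= offset <= g - dur_m_us

# 1500-byte frames take 12 us at 1 Gbps; periods of 100 us and 250 us share gcd = 50 us.
print(conflict_free(phi_m_us=30, dur_m_us=12, phi_n_us=0, dur_n_us=12,
                    T_m_us=100, T_n_us=250))   # True: the transmissions never collide
```

As a rough illustration of how the model maps onto solver input, the following PuLP sketch encodes the frame constraints (1)-(2), a flow-transmission chain in the spirit of constraint (4), the maximum-delay constraint (3), and the latency objective for a single-frame flow over a three-link path. It omits the modular no-conflict constraint and the TTP-specific constraints (6)-(7), and all names and parameter values are illustrative assumptions rather than the paper's O_KSP-ILP implementation.

```python
from pulp import LpProblem, LpMinimize, LpVariable, PULP_CBC_CMD

# Illustrative single-frame flow over the path ES3 -> SW3 -> SW2 -> ES1 (times in microseconds).
links = [("ES3", "SW3"), ("SW3", "SW2"), ("SW2", "ES1")]
T_i, l_i, D_i, L_h = 1000.0, 12.0, 200.0, 5.0

prob = LpProblem("ttp_tsn_single_flow", LpMinimize)

# phi[(va, vb)]: transmission start time on link [va, vb]; constraints (1)-(2) as variable bounds.
phi = {e: LpVariable(f"phi_{e[0]}_{e[1]}", lowBound=0, upBound=T_i - l_i) for e in links}

# Flow-transmission chain: the next hop starts no earlier than the previous transmission
# finishes plus the per-hop delay (following the prose description of constraint (4)).
for prev, nxt in zip(links, links[1:]):
    prob += phi[nxt] >= phi[prev] + l_i + L_h

# Maximum-delay constraint (3): completion on the last hop minus the start on the first hop.
prob += phi[links[-1]] + l_i + L_h - phi[links[0]] <= D_i

# Objective (8), reduced here to the end-to-end delay of this single flow.
prob += phi[links[-1]] + l_i + L_h - phi[links[0]]

prob.solve(PULP_CBC_CMD(msg=False))
print({f"{a}->{b}": phi[(a, b)].value() for (a, b) in links})
```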

3.4.3. Model Consistency Analysis

To construct this scheduling model, we approximated some important delays. First, for the protocol-conversion delay of the gateway and the communication delay of the TTP bus, we used the delay of the longest frame as the basis for determining the values of $p_a$ and $p_b$. In the TSN domain, the switch delay includes four components: input delay, forwarding delay, queuing delay, and output delay. Here, we adopted a no-wait scheduling strategy, eliminating the need to consider queuing delays. To ensure that the gate does not open early and to facilitate calculations, $L_h$ is set as a fixed value slightly greater than its practical value.
From our measurements, the forwarding delay of the designed switch is approximately 1 μs, and the transceiver chip delay is about 3 μs. Considering synchronization errors and transceiver jitter, we set $L_h$ to 5 μs. The advantage of this approach is that in real-world networks, time synchronization is not perfect, and the transceiver exhibits a certain jitter. Setting a fixed value greater than the practical value effectively avoids the impact of synchronization errors and transceiver chip jitter. Additionally, it simplifies the calculation.
However, the limitation is that, due to the pessimistic estimation of delays such as $p_a$, $p_b$, and $L_h$, the start time of the transmission window will be delayed, increasing the end-to-end latency. The limitation, however, can be tolerated in small networks such as those used in automotive and airborne applications (within 7 hops).

4. Research on Low-Latency TTP–TSN Network Planning

4.1. Engineered Time-Synchronization Algorithm

The TTP–TSN gateway operates both the TSN protocol layer (TSN Controller) and the TTP protocol layer (TTP Controller) simultaneously. Upon initialization, the gateway’s TSN protocol layer initiates time synchronization with other network devices in the TSN domain, using the IEEE 802.1AS time-synchronization algorithm. Subsequently, the TSN protocol layer synchronizes the TTP protocol layer, using the method elucidated in Figure 5. Finally, the TTP protocol layer of the TTP–TSN gateway completes the time synchronization of the TTP domain, utilizing the specified algorithm.
TTP–TSN time synchronization is essentially a process of time distribution from the TSN domain to the TTP domain. Upon startup, the TTP clock synchronization function is deactivated, and the ttpStartUp timer is set. During the period before the expiration of the ttpStartUp timer, the TSN protocol stack of the gateway achieves time synchronization with the TSN domain network. After the ttpStartUp timer times out, it waits for the TSNcycleStart signal from the TSN protocol stack, which marks the start of the TSN Qbv protocol gate cycle. Upon the condition TSNcycleStart == True, the time value of the TTP protocol stack is reset, and the time synchronization function of the TTP protocol stack is activated. Subsequently, the TTP protocol stack transmits a TTP frame to the TTP domain to accomplish time synchronization within the TTP domain. Furthermore, the TSN protocol stack of the gateway periodically engages in time synchronization within the TSN domain, utilizing the time-correction value of the TSN domain to adjust the time value of the gateway’s TTP.
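The gateway behaviour described above can be summarized in the following Python-style sketch; the stub classes, default values, and function names are our own simplifications of the text, not the gateway firmware.

```python
import time
from dataclasses import dataclass

@dataclass
class TsnStack:
    """Stub for the gateway's TSN protocol layer (802.1AS + Qbv); names are illustrative."""
    cycle_start: bool = True      # TSNcycleStart: set by hardware at the start of each Qbv cycle
    def run_8021as_sync(self) -> float:
        return 0.0                # placeholder: returns the 802.1AS time-correction value

@dataclass
class TtpStack:
    """Stub for the gateway's TTP protocol layer."""
    clock_sync_enabled: bool = False
    local_time: float = 0.0
    def send_sync_frame(self) -> None:
        pass                      # placeholder: sends the TTP frame that synchronizes the TTP domain

def gateway_time_distribution(tsn: TsnStack, ttp: TtpStack, ttp_startup_s: float = 0.01) -> None:
    """Time distribution from the TSN domain to the TTP domain (sketch of Section 4.1)."""
    ttp.clock_sync_enabled = False                 # TTP clock synchronization deactivated at startup
    deadline = time.monotonic() + ttp_startup_s    # ttpStartUp timer
    while time.monotonic() < deadline:             # before timeout: synchronize with the TSN domain
        tsn.run_8021as_sync()
    while not tsn.cycle_start:                     # wait for the TSNcycleStart signal
        pass
    ttp.local_time = 0.0                           # reset the TTP clock value at the Qbv cycle start
    ttp.clock_sync_enabled = True
    ttp.send_sync_frame()                          # synchronize the TTP domain
    ttp.local_time += tsn.run_8021as_sync()        # periodic TSN corrections also adjust the TTP clock

gateway_time_distribution(TsnStack(), TtpStack())
```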
As shown in Figure 6, the overall steps of the TTP–TSN joint-planning algorithm are illustrated. The algorithm comprises four parts: initialization, TTP Time Slot division, TTP–TSN joint scheduling, and result acquisition. The TTP_TIMESLOT_CALC phase is only responsible for dividing the Time Slot length for TTP, without specifically binding nodes to Time Slots. The TTPTSNPlanning phase, indicated by the blue arrows, attempts to reassign routes and retry scheduling when a flow has not been fully scheduled. These two functions are relatively simple and are therefore only described briefly.

4.2. TTP–TSN Low-Latency Planning Algorithm

As shown in Figure 7, a heuristic routing algorithm is proposed in this paper. The Find_K_paths_with_max_Hop() function uses a depth-first search algorithm to traverse and find the set of paths Rset from the source node to the destination node. After selecting the routing set, each route is evaluated. The evaluation method is given by Equation (11), where mms refers to Equation (9), the min–max scale normalization formula. Note that, if there are multiple redundant gateways in a TTP domain, the algorithm will be called multiple times to find the paths to each gateway separately:
$mms(x) = 1 - \frac{x - \min(x)}{\max(x) - \min(x)}$  (9)
$ar\text{-}xx_{path}(route) = \frac{\sum_{link \in path} link.xx}{\mathrm{len}(path)}$  (10)
$pathValue(s_i, path) = mms(hop_{s_i, path}) + mms(arbu_{s_i, path}) + mms(arlfn_{s_i, path}) + mms(ardfn_{s_i, path})$  (11)

4.2.1. Routing Algorithm

Hop represents the number of hops, and arbu (average route bandwidth utilization) represents the bandwidth utilization of the path. To distribute the traffic as evenly as possible across all links and nodes, we define the parameters arlfn (average route link flow number) and ardfn (average route node flow number), which represent the average number of flows per link and per node on the path, respectively. The average route value (ar) can be calculated as in Equation (10); link.xx represents the xx resources already used by the link. For instance, link.bu represents the bandwidth already in use by the link. The system defaults set the maximum values, such as 1 Gbps for the maximum bandwidth and the total number of flows for the maximum number of flows. The minimum value is established at 0.
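Equations (9)-(11) can be applied as in the short sketch below, where each candidate path carries its per-link resource counters; the dictionary layout and the example numbers are assumptions, and we assume the path with the highest score is the one selected.

```python
def mms(x: float, x_min: float, x_max: float) -> float:
    """Min-max scale normalization, Equation (9): lower resource usage scores higher."""
    return 1.0 - (x - x_min) / (x_max - x_min) if x_max > x_min else 1.0

def avg_over_path(path_links, attr: str) -> float:
    """Average of a per-link resource over the path, Equation (10)."""
    return sum(link[attr] for link in path_links) / len(path_links)

def path_value(path_links, hop_count: int, max_hop: int,
               max_bw: float, max_flows: int) -> float:
    """Path score, Equation (11): hops, bandwidth usage, and flow counts, each normalized."""
    arbu = avg_over_path(path_links, "bu")     # average bandwidth already in use
    arlfn = avg_over_path(path_links, "lfn")   # average flows per link
    ardfn = avg_over_path(path_links, "dfn")   # average flows per node
    return (mms(hop_count, 0, max_hop) + mms(arbu, 0, max_bw)
            + mms(arlfn, 0, max_flows) + mms(ardfn, 0, max_flows))

# Example: two candidate paths for one flow; the higher-scoring path is preferred.
p1 = [{"bu": 200e6, "lfn": 3, "dfn": 4}, {"bu": 100e6, "lfn": 2, "dfn": 3}]
p2 = [{"bu": 600e6, "lfn": 8, "dfn": 9}]
scores = [path_value(p, hop_count=len(p), max_hop=7, max_bw=1e9, max_flows=20) for p in (p1, p2)]
print(scores)
```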

4.2.2. Scheduling Algorithm

Table 1 lists the symbols and their corresponding descriptions related to flow scheduling analysis. Each symbol represents a specific network flow parameter or computed value used to analyze schedulability. The first eight parameters are used for Algorithm 1, while the latter parameters are used for the proof of schedulability, with specific definitions provided in Section 4.3.
Algorithm 1: searchSuitablePit Algorithm
TTP–TSN ILP scheduling algorithm: The basic scheduling model of our ILP algorithm is the same as in [17,27,28] and will not be elaborated further; we add the additional constraints described in Section 3.4.2 and refer to the resulting algorithm hereafter as O_KSP-ILP.
TTP–TSN fast heuristic scheduling algorithm: Due to the difficulty of solving complex scenarios with the ILP strategy, we propose a heuristic scheduling strategy, as shown in Figure 8, hereafter referred to as O_KSP-H. Firstly, the flows are sorted; after sorting, the scheduling is performed using the searchSuitablePit() function. For flows from TSN to the TTP domain, the ttptsnGWAlloc() algorithm is additionally called to perform gateway slot allocation.
As shown in Algorithm 1, Lines 1–9 handle the initialization. The rrp is a rotating pointer used to specify in which GCD (Global Clock Division) the algorithm should search. In Line 12, during the search across different GCDs, we adopt an “as early as possible” strategy, meaning that the algorithm attempts to find a suitable Time Slot as early as possible in the current GCD. If the current GCD cannot satisfy the requirements, we first calculate the offset for the next search within the current GCD (Lines 18–19); then, the rrp pointer is moved to the next GCD (Lines 20–23), indicating that the algorithm will continue its search. Finally, Lines 26–28 state that if no suitable Time Slot is found after searching through the entire hyperperiod, the current flow is considered unschedulable.
Figure 9 illustrates a flow-scheduling process based on the rotating pointer (rrp) and hop count (hop). Each flow (f2, f3) is arranged in Time Slots based on its period and the current scheduling state. If a flow cannot be scheduled successfully in the current Time Slot, the system calculates an offset and adjusts the rrp and hop values to retry scheduling. The figure shows the flow placement, scheduling failures (marked with a red ‘X’), and the process of re-arranging by adjusting parameters.
When flow f2 cannot be scheduled successfully at rrp = 0, the algorithm in Lines 20–23 updates the rrp, and, after the update, flow f2 can be scheduled within the current GCD. Then, f2 traverses the nodes in the path (Line 11: hop = hop + 1) until it is scheduled successfully. If a suitable Time Slot cannot be found within the entire hyperperiod, the flow is considered unschedulable. If no GCD pointed to by rrp can be scheduled successfully, as for flow f3, the algorithm in Lines 18–19 calculates the offset and updates the newStartPit[rrp] variable, so that when rrp points to 0 again, flow f3 will start searching from the new position.
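Since Algorithm 1 appears only as a figure in the published version, the sketch below reimplements the core idea in simplified form: a rotating pointer (rrp) walks over GCD-sized windows of the hyperperiod, and an as-early-as-possible search is attempted in each window. The busy-interval representation, the offset handling, and all data types are illustrative assumptions, not the paper's exact pseudocode.

```python
def search_suitable_pit(busy, dur, gcd_len, hyperperiod):
    """Find the earliest start time for a transmission of length `dur`.

    `busy` maps each GCD window index to a list of (start, end) busy intervals
    expressed relative to the window start. Returns (window, start) or None if
    the flow is unschedulable within the hyperperiod.
    """
    n_windows = hyperperiod // gcd_len
    new_start = {w: 0 for w in range(n_windows)}   # newStartPit[rrp]
    rrp = 0
    for _ in range(n_windows):                     # at most one pass over the hyperperiod
        t = new_start[rrp]                         # "as early as possible" in this window
        for (s, e) in sorted(busy.get(rrp, [])):
            if t + dur <= s:
                break                              # fits before the next busy interval
            t = max(t, e)                          # otherwise jump past it (offset update)
        if t + dur <= gcd_len:
            return rrp, t                          # suitable Time Slot found
        new_start[rrp] = t                         # remember the offset for a later retry
        rrp = (rrp + 1) % n_windows                # move the rotating pointer to the next GCD
    return None                                    # unschedulable within the hyperperiod

# Example: 12 us transmissions, 100 us GCD windows, 400 us hyperperiod.
busy = {0: [(0, 95)], 1: [(0, 40), (50, 95)]}
print(search_suitable_pit(busy, dur=12, gcd_len=100, hyperperiod=400))  # (2, 0)
```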

4.2.3. TTP–TSN Gateway-Allocation Method

The Time Slot-allocation algorithm is mainly aimed at reducing the delay of TSN->TTP traffic waiting for the gateway transmission Time Slot. The algorithm traverses each TTP domain, and when the accumulated TSN->TTP traffic bytes reach the communication limit of a TTP gateway, it assigns a Time Slot to the gateway of the current TTP domain and resets the accumulated value. Finally, if some gateways in a TTP domain remain idle (i.e., when TSN->TTP traffic is relatively low), we insert these idle gateways between two already-assigned gateway Time Slots, further reducing the delay of waiting for the gateway transmission Time Slot. The algorithm ultimately returns the Time Slot allocation results for all gateways in the TTP domains.
As shown in Figure 10, we have demonstrated the gateway-allocation algorithm for different periodic scenarios. In any case, the gateway slot-allocation algorithm will follow the principles of ① ensuring data transmission is completed and ② transmitting data as early as possible.
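A hedged sketch of the described accumulate-and-assign idea follows; the 256-byte per-slot limit comes from constraint ③ in Section 3.4.2, while the input format, the return format, and the example values are our own assumptions (the idle-gateway insertion step is omitted for brevity).

```python
def allocate_gateway_slots(tsn_to_ttp_flows, slot_limit_bytes=256):
    """Assign gateway Time Slots per TTP domain (simplified sketch of the Figure 10 idea).

    tsn_to_ttp_flows maps a TTP domain name to a list of (arrival_time_us, size_bytes)
    tuples. A gateway Time Slot is placed as soon as the accumulated TSN->TTP bytes
    reach the per-slot limit, so data are transmitted as early as possible.
    """
    slots = {}
    for domain, flows in tsn_to_ttp_flows.items():
        ordered = sorted(flows)                    # process arrivals in time order
        acc, domain_slots = 0, []
        for arrival, size in ordered:
            acc += size
            if acc >= slot_limit_bytes:            # limit reached: assign a slot at this arrival
                domain_slots.append(arrival)
                acc = 0
        if acc > 0:                                # leftover bytes still need one final slot
            domain_slots.append(ordered[-1][0])
        slots[domain] = domain_slots
    return slots

# Example: two TTP domains with different TSN->TTP loads (arrival times in microseconds).
demo = {"TTP1": [(100, 120), (180, 150), (260, 90)], "TTP2": [(50, 60)]}
print(allocate_gateway_slots(demo))                # {'TTP1': [180, 260], 'TTP2': [50]}
```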

4.3. Algorithm Analysis

Definition 1
(Flow Length Level). In this paper, the flow length is divided into P different length levels with an interval of $\Delta F$. For example, when $\Delta F = 100$ bytes, frames with lengths in the range of $[100\ \text{bytes}, 1500\ \text{bytes}]$ are mapped onto $P \in \{1, 2, \ldots, \lceil 1500\ \text{bytes} / \Delta F \rceil = 15\}$ length levels. The maximum duration for each level is denoted as $D[P]$.
Definition 2
(Flow Set Vector). According to the classification method used in this paper, the flows to be scheduled are divided into multiple sets $\langle T[i], D[j] \rangle$, each of which represents the set of all flows with period $T[i]$ and flow length level j.
Definition 3
(Flow Set Slot Length). Define $TS_{\langle T[i], D[j] \rangle} = D[j] + h_{\langle T[i], D[j] \rangle} \times L_h$ as the slot length corresponding to the flow set $\langle T[i], D[j] \rangle$.
Definition 4
(Expected Value of Non-Intersecting Paths). Based on the definition, we know that $P_{\langle T[i], D[j] \rangle, m, m+1} = P_{\langle T[i], D[j] \rangle, m} - P_{\langle T[i], D[j] \rangle, m+1}$. Therefore, $E_{TS_{\langle T[i], D[j] \rangle}} = \sum_{m=1}^{M} m \times P_{\langle T[i], D[j] \rangle, m, m+1}$, where M means that there are at most M disjoint flow paths.
Definition 5
(Flow Set Maximum Duration). The maximum duration required to complete the scheduling of the flow set $\langle T[i], D[j] \rangle$ is $D_{\langle T[i], D[j] \rangle} = \left\lceil \frac{N_{\langle T[i], D[j] \rangle}}{E_{TS_{\langle T[i], D[j] \rangle}}} \right\rceil \times TS_{\langle T[i], D[j] \rangle}$.
Typically, the flow set S contains flows with different periods. We group the flows based on their periods, where $T[i]$ represents the set of flows with period $T[i]$. For flows with the same period, we further classify them based on their lengths. For example, if $\Delta F$ is 100 bytes, flows of lengths 78 bytes and 89 bytes will be classified together, while a flow of 123 bytes will form a separate group. This classification method is called flow length level, denoted by P. For each flow length level, there is a maximum duration, which we simplify as $D[P]$, representing the maximum duration required by the longest flow. For instance, for a flow set [0, 100 bytes], $D = 100\ \text{bytes} / \text{bandwidth}$. Thus, the flow set S is divided into smaller sets of the form $\langle T[i], D[j] \rangle$, which are processed sequentially during scheduling.
Based on the above definitions and explanations, we provide further clarification. As shown in Figure 11, we consider three flows, $S_1$, $S_2$, and $S_3$, with the same period (assumed to be $\langle T[i], D[P] \rangle$, where $P = 100$ bytes). For the flow set $\{S_1, S_2, S_3\}$, the maximum hop count is $h_{\langle T[i], D[P] \rangle} = 3$. In the figure, the solid lines represent the actual lengths of the flows, while the dashed lines represent the relaxed lengths, where $D[P]$ replaces the actual length of the flows. This is a common engineering practice, and finer granularity can be achieved by adjusting $\Delta F$. Using $h_{\langle T[i], D[P] \rangle}$ and $D[P]$, we can calculate that the slot length for the flows in the set $\langle T[i], D[P] \rangle$ is $TS_{\langle T[i], D[P] \rangle} = D[P] + h_{\langle T[i], D[P] \rangle} \times L_h$, where $L_h$ is the fixed delay for each hop in the switch. For the three flows mentioned above, the probability $P_{\langle T[i], D[P] \rangle, 3} = 0$ represents the case where the three paths do not overlap. The probability $P_{\langle T[i], D[P] \rangle, 2} = P_{\langle T[i], D[P] \rangle, 2, 3} = 1/3$ represents the case where two paths do not overlap. Non-overlapping paths mean that two flows can be placed in the same Time Slot.
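As a numeric illustration of Definitions 3-5 for this example, the short calculation below assumes $L_h = 5$ µs (Section 3.4.3), a 1 Gbps TSN link rate (Section 5.1), and $P_{\langle T[i], D[P] \rangle, 1} = 1$ (i.e., a single path can always be placed); these assumed values are ours, not stated in Figure 11.

```python
import math

# Worked example for the three flows S1, S2, S3 of Figure 11 (assumed values).
L_h_us = 5.0                          # per-hop switch delay (Section 3.4.3)
D_P_us = 100 * 8 / 1000               # D[P]: 100-byte length level at 1 Gbps = 0.8 us
h = 3                                 # maximum hop count of the flow set

TS = D_P_us + h * L_h_us              # Definition 3: slot length = 0.8 + 15 = 15.8 us

P = {1: 1.0, 2: 1 / 3, 3: 0.0}        # probability that m paths are mutually disjoint
P_m = {m: P[m] - P[m + 1] for m in (1, 2)}     # P_{m,m+1} = P_m - P_{m+1}
E = sum(m * p for m, p in P_m.items())         # Definition 4: expected disjoint paths = 4/3

N = 3                                 # flows in the set
D_set = math.ceil(N / E) * TS         # Definition 5: maximum duration = 3 x 15.8 = 47.4 us
print(TS, E, D_set)
```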

4.3.1. Schedulability Analysis

Theorem 1.
Under the average performance of the algorithm, the minimum number of flows that can be scheduled by the algorithm is
$\sum_{i=0}^{m-1} \sum_{j=0}^{n-1} N_{\langle T[i], D[j] \rangle} + n_{\langle T[m], D[n] \rangle}$
$\text{s.t.:} \quad \mathrm{GCD} - \sum_{i=0}^{m} \sum_{j=0}^{n} D_{\langle T[i], D[j] \rangle} \leq 0$
$\qquad\ \mathrm{GCD} - \sum_{i=0}^{m-1} \sum_{j=0}^{n-1} D_{\langle T[i], D[j] \rangle} > 0$
$\qquad\ n_{\langle T[m], D[n] \rangle} = \left\lfloor \frac{\mathrm{GCD} - \sum_{i=0}^{m-1} \sum_{j=0}^{n-1} D_{\langle T[i], D[j] \rangle}}{TS_{\langle T[m], D[n] \rangle}} \right\rfloor \times E_{TS_{\langle T[m], D[n] \rangle}}$
Proof. 
For the flow sets $\langle T[i], D[j] \rangle$, the maximum time required to complete the scheduling of the first sets $\langle T[0], D[0] \rangle \ldots \langle T[m-1], D[n-1] \rangle$ is $\sum_{i=0}^{m-1} \sum_{j=0}^{n-1} D_{\langle T[i], D[j] \rangle}$. For the flow set $\langle T[m], D[n] \rangle$, although not all flows can be completed, the remaining time $\delta = \mathrm{GCD} - \sum_{i=0}^{m-1} \sum_{j=0}^{n-1} D_{\langle T[i], D[j] \rangle}$ can accommodate $n_{\langle T[m], D[n] \rangle}$ flows. □
Corollary 1.
When $\forall i, j$, $E_{TS_{\langle T[i], D[j] \rangle}} = 1$, the bound of Theorem 1 gives the minimum number of flows that can be scheduled by the algorithm.

4.3.2. Approximation Ratio Analysis

Lemma 1.
For each flow-scheduling problem in $D_{T[i]}$, by using the switch's hop delay $L_h^{\Delta F}$ as the interval, we can create $k = \lceil D_{T[i]} / L_h^{\Delta F} \rceil$ identical topological copies. These k topological copies are arranged in sequence from top to bottom. According to the adjacency relationship, we connect any point in an upper-layer replica to a point in an adjacent lower-layer replica with a directed edge, thereby constructing a Directed Acyclic Graph (DAG). Each node has k replicas in this DAG. Then, we create a virtual node for each node and connect the replica nodes to this virtual node. Additionally, each flow is treated as a flow node, and P virtual source nodes are created for each flow. The flow node is connected to the P (Table 1) virtual source nodes, and the P virtual source nodes are connected to the corresponding starting points in each replica. This transforms the original problem into an EDP selection problem.
Proof. 
As shown in Figure 12, suppose there are three business flows for some $TS_{\langle T[i], D[j] \rangle}$: $f_1: A \to S_1 \to S_2 \to S_3 \to C$; $f_2: B \to S_2 \to D$; $f_3: D \to S_2 \to S_3 \to C$. Assume that $f_1$ and $f_2$ belong to $D[1]$ and that $f_3$ belongs to $D[2]$. Based on the above description, we can obtain the directed acyclic graph shown in the diagram. Then, the flow-scheduling problem for $f_1$, $f_2$, $f_3$ is equivalent to finding disjoint paths from the flow nodes to the virtual end node in the graph below. □
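To make the reduction concrete, the following networkx sketch builds a layered DAG for the small example; the layer count k, the edge orientation, and the virtual-node naming follow our reading of the construction above and are illustrative rather than the exact graph of Figure 12.

```python
import networkx as nx

def build_layered_dag(topology_edges, nodes, k, flows, P):
    """Layered DAG for the EDP reduction sketched in Lemma 1 (simplified).

    topology_edges: undirected adjacency of the original topology
    k: number of topological replicas (time layers)
    flows: dict flow_name -> (source_node, destination_node)
    P: number of virtual source nodes created per flow (length levels)
    """
    g = nx.DiGraph()
    # k stacked replicas; an edge goes from a node in layer t to its neighbours in layer t+1.
    for t in range(k - 1):
        for (u, v) in topology_edges:
            g.add_edge((u, t), (v, t + 1))
            g.add_edge((v, t), (u, t + 1))
    # One virtual node per original node, fed by all of its replicas.
    for n in nodes:
        for t in range(k):
            g.add_edge((n, t), ("virt", n))
    # Flow node -> P virtual sources -> the flow's starting point in every layer.
    for f, (src, dst) in flows.items():
        for p in range(P):
            g.add_edge(("flow", f), ("src", f, p))
            for t in range(k):
                g.add_edge(("src", f, p), (src, t))
    return g

# Example from the proof of Lemma 1 (f1, f2 in D[1], f3 in D[2]).
edges = [("A", "S1"), ("S1", "S2"), ("S2", "S3"), ("S3", "C"), ("B", "S2"), ("S2", "D")]
nodes = ["A", "B", "C", "D", "S1", "S2", "S3"]
flows = {"f1": ("A", "C"), "f2": ("B", "D"), "f3": ("D", "C")}
dag = build_layered_dag(edges, nodes, k=4, flows=flows, P=2)
print(dag.number_of_nodes(), dag.number_of_edges())
```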
Theorem 2.
Assuming that all flows can be scheduled in the optimal solution and that the optimal path set is O , the approximation ratio of the earliest scheduling greedy algorithm described in this paper is
$\rho = \frac{|O|}{|A|} \leq k$
where k is the maximum path length.
Proof. 
For paths $\pi_i, \pi_j \in O$, $i \neq j$, if edge $e_i \in \pi_i$ and $e_j \in \pi_j$, then $e_i \neq e_j$. We apply the scheduling algorithm described in this paper. Suppose we select path $\pi_i$ and add it to set A. Path $\pi_i$ can cover at most $|\pi_i|$ paths from $O$, where $|\pi_i|$ is the length of path $\pi_i$. This is because (1) there are no common edges between any two paths in $O$ and (2) path $\pi_i$ contains at most $|\pi_i|$ edges; hence, for each $\pi_i$ added to set A, at most $|\pi_i|$ paths will be removed from $O$. Therefore, $\sum_{i=1}^{|A|} |\pi_i| \geq |O|$. If the maximum length of $|\pi_i|$ is k, then $k|A| \geq |O|$, and so $\rho = \frac{|O|}{|A|} \leq k$, which completes the proof. □

4.3.3. Algorithm Complexity Analysis

Routing section: The time complexity of the Breadth-First Search (BFS) is $O(V + E)$, and the space complexity is $O(V)$, where V is the number of nodes and E is the number of edges. For the K Shortest Path (KSP) algorithm based on the BFS, the time complexity is $O(K \times (V + E))$ and the space complexity is $O(K \times V)$. After finding K paths, we score these paths, where the scoring process involves traversing each hop. Assuming the maximum hop count is a constant $M_H$, the time complexity becomes $O(M_H \times K \times (V + E))$. Since we ultimately select only one path, the space complexity remains $O(K \times V)$.
Scheduling section: We assume the maximum number of GCL entries on any edge is $W_N$, giving a space complexity of $O(W_N \times E)$. When running the actual scheduling algorithm, all edges along the path are traversed, and their windows are summed to obtain the busy status of the entire path. Afterward, suitable free windows are selected, and the worst-case time complexity is $O((W_N \times E)^2)$.
Thus, the total time complexity is $O(M_H \times K \times (V + E)) + O((W_N \times E)^2)$, and the total space complexity is $O(K \times V) + O(W_N \times E)$.

5. Experiment

5.1. Experimental Environment and Test Stimuli

Figure 13a shows the prototype testbed. The verification utilized self-developed TTP and TSN devices based on Xilinx K7. One TTP node contained two TTP IP cores, meaning a single prototype device simulated multiple TTP nodes. The simulation experiment used Python 3.7 and CPLEX 22.10, with an E5-2650 processor. The link rate of the TSN network was 1 Gbps, and the link rate of the TTP network was 5 Mbps.
We designed the experimental scenarios according to Table 2:
For the verification of the scheduling algorithm, we conducted random experiments by using the parameters shown in Table 3. The test network scale included two device cases, with 20 topologies randomly generated. The generation method ensured that the network was a connected graph and then randomly selected two points to connect according to a uniform distribution, ensuring that any two end systems were not directly connected. The periods and lengths of the test traffic were randomly selected from the parameter sets provided in Table 3. Figure 13b and Figure 13c, respectively, show the topologies that we generated by using the above-mentioned random method.

5.2. Simulation Verification

5.2.1. TTP–TSN Routing Verification

As shown in Figure 14, we performed a comparative analysis of the link (maximum) bandwidth (in MBps), the (maximum) number of flows through the link (Arlfmax, in counts), and the (maximum) number of flows through the node (Ardfmax, in counts).
In terms of link bandwidth (Figure 14a,b), the results show that the maximum bandwidth of links using the O_KSP-ILP method was slightly lower than that of the other two algorithms. When looking at the maximum number of flows through the link (Figure 14c,d) and the node (Figure 14e,f), our algorithm demonstrated significant improvements. Both the maximum number of flows passing through the link and the maximum number of flows handled by the node were lower in our routing method compared to the others. This reduction indicates that our algorithm is more efficient in terms of flow management, leading to a decreased load on both the links and nodes. This ultimately results in better overall network performance by reducing congestion and balancing the flow distribution more evenly across the network. All three conclusions can be easily observed from the purple curves in the figures.

5.2.2. TTP–TSN Scheduling Verification

As illustrated in Figure 15a,b, we first compared the O_KSP-ILP method with the other ILP methods. From the curves, it can be seen that the schedulability gap between O_KSP-ILP and the other ILP methods was not significant. In fact, for larger-scale networks, as shown in Figure 15b, O_KSP-ILP demonstrated better schedulability when solving for flows. This indicates that our integer linear programming-based scheduling model is reasonable.
Next, we analyzed the O_KSP-H algorithm. From the figure, it can be observed that when the number of flows was relatively small (below 150), the schedulability of O_KSP-H was lower than that of the ILP algorithm, as the ILP method was able to find better solutions. However, as the number of scheduled flows increased, the schedulability of O_KSP-H surpassed that of the ILP algorithm. This was mainly because with up to 600 flows the results obtained from the ILP solver were still some distance from the optimal solution (Mingap). When utilizing the ILP algorithm without any additional steps, it became evident that it was unable to handle more than 250 flows within 600 s, demonstrating the scalability limitations of traditional ILP methods for large networks.
Meanwhile, we also performed a comparison of the execution times of several algorithms, to evaluate their efficiency. As shown in Figure 15c,d, we tested the proposed heuristic scheduling algorithm against other established algorithms. The results demonstrate that the heuristic scheduling algorithm introduced in this paper is highly efficient and can significantly reduce the solving time compared to the other algorithms. With 300 flows, it only took 1–2 s, while the ILP method was unable to produce results within 600 s. Specifically, the heuristic approach minimized the computational overhead by simplifying the scheduling process without compromising the quality of the solution. This reduction in execution time is particularly important for real-time applications, where quick decision making is critical. The experimental results validate that our heuristic method is not only effective in improving schedulability but also in optimizing computational resources, making it suitable for large-scale network scheduling tasks.
The main limitation of the heuristic method proposed in this paper is its poor schedulability in traffic-intensive scenarios. Moreover, since it is based on a greedy approach, it is difficult to obtain the optimal solution. According to Theorem 1, we improve the schedulability of the algorithm by minimizing path overlap during the routing phase. As shown in Figure 14, our routing algorithm can reduce path overlap between flows, resulting in a more balanced load. Therefore, as shown in Figure 15e,f, when compared to the shortest-path algorithm, the shortest-path algorithm has the disadvantage of creating hotspot paths. The scheduling results indicate that reducing the path overlap between flows can indeed improve schedulability.

5.2.3. Schedulability Verification

This subsection provides a comprehensive simulation validation for Theorem 1 and Corollary 1. To assess the accuracy of our theoretical predictions, we conducted a series of simulations and compared the results with the estimates provided by Theorem 1 and Corollary 1. As shown in Figure 16, we examined the discrepancies between the theoretical outcomes and the simulation results under various conditions. Specifically, we tested different combinations of period sets and topology sets, as outlined in Table 3. These combinations represented a variety of network configurations and flow patterns, allowing for a thorough evaluation of the proposed theoretical models in diverse scenarios. The experimental results show that the outcome obtained from Corollary 1 was less accurate, as it evaluated the worst-case scenario of the fast algorithm, whereas the result derived from Theorem 1 was much closer to the actual scheduling result of the algorithm.
For scenarios with the same topology size but different test sets, we compared Figure 16a with Figure 16c and Figure 16b with Figure 16d. Since the periods in PeriodSet2 were 10 times longer than those in PeriodSet1, it can be observed that for the same scheduling rate the number of schedulable flows in PeriodSet2 was around 10 times that in PeriodSet1.
For scenarios with the same test set but different topology sizes, we compared the curves in Figure 16a,b, as well as Figure 16c,d. A larger topology size meant that there were more routing options available, and our routing algorithm minimized the overlap between flow paths as much as possible. From the yellow line, it can be seen that with the same number of flows the schedulability of the algorithm improved as well.

5.2.4. TTP–TSN End-to-End Delay Simulation

As shown in Table 4, we conducted a detailed comparison of the end-to-end delay for TTP->TSN and TSN->TTP traffic without scheduling (Table 4a) and with scheduling (Table 4b). Specifically, we evaluated the performance after applying both the non-joint scheduling and the joint scheduling algorithm described in this paper. The comparison was carried out under various combinations of periodic sets, as outlined in Table 3, which represented a wide range of network configurations and traffic patterns.
Since a no-wait scheduling strategy was adopted in the TSN domain, the impact of different network topology sizes on end-to-end latency in the TSN domain was only related to the path length. Therefore, we only compared the effects of different period sets and whether low-latency scheduling was applied on end-to-end latency.
When low-latency scheduling was not applied, the gateway forwarded data strictly in the order of arrival. In this case, as shown in Table 4a, due to the presence of Wait GCL Delay and Wait Gate Time Slot Delay, the delays in both directions (TTP → TSN and TSN → TTP) were significant. However, when low-latency scheduling was applied, as shown in Table 4b, the scheduling algorithm ensured that the sending time of TTP → TSN traffic aligned perfectly with the opening time of the TSN Qbv gates. Ideally, the TTP → TSN delay was reduced to zero. For TSN → TTP traffic, the delay was mainly caused by the Wait Gate Time Slot Delay. Since the number of gate Time Slots in the TTP gateway was limited, this delay could only be reduced but could not be completely eliminated.
Additionally, for different flow period sets, since the periods in PeriodSet2 were 10 times longer than those in PeriodSet1 it can be observed that in the ideal case, with low-latency scheduling applied, the delay for TSN → TTP in PeriodSet2 was also 10 times that of the corresponding delay in PeriodSet1.
From the table, we observe that the joint scheduling algorithm significantly improved the performance of both traffic flows. For TTP->TSN traffic, the queuing delay, which was a major source of latency in the system, could be completely eliminated after the joint design was applied. This elimination of queuing delay resulted in a more efficient and predictable transmission of TTP traffic over the TSN network. Furthermore, the delay for TSN->TTP traffic was also reduced by 50%, demonstrating the effectiveness of the joint scheduling approach in optimizing the overall end-to-end delay for both directions of communication. This reduction in delay is crucial for time-sensitive applications, where minimizing latency is of paramount importance. The results highlight the advantages of joint scheduling in improving network efficiency and enhancing the performance of TTP–TSN systems.

5.3. TestBed

5.3.1. Time Synchronization Verification

As shown in Figure 17a, the time synchronization between the TSN nodes had an error range of approximately ±50 ns. Figure 17b illustrates the time synchronization between the TSN and TTP nodes, with an error range of about ±100 ns. Figure 17c depicts the time synchronization between nodes in different TTP domains, with an error range of about ±200 ns. All these ranges met the requirement of being below 1 µs.

5.3.2. TTP–TSN End-to-End Delay Comparison

Table 5 presents the comparison of delays. Our FPGA uses a 125 MHz clock with a period of 8 ns. Under the time-synchronization scheme designed in this paper, the low-latency TTP–TSN gateway can effectively reduce end-to-end delay.
Comparing Table 5a,b, with the same period and direction, the delay was reduced after using the scheduling algorithm proposed in this paper. For example, when T = 4 ms in the TSN → TTP direction, the end-to-end delay measured using the simplest first-come-first-served method at the gateway was 2865.42 μs, while after applying the scheduling algorithm the delay was reduced to 812.62 μs.
Next, we compare the hardware results with the simulation results. Comparing Table 4b and Table 5b, the results obtained from hardware testing were higher than the simulation results (for example, in the TTP → TSN direction). This was because in the simulation we ignored the protocol-conversion delay and the data frame transmission delay in the network. This omission in the simulation was reasonable, on the one hand, as we used a non-waiting scheduling approach, and the above delays were constant. On the other hand, as analyzed in the “Motivation” section, these delays were not the main factors causing the end-to-end delay. In the hardware test, we placed the timestamp in the data frame at the sender; the receiver parsed this timestamp and calculated the delay by subtracting it from the local time. This process accounts for the three delays mentioned above, so the hardware result was slightly higher than the simulation result.
In conclusion, we have completed the verification of the prototype system of the designed TTP–TSN gateway. We have also verified the time-synchronization method, the protocol-conversion method, and the network planning algorithm for the TTP–TSN hybrid network.

6. Conclusions

Integrating TTP and TSN in hybrid networks presents significant application potential in industries such as automotive and aerospace. This paper aimed to propose solutions for the low-latency TTP–TSN planning problem. The foundation for low-latency scheduling is cross-protocol time synchronization, so we first designed a time-synchronization algorithm. Based on time synchronization and ILP theory, we developed a low-latency scheduling model and first solved it by using an ILP solver. To further improve the algorithm’s practicality, we proposed a heuristic fast scheduling algorithm. When the network satisfies Theorem 1, this algorithm can shorten the computation time by a factor of thousands while ensuring schedulability. Future work will focus on further exploring the design and implementation of a high-bandwidth TTP–TSN gateway.

Author Contributions

Y.P.: conceptualization, methodology, software, and writing—original draft; T.J.: conceptualization and writing—review and editing; X.T.: project administration and funding acquisition; B.H.: writing—review and editing; D.X.: writing—review and editing; Z.G.: writing—review and editing. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data is contained within the article.

Conflicts of Interest

The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. EE Times Asia. Looking Ahead: High-Speed In-Vehicle Display and Sensor Connections. 2021. Available online: https://www.eetasia.com/looking-ahead-high-speed-in-vehicle-display-and-sensor-connections/ (accessed on 15 December 2024).
  2. Olumuyiwa, O.; Chen, Y. Virtual CANBUS and Ethernet Switching in Future Smart Cars Using Hybrid Architecture. Electronics 2022, 11, 3428.
  3. Deredempt, M.-H.; Kollias, V.; Sun, Z.; Canamares, E.; Ricco, P. Spacecraft Data Handling Architecture based on AFDX network. In Proceedings of the Embedded Real Time Software and Systems (ERTS2014), Toulouse, France, 5–7 February 2014.
  4. Rejeb, N.; Mhadhbi, I.; Karim, A.; Salem, A.K.; Ben Saoud, S. AFDX-CAN Architecture for Avionics Applications. Int. J. Comput. Sci. Commun. Inf. Technol. 2014, 1, 7–14.
  5. NASA Johnson Space Center. On Time-Triggered Ethernet in NASA’s Lunar Gateway. In Avionics Architectures Community of Practice; 2020. Available online: https://ntrs.nasa.gov/api/citations/20205005104/downloads/2020-07-26-AA-CoP.pdf (accessed on 15 December 2024).
  6. Zhou, X.; Xiong, H.; Feng, H. Hybrid partition- and network-level scheduling design for distributed integrated modular avionics systems. Chin. J. Aeronaut. 2020, 33, 308–323.
  7. Xie, G.; Zhang, Y.; Chen, N.; Chang, W. A High-Flexibility CAN-TSN Gateway with a Low-Congestion TSN-to-CAN Scheduler. IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst. 2023, 42, 5072–5083.
  8. Zexiong, L. A Method of Synchronizing the TTE Network with the TTP Bus Network. Patent No. CN108809466A, 13 November 2018.
  9. Albert, A.; Gmbh, R. Comparison of event-triggered and time-triggered concepts with regard to distributed control systems. Embed. World 2004, 2004, 235–252.
  10. Borana, A.; Sonnis, S.; Mohanty, A.; Bhujbal, S.; Roy, D.; Vaidya, U. Design, Simulation and Validation of Fault Tolerant Averaging Algorithm for Clock Synchronization with Custom Time Triggered Deterministic Protocol. IEEE Trans. Ind. Appl. 2022, 58, 5447–5456.
  11. Zhao, L.; Pop, P.; Steinhorst, S. Quantitative Performance Comparison of Various Traffic Shapers in Time-Sensitive Networking. IEEE Trans. Netw. Serv. Manag. 2022, 19, 2899–2928.
  12. Herber, C.; Richter, A.; Wild, T.; Herkersdorf, A. Real-time capable CAN to AVB ethernet gateway using frame aggregation and scheduling. In Proceedings of the 2015 Design, Automation & Test in Europe Conference & Exhibition (DATE), Grenoble, France, 9–13 March 2015; pp. 61–66.
  13. Berisa, A.; Ashjaei, M.; Daneshtalab, M. Investigating and Analyzing CAN-to-TSN Gateway Forwarding Techniques. In Proceedings of the 2023 IEEE 26th International Symposium on Real-Time Distributed Computing (ISORC), Nashville, TN, USA, 23–25 May 2023; pp. 136–145.
  14. Peng, Y.; Shi, B.; Jiang, T.; Tu, X.; Xu, D.; Hua, K. A Survey on In-Vehicle Time-Sensitive Networking. IEEE Internet Things J. 2023, 10, 14375–14396.
  15. Chahed, H.; Kassler, A. TSN Network Scheduling—Challenges and Approaches. Network 2023, 3, 585–624.
  16. Kong, W.; Nabi, M.; Goossens, K. Run-Time Recovery and Failure Analysis of Time-Triggered Traffic in Time Sensitive Networks. IEEE Access 2021, 9, 91710–91722.
  17. Wang, J.; Zhou, L.; Tian, L. ILP-based multiperiod flow routing and scheduling method in time-sensitive network. J. Phys. Conf. Ser. 2022, 2384, 012032.
  18. Huang, K.; Wan, X.; Wang, K.; Jiang, X.; Chen, J.; Deng, Q.; Xu, W.; Peng, Y.; Liu, Z. Reliability-Aware Multipath Routing of Time-Triggered Traffic in Time-Sensitive Networks. Electronics 2021, 10, 125.
  19. Vlk, M.; Hanzálek, Z.; Brejchová, K.; Tang, S.; Bhattacharjee, S.; Fu, S. Enhancing Schedulability and Throughput of Time-Triggered Traffic in IEEE 802.1Qbv Time-Sensitive Networks. IEEE Trans. Commun. 2020, 68, 7023–7038.
  20. Stüber, T.; Osswald, L.; Lindner, S.; Menth, M. A Survey of Scheduling Algorithms for the Time-Aware Shaper in Time-Sensitive Networking (TSN). IEEE Access 2023, 11, 61192–61233.
  21. Wang, Z.; Luo, F.; Li, Y.; Gan, H.; Zhu, L. Schedulability Analysis in Time-Sensitive Networking: A Systematic Literature Review. arXiv 2024, arXiv:2407.15031.
  22. Baveja, A.; Srinivasan, A. Approximation Algorithms for Disjoint Paths and Related Routing and Packing Problems. Math. Oper. Res. 2000, 25, 255–280.
  23. Chekuri, C.; Khanna, S. Edge-Disjoint Paths Revisited. ACM Trans. Algorithms 2007, 3, 46.
  24. Loveless, A. Overview of TTE Applications and Development at NASA/JSC; Technical Report; NASA Johnson Space Center: Houston, TX, USA, 2016.
  25. Dürr, F.; Nayak, N.G. No-wait Packet Scheduling for IEEE Time-sensitive Networks (TSN). In Proceedings of the 24th International Conference on Real-Time Networks and Systems, Brest, France, 19–21 October 2016.
  26. Wang, J.; Liu, C.; Zhou, L.; Wang, J.; Yu, X. DA-DMPF: Delay-aware differential multi-path forwarding of industrial time-triggered flows in deterministic network. Comput. Commun. 2023, 210, 285–293.
  27. Feng, Z.; Gu, Z.; Yu, H.; Deng, Q.; Niu, L. Online Rerouting and Rescheduling of Time-Triggered Flows for Fault Tolerance in Time-Sensitive Networking. IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst. 2022, 41, 4253–4264.
  28. Nie, H.; Li, S.; Liu, Y. An Enhanced Routing and Scheduling Mechanism for Time-Triggered Traffic with Large Period Differences in Time-Sensitive Networking. Appl. Sci. 2022, 12, 4448.
  29. Korst, J. Periodic Multiprocessor Scheduling; Springer: Berlin/Heidelberg, Germany, 1991.
Figure 1. Example topology.
Figure 2. Network joint planning requirement.
Figure 3. Communication example diagram.
Figure 4. Gateway architecture.
Figure 5. TTP–TSN time synchronization.
Figure 6. TTP–TSN joint-scheduling algorithm framework.
Figure 7. Routing algorithm.
Figure 8. Fast scheduling algorithm.
Figure 9. searchSuitablePit.
Figure 10. ttptsnGWAlloc.
Figure 11. Notation explanation.
Figure 12. DAG example.
Figure 13. Topology diagram: (a) testbed; (b) sw8es16 example topology; (c) sw16es32 example.
Figure 14. Comparison of routing algorithm results.
Figure 15. Assessment of schedulability factors.
Figure 16. Theoretical comparison.
Figure 17. Comparison of time synchronization results.
Table 1. Notations.
flowObj: Flow object, which contains flow attributes, including Period, Path, and length.
GCD: Greatest common divisor of flow periods.
HyperPeriod: Least common multiple of all flow periods.
CanAssign: Feasibility result. True means the current flow can be successfully assigned to Time Slots; False indicates no slots are available for the current flow.
rrp: Round-robin point, indicating the current location on the GCD cycle.
curWin: Current active time window.
Offset: If the current slot is not valid, the offset gives the adjustment from the starting time.
L_h: Switch delay.
P: Flow length level.
ΔF: Flow length interval.
D[P]: Maximum duration for flow length level P.
N<T[i],P[j]> = |<T[i],D[j]>|: Total number of frames in S with period T[i] and length level j.
h<T[i],D[j]>: Maximum hop count in the flow set <T[i],D[j]>.
TS<T[i],D[j]>: Slot length corresponding to the flow set <T[i],D[j]>.
P<T[i],D[j]>,m: Probability that m flows selected from the flow set <T[i],D[j]> have no intersecting paths; this probability can be estimated using the Monte Carlo method.
P<T[i],D[j]>,m,m+1: Probability that m flows selected from the flow set <T[i],D[j]> have no intersecting paths, but selecting m+1 flows results in at least two intersecting paths.
E_TS<T[i],D[j]>: Expected number of flows in the Time Slot TS<T[i],D[j]> with non-intersecting paths.
D<T[i],D[j]>: Maximum duration required by the flow set <T[i],D[j]>.
D_T[i]: Maximum duration required by the flow set <T[i], ·>.
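To make the notation above concrete, the following minimal Python sketch illustrates a flow object with the attributes listed in Table 1 and how GCD and HyperPeriod are derived from the flow periods. The class and field names are our own illustrative choices, not those of the implementation (Python 3.9+ is assumed for math.lcm).

```python
import math
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class FlowObj:
    """Flow object with the attributes listed in Table 1 (names are illustrative)."""
    flow_id: int
    period_us: int                 # flow period T, in microseconds
    path: List[Tuple[str, str]]    # ordered list of links, e.g., ("es1", "sw1")
    length_bytes: int              # frame length, mapped to a length level P

def gcd_period(flows: List[FlowObj]) -> int:
    """GCD: greatest common divisor of the flow periods."""
    return math.gcd(*(f.period_us for f in flows))

def hyper_period(flows: List[FlowObj]) -> int:
    """HyperPeriod: least common multiple of all flow periods."""
    return math.lcm(*(f.period_us for f in flows))

flows = [
    FlowObj(0, 1000, [("es1", "sw1"), ("sw1", "gw1")], 20),
    FlowObj(1, 2000, [("es2", "sw1"), ("sw1", "gw1")], 20),
    FlowObj(2, 4000, [("es3", "sw2"), ("sw2", "gw1")], 20),
]
print(gcd_period(flows), hyper_period(flows))  # 1000 4000
```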
Table 2. Experiment method description.
Planning Algorithm (Python): We compared our algorithm with two others: ref. [19] (2020), which used BFS for routing and ILP for scheduling, referred to as BFS+ILP; and ref. [27] (2022), which used a limited KSP algorithm for routing and ILP for scheduling, referred to as L_KSP+ILP.
Schedulability Theory (Python): Evaluation of Theorem 1, Corollary 1, and the actual scheduling results of the fast algorithm through random experiments.
Time Synchronization (FPGA): We used an oscilloscope to verify the time synchronization.
End-to-End Delay, comprehensive experiment (Python and FPGA): We generated 10 flows each for TTP→TSN, TSN→TTP, and TSN→TSN, with flow periods of {1 ms, 2 ms, 4 ms} and a message length of 20 bytes to simplify the experiment.
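As an illustration of the end-to-end delay experiment setup in Table 2, the short Python sketch below generates the 30 test flows (10 per direction) with the stated periods and message length. It is a simplified stand-in for our testbed's flow generator; the dictionary structure and names are illustrative only.

```python
import random

# Experiment parameters taken from Table 2; structure and names are illustrative.
DIRECTIONS = ["TTP->TSN", "TSN->TTP", "TSN->TSN"]
PERIODS_US = [1000, 2000, 4000]      # {1 ms, 2 ms, 4 ms}
MSG_LEN_BYTES = 20                   # fixed message length to simplify the experiment
FLOWS_PER_DIRECTION = 10

def generate_test_flows(seed: int = 0):
    """Generate 10 flows per direction, each with a random period from PERIODS_US."""
    rng = random.Random(seed)
    flows = []
    for direction in DIRECTIONS:
        for i in range(FLOWS_PER_DIRECTION):
            flows.append({
                "id": f"{direction}-{i}",
                "direction": direction,
                "period_us": rng.choice(PERIODS_US),
                "length_bytes": MSG_LEN_BYTES,
            })
    return flows

flows = generate_test_flows()
print(len(flows))  # 30 flows in total
```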
Table 3. Scheduling experiment parameter settings.
Network Scale: {Set1: [8 sw, 16 es], Set2: [16 sw, 32 es]}
Flow Period Range: Set1: {100 µs, 200 µs, 400 µs, 500 µs}; Set2: {1000 µs, 2000 µs, 4000 µs, 5000 µs}
Packet Length Range: {100 bytes, 1500 bytes}
ILP Solving Time: 600 s
Table 4. TTP–TSN end-to-end delay simulation comparison.
(a) Non-Sched TTP–TSN Communication Delay
Direction    T = 100 µs    T = 200 µs    T = 400 µs
TTP→TSN      49.928 µs     99.769 µs     199.817 µs
TSN→TTP      70.005 µs     140.155 µs    280.097 µs
Direction    T = 1 ms      T = 2 ms      T = 4 ms
TTP→TSN      500.766 µs    1000.18 µs    1993.69 µs
TSN→TTP      699.488 µs    1400.13 µs    2799.85 µs
(b) Sched TTP–TSN Communication Delay
Direction    T = 100 µs    T = 200 µs    T = 400 µs
TTP→TSN      0.0 µs        0.0 µs        0.0 µs
TSN→TTP      31.599 µs     71.599 µs     111.6 µs
Direction    T = 1 ms      T = 2 ms      T = 4 ms
TTP→TSN      0.0 µs        0.0 µs        0.0 µs
TSN→TTP      301.599 µs    701.599 µs    1101.6 µs
Table 5. Worst-case communication delay comparison.
(a) Non-Sched TTP–TSN Communication Delay
Direction    T = 1 ms      T = 2 ms      T = 4 ms
TTP→TSN      161.568 µs    261.288 µs    461.224 µs
TSN→TTP      765.416 µs    1465.42 µs    2865.42 µs
(b) Sched TTP–TSN Communication Delay
Direction    T = 1 ms      T = 2 ms      T = 4 ms
TTP→TSN      67.08 µs      67.08 µs      67.08 µs
TSN→TTP      217.92 µs     413.520 µs    812.62 µs
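To relate the hardware results in Table 5 to the latency-reduction claim, the following short Python check (our own post-processing, not part of the testbed code) computes the per-entry reduction of the scheduled case relative to the non-scheduled case.

```python
# Worst-case delays from Table 5 (in microseconds), non-scheduled vs. scheduled.
non_sched = {
    ("TTP->TSN", "1ms"): 161.568, ("TTP->TSN", "2ms"): 261.288, ("TTP->TSN", "4ms"): 461.224,
    ("TSN->TTP", "1ms"): 765.416, ("TSN->TTP", "2ms"): 1465.42, ("TSN->TTP", "4ms"): 2865.42,
}
sched = {
    ("TTP->TSN", "1ms"): 67.08,  ("TTP->TSN", "2ms"): 67.08,   ("TTP->TSN", "4ms"): 67.08,
    ("TSN->TTP", "1ms"): 217.92, ("TSN->TTP", "2ms"): 413.520, ("TSN->TTP", "4ms"): 812.62,
}

for key in non_sched:
    reduction = 1.0 - sched[key] / non_sched[key]
    print(f"{key[0]:>8} T={key[1]:>3}: {reduction:.1%} lower with joint scheduling")
# Every entry shows a reduction of more than 50%, consistent with the claim
# that joint scheduling lowers the end-to-end latency by at least half.
```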
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
