Network Calculus-Based Latency for Time-Triggered Traffic under Flexible Window-Overlapping Scheduling (FWOS) in a Time-Sensitive Network (TSN)

Abstract: Deterministic latency is an urgent demand of the continuous increase in intelligence in several real-time applications, such as connected vehicles and industrial automation. A time-sensitive network (TSN) is a new framework introduced to serve these applications. Several functions are defined in the TSN standards to support time-triggered (TT) requirements, such as IEEE 802.1Qbv and IEEE 802.1Qbu for traffic scheduling and preemption, respectively. However, implementing strict timing constraints to support scheduled traffic can miss the needs of unscheduled real-time flows; accordingly, more relaxed scheduling algorithms are required. In this paper, we introduce the flexible window-overlapping scheduling (FWOS) algorithm, which optimizes the overlapping among TT windows by three metrics: the priority of overlapping, the position of overlapping, and the overlapping ratio (OR). An analytical model for the worst-case end-to-end delay (WCD) is derived using the network calculus (NC) approach, considering the relative relationships between window offsets of consecutive nodes, and is evaluated under a realistic vehicle use case. While guaranteeing the latency deadline for TT traffic, the FWOS algorithm defines the maximum allowable OR that maximizes the bandwidth available for unscheduled transmission. Even under a non-overlapping scenario, FWOS yields less pessimistic latency bounds than the latest related works.


Introduction
There is an urgent demand for high-speed, deterministic end-to-end communication to integrate several real-time applications. Missing latency deadlines may create dangerous situations for humans or considerable economic losses in highly time-sensitive applications, such as autonomous vehicles and industrial automation. Ethernet networking offers sufficient bandwidth at reasonable cost for a wide range of application domains, and several relevant protocols have been built on it, such as Audio/Video Bridging Ethernet (AVB-Ethernet) and time-triggered Ethernet (TT-Ethernet). Although these technologies provide several real-time functions, they can neither manage the control flows in safety-critical systems (e.g., autonomous vehicles) nor keep pace with the continuous increase in such applications' intelligence.
A time-sensitive network (TSN) is a set of sub-standards that introduce an attractive solution to support safety-critical applications based on several TT-Ethernet extensions [1]. The amendments include timing and access control aspects to guarantee data transport with a deterministic low latency, extremely low delay variation, and zero congestion loss. The main contributions of this paper are summarized as follows:

• We propose a flexible window-overlapping scheduling (FWOS) algorithm that allows TT windows to overlap each other and considers the relative positional relationships between window offsets for consecutive nodes. The network calculus (NC) approach is used to formulate the worst-case latency bounds. The FWOS algorithm is evaluated under a realistic vehicle use case considering three overlapping metrics: the priority of overlapping, the position of overlapping, and the overlapping ratio (OR). A critical discussion is introduced based on GCL design parameters.
• Based on the latency performance evaluation under flexible overlapping scenarios, we create an additional TSN scheduling constraint, leading to more relaxed GCL implementations. For each particular latency deadline, the FWOS algorithm defines the maximum allowable OR that maximizes the solution space for unscheduled traffic while guaranteeing the TT latency requirements. Additionally, the FWOS algorithm improves the WCD bounds compared with the latest related works, even in a complete-isolation scenario among TT windows.
Some existing latency evaluations are simulation based, which cannot express all pessimistic transmission cases and thus leads to unsafe latency estimates [37]. As is well known, time-critical applications carry both hard and soft real-time traffic. Implementing GCLs with strict timing constraints reduces the overall solution space, minimizing the bandwidth available for unscheduled traffic and increasing the risk of losing the required determinism for AVB flows. Moreover, such strict timing scheduling methods complicate GCL synthesis in large-scale use cases. For these reasons, several proposals have been introduced to support soft real-time traffic while simultaneously ensuring the QoS requirements of hard real-time flows.
Gavrilut et al. [9,38] proposed a software algorithm based on the greedy randomized adaptive search procedure (GRASP) that schedules AVB traffic together with the TT traffic. The results confirmed that feasible AVB schedules were produced by the proposed method in a simple use case; however, AVB schedulability cannot be ensured under heavy loading or in more complicated networks. Zhang et al. [6] proposed combining hard and soft real-time flows simultaneously, reducing the computational complexity. Their approach chooses a proper transmission cycle and scheduling unit to improve the selection process and reduce the impact of higher-priority flows on lower-priority ones. Others [7,39,40] have introduced formal latency analyses for AVB traffic based on predefined GCLs. However, all the above algorithms assume complete isolation between scheduled traffic windows, resulting in a smaller overall solution space.
Many scheduling proposals have addressed traffic and window isolation in the GCL design. Here, we discuss the studies most related to our work and compare their limitations, as summarized in Table 1. Craciunas et al. [41] presented a scheduling scheme under the assumption that scheduled and unscheduled flows are entirely isolated. Further, the proposed GCL synthesis separates scheduled flows from one another by implementing fully isolated transmission windows and fully synchronized nodes. In [42], Craciunas et al. used the targeted bounds of end-to-end latencies and the duration of the assigned windows for the scheduled traffic to measure the scalability and schedulability of the proposed algorithm. Oliver et al. [43] presented a relaxed GCL synthesis by allowing scheduled frames to interfere within the same-priority queue; however, the associated queue gates still operate in a mutually exclusive fashion.

Table 1. Comparison of the most related scheduling algorithms.

Ref.      Scheduling  Synchronization  Traffic      Windows                          Window Offsets   Contribution
[42]      Off-line    All nodes        Isolated     Isolated                         Not considered   Deriving scheduling constraints
[43]      Off-line    All nodes        Isolated     Isolated                         Not considered   Improved scheduling constraints
[44]      Off-line    All nodes        Interfered   Isolated                         Not considered   Improved scheduling constraints
[10,45]   Off-line    All switches     Isolated     Isolated                         Not considered   More relaxed schedules
[11]      Off-line    All switches     Isolated     Overlapped without optimization  Not considered   Improved solution space
[12]      Off-line    All switches     Isolated     Overlapped without optimization  Not considered   Preemption improvements
[13]      Off-line    All switches     Isolated     Isolated                         Considered       Less pessimistic latencies

More relaxed schedules have been presented in [10–12,44] under a window-based implementation for each priority class. In [10,44], the algorithms were designed without requiring the end systems to be fully synchronized; however, synchronization is still required for all selected switches in the path. Zhao et al.
[11] proved that complete isolation between TT windows reduces the available solution space for soft real-time streams. They analytically formulated the worst-case end-to-end latency for TT flows, enhancing the overall solution space compared with the full-isolation scenario while satisfying the TT flow requirements. However, the authors made this improvement for one particular overlapping case, without considering how to optimize the overlapping for every networking scenario and every targeted latency deadline. Zhang et al. [12] examined the effect of the preemption technique on the latency bounds based on the algorithm in [11]. However, the scheduling designs proposed in [10–12,44] were implemented under a single-node design and did not consider the relative timing difference between the offsets of same-priority queues in consecutive nodes [13]. These models resulted in highly pessimistic end-to-end latency bounds.
Based on the above, Zhao et al. [13] improved the worst-case latency bounds for TT traffic by considering the relative positional relationships between same-priority windows. The presented results dramatically tightened the latency bounds and reduced the degree of pessimism in [11,12]. However, that analysis did not include overlapping among TT windows. Moreover, we observe that the model admits a further latency correction, as the technical switching delay is counted twice in the overall latency derivation: once indirectly, in the maximum waiting time calculation, and once directly, in the final end-to-end latency expression. Although this delay is small compared with the other latency components, its impact is notable in large-scale network topologies. A summarized comparison of the above algorithms is given in Table 1.
For latency evaluation, analytical models are statically designed to cover all system corner cases, which is critically important for real-time systems [26], especially in strict automated designs and automotive driving systems. Although several analytical approaches can be used to build latency models, network calculus (NC) theory [45] is preferred in most studies, as it induces less pessimistic yet safe worst-case latencies compared with other approaches [46,47]. Accordingly, we use the NC approach to calculate the WCD bounds.

Relevant Background
This section introduces the basic architecture of a TSN and the mechanism of the IEEE 802.1Qbv protocol briefly.

Basic Architecture of a TSN
As mentioned previously, the TSN is a new extension of the conventional Ethernet network, although both systems share the same physical components. Precisely, a TSN communication system consists of several end systems (ESs) (information sources and destinations), switches (SWs), and physical links that connect the ESs and SWs through full-duplex communication, as depicted in Figure 1. The network model is an undirected graph G(P, V), where P = {p_1, ..., p_k} is the set of data links and V = ES ∪ SW, with ES = {ES_1, ..., ES_n} and SW = {SW_1, ..., SW_m}. Each dataflow link p_i = [V_a, V_b] connects two elements V_a, V_b ∈ V. Each flow must pass through one or more links to move between its source and destination, corresponding to a route (path) r_i ∈ R, where R is the set of ordered sequences of data-stream links that connect a single ES to one (unicast) or more (multicast) ESs. The rate of the flows egressing from a node h is denoted as C^h.

IEEE 802.1Qbv Protocol
The IEEE 802.1Qbv protocol defines the time-aware shaping (TAS) technique to control traffic transmissions between TSN elements using a gating mechanism [17]. Each switching egress port has eight priority queues, as depicted in Figure 2, each of which has a gate with either an open or a closed state. Changing the gate state is controlled by the GCL under guaranteed synchronization of all selected switches in the transmission path. Only one priority frame from a specific queue can be transmitted during a related open interval, following the first-in first-out (FIFO) mechanism. TT gates are prioritized over other gates to send the scheduled frames if they are open simultaneously; however, if the open gates correspond to unscheduled flows, i.e., AVB and best-effort (BE) traffic, the credit-based shaper (CBS) arbitrates between the gates. The capable switch described in IEEE 802.1Qbv must simultaneously realize fabric switching, filtering, and traffic-policy selection for flows arriving at the ingress ports [19]. The fabric switching and filtering processes redirect the arriving traffic to the appropriate priority queue, as depicted in Figure 2. Using the GCL timing schedule, the queued frames are forwarded to the egress port and then to the physical link. The frames experience the related delay components during the transmission process to achieve these switching functions, as differentiated in Figure 2.
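The gating mechanism described above can be sketched as follows; the GCL entries, cycle length, and queue-to-traffic mapping are illustrative assumptions, not values taken from the standard:

```python
# A per-port gate control list (GCL): each entry is a time offset within the
# cycle and an 8-bit gate-state vector (bit q set => queue Q_q's gate open).
GCL = [
    (0,   0b10000000),  # only Q7 (a TT queue) open
    (200, 0b01100000),  # Q6 and Q5 (e.g., AVB) open
    (500, 0b00011111),  # best-effort queues Q0..Q4 open
]
CYCLE_US = 1000  # the GCL repeats with this cycle (microseconds)

def open_queues(t_us):
    """Return the indices of the queues whose gates are open at time t_us."""
    t = t_us % CYCLE_US
    # The last entry whose offset is <= t defines the current gate states.
    _, state = max((e for e in GCL if e[0] <= t), key=lambda e: e[0])
    return {q for q in range(8) if (state >> q) & 1}

print(open_queues(100))  # {7}: only the TT gate is open
print(open_queues(650))  # {0, 1, 2, 3, 4}: best-effort gates open
```

In a real 802.1Qbv switch the GCL also drives guard bands and interacts with the CBS for open AVB gates; this sketch only shows how gate states follow the timetable.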



FWOS System Model and Design Decisions
TSN switching considers two dependent traffic isolation mechanisms, i.e., spatial and temporal isolation, to differentiate between ingress flows. Spatial isolation is applied in each switch by directing incoming frames to one of eight priority queues served by various forwarding constraints at the egress port. Temporal isolation is provided by the gate control list (GCL) schedules; the GCL is an off-line timetable specifying the open and close events for all associated gates. To protect TT flows from unscheduled traffic transmissions, the preemption technique is applied by allowing the TT flows to interrupt the transmission of non-express (preemptable) flows. Against other TT transmissions, complete isolation between TT windows in the GCL timings undoubtedly provides the targeted deterministic latency behavior for scheduled (TT) traffic, as proposed in [6]. Nevertheless, complete window isolation wastes bandwidth available for unscheduled traffic and may not guarantee determinism for soft real-time (AVB) traffic.
There are two ways to process scheduled and unscheduled traffic in a TSN, the back-to-back and porosity configurations, as depicted in Figure 3. In both structures, flexible overlapping between scheduled open windows saves more bandwidth for unscheduled flows. The overlapping can occur at one window edge or at both edges (opening and closing), and the more the TT windows overlap, the larger the time intervals available for unscheduled traffic. However, flexible overlapping schedules can negatively affect the performance of scheduled traffic; thus, careful timing evaluations are necessary to cover all overlapping situations.
In this work, we are interested in evaluating the performance of the time-aware shaping (TAS) technique under the flexible window-overlapping scheduling (FWOS) algorithm. This model allows TT windows to interfere with one another without missing the latency deadlines of the TT queues. Hence, only the hard real-time (TT) flows are considered here for the latency calculations, with the set of all TT flows denoted as S. Each TT flow s_m ∈ S, where m is the priority of the flow, consists of a set of frames F_m. Each node h selects the ingress frame into one of N_q^h TT priority queues (Q_1^h, ..., Q_{N_q}^h), where Q_1^h and Q_{N_q}^h represent the highest- and lowest-priority queues, respectively. Each Q_m^h frame is selected to the egress port by its gate G_m^h, with the open-window length W_m^h and the period T_m^h. Note that the presented scheduling model assumes guaranteed synchronization of all selected switches.
For the performance evaluation, we consider Q_k^h as the targeted queue with an open-window duration of W_k^h, where 1 ≤ k ≤ N_q. The Q_k^h window may overlap with any Q_m^h window, as shown in Figure 4, and the overlapping ratio (OR) between them in the i-th cycle is the length of the time interval L_{k,m}^{h,i}, during which both windows are simultaneously open, divided by the total length of the Q_k^h window:

OR_{k,m}^{h,i} = L_{k,m}^{h,i} / W_k^{h,i}.
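The OR definition can be computed directly from the window timings; the window tuples below are hypothetical values in microseconds, used only to exercise the formula:

```python
def overlap_ratio(win_k, win_m):
    """OR of window win_k with win_m: overlap length / length of win_k.
    Windows are (open, close) tuples within one GCL cycle."""
    overlap = max(0.0, min(win_k[1], win_m[1]) - max(win_k[0], win_m[0]))
    return overlap / (win_k[1] - win_k[0])

w_k = (100, 200)                       # targeted Q_k window
print(overlap_ratio(w_k, (150, 300)))  # 0.5  (closing-edge overlap)
print(overlap_ratio(w_k, (300, 400)))  # 0.0  (complete isolation)
print(overlap_ratio(w_k, (50, 250)))   # 1.0  (complete overlapping)
```

Note that the ratio is normalized by the targeted window W_k, so OR is not symmetric between the two windows.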

As presented in Figure 4, the OR varies from 0 to 1, covering all the expected overlapping conditions. Zero overlapping means that whenever the gate of the targeted TT queue (G_k^h) is open, all other TT gates are closed (complete isolation); an OR of one means that at every moment when G_k^h is open, at least one other TT gate is also open (complete overlapping), as clearly shown in Figure 4.
Based on the above overlapping formula and the overlapping pattern in Figure 4, we implement the GCL schedule for all selected nodes and then calculate the worst-case end-to-end latency for the Q_k^h frames using the network calculus (NC) approach. Note that our GCL implementation is off-line, targeting worst-case analysis; dynamic reconfiguration is not considered. The NC basics and the worst-case analysis are presented in Sections 5 and 6, respectively.

Network Calculus Basics
The network calculus (NC) [48] approach is a powerful tool for deterministically representing the timing properties of flows in communication networks. These timing properties include strict upper- and lower-bound computations used for performance evaluations, such as end-to-end latencies, network utilization, and buffer requirements. In this paper, we use NC to determine the worst-case end-to-end latency for TT flows. Two curves have to be implemented, the arrival curve and the service curve, to describe the characteristics of the flows and the availability of the related nodes. The analysis is mainly based on min-plus algebra, whose convolution and deconvolution operations are defined as follows [48]:

(f ⊗ g)(t) = inf_{0 ≤ s ≤ t} {f(s) + g(t − s)},
(f ⊘ g)(t) = sup_{u ≥ 0} {f(t + u) − g(u)},
where inf and sup denote the infimum and supremum, i.e., the maximal lower bound and the minimal upper bound, respectively. The notations ⊗ and ⊘ represent the min-plus convolution and deconvolution operations, respectively.

The arrival curve α(t) describes the arrival process R(t) of a stream. This process is the cumulative function at the ingress switching port, counting the bits that reach the node up to time t, so that [48]

R(t) − R(s) ≤ α(t − s), for all 0 ≤ s ≤ t.

The arrival curve is modeled from the input traces and the traffic configuration, combined in the triple (p, j, d), where p is the period of the incoming streams, j denotes the jitter, and d is the minimum inter-arrival distance of traffic in the specified flow. From these parameters, the lower and upper arrival curves α^l and α^u are constructed as in [48]. These bounds mean that in any time interval Δ, at least α^l(Δ) and at most α^u(Δ) stream events modeled by α(t) arrive. The arrival curve α(t) can be represented by the simple schematic diagram in Figure 5a. For the worst-case latency computations, the upper bound of the arrival curve is considered.
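For intuition, the ⊗ and ⊘ operations can be sketched on discretely sampled curves; this sampling-based form is a simplification for illustration, not the continuous-time definition used in the derivations:

```python
def minplus_conv(f, g):
    """(f ⊗ g)(t) = inf over 0 <= s <= t of f(s) + g(t - s)."""
    n = min(len(f), len(g))
    return [min(f[s] + g[t - s] for s in range(t + 1)) for t in range(n)]

def minplus_deconv(f, g):
    """(f ⊘ g)(t) = sup over u >= 0 of f(t + u) - g(u), truncated to the
    sampled horizon (an under-approximation of the true supremum)."""
    n = len(f)
    return [max(f[t + u] - g[u] for u in range(n - t)) for t in range(n)]

# Convolving a slowly growing curve with a faster one returns the slower one.
print(minplus_conv([0, 1, 2, 3], [0, 2, 4, 6]))  # [0, 1, 2, 3]
```

The truncation in `minplus_deconv` is why real NC tools work with closed-form curve classes (e.g., piecewise-linear) rather than finite samples.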
The service curve β(t) describes the transmission availability of the network resources, as modeled by the departure process R*(t) of a stream. This process is the cumulative function at the egress switch port, counting the bits that can be served by the related node up to time t, so that [48]

R*(t) ≥ (R ⊗ β)(t).

In a given time interval ω, where the switch can serve the arrived streams at a rate C, the lower and upper bounds of the service curve are specified depending on the network resource availability. Figure 5b shows an example of service curve bounds for a periodic time division multiple access (TDMA) resource.
In each node, the traffic is expressed by the arrival curve at the ingress port and served by the egress port's service curve. The worst-case latency boundaries are obtained by considering the maximum distance (D max ) between the upper-bound arrival curve (α u (t)) and lower-bound service curve (β l (t)) [48], as represented in Figure 6.
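The D_max bound can likewise be sketched by sampling both curves; the burst/rate arrival curve and the rate-latency service curve below are illustrative assumptions, not values from the paper's use case:

```python
def d_max(alpha_u, beta_l):
    """Maximum horizontal distance between curves: for each t, the earliest
    s >= t with beta_l[s] >= alpha_u[t]; assumes beta_l catches up with
    alpha_u within the sampled horizon."""
    worst = 0
    for t, a in enumerate(alpha_u):
        s = next(s for s in range(t, len(beta_l)) if beta_l[s] >= a)
        worst = max(worst, s - t)
    return worst

# alpha^u(t) = 2 + t (burst 2, rate 1); beta^l(t) = max(0, 2*(t - 1))
# (service rate 2, latency 1): the worst case is the initial burst
# waiting out the service latency.
alpha = [2 + t for t in range(10)]
beta = [max(0, 2 * (t - 1)) for t in range(14)]
print(d_max(alpha, beta))  # 2
```

Because the service rate exceeds the arrival rate here, the backlog drains and the horizontal gap shrinks after the initial burst, which is why the bound is attained at t = 0.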
Figure 5. Examples of upper and lower bounds: (a) the arrival curve and (b) the service curve for a periodic time division multiple access (TDMA) system.

Worst-Case Latency Analysis of the FWOS Model
To analyze the worst-case latency performance for a TT frame of a given priority class, the arrival and service curves must be defined. The arrival curve follows the limitations described in the previous section, and the service curve has to be determined from the predefined GCL schedule. For the worst-case analysis, the service curve requires finding the duration of the guaranteed-service (contention-free) intervals for the corresponding queue and the maximum waiting time needed to serve that frame within these intervals. This section presents the required calculations for the proposed scheduling algorithm, covering all overlapping scenarios between TT open windows. First, we compute the length of the contention-free interval based on all pessimistic cases. Then, the maximum waiting time is determined for every selected node. Finally, these calculations are used to define the service curve and derive the worst-case end-to-end latency analytically.

Length of the Contention-Free Interval
Servicing the arriving traffic in each node is based on the open-window interval of the associated priority queue. Under flexible overlapping between the open windows of the TT queues, each open window serves the arriving traffic according to more stringent scheduling constraints. In particular, these windows are divided into three different interval types: block, contention-based, and contention-free intervals. (i) Block intervals represent guard-band intervals, where the remaining time is not enough to transmit a whole frame. (ii) Contention-based intervals represent an aggregation of time slots where the related window overlaps with another TT window and the priority rule or the non-preemption technique may prevent the frame from being served. Although the targeted traffic may obtain the required service during contention-based intervals, safe latencies are formulated based only on contention-free intervals. (iii) Contention-free intervals represent the guaranteed service intervals that must be specified to determine the worst-case latency bounds. To define the contention-free intervals, we have to specify all block and contention-based intervals. Block intervals are located at the end of each open window, but contention-based intervals depend on the overlapping with other TT windows. Thus, the overlapping must be comprehensively analyzed to specify these intervals.
As mentioned, three overlapping aspects have to be considered: priority, position, and the OR. Accordingly, the latency of TT traffic is evaluated and discussed below, under all these aspects to give a clearer view of system performance with a variety of relevant parameters.

Opening-Edge Overlapping
As mentioned, we consider Q_k^h as the targeted TT queue for the latency calculations in any node h. Then, to include the overlapping at the opening edge of the Q_k^h window, we assume that its gate opens while another TT gate is already open, as depicted in Figure 7 (opening-edge overlapping with higher- and lower-priority windows).
The associated overlapping ratio is determined analogously to the OR definition above, using the opening-edge overlap length L_{k,m}^{h,B,i}. This overlapping may occur from higher- and/or lower-priority queues; the related latency effects are discussed separately.
Higher-priority overlapping at the opening edge: The overlapping ratio between the Q_k^h window and a higher-priority Q_{k^-}^h window (k^- ∈ {1, ..., k − 1}) at the opening edge can be determined using Equation (6) as OR_{k,k^-}^{h,B,i}. Thus, the most decisive influence from the higher-priority overlapping at the opening edge, as shown in Figure 7, is included by considering only the queue that has the largest overlapping ratio with the Q_k^h window:

OR_k^{h,B,i,max} = max_{k^- ∈ {1,...,k−1}} OR_{k,k^-}^{h,B,i}.

In the worst case, servicing Q_k^h frames is not guaranteed in a time slot if any higher-priority window is open in the same slot. This reduces the contention-free interval, which then starts at the closing time of the higher-priority window with the largest overlapping.

Lower-priority overlapping at the opening edge: Similarly, based on the window overlap shown in Figure 7, the largest overlapping ratio between the Q_k^h and Q_{k^+}^h windows (k^+ ∈ {k + 1, ..., N_q}) at the opening edge is defined analogously. Although the Q_k^h frame has priority over Q_{k^+}^h frames, it has to wait for a Q_{k^+}^h frame already in transmission to complete if the non-preemption technique is applied between TT transmissions. Accordingly, the Q_k^h frame cannot be transmitted until the largest lower-priority frame, of size f_{k^+}^{h,max}, has passed. Thus, the i-th contention-free interval is reduced at the opening edge by

d_{k^+}^{h,np,i} = f_{k^+}^{h,max} / C^h.
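A small numeric sketch of the two opening-edge effects; the link rate, frame size, and OR values are assumed examples, not parameters from the evaluated use case:

```python
C_h = 1e9 / 8          # assumed 1 Gbit/s link rate, in bytes per second
f_max_lower = 1522     # assumed largest lower-priority frame, in bytes

# Non-preemption delay at the opening edge: the Q_k frame waits for the
# in-flight lower-priority frame to finish, d = f_max / C.
d_np = f_max_lower / C_h
print(f"d_np = {d_np * 1e6:.2f} us")  # d_np = 12.18 us

# Higher-priority effect: only the queue with the largest OR matters.
or_higher = {1: 0.10, 2: 0.25}        # assumed opening-edge ORs for k- = 1, 2
or_max = max(or_higher.values())
print(f"worst higher-priority OR = {or_max}")  # 0.25
```

The max-size frame appears here because, without preemption, the worst case is the lower-priority frame starting transmission just before the Q_k gate opens.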

Closing-Edge Overlapping
Similar to the opening-edge overlapping case and as depicted in Figure 8, the length of the overlapping interval between Q h k and any other TT queue (Q h m) at the closing edge can be found analogously.
Figure 8. Closing-edge overlapping with higher- and lower-priority windows.
The relevant overlapping ratio between the Q h m and Q h k windows follows from Equation (6). Higher-priority overlapping at the closing edge: The strongest influence from higher-priority overlapping (OR h,E,i k,k −) (1 ≤ k − < k) at the closing edge, as shown in Figure 8, is included by considering the maximum overlapping ratio. In the same manner as in the opening-edge consideration, since the Q h k − frame has priority over the Q h k frame, service is not guaranteed for Q h k frames during the overlapping interval. Thus, the ending time of the i-th Q h k contention-free interval under higher-priority overlapping at the closing edge is advanced accordingly.

Lower-priority overlapping at the closing edge: Similarly, we can determine the largest OR between the Q h k and Q h k + windows (k < k + ≤ N q) at the closing edge. As Q h k has a higher priority than the k + queues, overlapping at the closing edge does not affect the contention-free interval (W h,i k). However, for a particular frame-arrival case, lower-priority overlapping can affect the maximum waiting time of the associated frame, as we discuss next.

Overall Effects
Based on the overlapping considerations discussed above, the starting and ending times of the contention-free Q h k window are influenced by several parameters. By accumulating all these aspects in the worst-case computations, the starting and ending times are constrained accordingly, which yields the length of the Q h k contention-free window in the i-th cycle. By considering the i-th open window as a benchmark, as shown in Figure 9, the relative offset of the i-th contention-free window from the opening edge can also be given.
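The accumulated worst-case bookkeeping can be condensed into a short sketch. The function and shift names are illustrative; the shifts stand for the opening- and closing-edge reductions derived above:

```python
# Minimal sketch of the "overall effects" computation: the contention-free
# interval is whatever remains of the open window after the worst-case
# opening- and closing-edge reductions.

def contention_free(window, start_shift, end_shift):
    """Length and relative offset of the contention-free interval.

    start_shift: worst-case delay of the usable start past the opening edge
                 (higher-priority overlap + non-preemption penalty).
    end_shift:   worst-case advance of the usable end before the closing edge
                 (higher-priority overlap at the closing edge).
    """
    t_open, t_close = window
    start = t_open + start_shift
    end = t_close - end_shift
    length = max(0.0, end - start)   # the window may be fully consumed
    rel_offset = start - t_open      # offset from the opening edge
    return length, rel_offset

print(contention_free((100.0, 200.0), 30.0, 20.0))  # → (50.0, 30.0)
```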

In addition, the relative offset of the j-th contention-free window, considering the i-th window as a benchmark, is defined in the same manner.
Maximum Waiting Time Consideration
The waiting time represents the time delay that the arrived frame experiences from the arrival instance to the starting time of the first contention-free window. The maximum waiting time for the Q h k frame before getting service considers the most pessimistic arrival instance. In general, the maximum waiting time until the chance to serve the frame is bounded by the earliest frame-arrival instance in the selected node ((t h,arrive k) earliest) and the starting time of the nearest contention-free interval for the associated queue (t h,B k). As discussed in [13], the earliest arrival instance depends on the selected node's order. The arrival instance in the first node (source), t FN,arrive k, is arbitrary and can occur at any moment during the T FN k interval. In the non-first nodes (the selected switches in the route), however, the frame arrival is bounded by the open-window interval of the corresponding queue in the previous node.
In the first node, the worst arrival instance happens when the first frame arrives at the end of the associated contention-free interval (W FN,i k) and the remaining time is not enough to transmit it, as depicted in Figure 10. Thus, this frame waits for the next contention-free window to be transmitted. A more pessimistic arrival case happens when the Q FN k frame arrives at the end of the contention-free interval while a lower-priority frame is in the transmission status. Even if there is enough time to transmit the Q FN k frame, it has to wait for that lower-priority frame, as shown in Figure 10, because the non-preemption technique prevents the Q FN k frame from interrupting it. After the transmission of the Q FN k + frame finishes, the remaining time is not enough to transmit the Q FN k frame. Accordingly, the overlapping between the Q FN k window and lower-priority windows directly increases the maximum waiting time at the first node.
Figure 10. The maximum waiting time at the first node.

For the non-first node h, the arrival instance is bounded by the starting times of the contention-free intervals in h and in all preceding nodes connected to the h input ports ((h − 1) 1, . . . , (h − 1) IN h), as shown in Figure 11, where IN h represents the number of input ports in h. In any case, the frame cannot arrive at h outside these boundaries. As depicted in Figure 11, the earliest frame arrives at (t h,arrive k) earliest, which equals the starting time of the first contention-free window in the preceding node (h − 1) scheduled after the previous contention-free interval, added to the selection and propagation delays. The frame selection delay is the time required to deliver the selected frame to the physical link, and the propagation delay is the time required for a frame to transfer from h − 1 to h. Note that when the frame arrives at the node, a switch processing delay (D h proc) is experienced through the switch input buffer, switching fabric, and priority filtering at node h. This delay is incurred even if the frame arrives within the contention-free interval in h. Thus, it is worth noting that the lowest bound of the maximum waiting time is limited by the propagation and switch processing delays.
Figure 11. The maximum waiting time at the non-first node in the selected path.
Note that the switch processing delay is counted even when the maximum waiting time is greater. Thus, this delay should be removed from the overall latency expression. Although the authors in [13] considered the frame-arrival limitations in the non-first nodes, their latency calculation counted the switch processing delay twice: indirectly in the maximum waiting time calculations and again in the overall latency expression. Even though this delay is small and constant, double-counting it yields more pessimistic latencies when the maximum waiting time is large.
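As a rough illustration of the bound just discussed, a sketch with hypothetical values (the function and parameter names are ours, not the paper's notation):

```python
# Hedged sketch of the waiting-time bound: the maximum waiting time at a
# non-first node is never smaller than the propagation plus switch
# processing delays, which must then not be counted again in the overall
# latency (the double-counting issue noted for [13]).

def max_waiting_time(t_arrive_earliest, t_cf_start, d_prop, d_proc):
    """Worst-case wait from earliest arrival to the contention-free start."""
    wait = t_cf_start - t_arrive_earliest
    # Lower bound: the frame always pays propagation + processing delays.
    return max(wait, d_prop + d_proc)

# Earliest arrival 95 µs, contention-free start 130 µs, 1 µs propagation,
# 4 µs processing: the wait is 35 µs, well above the 5 µs floor.
print(max_waiting_time(95.0, 130.0, 1.0, 4.0))  # → 35.0
```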

Service Curve Determination
As discussed before, the targeted traffic obtains the required service according to its contention-free intervals in each node. The length of the contention-free interval varies from one open window to another, depending on the related overlapping situations, as shown in Figure 10. However, the configurations repeat after the hyper-period (T h GCL), which is the least common multiple (LCM) of the cycles of all TT queues [13]. Therefore, the number of Q h k contention-free intervals in the hyper-period is given by M = T h GCL /T h k.

By taking the i-th cycle as a reference, the upper and lower bounds of the service curve for the Q k traffic can be given following [12]. The terms W h,j k and WT h,i k are determined using Equations (22), (27), and (28). Note that β h T,L (t) is formulated for the TDMA protocol [12], where C h represents the data rate of the corresponding physical link.

Depending on the arrival instance of the earliest frame, the service curve can be determined. As depicted in Figure 12, the best servicing (upper service curve) is obtained if the first frame arrives at the node's ingress port at the instant the associated contention-free interval starts. On the other hand, the frame faces the worst resource service (lower service curve) if it arrives at the end of the contention-free interval, i.e., there is no guaranteed service for the frame in that open window. However, the aggregated bounds of the service curve obtained in Equations (29) and (30) can differ slightly if we change the reference window in the calculation. Therefore, the upper and lower bounds of the service curve for Q h k traffic are, respectively, the largest and smallest instantaneous values over all possible service curves [12]. An example of the upper and lower service curves is depicted in Figure 12.
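The hyper-period and a TDMA-style service bound can be sketched as follows. The closed-form staircase used here is a common textbook form of the TDMA lower service curve for one fixed window of length L per cycle T; the paper's per-cycle windows W h,i k generalize it to varying lengths:

```python
from math import gcd, floor, ceil
from functools import reduce

def lcm(values):
    """Least common multiple, used for the GCL hyper-period T_GCL."""
    return reduce(lambda a, b: a * b // gcd(a, b), values)

def tdma_lower_service(t, C, T, L):
    """Lower TDMA service curve for a queue with one guaranteed window of
    length L per cycle T on a link of rate C (bits per µs).

    Common closed form of the TDMA lower bound; an assumption here, since
    the paper's windows vary per cycle.
    """
    return C * max(0, floor(t / T) * L, t - ceil(t / T) * (T - L))

cycles = [50, 100, 250]   # example TT cycles in µs (illustrative values)
T_gcl = lcm(cycles)       # hyper-period
M = T_gcl // cycles[0]    # number of windows of the first queue per hyper-period
print(T_gcl, M)           # → 500 10
```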

Worst-Case End-to-End Latency for Q k Traffic
The network calculus (NC) approach combines the upper bound of the arrival curve with the lower bound of the service curve to compute the upper bound of the end-to-end latency. For every node h in the selected networking path, the upper arrival curve α h,u k (t) and the lower service curve β h,l k (t) have to be defined to calculate the maximum latency experienced in each link. The upper arrival curve at the source node can be found using Equation (3). After that, in every node h along the selected path, the aggregate upper arrival curve at the ingress port for Q h k traffic can be determined from the ingress arrival curve at the previous node, where WCD h−1 k is the maximum latency experienced by Q h−1 k traffic in h − 1. This latency equals the largest horizontal distance between the upper arrival curve at the ingress port of node (h − 1) and the lower service curve of the related resource [13]. Then, we can calculate the upper bound of the end-to-end latency as the summation of the maximum latencies experienced in all nodes along the selected path (Equation (38)), where N is the number of nodes that the k-th-priority traffic passes through. According to the mathematical analysis presented in Section 6, the FWOS algorithm can be summarized as follows:
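The horizontal-deviation computation can be illustrated numerically. This sketch assumes a token-bucket arrival curve and a rate-latency service curve purely for checkability (the paper uses the staircase service curves above); for these shapes the delay bound has the closed form T + b/R:

```python
# Numeric sketch of the NC delay bound: the WCD is the largest horizontal
# distance between the upper arrival curve and the lower service curve.

def wcd_horizontal_deviation(alpha, beta, t_max, dt=0.01):
    """Largest horizontal gap h such that alpha(t) <= beta(t + h)."""
    worst = 0.0
    t = 0.0
    while t <= t_max:
        a = alpha(t)
        # Advance s until the service curve catches up with alpha(t).
        s = t
        while beta(s) < a and s <= 10 * t_max:
            s += dt
        worst = max(worst, s - t)
        t += dt
    return worst

burst, rate = 4000.0, 100.0   # token bucket: bits, bits/µs (assumed values)
R, T = 500.0, 2.0             # rate-latency service: bits/µs, µs
alpha = lambda t: burst + rate * t
beta = lambda t: max(0.0, R * (t - T))
# Closed form for these curves: T + burst/R = 2 + 8 = 10 µs.
print(round(wcd_horizontal_deviation(alpha, beta, 20.0), 1))  # → 10.0
```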


Case Study and Experimental Setup
In this section, we evaluate our proposed model's performance on a TSN system based on a realistic vehicle use case, as shown in Figure 13a. A simplified representative network model for the vehicle is presented in Figure 13b with two switches and four end systems. The targeted end systems can be sensors, cameras, or actuators used in connected vehicles to collect relevant information from the surrounding environment. We assume that physical links connect these network elements with a 1 Gbps data rate in all experiments. As each networking node has to differentiate the traffic into multiple priority queues, the initial GCL for the associated queues is listed in Table 2, with full isolation between TT open windows. Note that Q h 4 is considered the targeted queue to evaluate our model. Table 3. GCL implementation in one-sided overlapping scenarios with higher- and lower-priority windows.

Case | Link | Priority | Open-Window Interval (t h,o m, t h,c m) (µs)

Cases 1 and 2: In these cases, the lower-priority overlapping is evaluated by varying the OR from 0 to 1 with different frame sizes. Figure 14a,b shows the worst-case latency performance for Q h 4 traffic under opening-edge and closing-edge overlapping, respectively. As expected, the lowest WCD is obtained under full isolation between the Q h 4 and Q h 4+ windows (OR = 0).
In Figure 14a, the WCD increases with the OR at the opening edge because the Q h 4 frame has to wait for the lower-priority frame that is in the forwarding status. The increase continues until the time required to transmit the largest lower-priority frame is completed, assumed separately to be 200, 300, and 500 bytes. These durations are equivalent to 8%, 12%, and 20% overlapping ratios. If the OR increases beyond the largest frame size, the latency stays fixed until the guard band interval, equal to a 16% OR at the end of the open-window interval, using Equations (1) and (18). In the guard band interval, larger lower-priority frame sizes increase the latency by a greater percentage than smaller sizes, as shown in Figure 14a, because the maximum waiting time increases proportionally, as can be noticed from Equations (26) and (27).
For closing-edge overlapping, as shown in Figure 14b, the effect of lower-priority overlapping appears after passing the guard band interval, which equals 16% OR with ( f h,max 4 = 400 bytes). After the guard band interval, the latency increases until passing the lower-priority frame's length, and then it remains constant, as depicted in Figure 14b.
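The quoted OR equivalences can be verified with a short calculation. The percentages imply a 20 µs open-window length on the 1 Gbps link (our inference from the stated ratios; the actual GCL values are listed in Table 2):

```python
# Quick check of the OR equivalences quoted above: frame transmission time
# as a fraction of an assumed 20 µs open window at 1 Gbps.

LINK_RATE = 1e9     # bits/s (1 Gbps, as stated in the setup)
WINDOW_US = 20.0    # assumed open-window length in µs

def frame_or_percent(frame_bytes):
    """Overlapping ratio (%) consumed by one frame's transmission time."""
    tx_us = frame_bytes * 8 / LINK_RATE * 1e6   # transmission time in µs
    return 100.0 * tx_us / WINDOW_US

for size in (200, 300, 500):
    print(size, round(frame_or_percent(size), 2))  # → 8.0, 12.0, 20.0
```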
Cases 3 and 4: In these cases, the influence of higher-priority overlapping at the opening and closing edges is evaluated in Figure 15a,b, respectively, with Q h 4 frame sizes of 350, 400, and 500 bytes.

Comparing the results in Figure 14a,b with those in Figure 15a,b shows that higher-priority overlapping has a greater impact on the latency performance than lower-priority overlapping. In addition, Figure 15a,b shows that the Q h 4 frame experiences higher WCD bounds under opening-edge overlapping than under closing-edge overlapping. This happens because closing-edge overlapping does not affect the guard band interval, where the service is not guaranteed (block interval). As evidence, both overlapping situations, i.e., opening and closing, exhibit the same behavior with a horizontal shift equal to the guard band duration. For example, with a 500-byte frame size, the latency starts to increase at almost 40% overlapping in the closing-edge case and at 20% overlapping (40% minus the 20% guard band duration) in the opening-edge case. Moreover, the size of the Q h 4 frame plays a significant role in the overall WCD performance, especially at large ORs. For instance, Q h 4 frames of 350, 400, and 500 bytes experience WCDs of 289.3, 465.5, and 782 µs, respectively, under a 40% OR at the opening edge.

Implementing Overlapping Constraints Based on Critical Time Deadlines
Based on the apparent fluctuation in TT latency bounds under window-overlapping situations, we can add a new constraint to the overall TSN scheduling limitations formulated in [42]. As critical-time traffic requires different WCD deadlines, each TT frame should be forwarded to a priority queue in which it experiences an end-to-end latency no higher than its deadline. We can differentiate between TT priority queues according to different overlapping ratios between the corresponding windows. Moreover, the overlapped queue's priority is significant in determining which maximum allowable overlapping ratio guarantees the targeted latency deadline. Based on the results presented in Figure 16, each WCD bound can be met if the related TT overlapping does not exceed a specific limit. These OR limits are differentiated in Figure 16 with respect to the priority of the overlapped queues and the overlapping position.

It can be noticed that lower-priority overlapping has a small effect on the WCD boundaries, with a trivial difference between the opening-edge, closing-edge, and two-sided overlapping cases, as shown in Figure 16. In particular, any latency deadline slightly above the WCD bound of the full isolation case (282.7 µs) can be met under lower-priority overlapping; at most, a deadline of 287.7 µs can be met under any lower-priority overlapping. However, large ORs are not acceptable in the GCL design because, from the overlapped queue's perspective, they act as higher-priority overlapping.
In contrast, higher-priority overlapping has the highest impact on the latency, and stricter overlapping constraints should be guaranteed. Opening-edge overlapping has a larger influence than closing-edge overlapping. For example, to meet the 400 µs latency deadline, the highest allowable OR at the opening and closing edges is 38.62% and 54.62%, respectively. Under two-sided higher overlapping, the performance is almost the same as under closing-edge higher overlapping, and the latency deadline of 400 µs is met until 54.31% overlapping.
For mixed overlapping cases, the OR limits are between pure lower and pure higher overlapping. For instance, the latency deadline of 400 µs is guaranteed until 76.84% OR with higher priority at the opening edge and lower priority at the closing edge, and until 85.2% with the opposite.
As noted, the lowest WCD bound that can be achieved is 282.7 µs. This limit is based on specific framing and timing settings. Adjusting these values increases/decreases the WCD bounds accordingly, as examined in the previous results. Thus, it is worth noting that Figure 16 can be considered a reference chart for window overlapping vs. latency deadlines based on the above networking and data settings. For each specific required setting, we can easily implement related overlapping vs. the latency reference chart.
Accordingly, in the general case, we can implement a relaxed window-overlapping constraint that guarantees meeting the latency requirements for TT traffic and can be used to enlarge the overall solution space. Constraint definition: Consider Q h k as a targeted TT queue, where k ∈ {1, 2, . . . , N q} and h is a node in the selected path (h ∈ {1, 2, . . . , N}), denote the lower-priority queues by k +, and assume that the largest size of the Q h k + frames is f h,max k +. The overlapping ratios OR h,edge1 k,m and OR h,edge2 k,n are then bounded by the limits obtained from the reference chart, where m and n represent the priorities of the queues that overlap with the Q h k window in every node h, P(m), P(n) ∈ {L, H} (L lower priority and H higher priority), and edge1, edge2 ∈ {B, E} (B opening edge and E closing edge).
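As an illustration of the constraint, the maximum allowable ORs reported for the 400 µs deadline (Figure 16) can be organized as a simple lookup; the key encoding ('H'/'L', 'B'/'E') is our shorthand for the paper's P(·) and edge notation:

```python
# Lookup of the maximum allowable OR (%) that still meets the 400 µs
# deadline, using the values reported in the case study (Figure 16).
# The tuple keys are a shorthand encoding, not the paper's notation.
MAX_OR_400US = {
    ('H', 'B'): 38.62,      # higher priority, opening edge
    ('H', 'E'): 54.62,      # higher priority, closing edge
    ('H', 'BE'): 54.31,     # higher priority, both edges
    ('HB', 'LE'): 76.84,    # higher at opening, lower at closing
    ('LB', 'HE'): 85.2,     # lower at opening, higher at closing
    ('L', 'ANY'): 100.0,    # lower-priority overlap anywhere
}

def admissible(case, requested_or):
    """True if the requested OR (%) still meets the 400 µs deadline."""
    return requested_or <= MAX_OR_400US[case]

print(admissible(('H', 'B'), 30.0), admissible(('H', 'B'), 45.0))  # → True False
```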

Comparison with Related Works
As we introduce some modifications to the maximum waiting time expression presented in [13], the related comparison has to be considered here. The comparison is performed under complete isolation among windows, as shown in Table 1. In [13], the presented algorithm dramatically reduced the worst-case latency bounds compared with the results in [11]. Hence, our proposed model (FWOS) is compared with those in [13] (STNet) and [11] (STNode) based on two different scenarios, as shown in Figure 17a,b.
In Figure 17a, the comparison is performed under different window offsets between adjacent nodes (OS h,h−1 4). FWOS gives the lowest WCDs. For a three-hop connection, FWOS reduces the WCD by 55% on average and 60.6% at maximum compared with STNode, and by 3.03% on average and 3.4% at maximum compared with STNet. The WCD enhancement of the FWOS algorithm increases in larger use cases, as illustrated in Figure 17b. Compared with STNet, FWOS reduces the WCD by 9%, 11.5%, and 12.3% for connections with 10, 20, and 30 hops, respectively. The interpretation is that our latency improvement depends linearly on the number of selected nodes, as can be noticed from Equations (28) and (38): our model reduces the end-to-end latency bound by a value equal to the propagation and switch processing delays in each link. Note that the STNode model is excluded from Figure 17b as it produced non-comparable latencies. The STNode model performs separate latency computations per node without considering the relative positional offsets between adjacent nodes; undoubtedly, it yields colossal WCDs compared with the others.


Conclusions
Several mixed-criticality domains, such as automotive and automated industries, require deterministic latency performances. Missing latency deadlines in such applications may generate dangerous situations for humans or considerable economic waste in highly time-sensitive applications. To support such applications, TSN technology is defined based on several TT-Ethernet amendments. Most of these enhancements have been introduced mainly to serve time-triggered traffic targeting zero jitter delay and bounded end-to-end latency. However, there is a concern that soft real-time traffic could miss their requirements under strict timing constraints in GCL designs.
This paper proposed a flexible window-overlapping scheduling (FWOS) algorithm with a complete timing analysis for the WCD using the network calculus approach, considering relative window offsets between adjacent nodes. Our model expresses all overlapping situations between TT windows, leading to a more generic timing analysis. Based on a realistic vehicular use case, the numerical results confirm that the impact of higher-priority overlapping on the WCD is larger than that of lower-priority overlapping. Further, overlapping at the opening edge generates larger WCDs than at the closing edge. Thus, under guaranteed latency deadlines, the overlapping ratio between TT windows must be bounded accordingly. Based on this, the FWOS model can be applied to optimize the maximum allowable overlapping ratio (OR) that ensures TT latency requirements and maximizes the bandwidth available for unscheduled transmissions. For example, to meet the 400 µs latency deadline under specific timing assumptions, the highest allowable OR is 38.62% with higher priority (opening edge), 54.62% with higher priority (closing edge), 54.31% with higher priority (opening and closing edges), 76.84% with higher priority (opening edge) and lower priority (closing edge), 85.2% with lower priority (opening edge) and higher priority (closing edge), and 100% in any case of lower priority. Accordingly, more relaxed GCLs can be implemented to fit the requirements of incoming data from vehicle-related devices, such as sensors, cameras, and actuators. Compared with previous works, FWOS obtains less pessimistic WCDs, even under fully isolated window scheduling.