Article

Two-Dimensional Dynamic Logic Resource Allocation for Scalable RIS Channel Emulation

1 State Key Laboratory of Advanced Rail Autonomous Operation, Beijing Jiaotong University, Beijing 100044, China
2 Nanjing Rongcai Transportation Technology Research Institute Co., Ltd., Nanjing 210012, China
3 School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing 100044, China
4 Space Information Research Institute and Zhejiang Key Laboratory of Space Information Sensing and Transmission, Hangzhou Dianzi University, Hangzhou 310005, China
* Author to whom correspondence should be addressed.
Sensors 2026, 26(3), 813; https://doi.org/10.3390/s26030813
Submission received: 25 November 2025 / Revised: 1 January 2026 / Accepted: 13 January 2026 / Published: 26 January 2026

Abstract

This paper addresses the critical scalability challenge in Hardware-in-the-Loop (HIL) channel emulation for massive RIS-assisted 6G environments. We propose a Two-Dimensional Dynamic Logic Resource Allocation (2D-DLRA) architecture that decouples physical RF ports from baseband processing resources through hierarchical pooling at both the session level and the multipath level. By jointly virtualizing Logical Units (LUs) and Multipath Processing Units (MPUs), the proposed architecture overcomes the dual inefficiency of port underutilization and path-level sparsity inherent in conventional static designs. A rigorous analytical framework combining hierarchical queuing theory and non-cooperative game theory is developed to characterize system capacity, blocking probability, and user contention under heterogeneous workloads. Simulation results demonstrate that, under a strict QoS constraint of 1% blocking probability, the proposed 2D-DLRA architecture achieves a multi-fold increase in supported user capacity compared to static allocation with the same hardware resources. Moreover, for an end-to-end emulation error threshold of 3%, 91.8% of users meet the QoS requirement, compared to only 73.6% in static architectures. The results further show that dynamic pooling enables near-saturated hardware utilization, in contrast to the single-digit utilization typical of static designs in sparse RIS scenarios. These findings confirm that 2D-DLRA provides a scalable and hardware-efficient solution for large-scale RIS channel emulation, offering practical design guidelines for next-generation 6G HIL testing platforms.

1. Introduction

The evolution towards sixth-generation (6G) wireless networks is driving a paradigm shift from merely adapting to the wireless channel to actively controlling it, materialized by the deployment of Reconfigurable Intelligent Surfaces (RISs) [1,2,3,4,5,6,7,8,9,10]. By leveraging massive arrays comprising hundreds or even thousands of low-cost reflecting elements, RISs empower the creation of Smart Radio Environments (SREs) with programmable signal propagation [11,12]. However, this technological leap poses an unprecedented challenge for hardware-in-the-loop (HIL) channel emulation, as the required emulation scale increases dramatically [13,14]. Validating RIS-assisted systems requires emulating channels with a massive number of spatial nodes, yet the physical propagation environment often exhibits sparsity in the angular and delay domains. This unique characteristic—massive ports but sparse effective paths—renders the architectural scalability of channel emulators the primary bottleneck in the design and optimization of next-generation wireless technologies [15,16,17,18,19,20].
However, conventional emulator architectures are predicated on a paradigm of static resource allocation, which leads to a profound, two-dimensional inefficiency when applied to RIS scenarios [21,22,23]. The first dimension arises from the rigid, one-to-one mapping of physical RF ports to dedicated processing chains (Logical Units, LUs) [24], causing costly FPGA resources to be “stranded” in scenarios with sporadic user activity [25,26,27]. As illustrated in Figure 1, given that an RIS may contain thousands of elements, statically allocating a dedicated, power-hungry FPGA chain to each element is prohibitively expensive and physically infeasible. A second, more subtle dimension of inefficiency exists within each LU, where a fixed, worst-case number of Multipath Processing Units (MPUs) is allocated [28], regardless of the variable path requirements of different channel models [29]. However, RIS-assisted channels typically exhibit high sparsity, where each element contributes only to a limited number of propagation paths. This compounding inefficiency, where over 95% of computational resources can be idle, renders traditional architectures economically infeasible for future massive connectivity scenarios [28]. To address this critical scalability challenge, and inspired by theories of socially optimal resource allocation and queue management [30,31], this paper proposes a novel Two-Dimensional Dynamic Logic Resource Allocation (2D-DLRA) architecture tailored for massive and sparse connectivity scenarios like RISs. Parallel to advancements in hardware emulation, the theoretical analysis of these complex systems, particularly for resource management and Quality of Service (QoS) assurance, has become critical [32,33]. The use of queueing theory, in particular, has become a widespread and powerful tool for modeling traffic, optimizing data transfer, and analyzing performance in diverse network environments such as IoT and mobile cloud computing [34,35,36].
While prior works have explored dynamic allocation [37], our approach introduces a hierarchical, resource-pooled paradigm that fundamentally redesigns the allocation mechanism at two distinct granularities. This architecture abandons rigid static mapping to perform on-demand, fine-grained allocation of both session-level Logical Units (LUs) and path-level Multipath Processing Units (MPUs), thereby simultaneously resolving both dimensions of the inefficiency problem. The core contributions of this work are multifaceted and summarized as follows:
  • We propose a novel 2D-DLRA architecture that overcomes the two-dimensional inefficiency of static designs through a hierarchical, resource-pooled paradigm, enabling scalability of the channel emulator [38,39].
  • We develop a rigorous mathematical model for the 2D-DLRA architecture based on hierarchical queueing theory. This framework yields a rich set of analytical formulas for key performance indicators (KPIs), including blocking probability, resource utilization, and system capacity.
  • We introduce a non-cooperative game-theoretic model to analyze the micro-behavioral dynamics of resource contention among heterogeneous users or RIS subarrays, providing deep insights into the system’s Nash Equilibrium and fairness.
The remainder of this paper is organized as follows. Section 2 details the 2D-DLRA architecture and its operational mechanism. Section 3 establishes the performance modeling framework based on hierarchical queueing theory. Section 4 introduces the game-theoretic analysis of resource contention. Section 5 presents the comprehensive joint analysis and experimental validation results. Finally, Section 6 concludes the paper.

2. The 2D-DLRA Architecture and Operational Mechanism

This section establishes the foundational concepts of our proposed architecture. We begin with a formal analysis of the fundamental scalability challenges inherent in traditional emulator designs, precisely defining the two-dimensional nature of their inefficiency. This analysis serves as the primary motivation for the paradigm shift towards the hierarchical, resource-pooled architecture detailed in the subsections that follow.

2.1. The Scalability Challenge and the Static Allocation Bottleneck

2.1.1. The First Dimension of Inefficiency: Stranded Session-Level Resources

The conventional emulator architecture is characterized by a rigid, one-to-one mapping between its N physical Radio Frequency (RF) ports and an equal number of dedicated, independent baseband processing chains, which we term Logical Units (LUs). While simple to implement, this static design creates a fundamental bottleneck we refer to as the stranded session-level resource problem.
Let the state of the i-th RF port be a binary random variable $S_i \in \{0, 1\}$, where $S_i = 1$ if the port is active (i.e., a signal is present) and $S_i = 0$ if it is idle. The activity factor, or the probability of the port being active, is $p_{\text{act}} = P(S_i = 1)$. The total hardware cost of the emulation subsystem, $H_{\text{total}}$, is directly proportional to the number of physical ports, N, as each port requires a dedicated LU:
$$H_{\text{static}} \propto N$$
The utilization of the i-th LU, $\eta_i$, is equivalent to the activity factor of its corresponding port, $\eta_i = p_{\text{act}}$. The overall system resource utilization, $\eta_{\text{system}}^{\text{static}}$, is the average utilization across all LUs. Assuming statistically identical and independent user behavior, the expected number of active links is $E\big[\textstyle\sum_i S_i\big] = N \cdot p_{\text{act}}$. The overall resource utilization is therefore:
$$\eta_{\text{system}}^{\text{static}} = \frac{N_{\text{act}}}{N} = p_{\text{act}}$$
where $N_{\text{act}}$ is the number of active LUs and N is the total number of LUs.
This simple but powerful result reveals the core inefficiency: the system’s overall hardware utilization is capped by the activity factor of a single user. In typical wireless scenarios like V2X or massive IoT, the activity factor is very low (e.g., $p_{\text{act}} \approx 0.1$). This implies that for over 90% of the time, the expensive and power-hungry FPGA resources dedicated to each port are stranded—powered on but performing no useful computation. This directly limits the achievable port density for a given hardware budget and makes scaling to scenarios with thousands of sporadic links prohibitively expensive. This constitutes the first, and most commonly understood, dimension of the static allocation bottleneck.

2.1.2. The Second Dimension of Inefficiency: Stranded Path-Level Resources

A more subtle, yet equally critical, layer of inefficiency exists within each of the statically allocated LUs. An LU is primarily composed of a set of digital signal processing resources responsible for implementing the fading process for each multipath component of a channel model. We term these resources Multipath Processing Units (MPUs). A traditional LU is designed with a fixed, maximum number of MPUs, P, sufficient to handle a worst-case, highly complex channel model (e.g., P = 24 for a standardized 3GPP TDL model).
This fixed internal allocation creates the second dimension of resource stranding. The actual number of multipaths required by a channel model is not a constant; it is a random variable that depends on the propagation environment. Let the number of paths required by the channel model for the i-th link be the random variable $p_i$, with probability mass function $q_p = P(p_i = p)$ for $p \in \{1, 2, \ldots, P\}$. For example, a Line-of-Sight (LoS) channel might only require $p_i = 2$ paths, while a rich Non-Line-of-Sight (NLOS) channel might require $p_i = 20$ paths.
In a static architecture, even if a link i is active ($S_i = 1$), the utilization of the MPU resources within its dedicated LU is not 100%. The internal MPU utilization for an active link i, $\eta_{\text{MPU},i}$, is given by the ratio of required paths to available paths:
$$\eta_{\text{MPU},i} = \frac{p_i}{P}$$
The overall system-wide MPU utilization, $\eta_{\text{MPU}}^{\text{static}}$, is the expectation of this value, averaged over all LUs and all states (active/idle). Since the link state $S_i$ and the path requirement $p_i$ are independent random variables:
$$\eta_{\text{MPU}}^{\text{static}} = E\left[S_i \cdot \frac{p_i}{P}\right] = \frac{E[S_i] \cdot E[p_i]}{P} = \frac{p_{\text{act}} \cdot \bar{p}}{P}$$
where $\bar{p} = E[p_i] = \sum_{p=1}^{P} p \cdot q_p$ is the average number of multipaths required per channel model.
Equation (4) exposes the compounding inefficiency. The total utilization of the fine-grained computational resources is the product of two factors, both typically much less than 1: the user activity factor ($p_{\text{act}}$) and the average multipath requirement ratio ($\bar{p}/P$). For a scenario with $p_{\text{act}} = 0.1$ and an average path requirement of $\bar{p} = 8$ out of a maximum of $P = 24$, the total MPU utilization would be a mere $\eta_{\text{MPU}}^{\text{static}} = 0.1 \times (8/24) \approx 3.3\%$. This means that over 96% of the silicon resources are effectively wasted.
This two-dimensional inefficiency provides the clear and compelling motivation for a paradigm shift. A truly scalable and efficient architecture must not only share resources between users (addressing the first dimension) but must also allow for the fine-grained, on-demand allocation of computational resources based on the specific, variable demands of the channel models being emulated (addressing the second dimension). This is the foundational principle of our 2D-DLRA architecture.
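The compounding inefficiency described above is easy to verify numerically. The following minimal Python sketch evaluates the static MPU utilization for the worked example in the text ($p_{\text{act}} = 0.1$, $\bar{p} = 8$, $P = 24$); the function name is illustrative:

```python
# Illustrative check of the two-dimensional static-allocation inefficiency
# (parameter values taken from the worked example in the text).

def static_mpu_utilization(p_act: float, p_bar: float, P: int) -> float:
    """Overall MPU utilization of a static architecture:
    eta = p_act * (p_bar / P), the product of the user activity factor
    and the average multipath requirement ratio."""
    return p_act * (p_bar / P)

eta = static_mpu_utilization(p_act=0.1, p_bar=8, P=24)
print(f"static MPU utilization: {eta:.1%}")  # about 3.3%
```

The product structure of the formula makes clear why the two dimensions must both be addressed: improving either factor alone leaves the other as a cap on utilization.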

2.2. The 2D-DLRA Architecture: A Paradigm Shift to Hierarchical Pooling

To overcome the multi-dimensional inefficiencies of static allocation, we propose a fundamental paradigm shift in emulator architecture. The Two-Dimensional Dynamic Logic Resource Allocation (2D-DLRA) design moves away from rigid, dedicated hardware chains to a flexible, software-defined model based on the principles of decoupling and hierarchical resource virtualization. This approach allows for the on-demand, fine-grained allocation of computational resources at two distinct granularities, thereby maximizing hardware utilization and enabling unprecedented scalability [32,33,40].
The architecture is formally defined by its two-tiered pooling structure, as illustrated in Figure 2.

2.2.1. The Principle of Decoupling and Virtualization

The core philosophy of 2D-DLRA is the complete separation of the physical layer (the N RF ports) from the computational layer (the processing resources). We formally define the set of physical ports as $\mathcal{N} = \{1, 2, \ldots, N\}$ and the set of available Logical Units as $\mathcal{M} = \{1, 2, \ldots, M\}$.
In a static architecture, a fixed, bijective mapping function $\Phi_{\text{static}}: \mathcal{N} \to \mathcal{M}$ (where $N = M$) exists. In our 2D-DLRA paradigm, this mapping is dynamic and time-varying. We introduce a time-dependent mapping function $\Phi_L(t)$ that assigns a subset of active ports, $\mathcal{N}_{\text{active}}(t) \subseteq \mathcal{N}$, to a subset of available logical units:
$$\Phi_L(t): \mathcal{N}_{\text{active}}(t) \to \mathcal{M}_{\text{assigned}}(t)$$
where $\mathcal{M}_{\text{assigned}}(t) \subseteq \mathcal{M}$ and $|\mathcal{N}_{\text{active}}(t)| = |\mathcal{M}_{\text{assigned}}(t)| \le M$.
This decoupling enables virtualization: the N physical ports are presented to the user, while the M underlying logical units are abstracted into a shared, fungible pool.

2.2.2. Tier-1 Resource Pool: The Logical Unit (LU) Pool

The first tier of our architecture is the Logical Unit (LU) pool, which consists of M independent, reconfigurable processing engines. The primary purpose of this tier is to serve incoming, active communication links on a one-to-one basis. When a signal appears at an RF port, the system allocates one entire LU from this pool to service the link for its entire duration.
This pooling strategy directly addresses the stranded session-level resource problem described in Section 2.1.1. We can define a Port-to-Resource Ratio (PRR) as a measure of the system’s multiplexing capability:
$$PRR = \frac{N}{M}$$
A PRR greater than 1 indicates that the system is leveraging statistical multiplexing. For example, a system with $N = 32$ and $M = 16$ has a PRR of 2, implying it can support twice as many physical interfaces as it has processing chains, under the assumption that not all interfaces are active simultaneously. The performance of this tier—specifically, the probability that an incoming request finds no available LU ($P_{B1}$)—is governed by the total offered traffic load, a relationship that will be formally modeled in Section 3.

2.2.3. Tier-2 Resource Pool: The Multipath Processing Unit (MPU) Pool

The second tier of our architecture addresses the more subtle, internal dimension of inefficiency. Each of the M Logical Units contains a set of P reconfigurable Multipath Processing Units (MPUs), which are the fine-grained computational elements (e.g., complex multipliers, adders, interpolators) responsible for realizing individual fading paths.
Instead of treating these MPUs as captive resources within each LU, we introduce the concept of a system-wide virtualized MPU pool. Let the set of MPUs within the j-th LU be denoted by $\mathcal{P}_j$, where $|\mathcal{P}_j| = P$. The virtualized system pool, $\mathcal{C}$, is the union of all MPU resources across all LUs:
$$\mathcal{C} = \bigcup_{j=1}^{M} \mathcal{P}_j$$
The total capacity of this virtualized pool is the total number of MPUs in the system:
$$C = |\mathcal{C}| = M \times P$$
When a request from link i is assigned to an LU, the 2D-DLRA architecture does not statically allocate all P internal MPUs to it. Instead, based on the channel model’s specific requirement of $p_i$ paths, the system performs a second-tier dynamic allocation. It assigns a subset of MPUs, $\mathcal{C}_{\text{assigned},i} \subseteq \mathcal{C}$, of size $p_i$, to the task:
$$|\mathcal{C}_{\text{assigned},i}| = p_i \le P$$
This two-tiered, hierarchical pooling strategy ensures that resources are allocated on demand at both a coarse-grained (LU) and a fine-grained (MPU) level. This combats both dimensions of static allocation inefficiency, paving the way for a truly resource-aware and scalable channel emulation architecture. The performance of this second-tier allocation will be rigorously analyzed in Section 3.

2.3. Operational Mechanism of the 2D-DLRA System

The 2D-DLRA architecture is realized through a real-time, event-driven operational mechanism orchestrated by a Central Resource Manager (CRM). This mechanism translates the architectural principles of hierarchical pooling into a dynamic, state-aware workflow that handles the complete lifecycle of a service request, from initial signal detection to final resource de-allocation. The process can be formally described in three distinct stages.

2.3.1. Stage 1: Signal Detection and Requirement Characterization

The operational workflow is initiated by the arrival of a signal at one of the N physical RF ports. Each port, $i \in \mathcal{N}$, is continuously monitored by a lightweight signal detection module.
  • Signal Detection: The detection process is modeled as a hypothesis test to distinguish a valid signal from background noise. Let $x_i(t)$ be the digitized signal at the input of port i. The system tests:
    - $H_0$: Signal absent (only noise is present).
    - $H_1$: Signal present.
    A common method to decide between these hypotheses is an energy detector, where the decision statistic, $E_i$, is compared against a predefined threshold, $\gamma$:
    $$E_i = \frac{1}{T_{\text{obs}}} \int_{t - T_{\text{obs}}}^{t} |x_i(\tau)|^2 \, d\tau$$
    If $E_i > \gamma$, the system declares the presence of a signal, triggering the characterization stage.
  • Requirement Characterization: Upon detection of a signal, the system must characterize its resource requirements in the two dimensions of our architecture. Let the service request associated with the signal at port i at time t be denoted by $R_i(t)$. The characterization engine populates the request with the following parameters:
    - Tier-1 Requirement ($r_{LU}$): The requirement for a Logical Unit is binary and implicit. The very existence of a valid request $R_i(t)$ implies the need for exactly one LU. We can denote this requirement as $r_{LU}(R_i) = 1$.
    - Tier-2 Requirement ($r_{MPU}$): The requirement for Multipath Processing Units is more complex and depends on the specific channel model to be emulated for this link. Let the set of all available channel models be $\mathcal{CM}$. The user pre-configures a mapping $\Psi: i \mapsto CM_k$ that associates port i with a specific channel model $CM_k$. Each model has an intrinsic complexity, defined by the number of multipath components it contains. We define a function $\Omega: \mathcal{CM} \to \{1, 2, \ldots, P\}$ that returns the number of required MPUs for any given model. The Tier-2 requirement, $p_i$, is therefore determined deterministically:
$$p_i = r_{MPU}(R_i) = \Omega(\Psi(i))$$
The fully characterized request, $R_i(t) = \{r_{LU}, r_{MPU}\}$, is then forwarded to the Central Resource Manager.
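As an illustration, Stage 1 can be sketched in a few lines of Python. The discrete-time energy statistic follows the definition of $E_i$ above; the threshold value and the channel-model table (the mappings $\Psi$ and $\Omega$) are hypothetical placeholders, not values from the paper:

```python
# Sketch of Stage 1: energy detection and requirement characterization.
# The threshold gamma and the channel-model table below are illustrative
# assumptions for the example only.

def energy_statistic(samples):
    """Discrete-time analogue of E_i = (1/T_obs) * integral of |x_i|^2."""
    return sum(abs(x) ** 2 for x in samples) / len(samples)

# Hypothetical mapping Psi: port index -> channel model name,
# and Omega: channel model name -> number of required MPUs (paths).
PSI = {0: "TDL-A_LoS", 1: "TDL-C_NLOS"}
OMEGA = {"TDL-A_LoS": 2, "TDL-C_NLOS": 20}

def characterize(port, samples, gamma):
    """Return the request {r_LU, r_MPU} if a signal is detected, else None."""
    if energy_statistic(samples) <= gamma:
        return None               # H0: signal absent
    p_i = OMEGA[PSI[port]]        # p_i = Omega(Psi(i))
    return {"r_LU": 1, "r_MPU": p_i}

req = characterize(port=1, samples=[1.0, -1.0, 1.0, -1.0], gamma=0.5)
print(req)  # {'r_LU': 1, 'r_MPU': 20}
```

The key point the sketch captures is that the Tier-1 requirement is implicit (one LU per detected signal), while the Tier-2 requirement is looked up deterministically from the pre-configured channel model.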

2.3.2. Stage 2: Hierarchical Resource Allocation by the Central Manager

The Central Resource Manager (CRM) is the core decision-making entity of the 2D-DLRA system. It maintains the real-time state of both resource pools. Let the set of occupied LUs at time t be $\mathcal{M}_{\text{occ}}(t)$ and the set of occupied MPUs be $\mathcal{C}_{\text{occ}}(t)$. The state of the system is the tuple $S(t) = \langle |\mathcal{M}_{\text{occ}}(t)|, |\mathcal{C}_{\text{occ}}(t)| \rangle$. Upon receiving the request $R_i(t)$, the CRM executes a two-stage allocation logic.
  • Tier-1 Allocation Attempt (LU Allocation): The CRM first checks the availability of resources in the Tier-1 pool. A request is admitted at this stage if and only if the number of currently occupied LUs is less than the total number of LUs, M. Admit at Tier-1 if:
    $$|\mathcal{M}_{\text{occ}}(t)| < M$$
    If this condition is not met, the request $R_i(t)$ is blocked (in a loss system) or placed in a queue (in a waiting system). This corresponds to a Tier-1 Blocking Event.
  • Tier-2 Allocation Attempt (MPU Allocation): If the request is admitted at Tier-1, an LU (say, $LU_j$) is tentatively assigned to it. The CRM then proceeds to the second stage, checking for resource availability in the system-wide virtualized MPU pool. Admission at this stage requires that the number of available MPUs is sufficient to meet the request’s demand, $p_i$:
    $$|\mathcal{C}_{\text{occ}}(t)| + p_i \le C$$
    If this condition is met, the allocation is confirmed, and the system state is updated: $|\mathcal{M}_{\text{occ}}(t)| \leftarrow |\mathcal{M}_{\text{occ}}(t)| + 1$ and $|\mathcal{C}_{\text{occ}}(t)| \leftarrow |\mathcal{C}_{\text{occ}}(t)| + p_i$. If the condition is not met, the request is blocked. This corresponds to a Tier-2 Blocking Event, and the tentatively assigned LU is immediately released back to the pool.
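The two-stage admission test can be sketched as follows. This is an illustrative model of the CRM's state bookkeeping, not the authors' FPGA implementation; the class and attribute names are assumptions:

```python
# Sketch of the CRM's two-stage admission logic (Stage 2).
# State variables mirror |M_occ(t)| and |C_occ(t)| in the text;
# the class itself is an illustrative assumption.

class CRM:
    def __init__(self, M: int, P: int):
        self.M = M                # number of LUs (Tier-1 pool)
        self.C = M * P            # total MPUs (Tier-2 virtualized pool)
        self.lu_occ = 0           # |M_occ(t)|
        self.mpu_occ = 0          # |C_occ(t)|

    def admit(self, p_i: int) -> bool:
        # Tier-1: is an LU free?
        if self.lu_occ >= self.M:
            return False                      # Tier-1 blocking event
        # Tier-2: are p_i MPUs free in the system-wide pool?
        if self.mpu_occ + p_i > self.C:
            return False                      # Tier-2 blocking event
        self.lu_occ += 1
        self.mpu_occ += p_i
        return True

    def release(self, p_i: int) -> None:
        # De-allocation (Stage 3): return the LU and its MPUs to the pools.
        self.lu_occ -= 1
        self.mpu_occ -= p_i

crm = CRM(M=2, P=4)               # C = 8 MPUs in total
print(crm.admit(3), crm.admit(4), crm.admit(1))  # True True False
```

In the final call the third request is refused at Tier-1 (both LUs busy) even though one MPU is still free, illustrating that the two admission conditions are checked independently.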

2.3.3. Stage 3: Dynamic Link Formation and Resource De-Allocation

  • Dynamic Link Formation: Once a request is successfully admitted at both tiers, the CRM instructs the underlying FPGA hardware to form the physical data path. A high-speed, reconfigurable cross-connect within the FPGA fabric is configured to route the digitized data stream from the physical input port i to the newly assigned Logical Unit, $LU_j$. Simultaneously, the internal resources of $LU_j$ are configured to instantiate exactly $p_i$ active Multipath Processing Units, while the remaining $P - p_i$ MPUs within that LU remain dormant and conceptually available to the system-wide pool.
  • Resource De-allocation: The CRM continuously monitors the status of the active signal at port i. When the signal terminates (i.e., the energy statistic $E_i$ falls below the threshold $\gamma$ for a specified duration), a de-allocation procedure is initiated:
    $$|\mathcal{M}_{\text{occ}}(t)| \leftarrow |\mathcal{M}_{\text{occ}}(t)| - 1, \qquad |\mathcal{C}_{\text{occ}}(t)| \leftarrow |\mathcal{C}_{\text{occ}}(t)| - p_i$$
The FPGA cross-connect is reconfigured to tear down the data path, and both the LU and its associated MPUs are gracefully released back into their respective shared pools, becoming immediately available for subsequent service requests. This completes the lifecycle and ensures the “on-demand” nature of the resource allocation.
Algorithm 1 summarizes the logical operation flow of the CRM rather than a hardware scheduling or optimization algorithm.
Algorithm 1 CRM Operation Procedure in the 2D-DLRA Architecture
Input: Available port resources R_p, available multipath resources R_m, monitoring period T
Output: Dynamic mapping between active paths and emulation resources
  1: Initialization:
  2:  Initialize port-level resource pool R_p
  3:  Initialize multipath-level resource pool R_m
  4:  Initialize active path set P ← ∅
  5:  Initialize resource occupancy state and mapping table
  6: while the system is running do
  7:  Wait for the next monitoring period T
  8:  Path Arrival Handling: for each newly arrived path p ∈ P_new do
  9:   if available resources exist in R_p and R_m then
 10:    Allocate one port resource and one multipath resource to p
 11:    Update resource occupancy state
 12:    Add p to active path set P
 13:   else
 14:    Block or queue path p according to system policy
 15:  Path Departure Handling: for each departed path p ∈ P_dep do
 16:   Release the port and multipath resources occupied by p
 17:   Update resource occupancy state
 18:   Remove p from active path set P
 19:  State Update: update system statistics for analytical modeling
 20: end while

3. Performance Modeling via Hierarchical Queueing Theory

Having established the architecture and operational mechanism of the 2D-DLRA system, we now develop a rigorous mathematical framework to analyze its performance. This section employs principles from hierarchical queueing theory to derive analytical expressions for a comprehensive set of Key Performance Indicators (KPIs). These KPIs quantify the system’s efficiency, capacity, and Quality of Service (QoS), providing powerful predictive tools for system design and planning.

3.1. System Model and Formal Definitions

We model the 2D-DLRA system as a hierarchical resource allocation system where service requests arrive and compete for two distinct tiers of resources. The arrival of service requests is modeled as a Poisson process, a widely accepted model for scenarios involving a large number of independent, uncoordinated users.

3.1.1. Formal System and Resource Definitions

The physical and logical resources of the architecture are formally defined as:
  • The set of physical RF ports: $\mathcal{N} = \{1, 2, \ldots, N\}$, with cardinality $|\mathcal{N}| = N$.
  • The Tier-1 pool of Logical Units (LUs): $\mathcal{M} = \{1, 2, \ldots, M\}$, with cardinality $|\mathcal{M}| = M$.
  • The maximum number of Multipath Processing Units (MPUs) per LU: P.
  • The total capacity of the virtualized Tier-2 MPU pool: $C = M \times P$.

3.1.2. Traffic and Workload Model

  • Arrival Process: Service requests arrive at the system according to a Poisson process with a mean aggregate arrival rate of $\lambda$ (requests per unit time). The total offered traffic load to the system, A, measured in Erlangs, is given by:
    $$A = \lambda \cdot E[S]$$
    where $E[S] = 1/\mu$ is the mean service time of a request.
  • Workload Model (Multipath Demand Distribution):
    In this work, the multipath number is modeled as a random counting variable representing the number of simultaneously active and resolvable paths within a channel snapshot. From a system-level perspective, this quantity characterizes the instantaneous computational workload rather than the physical propagation mechanism.
    Among commonly used discrete distributions, the Poisson distribution is particularly suitable for this purpose, as it models the number of independent and rare events occurring within a fixed observation window and requires only a single parameter. Alternative distributions such as binomial or negative binomial would require additional assumptions regarding the total number of potential paths or over-dispersion, which are difficult to justify at the architectural level.
    Moreover, the Poisson distribution naturally arises as the limiting case of the sum of a large number of independent Bernoulli trials with small activation probabilities, which aligns with the sparse nature of effective multipath components. The truncation reflects the finite hardware resources available in practical channel emulators.
    The number of MPUs required by an arriving request is a discrete random variable, p, with a probability mass function (PMF) denoted by $\{q_p\}$. This distribution is particularly relevant for RIS-assisted channels, which exhibit high sparsity. While the number of RIS elements is large, the number of significant propagation paths is typically small and time-varying due to beamforming. We model this distribution using a truncated Poisson distribution. This reflects the real-world observation that most channel models require a moderate number of paths, with very simple (low p) and very complex (high p) models being less frequent.
Truncated Poisson PMF for Multipath Demand:
$$q_p = \frac{\lambda_{mp}^{p}\, e^{-\lambda_{mp}} / p!}{\sum_{k=1}^{P} \lambda_{mp}^{k}\, e^{-\lambda_{mp}} / k!}, \quad \text{for } p \in \{1, 2, \ldots, P\}$$
Here, $\lambda_{mp}$ is the mean of the underlying (non-truncated) Poisson distribution, representing the average number of paths one might expect in the given emulation scenario (e.g., $\lambda_{mp} = 8$ for a typical urban NLOS model). This formula ensures that the probabilities sum to one over the valid range of path requirements, $\{1, \ldots, P\}$. The average number of required paths, $\bar{p}$, can be calculated as:
$$\bar{p} = E[p] = \sum_{p=1}^{P} p \cdot q_p$$
This formal model, with its defined system parameters and stochastic workload, provides the complete foundation necessary for the detailed performance analysis of the two resource tiers in the subsequent sections.
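The workload model is straightforward to evaluate numerically. The following minimal sketch computes the truncated Poisson PMF and the resulting mean path demand, using $\lambda_{mp} = 8$ and $P = 24$ as in the urban NLOS example:

```python
# Numerical sketch of the truncated Poisson workload model:
# q_p for p in {1,...,P}, and the mean path demand p_bar.

from math import exp, factorial

def truncated_poisson_pmf(lam: float, P: int):
    """q_p = (lam^p e^-lam / p!) / sum_{k=1}^{P} (lam^k e^-lam / k!)."""
    weights = [lam ** p * exp(-lam) / factorial(p) for p in range(1, P + 1)]
    norm = sum(weights)
    return [w / norm for w in weights]

P = 24
q = truncated_poisson_pmf(lam=8.0, P=P)       # lambda_mp = 8 (urban NLOS example)
p_bar = sum(p * q_p for p, q_p in zip(range(1, P + 1), q))
print(f"sum q_p = {sum(q):.6f}, p_bar = {p_bar:.2f}")
```

For this parameter choice the truncation has little effect, so $\bar{p}$ stays very close to $\lambda_{mp}$; with heavier truncation (small P relative to $\lambda_{mp}$) the two would diverge noticeably.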

3.2. Tier-1 Analysis: Logical Unit Blocking Probability

The first tier of the 2D-DLRA system functions as the primary gatekeeper for incoming service requests. It consists of M parallel, identical servers (the Logical Units) that serve a total offered traffic load of A Erlangs. For our initial and primary analysis, we model this tier as a classic loss system, where a request that arrives when all M servers are occupied is blocked and cleared from the system. This corresponds to the M/G/c/c queue in Kendall’s notation, for which the performance is precisely described by the Erlang B formula.
The blocking probability at this tier, denoted as $P_{B1}$, represents the probability that an incoming request finds no available LU and is consequently denied service at the session level. This metric is a fundamental measure of the system’s coarse-grained capacity. It is a function of only two parameters: the total offered load A and the number of LUs M.
The formula is derived from the steady-state solution of the underlying continuous-time Markov chain, where the state represents the number of occupied servers. The probability of being in the state where all M servers are busy is given by:
$$P_{B1}(A, M) = \frac{A^M / M!}{\sum_{k=0}^{M} A^k / k!}$$
This equation provides the exact probability of blocking for the first tier of our hierarchical system under the assumption of a Poisson arrival process. It is a cornerstone of our analysis for several reasons:
1. It quantifies the performance of the first dimension of resource pooling (sharing M LUs among a larger set of N ports).
2. It is the first component of the total system blocking probability, as will be derived in Section 3.4.
3. The probability that a request is successfully admitted at Tier-1 is, consequently, $(1 - P_{B1}(A, M))$. This term represents the portion of the initial traffic that is “thinned” and passed on to the second tier for MPU allocation, a critical concept for the subsequent analysis.
This formula will serve as the basis for analyzing the system’s session-bound behavior and for the capacity planning detailed in Section 3.5.
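For numerical evaluation, the Erlang B formula is usually computed with its standard recurrence rather than with raw factorials, which overflow for large M. A minimal sketch (the parameter values in the example are illustrative):

```python
# Numerically stable sketch of the Erlang B blocking probability
# P_B1(A, M) for the Tier-1 LU pool, via the standard recurrence
# B(A, 0) = 1,  B(A, m) = A*B(A, m-1) / (m + A*B(A, m-1)).

def erlang_b(A: float, M: int) -> float:
    B = 1.0
    for m in range(1, M + 1):
        B = A * B / (m + A * B)
    return B

# Example: 16 LUs offered 10 Erlangs of session traffic.
print(f"P_B1 = {erlang_b(A=10.0, M=16):.4f}")
```

The recurrence is algebraically equivalent to the closed-form ratio of $A^M/M!$ to $\sum_{k=0}^{M} A^k/k!$, but avoids the large intermediate terms of a direct evaluation.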

3.3. Tier-2 Analysis: Multipath Unit Blocking Probability

The analysis of the Tier-2 MPU pool is substantially more complex than that of the Tier-1 LU pool. This complexity arises because the service requests arriving at this stage have heterogeneous resource demands. While every request requires exactly one LU, each request demands a variable number of MPUs, p, according to the probability distribution $\{q_p\}$ defined in Equation (16). The simple Erlang B formula is insufficient for such multi-rate traffic scenarios.
To model this, we first consider the traffic that is offered to the Tier-2 virtualized pool. This is the “thinned” traffic that was successfully admitted by the Tier-1 system. The total offered load to the Tier-2 pool, $A'$, is therefore:
$$A' = A \cdot \left(1 - P_{B1}(A, M)\right)$$
This load, with its heterogeneous MPU demands, is offered to the system-wide virtualized MPU pool of total capacity C = M × P . To calculate the blocking probability for this multi-rate loss system, we employ the powerful and exact Kaufman-Roberts recursion.
This recursive algorithm allows for the precise calculation of the steady-state probabilities of the system. Let $\pi(j)$ be the steady-state probability that exactly $j$ MPUs are currently occupied in the system-wide pool, where $j \in \{0, 1, \ldots, C\}$. Let the offered load for requests requiring $p$ MPUs be denoted by $a_p = A' \cdot q_p$. The Kaufman-Roberts recursion relates the state probabilities as follows:
$$j \cdot \pi(j) = \sum_{p=1}^{P} a_p \cdot p \cdot \pi(j - p), \quad \text{for } j = 1, 2, \ldots, C$$
This equation is derived from the principle of local balance: the rate of traffic flow out of state $j$ (left side) must equal the rate of traffic flow into state $j$ (right side). The recursion is typically solved by first setting the unnormalized value $\pi(0) = 1$ and iteratively computing $\pi(1), \pi(2), \ldots, \pi(C)$. The entire set of probabilities is then normalized such that $\sum_{j=0}^{C} \pi(j) = 1$.
Once the steady-state probabilities $\{\pi(j)\}$ are known, we can calculate the blocking probability for any specific type of request. A request requiring $p$ MPUs is blocked if and only if the number of available MPUs is less than $p$, i.e., the system is in any state $j$ with more than $C - p$ MPUs occupied:
$$B_p(A', C, \{q_p\}) = \sum_{j=C-p+1}^{C} \pi(j)$$
The overall Tier-2 blocking probability, $P_{B2}$, is the weighted average of the blocking probabilities for all request types, where the weights are the arrival probabilities $\{q_p\}$:
$$P_{B2}(A', C, \{q_p\}) = \sum_{p=1}^{P} q_p \cdot B_p$$
This metric, P B 2 , represents the conditional probability that a request is blocked due to insufficient MPU resources, given that it was successfully admitted at Tier-1. This result is essential for understanding the system’s computation-bound behavior and will be combined with P B 1 in the next section to derive the total system blocking probability.
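The recursion and the resulting blocking metrics can be sketched in a few lines (illustrative code; the identifiers are ours, and the demand distribution is passed as a dictionary mapping $p$ to $q_p$):

```python
def kaufman_roberts(A_t2: float, C: int, q: dict) -> tuple:
    """Kaufman-Roberts recursion for the Tier-2 multi-rate loss system.
    A_t2 : thinned load A' offered to the MPU pool (Erlangs)
    C    : total MPU capacity (C = M * P)
    q    : {p: q_p}, probability that a request needs p MPUs
    Returns (pi, PB2): occupancy distribution and demand-weighted blocking."""
    a = {p: A_t2 * qp for p, qp in q.items()}   # per-demand offered load a_p
    g = [1.0] + [0.0] * C                        # unnormalized values, pi(0) = 1
    for j in range(1, C + 1):
        g[j] = sum(a[p] * p * g[j - p] for p in a if p <= j) / j
    Z = sum(g)
    pi = [x / Z for x in g]                      # normalize so the sum equals one
    # A p-MPU request is blocked when occupancy exceeds C - p
    B = {p: sum(pi[C - p + 1:]) for p in q}
    PB2 = sum(q[p] * B[p] for p in q)
    return pi, PB2
```

With a single demand class of $p = 1$ the recursion collapses to the Erlang B distribution, which provides a convenient sanity check.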

3.4. Derivation of Overall System Performance Metrics

Having derived the blocking probabilities for both the Tier-1 LU pool ( P B 1 ) and the Tier-2 MPU pool ( P B 2 ), we can now combine these results to formulate the key end-to-end performance metrics for the entire 2D-DLRA system. These metrics provide a holistic view of the system’s performance, accounting for the hierarchical nature of its resource allocation.

3.4.1. Total System Blocking Probability ($P_{B,\text{total}}$)

A service request is ultimately rejected by the system if it is either blocked at Tier-1 due to a lack of available LUs, or admitted at Tier-1 but subsequently blocked at Tier-2 due to insufficient MPU resources. These two events are mutually exclusive and together constitute the total system blocking.
The probability of this total blocking event, P B , total , can be expressed as:
$$P_{B,\text{total}} = P(\text{Blocked at Tier-1}) + P(\text{Admitted at Tier-1}) \times P(\text{Blocked at Tier-2} \mid \text{Admitted at Tier-1})$$
Using the metrics derived in the previous sections, and assuming that the blocking events at the two tiers are approximately independent, we can formulate the total system blocking probability as follows:
$$P_{B,\text{total}}(A, M, C, \{q_p\}) \approx P_{B1}(A, M) + \left(1 - P_{B1}(A, M)\right) \times P_{B2}(A', C, \{q_p\})$$
where $A' = A \cdot (1 - P_{B1}(A, M))$ is the traffic offered to the second tier. This metric is the single most important indicator of the overall Quality of Service provided by the emulator.

3.4.2. Resource Utilization ($\eta$)

The efficiency of the 2D-DLRA architecture is best understood by analyzing the utilization of its two distinct resource pools.
  • Logical Unit Utilization ($\eta_{LU}$):
    The utilization of the Tier-1 LU pool is defined as the average number of occupied LUs divided by the total number of LUs, $M$. The average number of occupied LUs equals the carried load of the Tier-1 system, i.e., the offered load minus the blocked load.
    $$\eta_{LU}(A, M) = \frac{A \cdot \left(1 - P_{B1}(A, M)\right)}{M}$$
  • Multipath Unit Utilization ($\eta_{MPU}$):
    The utilization of the Tier-2 MPU pool is defined as the average number of occupied MPUs divided by the total MPU capacity, $C$. The average number of occupied MPUs is the total carried load of the entire system (in Erlangs) multiplied by the average number of MPUs required per request, $\bar{p}$.
    $$\eta_{MPU}(A, M, C, \{q_p\}) = \frac{A \cdot \left(1 - P_{B,\text{total}}(\cdot)\right) \cdot \bar{p}}{C}$$
    where $\bar{p}$ is the average multipath demand defined in Equation (17).
These two utilization metrics, $\eta_{LU}$ and $\eta_{MPU}$, are essential for quantifying the economic and operational efficiency of the architecture. They allow a system designer to assess how effectively the coarse-grained (LU) and fine-grained (MPU) hardware resources are leveraged under different traffic conditions, thereby validating the architecture's ability to minimize the "stranded resource" problem in both of its dimensions.
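Putting the two tiers together, the end-to-end metrics of this section can be computed in a few lines (a sketch under the same tier-independence approximation; all identifiers are ours):

```python
def erlang_b(A, M):
    """Stable Erlang B recursion for the Tier-1 LU pool."""
    B = 1.0
    for k in range(1, M + 1):
        B = A * B / (k + A * B)
    return B

def system_metrics(A, M, P, q):
    """Total blocking P_B,total plus LU and MPU utilization, for offered
    load A, M LUs, P MPUs per LU, and demand distribution q = {p: q_p}."""
    C = M * P                                    # virtualized MPU pool, C = M x P
    PB1 = erlang_b(A, M)
    A_t2 = A * (1 - PB1)                         # thinned load A' offered to Tier-2
    g = [1.0] + [0.0] * C                        # Kaufman-Roberts recursion
    for j in range(1, C + 1):
        g[j] = sum(A_t2 * q[p] * p * g[j - p] for p in q if p <= j) / j
    Z = sum(g)
    pi = [x / Z for x in g]
    PB2 = sum(q[p] * sum(pi[C - p + 1:]) for p in q)
    PB_total = PB1 + (1 - PB1) * PB2             # two-tier combination
    p_bar = sum(p * qp for p, qp in q.items())   # average multipath demand
    eta_LU = A_t2 / M
    eta_MPU = A * (1 - PB_total) * p_bar / C
    return PB_total, eta_LU, eta_MPU
```

Because Tier-2 can only add blocking on top of Tier-1, the combined result never falls below the Erlang B term alone.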

3.5. System Capacity and Design Planning Analysis

The analytical framework derived in the previous sections not only allows us to predict the performance of a given system configuration but also, more powerfully, enables us to address the inverse problem: system dimensioning and capacity planning. This section details the methodologies for determining the system’s ultimate user capacity and for making informed hardware provisioning decisions based on specific performance requirements.

3.5.1. System Capacity Analysis (Maximum Number of Users, $K_{\max}$)

A key question for any emulation platform is determining the maximum number of users it can support. This capacity is not a fixed number but a function of the desired Quality of Service (QoS) and the behavioral profile of the users. We define the system capacity, $K_{\max}$, as the maximum number of users the system can support while ensuring the total blocking probability does not exceed a predefined QoS target, $P_B^{\text{target}}$.
To calculate this, we must first find the maximum total offered load, $A_{\max}$, that the system can handle without violating the QoS constraint. This requires the numerical inversion of the total blocking probability formula (Equation (24)).
$$A_{\max}\left(P_B^{\text{target}}, M, C, \{q_p\}\right) = \text{solve for } A \text{ in: } \; P_{B,\text{total}}\left(A, M, C, \{q_p\}\right) = P_B^{\text{target}}$$
Since Equation (24) does not have a closed-form inverse, $A_{\max}$ must be found using numerical root-finding algorithms (e.g., Newton-Raphson or bisection). Once $A_{\max}$ is determined, the maximum number of users, $K_{\max}$, is obtained by dividing this total load by the average traffic generated per user, $A_u$:
$$K_{\max} = \frac{A_{\max}}{A_u}$$
This metric is critical for understanding and marketing the emulator’s capabilities, as it directly relates the hardware configuration to a tangible performance promise under specific workload conditions (defined by A u ) and for a given level of service reliability (defined by P B target ).
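The numerical inversion is straightforward with bisection, since the blocking probability increases monotonically in the offered load. For brevity the sketch below inverts only the Tier-1 Erlang B term; substituting the full $P_{B,\text{total}}$ changes nothing structurally (names and the per-user activity value are ours):

```python
def erlang_b(A, M):
    """Stable Erlang B recursion."""
    B = 1.0
    for k in range(1, M + 1):
        B = A * B / (k + A * B)
    return B

def a_max(M, pb_target, tol=1e-9):
    """Largest offered load A whose blocking stays <= pb_target (bisection)."""
    lo, hi = 0.0, float(M)
    while erlang_b(hi, M) < pb_target:   # grow the bracket until the target is crossed
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if erlang_b(mid, M) < pb_target else (lo, mid)
    return lo

# Capacity for a hypothetical per-user activity of 0.1 Erlang at 1% blocking
K_max = int(a_max(M=32, pb_target=0.01) / 0.1)
```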

3.5.2. Design Planning Analysis (Required Hardware Resources)

From a system architect’s perspective, a common task is to determine the minimum hardware resources required to meet a specific demand. Our framework can be used to answer questions such as, “How many LUs (M) are needed to support a scenario with a total offered load of A Erlangs while guaranteeing a blocking probability of less than 1%?”
This requires solving for the minimum integer M that satisfies the QoS constraint. A similar process can be applied to dimension the MPU capacity, P.
$$M_{\text{req}}\left(A, P, P_B^{\text{target}}, \{q_p\}\right) = \min\left\{\, m \in \mathbb{Z}^{+} \;:\; P_{B,\text{total}}\left(A, m, m \times P, \{q_p\}\right) \le P_B^{\text{target}} \,\right\}$$
This equation provides a direct, quantitative tool for hardware provisioning. It allows a designer to explore the trade-offs between different scaling strategies. For instance, one can compare the cost and capacity implications of “horizontal scaling” (increasing M) versus “vertical scaling” (increasing P). These analyses, derived directly from our theoretical model, are essential for making cost-effective and performance-aware architectural decisions.
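Because blocking decreases monotonically as the pool grows, the minimum provisioning can be found by a simple linear (or binary) search; the sketch below again uses the Tier-1 Erlang B term as a stand-in for the full $P_{B,\text{total}}$ (identifiers are ours):

```python
def erlang_b(A, M):
    """Stable Erlang B recursion."""
    B = 1.0
    for k in range(1, M + 1):
        B = A * B / (k + A * B)
    return B

def m_req(A, pb_target):
    """Smallest integer number of LUs m that meets the blocking constraint."""
    m = 1
    while erlang_b(A, m) > pb_target:
        m += 1
    return m
```

For example, `m_req(A=10.0, pb_target=0.01)` answers the 1% provisioning question posed above for a 10-Erlang scenario.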

3.6. QoS Experience Analysis with Queueing

The preceding analysis was based on a “loss system” model, where requests are blocked and cleared if no resources are immediately available. An alternative and often more practical implementation is a “waiting system”, where incoming requests can be placed in a queue if all servers are busy. This section extends our framework to analyze such a system, focusing on QoS metrics related to delay, which are critical for understanding the user’s interactive experience.
For this analysis, we focus on the Tier-1 LU pool and model it as an M / M / c queueing system. This assumes that the service times for emulation tasks are exponentially distributed, which is a common and tractable assumption for this type of analysis. The key difference is that the system can now support a number of requests (both in service and in queue) greater than M. The service time refers to the lifetime of an active multipath component rather than the computation latency of FPGA hardware.

3.6.1. System Stability Condition

A queueing system is only stable if the long-term average arrival rate is less than the long-term average service rate. Otherwise, the queue length will grow indefinitely. Let the total offered load be A and the number of LUs be M. The system utilization, ρ , also known as the traffic intensity, is defined as:
$$\rho = \frac{A}{M}$$
The fundamental condition for system stability is:
$$\rho = \frac{A}{M} < 1$$
All subsequent analysis in this section is valid only when this condition is met.

3.6.2. Probability of Queueing ($P_Q$)

In a waiting system, the most important QoS metric is the probability that an incoming request will not be served immediately and must enter a waiting queue. This is the probability that an arriving request finds all M LUs occupied. This is given by the well-known Erlang C formula.
First, we must calculate the probability that the system is completely idle (zero requests in the system), $P_0$.
$$P_0(A, M) = \left[\, \sum_{k=0}^{M-1} \frac{A^k}{k!} + \frac{A^M}{M!} \cdot \frac{1}{1 - A/M} \,\right]^{-1}$$
Using this, the probability of queueing, $P_Q$, is:
$$P_Q(A, M) = \frac{A^M}{M!} \cdot \frac{1}{1 - A/M} \cdot P_0(A, M)$$
This metric is essential for defining the operational region where the system can be considered “responsive”.

3.6.3. Average Waiting Time in Queue ($W_q$)

For those requests that do have to wait, a critical measure of user experience is the average time spent in the queue before service begins. This can be derived using Little's Law. First, we find the average number of requests in the queue, $L_q$:
$$L_q(A, M) = \frac{P_Q(A, M) \cdot \rho}{1 - \rho}$$
Applying Little's Law ($L_q = \lambda W_q$), where $\lambda = A\mu$, we obtain the average waiting time in queue, $W_q$:
$$W_q(A, M, \mu) = \frac{L_q}{\lambda} = \frac{P_Q(A, M)}{M\mu - A\mu} = \frac{P_Q(A, M)}{\mu(M - A)}$$
This metric is highly sensitive to the system utilization, ρ . As ρ approaches 1, the waiting time increases non-linearly and approaches infinity, a phenomenon known as congestion collapse. This analysis is therefore crucial for setting operational load thresholds to ensure a high-quality, responsive user experience.
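The Erlang C quantities of this section translate directly into code (a sketch; the function name is ours, and the stability condition $\rho = A/M < 1$ is enforced explicitly):

```python
from math import factorial

def mmc_metrics(A, M, mu):
    """M/M/c metrics for the Tier-1 LU pool: returns (P_Q, L_q, W_q)."""
    rho = A / M
    assert rho < 1.0, "unstable: the offered load must satisfy A < M"
    s = sum(A**k / factorial(k) for k in range(M))   # sum_{k=0}^{M-1} A^k / k!
    tail = (A**M / factorial(M)) / (1.0 - rho)
    P0 = 1.0 / (s + tail)                            # empty-system probability
    PQ = tail * P0                                   # Erlang C queueing probability
    Lq = PQ * rho / (1.0 - rho)                      # mean queue length
    Wq = PQ / (mu * (M - A))                         # Little's law: Wq = Lq / lambda
    return PQ, Lq, Wq
```

For $M = 1$ this reduces to the familiar M/M/1 results ($P_Q = \rho$), and as $\rho \to 1$ the returned $W_q$ diverges, reproducing the congestion behavior discussed above.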

4. Micro-Behavioral Analysis via Non-Cooperative Game Theory

The queueing-theoretic analysis in the preceding chapter provides a macroscopic view of the system’s performance under a given aggregate load. However, it assumes that the arrival rate of users is exogenous and independent of the system’s current state. In reality, users are intelligent agents who may adapt their behavior based on the perceived Quality of Service. To model this micro-behavioral dynamic, we introduce a non-cooperative game-theoretic framework [41].
This chapter models the interaction among heterogeneous users as a resource contention game. This approach allows us to analyze the strategic decisions of individual users competing for the shared, finite resources of the 2D-DLRA system and to determine the resulting equilibrium state. This provides a deeper understanding of the system’s performance not just from the operator’s perspective but from the perspective of the users themselves.

4.1. Modeling the System as a Resource Contention Game

We model the scenario as a non-cooperative game, $\mathcal{G}$, where a population of potential users strategically decides whether or not to request service from the emulator. Each user acts selfishly to maximize their own individual benefit. The game is formally defined by a tuple $\mathcal{G} = \langle \mathcal{K}, \{S_k\}, \{U_k\} \rangle$.
1. Players ($\mathcal{K}$): The players are the population of potential users who can generate service requests. To make the analysis tractable and insightful, we do not model each individual user. Instead, we classify the user population into $L$ distinct classes based on the complexity of their service requests. Each class $k \in \{1, 2, \ldots, L\}$ is characterized by the number of Multipath Processing Units (MPUs), $p_k$, that its service requests require. For example, "Simple Service" users could be Class 1 ($p_1 = 4$), while "Complex Service" users could be Class 2 ($p_2 = 16$). The set of players, $\mathcal{K}$, is therefore the set of these $L$ user classes.
2. Strategies ($\{S_k\}$): For each user class $k$, an individual user's strategy set is simple and binary: the user can either choose to seek service or not. We model this decision at the aggregate level. For each class $k$, with a total potential arrival rate of $\bar{\lambda}_k$, the collective strategy is to choose an actual arrival rate, $\lambda_k$, that they will attempt to send to the system, where $0 \le \lambda_k \le \bar{\lambda}_k$. The strategy space for class $k$ is therefore $S_k = [0, \bar{\lambda}_k]$. The overall strategy profile is the vector of arrival rates from all classes, $\boldsymbol{\lambda} = (\lambda_1, \lambda_2, \ldots, \lambda_L)$. The total load offered to the system is then $A(\boldsymbol{\lambda}) = \sum_{k=1}^{L} \lambda_k / \mu_k$.
3. Payoffs ($\{U_k\}$): The payoff function, $U_k$, quantifies the net benefit a user of class $k$ receives from choosing to enter the system. A rational user will only choose to enter if their expected payoff is positive. The payoff comprises two components: the reward for successful service and the cost incurred due to potential blocking or delay. Let $R_k$ be the intrinsic reward or utility that a user of class $k$ gains upon successful completion of their emulation task; this represents the value of the test they are performing. Let $P_{B,k}(\boldsymbol{\lambda})$ be the probability that a request from class $k$ is blocked, which is a function of the total strategy profile $\boldsymbol{\lambda}$ and is derived from our hierarchical queueing model.
The expected payoff for an individual user of class k is the probability of successful service multiplied by the reward.
$$U_k(\boldsymbol{\lambda}) = \left(1 - P_{B,k}(\boldsymbol{\lambda})\right) \cdot R_k$$
This formulation captures the essential tension of the game: each user class $k$ wants to send as much traffic as possible (increase $\lambda_k$) to gain more total reward. However, increasing $\lambda_k$ also increases the total system load, $A(\boldsymbol{\lambda})$, which in turn increases the blocking probability $P_{B,j}$ for all classes $j$ (including $k$ itself), thereby reducing the individual payoff $U_j$ for everyone. This negative externality, where one user's action imposes a cost on all other users, is the core of the resource contention game. The analysis of this game, to find its equilibrium state, is the subject of the following sections.
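The negative externality can be made concrete with a small numerical sketch that evaluates the class payoffs for a given strategy profile, using the Kaufman-Roberts recursion for the per-class blocking (all parameter values and names below are illustrative, not taken from the paper's experiments):

```python
def class_payoffs(lam, mu, R, p_req, C):
    """Payoffs U_k = (1 - P_B,k(lambda)) * R_k for L classes sharing a
    C-MPU pool; lam/mu/R/p_req hold each class's arrival rate, service
    rate, reward, and MPU demand p_k."""
    a = [l / m for l, m in zip(lam, mu)]        # per-class offered load (Erlangs)
    g = [1.0] + [0.0] * C                        # Kaufman-Roberts recursion
    for j in range(1, C + 1):
        g[j] = sum(ak * p * g[j - p] for ak, p in zip(a, p_req) if p <= j) / j
    Z = sum(g)
    pi = [x / Z for x in g]
    PB = [sum(pi[C - p + 1:]) for p in p_req]    # class-k blocking probability
    return [(1.0 - pb) * r for pb, r in zip(PB, R)]

# "Simple" (p = 4) vs. "Complex" (p = 16) classes on a 64-MPU pool
U = class_payoffs(lam=[2.0, 0.5], mu=[1.0, 1.0], R=[1.0, 4.0], p_req=[4, 16], C=64)
```

Raising $\lambda_1$ increases every class's blocking and therefore lowers $U_2$ as well, which is exactly the externality described above.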

4.2. Derivation of the Nash Equilibrium for User Admission Control

To analyze the equilibrium behavior of the resource contention game, we must precisely define the strategies available to the players and the payoff functions that guide their decisions. We model a scenario where users are sensitive not only to being blocked but also to the delay they experience when the system is congested.

4.2.1. User Strategies

We consider a large population of potential users, where each user independently decides whether to join the system to request service. Let the total potential arrival rate of requests be $\bar{\lambda}$. The collective strategy of this population is to choose an effective arrival rate, $\lambda$, that they will actually send to the system, where $0 \le \lambda \le \bar{\lambda}$. This effective arrival rate $\lambda$ determines the total offered load to the system, $A = \lambda / \mu$.
An individual user’s strategy is binary: “Enter” or “Balk” (i.e., do not enter). A user will choose to “Enter” if and only if the expected payoff from doing so is positive. The equilibrium state is reached when users are indifferent between entering and balking, which in turn determines the equilibrium arrival rate, λ * .

4.2.2. Payoff Functions with Cost of Delay

The payoff for a user who chooses to enter the system is no longer just the reward for service, but it must also account for the cost associated with the time spent waiting for that service. We assume users are delay-sensitive.
Let R be the intrinsic reward a user receives upon successful completion of their emulation task. This represents the value of obtaining the test result. Let c w be the user’s cost per unit of time spent waiting in the queue. This represents the user’s impatience or the opportunity cost of their time.
The total time a user spends in the system, $T_{\text{sys}}$, is the sum of their waiting time in the queue, $T_q$, and their service time, $S$. Since only the waiting phase is costly to the user, the total cost incurred by a user who enters the system is $c_w \cdot T_q$.
A request that is blocked upon arrival has a waiting time of zero and receives no reward, so its net payoff is zero. For a request that is admitted (either immediately or after queueing), the payoff is the reward minus the cost of waiting. The expected payoff for a user considering entry into the system is therefore:
$$E[\text{Payoff}] = P(\text{Admitted}) \times R - E[\text{Cost} \mid \text{Admitted}]$$
Let $P_Q(A, M)$ be the probability that an admitted user must queue, and let $1/(\mu(M - A))$ be the average waiting time conditioned on actually queueing. The average waiting time for any admitted user, $W(A, M, \mu)$, is the probability of queueing multiplied by this conditional wait, which coincides with the unconditional $W_q$ of Section 3.6.3:
$$W(A, M, \mu) = P_Q(A, M) \cdot \frac{1}{\mu(M - A)} = W_q(A, M, \mu)$$
The expected cost for an admitted user is then $c_w \cdot W(A, M, \mu)$. Since the probability of being admitted is $(1 - P_B(A, M))$, where $P_B$ is the blocking probability for a system with a finite queue (an $M/M/c/K$ system), the expected payoff function, $U$, for a user considering entry can be formally written as:
$$U(A) = \left(1 - P_B(A)\right) \cdot R - c_w \cdot W(A)$$
where the dependencies on M and μ are implicit.
This refined payoff function captures the essential trade-off faced by a rational user: the potential reward $R$ is discounted by both the probability of being blocked entirely ($P_B$) and the expected cost of the delay ($c_w \cdot W$) if they are admitted. A user will choose to enter the system only if $U(A) > 0$. As more users decide to enter, the load $A$ increases, which in turn increases both the blocking probability and the average waiting time, thereby decreasing the payoff. This dynamic leads to a stable equilibrium, which we analyze in the next section.
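The resulting equilibrium can be computed with a simple bisection on $U(A) = 0$. The sketch below uses the pure waiting-system approximation ($P_B \approx 0$, so $U(A) = R - c_w \cdot W(A)$); the finite-queue $M/M/c/K$ variant only changes the payoff expression, and all parameter values are illustrative:

```python
from math import factorial

def mean_wait(A, M, mu):
    """Unconditional mean wait W(A) in an M/M/c system (Erlang C)."""
    rho = A / M
    s = sum(A**k / factorial(k) for k in range(M))
    tail = (A**M / factorial(M)) / (1.0 - rho)
    PQ = tail / (s + tail)
    return PQ / (mu * (M - A))

def equilibrium_load(lam_bar, M, mu, R, c_w, tol=1e-9):
    """Equilibrium offered load A*: users enter while U(A) = R - c_w*W(A) > 0,
    so either everyone enters or U(A*) = 0 at an interior equilibrium."""
    cap = min(lam_bar / mu, M * (1.0 - 1e-9))    # stay inside stability, A < M
    U = lambda A: R - c_w * mean_wait(A, M, mu)
    if U(cap) >= 0.0:
        return cap                                # congestion never deters entry
    lo, hi = 0.0, cap                             # U is decreasing: bisect the root
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if U(mid) > 0.0 else (lo, mid)
    return lo
```

At an interior equilibrium the users are indifferent between entering and balking, i.e., $c_w \cdot W(A^{*}) = R$.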

5. Joint Analysis and Experimental Validation

This chapter unifies the macroscopic performance analysis from queueing theory with the microscopic behavioral analysis from game theory. To validate the architecture under future massive connectivity constraints, we consider an RIS-assisted communication scenario where the number of reflecting elements varies from 64 to 1024, but the effective multipath clusters remain sparse. We first establish the theoretical feedback loop that links the system’s Quality of Service (QoS) to the strategic behavior of its users, leading to a stable operating equilibrium. We then present a comprehensive set of simulation results that not only validate the individual theoretical models but also provide deep, quantitative insights into the system’s performance, capacity, and the effectiveness of potential optimization mechanisms.

5.1. RIS-Oriented Hardware Emulation Platform Design

To validate the proposed 2D-DLRA architecture, we present a conceptual hardware reference design tailored for RIS-assisted Hardware-in-the-Loop (HIL) emulation. This design translates the theoretical resource pooling into tangible FPGA modules, addressing the specific computational challenges posed by RISs: massive element processing and sparse propagation.

5.1.1. Platform Topology and RIS Integration

The platform is built upon a high-performance FPGA cluster (e.g., Zynq UltraScale+ RFSoC). Unlike general-purpose channel simulators, we implement dynamically allocated resource pools in the digital domain, organized around three modules:
  • Power Detection: monitors signal amplitude in real time and matches it with user-activity information from the host computer.
  • Gating Circuit: based on the environmental conditions of each channel, configures the multipath links to match the propagation situation of each RIS unit.
  • Resource Pool Mapping: allocates the computing units to each path for the final emulation work.

5.1.2. Hardware Mapping of the 2D-DLRA

The resource pools are physically realized as follows:
  • Tier-1 (LUs as Processing Chains): Each Logical Unit (shown as the blue line in Figure 3) is a reconfigurable data path. For RIS emulation, an LU performs the cascading of the user channel and the RIS reflection matrix. By decoupling these from physical ports, we avoid dedicating FPGA logic to silent RIS elements or inactive users.
  • Tier-2 (MPUs as DSP Slices): The core computational burden of RIS emulation is the summation of multipath signals affected by phase shifts. We map the theoretical "MPUs" to physical DSP48E2 slices. Exploiting the angular sparsity of RIS channels, the resource manager activates only the specific DSP slices required for the dominant propagation paths, leaving the majority of the silicon dark (power-gated) or available for other tasks.
This hardware-level mapping establishes the physical constraints for our validation: the maximum number of LUs is bounded by the FPGA's logic capacity, and the MPU pool is strictly limited by the total number of available DSP slices.
The Central Resource Manager dynamically allocates DSP slices (MPUs) to perform the convolution of signals with sparse RIS interaction paths, strictly following the 2D-DLRA logic.
Therefore, the channel models used in the simulations in the following sections reflect the typical sparsity characteristics of RIS environments. This ensures that the analyzed user capacity represents the performance achievable on real-world hardware platforms.

5.2. User Capacity Analysis Under Heterogeneous Workloads

A primary measure of an emulator's performance is its user capacity: the maximum number of concurrent users it can support. In the 2D-DLRA architecture, this capacity is not a single, fixed number but is highly dependent on the statistical nature of the workload. We analyze this relationship by investigating two key workload parameters: the user activity factor ($A_u$), which drives the load on the Tier-1 LU pool, and the average channel complexity ($\lambda_{mp}$), which drives the load on the Tier-2 MPU pool.
The maximum number of users, $K_{\max}$, is determined by finding the maximum supportable system load, $A_{\max}$, for a given QoS target (1% blocking), and then dividing by the per-user activity factor.
$$K_{\max}(A_u, \lambda_{mp}) = \frac{A_{\max}(\lambda_{mp})}{A_u}$$
where $A_{\max}(\lambda_{mp})$ is the numerically solved root of the equation:
$$P_{B,\text{total}}\left(A, M, C, \{q_p(\lambda_{mp})\}\right) = 0.01$$
Figure 4 presents a comprehensive, three-dimensional analysis of the system’s user capacity as a joint function of these two critical workload parameters.

5.3. Scalability and Economic Efficiency Analysis

A critical attribute of any scalable architecture is the efficiency with which its capacity grows as more hardware resources are provisioned. An ideal system should exhibit linear scalability, where doubling the resources doubles the capacity, indicating the absence of central bottlenecks. This section provides a rigorous analysis of the 2D-DLRA system’s scalability and evaluates its “returns to scale” from an economic perspective.
We analyze the system’s horizontal scalability by fixing the workload profile (user activity and channel complexity) and progressively increasing the number of Logical Units (M). We then compute two key metrics: the total user capacity ( K max ) and the marginal user capacity ( Δ K max / Δ M ). The latter, representing the number of additional users supported by each newly added LU, is a direct measure of the economic efficiency of scaling.
$$\text{Marginal Capacity}(M) = \frac{K_{\max}(M) - K_{\max}(M - \Delta M)}{\Delta M}$$
Figure 5 provides a multi-faceted analysis of the system’s scalability and the economic efficiency of horizontal resource expansion. The primary y-axis (left) shows the total supported user capacity ( K m a x ) as a function of the number of provisioned Logical Units (M), while the secondary y-axis (right) displays the marginal user capacity, a key metric for evaluating the returns to scale. The actual user capacity (blue curve) exhibits two distinct phases. In the high-efficiency zone ( M < 40 ), the capacity growth is nearly perfectly linear, closely tracking the ideal linear growth reference line (black dashes). This demonstrates that, at small to medium scales, the 2D-DLRA architecture is free from significant central bottlenecks, and each added LU contributes its full potential to the system’s capacity. However, as the system scale increases further into the diminishing returns zone ( M > 40 ), the actual capacity curve begins to show a slight sub-linear trend, deviating downwards from the ideal reference. This subtle but important trend is more clearly captured by the marginal capacity curve (red curve, right axis). While initially stable, the marginal users supported per additional LU begin to slowly decrease at larger system scales. For instance, increasing from 20 to 22 LUs adds approximately 146 users per LU, whereas increasing from 62 to 64 LUs only adds 136 users per LU. This suggests the emergence of minor, second-order bottlenecks at very large scales, possibly due to increased contention for shared resources in a real-world implementation. Overall, the analysis confirms that the 2D-DLRA architecture possesses highly desirable, near-linear scalability. The marginal efficiency analysis further provides a crucial tool for economic planning, indicating that while large-scale deployment is highly effective, the economic benefit of adding each subsequent unit of resource may gradually decrease.

5.4. Mixed-Mode Emulation Capability

To validate the system’s performance under a dynamic, mixed-mode workload, we designed a time-varying simulation scenario. The scenario involves the sequential arrival of two distinct types of services: a high-complexity service (e.g., a Massive MIMO channel requiring a large, contiguous block of MPUs) and a massive burst of low-complexity services (e.g., thousands of IoT devices requiring only a few MPUs each). Figure 6 provides a multi-layered visualization of the system’s dynamic response to this heterogeneous workload.
Figure 6 illustrates the 2D-DLRA system’s dynamic scheduling capability and stability when subjected to a highly heterogeneous, time-varying workload. The upper panel provides a heatmap of the system’s 288 MPUs over time, where dark blue indicates resources allocated to high-complexity tasks and light blue to low-complexity tasks. The lower panel displays the corresponding macroscopic system performance metrics: total resource utilization (blue line, left axis) and the number of queued requests (red line, right axis). The simulation unfolds as follows: At t = 10, several long-duration, high-complexity tasks arrive. The heatmap shows the scheduler instantly allocating large, contiguous blocks of MPU resources (dark blue). This causes a sharp but controlled increase in resource utilization, while the queue length remains at zero, indicating efficient admission. At t = 50, a massive burst of short-duration, low-complexity IoT-like tasks arrives. The heatmap shows the scheduler rapidly filling the remaining idle resource gaps with many small, light blue blocks. This event drives the system to near 100% utilization. Critically, despite this saturation, the queue length exhibits only a brief, minor spike before being quickly cleared as the short tasks complete. This result vividly demonstrates the mixed-mode emulation capability of the 2D-DLRA. The system can concurrently and efficiently handle both resource-intensive and massively parallel tasks, maintaining high throughput and stability. The tight coupling between the microscopic resource allocation shown in the heatmap and the macroscopic performance metrics confirms the architecture’s flexibility and robustness in complex, dynamic deployment scenarios.

5.5. End-to-End Emulation Fidelity Analysis

While the 2D-DLRA architecture demonstrably enhances scalability and efficiency, a critical question arises: does the dynamic, contention-based nature of resource allocation introduce new, time-varying sources of error? This section moves beyond static error analysis to investigate the dynamic fidelity of the emulator, quantifying how the system’s performance under load, as predicted by queueing and game theory, impacts the end-to-end emulation error.

5.5.1. Correlation Between System Load and Dynamic Error

The queueing effects inherent in a shared-resource system, such as scheduling and queuing delays, can impact the timeliness of channel state updates. This experiment aims to visualize and quantify the correlation between instantaneous system load and the resulting time-varying emulation error.
Figure 7 provides a dynamic, time-domain analysis of the relationship between system load and end-to-end emulation fidelity. The background color represents the instantaneous system load, transitioning from cool colors (low load) to hot colors (high load/congestion). Overlaid are the instantaneous emulation error (blue solid line, left axis) and the channel state update latency (green dashed line, right axis). The simulation reveals a strong causal relationship. During periods of low load ( t < 80 ), the update latency remains minimal (approx. 5 ms), and the emulation error is stable at a baseline level of 2% EVM. At t = 80 , a traffic burst occurs, driving the system into a high-load state (red background). This congestion immediately leads to a sharp increase in the queueing and scheduling delay, causing the channel update latency to spike to over 25 ms. Critically, this increased latency means the emulator is applying outdated channel states, causing the instantaneous emulation error to rise significantly, peaking at over 8% EVM. As the burst subsides ( t > 120 ), the latency and error levels promptly recover. This result provides direct evidence that queueing effects in a dynamic allocation system are a significant source of time-varying error, demonstrating that maintaining a low-congestion state is critical not only for responsiveness but also for preserving emulation fidelity.

5.5.2. Error Distribution Under Multi-User Contention

Figure 8 visualizes the distribution of emulation error among a population of heterogeneous users competing for limited resources in a high-load scenario. Each point represents a user, positioned according to their service priority (x-axis) and the complexity of their requested channel model (y-axis). The color of each point indicates the average end-to-end emulation error experienced by that user. The plot reveals a clear and systematic differentiation in the quality of service. A “high error zone” (red/yellow points) emerges for users with low priority and high complexity. These users are the first to suffer from resource contention, experiencing higher scheduling delays and potentially being allocated fewer resources than ideal, leading to a significant degradation in fidelity. Conversely, a “low error zone” (blue/cyan points) is observed for users with high priority and low complexity. The 2D-DLRA system naturally prioritizes these services, providing them with timely and sufficient resources, thus preserving their emulation fidelity. This result demonstrates that the resource contention, when viewed through a game-theoretic lens, is not chaotic but results in a predictable, structured distribution of error, which can be leveraged to implement advanced, priority-aware QoS policies.

5.5.3. Statistical Comparison of End-to-End Error Distributions

This final experiment provides a conclusive, statistical summary of the impact of dynamic scheduling on overall system fidelity by comparing the cumulative distribution functions (CDFs) of the emulation error under different scenarios.
Figure 9 compares the distribution of the end-to-end emulation error under three architectural scenarios by plotting its Cumulative Distribution Function (CDF); a curve that lies further to the left indicates better overall performance. The ideal, resource-unlimited case (black dashed line) serves as a theoretical benchmark. The static allocation architecture (red line) exhibits generally poor performance, with a median error of approximately 1.5% and a wide error distribution, reflecting its inefficiency. The proposed 2D-DLRA architecture (blue line) performs significantly better for the vast majority of users, with its CDF curve shifted far to the left of the static case. For example, at a critical error threshold of 3%, 91.8% of users in the 2D-DLRA system meet the QoS target, compared to only 73.6% of users in the static system. However, the 2D-DLRA curve exhibits a tail at the higher error percentiles, a direct consequence of the queueing and contention effects analyzed previously. This indicates that while the average performance is greatly enhanced, a small fraction of users in a high-load scenario may experience a higher error than in a static system. This result provides a nuanced and complete picture: 2D-DLRA trades a slight degradation in worst-case performance for a massive improvement in average-case performance and overall system capacity.
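The QoS percentages quoted above amount to evaluating the empirical CDF of per-user error at a threshold; a minimal helper (illustrative, not from the paper’s codebase):

```python
def empirical_cdf_at(errors, threshold):
    """Empirical CDF of per-user emulation error at a threshold, i.e., the
    fraction of users meeting the QoS target error <= threshold."""
    return sum(1 for e in errors if e <= threshold) / len(errors)

def median_error(errors):
    """Median per-user error, the 50% crossing point of the CDF."""
    s = sorted(errors)
    mid = len(s) // 2
    return s[mid] if len(s) % 2 else (s[mid - 1] + s[mid]) / 2
```

Applied to the per-architecture error samples, `empirical_cdf_at(errors, 3.0)` yields the fraction of users meeting the 3% threshold for each curve.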

5.6. Statistical Multiplexing Gain Analysis

The ultimate validation of the 2D-DLRA architecture lies in quantifying its core advantage over traditional static designs: the Statistical Multiplexing Gain (SMG). This section provides a direct, quantitative comparison of the user capacity of both architectures under identical Quality of Service (QoS) targets and analyzes how this gain is influenced by the user’s traffic behavior.
We formally define the Statistical Multiplexing Gain (SMG) as the ratio of the maximum number of users supported by the dynamic 2D-DLRA architecture, $K_{\max}^{\mathrm{dynamic}}$, to that supported by a static allocation architecture, $K_{\max}^{\mathrm{static}}$, with an equivalent number of processing resources $M$, while maintaining the same QoS target:

$$\mathrm{SMG} = \frac{K_{\max}^{\mathrm{dynamic}}}{K_{\max}^{\mathrm{static}}}$$

For a static architecture, the number of supportable users is rigidly fixed by the number of hardware resources, i.e., $K_{\max}^{\mathrm{static}} = M$. For our 2D-DLRA, $K_{\max}^{\mathrm{dynamic}}$ is a function of the user activity factor $A_u$, as derived previously. Figure 10 presents a comprehensive analysis of this crucial metric.
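One illustrative way to compute the dynamic capacity and the SMG is to assume each user is independently active with probability A_u and require that the probability of the active population exceeding the M pooled resources stay below the QoS target. This binomial overflow model is a stand-in sketch, not the paper’s hierarchical queueing derivation:

```python
from math import comb

def overflow_prob(K, M, a):
    """P{more than M of K independent on/off users are active at once},
    with per-user activity factor a (binomial model)."""
    return sum(comb(K, k) * a**k * (1 - a)**(K - k)
               for k in range(M + 1, K + 1))

def dynamic_capacity(M, a, qos=0.01):
    """Largest user count K whose overflow probability meets the QoS target;
    the static architecture is capped at K = M by construction."""
    K = M
    while overflow_prob(K + 1, M, a) <= qos:
        K += 1
    return K

def smg(M, a, qos=0.01):
    """Statistical Multiplexing Gain: dynamic capacity over the static cap M."""
    return dynamic_capacity(M, a, qos) / M
```

For sparse activity (e.g., A_u = 0.1) this model already yields a multi-fold gain over the static cap K = M, consistent with the trend analyzed for Figure 10.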

5.7. Implementation Feasibility

Conventional channel emulators, including both commercial platforms and research-oriented FPGA implementations, predominantly adopt a tap-delay-line (TDL) based architecture. In such architectures, the input baseband signal first passes through multiple delay modules, each representing a distinct multipath component. The delayed signals are then multiplied by time-varying channel coefficients to emulate fading and Doppler effects, and finally summed through an adder tree to generate the composite channel output for a given physical port.
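The TDL pipeline described above can be sketched as a per-port sum of delayed, coefficient-weighted copies of the input. The sketch below assumes integer sample delays and static complex taps; a real emulator would use fractional delays and time-varying fading coefficients:

```python
def tdl_channel(signal, delays, coeffs):
    """Per-port tap-delay-line sketch: each multipath component delays the
    input by an integer number of samples and scales it by a complex
    coefficient; the final accumulation plays the role of the adder tree."""
    out = [0j] * len(signal)
    for d, c in zip(delays, coeffs):        # one (delay, coefficient) per path
        for n in range(d, len(signal)):
            out[n] += c * signal[n - d]     # delayed, weighted copy
    return out
```

For an impulse input, the output is simply the channel’s discrete impulse response, with one scaled spike per multipath tap.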
In traditional designs, the number of delay modules and the structure of the adder tree are statically configured and permanently bound to each output port. While this approach simplifies hardware control, it leads to substantial resource underutilization when the instantaneous multipath demand varies across ports or over time, which is particularly pronounced in large-scale RIS-assisted scenarios.
The proposed 2D-DLRA architecture preserves the fundamental TDL-based signal processing pipeline, while introducing dynamic logic-level reconfiguration of resource binding. Specifically, dynamic resource allocation is realized through two key mechanisms:
Dynamic Assignment of Delay Modules: Instead of statically associating a fixed set of delay modules with each output port, the proposed framework allows delay modules to be dynamically allocated to ports according to their instantaneous multipath requirements. Each delay module operates independently and can be logically bound to any port through configurable routing logic, without modifying its internal signal processing functionality.
Adaptive Adder Tree Reconfiguration: Since the number of active delay modules per port becomes time-varying, the corresponding summation structure must adapt accordingly. This is achieved by dynamically configuring the adder tree to aggregate only the currently assigned delay-module outputs for each port. From a hardware perspective, this can be implemented using multiplexers and configurable reduction trees, whose control signals are updated by the Channel Resource Manager (CRM).
Importantly, these modifications occur at the control and interconnection level, rather than within the signal processing datapath itself. The delay modules, coefficient multipliers, and basic arithmetic units remain unchanged. As a result, the proposed dynamic allocation mechanism can be integrated into existing channel emulator architectures with minimal impact on critical timing paths.
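The control-level binding described by the two mechanisms above can be sketched as a shared module pool with per-port binding tables; the class and method names are illustrative, not the paper’s CRM interface:

```python
class DelayModulePool:
    """Sketch of dynamic logic binding: a shared pool of delay modules is
    logically routed to ports on demand, and each port's adder tree sums
    only its currently bound modules. Names and structure are assumptions."""
    def __init__(self, n_modules):
        self.free = list(range(n_modules))   # unbound delay-module ids
        self.bound = {}                      # port -> bound module ids

    def bind(self, port, n_paths):
        """Bind n_paths modules to a port; False signals a pool shortage."""
        if len(self.free) < n_paths:
            return False
        self.bound.setdefault(port, []).extend(
            self.free.pop() for _ in range(n_paths))
        return True

    def release(self, port):
        """Return a port's modules to the pool when its paths expire."""
        self.free.extend(self.bound.pop(port, []))
```

Binding and release occur only when multipath components appear or expire, i.e., on the channel-evolution time scale rather than per sample.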
From an implementation perspective, the dynamic reconfiguration operates on a time scale aligned with channel evolution rather than sample-level processing. Multipath components typically persist for durations ranging from milliseconds to minutes, whereas FPGA signal processing operates at nanosecond clock periods. Therefore, the logic reconfiguration overhead associated with delay-module assignment and adder-tree selection is amortized over long-lived channel states and does not interfere with real-time signal processing.
This separation of time scales ensures that dynamic logic resource allocation can be realized without violating deterministic latency constraints, making the proposed architecture practically feasible for hardware-based channel emulation.

6. Conclusions

In this paper, we addressed the architectural bottleneck impeding the high-fidelity emulation of massive Reconfigurable Intelligent Surfaces (RISs) for 6G networks. We identified that traditional static resource allocation fails due to a two-dimensional inefficiency: the inability to scale to thousands of RIS elements (Dimension 1) and the waste of computational resources on physically sparse propagation paths (Dimension 2). To overcome this, we proposed and modeled the Two-Dimensional Dynamic Logic Resource Allocation (2D-DLRA) framework. This architecture introduces a paradigm shift towards hierarchical resource pooling, decoupling physical ports from computational logic. We developed a rigorous hierarchical queueing theory model to quantify the system’s performance, yielding a rich set of analytical metrics for capacity planning and bottleneck analysis. Additionally, a game-theoretic analysis provided novel insights into the equilibrium states of resource contention under heterogeneous workloads. The comprehensive validation results unequivocally demonstrate that the 2D-DLRA architecture achieves near-linear scalability and maximizes hardware utilization by effectively exploiting the sparsity of RIS channels. This work provides a viable, cost-effective path for the large-scale hardware-in-the-loop testing required to validate the next generation of smart radio environments.

Author Contributions

Methodology, D.F., H.Z. (Haobo Zhang), C.C., H.Z. (Hao Zhou), P.Z., G.W., C.L., J.Z. and B.A.; Resources, Z.S. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported by the Fundamental Research Funds for Central Universities under grant 2024JBGP006; the National Natural Science Foundation of China under Grants 62341127 and 62221001; the State Key Laboratory of Advanced Rail Autonomous Operation Research Funds (Contract No. RAO2023ZZ004); the Fundamental Research Funds for the Natural Science Foundation of Jiangsu Province, Major Project under Grant BK20212002; and the Key Research and Development Program of Zhejiang Province, China, under grant number 2023C01003.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Acknowledgments

Generative AI was used to provide drawing ideas and sentence corrections for the figures.

Conflicts of Interest

Author Dan Fei was employed by the company Nanjing Rongcai Transportation Technology Research Institute Co., Ltd. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Abbreviations

The following abbreviations and symbols are used in this manuscript:
RIS: Reconfigurable Intelligent Surface
2D-DLRA: Two-Dimensional Dynamic Logic Resource Allocation
CRM: Channel Resource Manager
MPU: Multipath Processing Unit
TDL: Tap Delay Line
NLOS: Non-Line-of-Sight
LOS: Line-of-Sight
FPGA: Field-Programmable Gate Array
CSI: Channel State Information
PDP: Power Delay Profile
R_p: Set of available port-level emulation resources
R_m: Set of available multipath-level emulation resources
P: Set of active multipath components
k: Number of simultaneously active multipath components
λ_mp: Mean arrival rate of multipath components
μ_mp: Service (departure) rate of multipath components
T: Monitoring period of the CRM control loop
S: Service time (lifetime) of a multipath component
ρ: Resource utilization ratio

Figure 1. Wireless channel emulation system reproduces the real environment with high fidelity.
Figure 2. Architecture of the Two-Dimensional Dynamic Logic Resource Allocation (2D-DLRA) System.
Figure 3. Stream-Based Pooled Architecture.
Figure 4. User Capacity vs. User Activity and Channel Complexity.
Figure 5. System Scalability and Economic Efficiency Analysis.
Figure 6. Dynamic Resource Allocation Heatmap under Mixed-Mode Workload, and System Performance Metrics over Time.
Figure 7. Relationship between System Load, Update Latency, and Dynamic Emulation Error.
Figure 8. Emulation Error Distribution under Heterogeneous User Competition.
Figure 9. CDF of End-to-End Emulation Error under Different Architectures.
Figure 10. User Capacity Comparison, and Gain Factor vs. User Activity.

Share and Cite

MDPI and ACS Style

Fei, D.; Zhang, H.; Chen, C.; Zhou, H.; Zheng, P.; Wang, G.; Li, C.; Zhang, J.; Song, Z.; Ai, B. Two-Dimensional Dynamic Logic Resource Allocation for Scalable RIS Channel Emulation. Sensors 2026, 26, 813. https://doi.org/10.3390/s26030813
