Article

Centralized Hierarchical Coded Caching Scheme for Two-Layer Network

1 Key Lab of Education Blockchain and Intelligent Technology, Ministry of Education, Guangxi Normal University, Guilin 541004, China
2 School of Mathematics and Statistics, Guangxi Normal University, Guilin 541004, China
* Author to whom correspondence should be addressed.
Entropy 2025, 27(3), 316; https://doi.org/10.3390/e27030316
Submission received: 23 February 2025 / Revised: 14 March 2025 / Accepted: 16 March 2025 / Published: 18 March 2025
(This article belongs to the Special Issue Network Information Theory and Its Applications)

Abstract: This paper considers a two-layer hierarchical network, where a server containing $N$ files is connected to $K_1$ mirrors and each mirror is connected to $K_2$ users. Each mirror and each user has a cache memory of size $M_1$ and $M_2$ files, respectively. The server can only broadcast to the mirrors, and each mirror can only broadcast to its connected users. For such a network, we propose a novel coded caching scheme based on two known placement delivery arrays (PDAs). To fully utilize the cache memory of both the mirrors and users, we first treat the mirrors and users as cache nodes of the same type; i.e., the cache memory of each mirror is regarded as an additional part of the connected users' cache. The server then broadcasts messages to all mirrors according to a $K_1K_2$-user PDA in the first layer. In the second layer, each mirror first cancels useless file packets (if any) in the received useful messages and forwards them to the connected users, so that each user can decode the requested packets not cached by the mirror; it then broadcasts coded subpackets to the connected users according to a $K_2$-user PDA, so that each user can decode the requested packets cached by the mirror. The proposed scheme is extended to a heterogeneous two-layer hierarchical network, where the number of users connected to different mirrors may differ. Numerical comparisons showed that the proposed scheme achieves lower coding delays than existing hierarchical coded caching schemes at most memory ratio points.

1. Introduction

With the development of the Internet, video and social media traffic occupy the main part of the network, leading to serious congestion. Since network traffic is unevenly distributed over time, caching technology has been proposed to alleviate this congestion. In [1], Maddah-Ali and Niesen (MN) proposed a coded caching scheme (referred to as the MN scheme) for the $(K, M, N)$ caching system: a server containing $N$ files is connected to $K$ users through an error-free shared link, and each user has an isolated cache of size $M$ files. A coded caching scheme operates in two phases. In the placement phase, each file is divided into $F$ packets, some of which are stored in each user's cache according to a certain strategy. The quantity $F$ is referred to as the subpacketization. In the delivery phase, network coding techniques are utilized to combine the packets requested by multiple users into multicast messages for transmission, such that each user can rebuild their desired file. The worst-case normalized transmission amount in the delivery phase is referred to as the transmission load $R$. Coded caching realizes multicast transmission for multiple users with different demands, which can effectively reduce the transmission load compared to uncoded caching. The MN scheme is optimal under uncoded placement when $K \le N$ [2,3,4], and is order optimal within a factor of 2 with respect to the information-theoretic lower bound [5]. However, the subpacketization of the MN scheme increases exponentially with the number of users. In order to reduce the subpacketization, Shanmugam et al. [6] proposed a grouping method, which divides the users into several groups and applies the MN scheme to each group. Yan et al. [7] proposed a combinatorial structure called a placement delivery array (PDA) to represent coded caching schemes. The MN scheme can be represented by a special class of PDA, referred to as the MN PDA.
Based on the concept of the PDA, several coded caching schemes were proposed in [7,8,9,10,11,12]. Other combinatorial structures, including linear block codes [13], special $(6,3)$-free hypergraphs [14], $(r,t)$ Ruzsa–Szemerédi graphs [15], projective geometry [16], combinatorial designs [17], and rainbow structures [18], have also been utilized in the design of coded caching schemes.

1.1. Two-Layer Hierarchical Network Model and Related Works

In practical applications, most caching systems consist of multiple layers. This paper considers the $(K_1, K_2, M_1, M_2, N)$ two-layer hierarchical network proposed in [19], as shown in Figure 1. A single server is connected to $K_1$ mirrors through an error-free broadcast link. Likewise, each mirror is connected to $K_2$ users, so there are $K = K_1K_2$ users in total. The server contains $N$ equal-size files denoted by $W_1, \ldots, W_N$. Each mirror and each user have a cache memory of size $M_1$ and $M_2$ files, respectively. The $j$-th user attached to the $i$-th mirror is denoted by user $(i,j)$, where $i \in \{1, 2, \ldots, K_1\}$ and $j \in \{1, 2, \ldots, K_2\}$.
The network operates in two phases:
  • Placement Phase: Each file $W_n$ is split into $F$ packets of equal size, i.e., $W_n = \{W_{n,f} \mid f \in \{1, 2, \ldots, F\}\}$; then, the mirrors and users cache some packets of each file. Denote the content cached by the $i$-th mirror and by user $(i,j)$ by $\mathcal{M}_i$ and $\mathcal{U}_{(i,j)}$, respectively. In this phase, the server has no knowledge of the users' future requests.
  • Delivery Phase: Each user randomly requests a file from the server. Assuming that user $(i,j)$ requests the file $W_{d_{(i,j)}}$, the request vector is denoted by $\mathbf{d} = (d_{(1,1)}, d_{(1,2)}, \ldots, d_{(1,K_2)}, \ldots, d_{(K_1,K_2)})$. The server broadcasts coded messages of total size $R_1$ files to all mirrors, and each mirror broadcasts coded messages of total size $R_2$ files to all its attached users, such that each user can rebuild their desired file. $R_1$ is referred to as the transmission load from the server to the mirrors, and $R_2$ as the transmission load from each mirror to its attached users.
Notice that a parallel transmission may exist between the server and mirrors due to the orthogonal links between the two layers. Thus, in a two-layer hierarchical network, if the server and all mirrors concurrently send symbols through all transmission slots, then the corresponding coding delay is
$$T_{\max} = \max\{R_1, R_2\}.$$
If there exists a mirror which starts transmission after the server finishes its transmission, the coding delay is
$$T_{\mathrm{sum}} = R_1 + R_2.$$
The goal is to design coded caching schemes with a coding delay $T_{\max}$ or $T_{\mathrm{sum}}$ that is as small as possible.
For the two-layer hierarchical network, Karamchandani et al. [19] divided each file in the server into an $\alpha$ fraction and a $(1-\alpha)$ fraction, and divided each user's cache memory into a $\beta$ fraction and a $(1-\beta)$ fraction. By applying the decentralized MN scheme to the two parts of the files separately, the achieved transmission loads are as follows:
$$R_1 = \alpha K_2 \, r_d\!\left(\frac{M_1}{\alpha}, K_1\right) + (1-\alpha)\, r_d\!\left(\frac{(1-\beta)M_2}{1-\alpha}, K_1K_2\right),$$
$$R_2 = \alpha\, r_d\!\left(\frac{\beta M_2}{\alpha}, K_2\right) + (1-\alpha)\, r_d\!\left(\frac{(1-\beta)M_2}{1-\alpha}, K_2\right),$$
where
$$r_d(M, K) = \begin{cases} \left(\frac{N}{M} - 1\right)\left(1 - \left(1 - \frac{M}{N}\right)^K\right) & \text{if } 0 < \frac{M}{N} < 1, \\ K & \text{if } \frac{M}{N} = 0, \\ 0 & \text{otherwise.} \end{cases}$$
It was shown that R 1 and R 2 are simultaneously approximately minimized when α = α * and β = β * , where
$$(\alpha^*, \beta^*) = \begin{cases} \left(\frac{M_1}{N}, \frac{M_1}{N}\right) & \text{if } M_1 + M_2K_2 \ge N \text{ and } 0 \le M_1 \le \frac{N}{4}, \\ \left(\frac{M_1}{M_1 + M_2K_2}, 0\right) & \text{if } M_1 + M_2K_2 < N, \\ \left(\frac{M_1}{N}, \frac{1}{4}\right) & \text{if } M_1 + M_2K_2 \ge N \text{ and } \frac{N}{4} < M_1 \le N. \end{cases}$$
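As a sanity check, the load expressions of [19] above can be evaluated with exact rational arithmetic. The following Python sketch is our illustration, not part of the paper; the function names are ours and the code assumes the reconstructed formulas above:

```python
from fractions import Fraction

def r_d(M, K, N):
    """Decentralized MN rate r_d(M, K) from [19]:
    (N/M - 1)(1 - (1 - M/N)^K) if 0 < M/N < 1, K if M/N = 0, else 0."""
    m = Fraction(M) / N
    if m == 0:
        return Fraction(K)
    if m >= 1:
        return Fraction(0)
    return (1 / m - 1) * (1 - (1 - m) ** K)

def loads_19(K1, K2, M1, M2, N, alpha, beta):
    """Transmission loads R1, R2 of the layered scheme of [19]
    for a given file split alpha and user-cache split beta (0 < alpha < 1)."""
    a, b = Fraction(alpha), Fraction(beta)
    R1 = a * K2 * r_d(M1 / a, K1, N) \
        + (1 - a) * r_d((1 - b) * M2 / (1 - a), K1 * K2, N)
    R2 = a * r_d(b * M2 / a, K2, N) \
        + (1 - a) * r_d((1 - b) * M2 / (1 - a), K2, N)
    return R1, R2
```

For instance, with $K_1 = K_2 = 2$, $M_1 = M_2 = 1$, $N = 2$, and $\alpha = \beta = 1/2$, the sketch returns exact fractions for $R_1$ and $R_2$.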
When $\frac{M_1}{N} \in \{0, \frac{1}{K_1}, \ldots, \frac{K_1-1}{K_1}, 1\}$ and $\frac{M_2}{N} \in \{0, \frac{1}{K_2}, \ldots, \frac{K_2-1}{K_2}, 1\}$, Wang et al. [20] proposed a centralized coded caching scheme that leverages idle transmission time resources by constructing concurrent transmissions between the two layers, achieving the following maximum coding delay:
$$T_{\max} = \alpha R_{s1} + (1-\alpha) R_{s2},$$
where
$$R_{s1} = r_c\!\left(\frac{M_1}{\alpha}, K_1\right) r_c\!\left(\frac{\beta M_2}{\alpha}, K_2\right) + \min\!\left(\frac{M_1}{\alpha N}, 1\right) r_c\!\left(\frac{\beta M_2}{\alpha}, K_2\right),$$
$$R_{s2} = r_c\!\left(\frac{(1-\beta)M_2}{1-\alpha}, K_1K_2\right),$$
and $r_c(M, K) = \frac{K(1 - M/N)}{1 + KM/N}$. It was shown that $T_{\max}$ is approximately optimal when $\alpha = \alpha_c$ and $\beta = \beta_c$, where
$$\alpha_c = \beta_c = \min\!\left(\frac{2K_1K_2M_1M_2 + M_1N + K_1M_1N}{2K_1K_2M_2N - 2K_2M_2N},\ 1\right).$$
Zhang et al. [21] improved the scheme in [19] by jointly designing the data placement and delivery in two layers, effectively avoiding the transmission of any content already stored by the users. On this basis, Kong et al. [22] defined a combinatorial structure called a hierarchical placement delivery array (HPDA) to design hierarchical coded caching schemes, and proposed a transformation from two PDAs to an HPDA. When the two PDAs are chosen from the MN PDA, the resulting scheme is shown in Lemma 1, which aligns with the centralized scheme in [23], and achieves a smaller R 1 and the same R 2 compared to the scheme in [21]. Based on the definition of HPDA, Rashid Ummer et al. [24] designed two hierarchical coded caching schemes via t-designs. In addition, Pandey et al. [25] considered a hierarchical coded caching problem with coded placement. A wireless scenario was considered in [26,27], where each mirror connects to users via a wireless channel.
Lemma 1
([22]). When $\frac{M_1}{N} \in \{0, \frac{1}{K_1}, \ldots, \frac{K_1-1}{K_1}, 1\}$ and $\frac{M_2}{N} \in \{0, \frac{1}{K_2}, \ldots, \frac{K_2-1}{K_2}, 1\}$, there exists a $(K_1, K_2, M_1, M_2, N)$ hierarchical coded caching scheme with transmission loads $R_1 = r_c(M_1, K_1) \cdot r_c(M_2, K_2)$ and $R_2 = r_c(M_2, K_2)$, where $r_c(M, K) = \frac{K(1 - M/N)}{1 + KM/N}$.
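Lemma 1's loads are simple closed forms and can be evaluated exactly; the following Python sketch is our illustration (the function names are ours, not from [22]):

```python
from fractions import Fraction

def r_c(M, K, N):
    """Centralized MN load r_c(M, K) = K(1 - M/N) / (1 + K*M/N)."""
    m = Fraction(M) / N
    return K * (1 - m) / (1 + K * m)

def lemma1_loads(K1, K2, M1, M2, N):
    """Loads of the HPDA-based scheme of Lemma 1 ([22]):
    R1 = r_c(M1, K1) * r_c(M2, K2), R2 = r_c(M2, K2)."""
    R2 = r_c(M2, K2, N)
    return r_c(M1, K1, N) * R2, R2
```

For example, with $K_1 = K_2 = 2$, $M_1 = M_2 = 1$, and $N = 2$, each $r_c$ factor is $1/2$, giving $R_1 = 1/4$ and $R_2 = 1/2$.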

1.2. Contribution and Organization

In this paper, we consider the $(K_1, K_2, M_1, M_2, N)$ two-layer hierarchical network proposed in [19]. Most existing schemes focus on creating as many multicast opportunities as possible among users connected to the same mirror, thus reducing the load from the mirror to its connected users (i.e., $R_2$), without fully leveraging the mirror and user caches to create multicast opportunities among all users. We aim to design a coded caching scheme for a two-layer hierarchical network that further reduces the coding delays $T_{\max}$ and $T_{\mathrm{sum}}$. To this end, we first treat the mirrors and users as cache nodes of the same type, i.e., the cache memory of each mirror is regarded as an additional part of its connected users' cache; then, the server broadcasts messages to all mirrors according to a $K_1K_2$-user MN PDA in the first layer. In the second layer, each mirror first cancels the useless packets (if any) in the received useful messages and forwards them to its connected users, such that each user can decode the requested packets not cached by the mirror; it then broadcasts coded subpackets to its connected users according to a $K_2$-user MN PDA, such that each user can decode the requested packets cached by the mirror. The proposed scheme is extended to a heterogeneous two-layer hierarchical network, where the number of users connected to each mirror may differ. Performance analysis showed that both coding delays $T_{\max}$ and $T_{\mathrm{sum}}$ of the proposed scheme are lower than those of existing schemes.
The rest of this paper is organized as follows: Section 2 introduces the definition of PDA and related results. Section 3 shows the main results of this paper. Section 4 presents an illustrative example. Performance analysis is provided in Section 5, and Section 6 concludes this paper.
Notations:
  • For any integers $a$ and $b$ with $a < b$, we define $[a, b] \triangleq \{a, a+1, \ldots, b\}$ and $[a, b) \triangleq \{a, a+1, \ldots, b-1\}$.
  • For any positive integer $c$, we define $[c] \triangleq \{1, 2, \ldots, c\}$.
  • For two integers $x, y$ with $0 \le y \le x$, $\binom{x}{y}$ is the binomial coefficient defined as $\binom{x}{y} = \frac{x!}{y!(x-y)!}$, and we let $\binom{x}{y} = 0$ if $x < 0$, $x < y$, or $y < 0$.
  • For any $F \times K$ array $\mathbf{P}$, $\mathbf{P}(i,j)$ represents the element in the $i$-th row and $j$-th column of $\mathbf{P}$, where $i \in [F]$ and $j \in [K]$.
To improve readability, we add Table 1 to summarize frequently used symbols.

2. Placement Delivery Array

This section reviews the definition of PDA and the relationship between a PDA and a coded caching scheme.
Definition 1
([7]). For any positive integers $K$, $F$, $Z$, $S$, an $F \times K$ array $\mathbf{P}$ with alphabet $\{*\} \cup [S]$ is called a $(K, F, Z, S)$ placement delivery array (PDA) if it satisfies the following conditions:
C1. 
The symbol “∗” appears Z times in each column;
C2. 
Each integer $s \in [S]$ occurs at least once in the array;
C3. 
For any two distinct entries $\mathbf{P}(j_1, k_1)$ and $\mathbf{P}(j_2, k_2)$, if $\mathbf{P}(j_1, k_1) = \mathbf{P}(j_2, k_2) = s$, then $j_1 \ne j_2$, $k_1 \ne k_2$, and $\mathbf{P}(j_1, k_2) = \mathbf{P}(j_2, k_1) = *$.
If each integer occurs exactly g times in a PDA, the PDA is called a g-regular PDA or g-PDA.
In a PDA, the symbol "∗" in the $j$-th row and $k$-th column indicates that the $k$-th user stores the $j$-th packet of all files in the placement phase; an integer in that position indicates that the $k$-th user does not store the $j$-th packet of all files. In the delivery phase, the requested packets represented by the same integer in the PDA are XORed together and sent to the users. Condition C1 implies that the memory ratio of each user is $Z/F$. Condition C3 ensures that each user can decode its requested packet, since it has cached all the other packets in the received coded message. Condition C2 implies that the number of coded messages broadcast by the server is exactly $S$, each of size $1/F$ of a file, so the transmission load is $R = S/F$.
Lemma 2
([7]). Given a $(K, F, Z, S)$ PDA, there exists a $(K, M, N)$ coded caching scheme with user memory ratio $\frac{M}{N} = \frac{Z}{F}$, subpacketization $F$, and transmission load $R = \frac{S}{F}$.
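Conditions C1–C3 can be checked mechanically for any candidate array. The following Python sketch is our illustration (the function name and the encoding of "∗" as the string `'*'` are ours):

```python
from itertools import combinations

def is_pda(P, K, F, Z, S):
    """Check conditions C1-C3 of a (K, F, Z, S) placement delivery array.

    P is an F x K list of lists whose entries are '*' or integers in [1, S].
    """
    # C1: '*' appears exactly Z times in each column.
    for k in range(K):
        if sum(P[f][k] == '*' for f in range(F)) != Z:
            return False
    # C2: every integer 1..S occurs at least once.
    ints = {P[f][k] for f in range(F) for k in range(K)} - {'*'}
    if ints != set(range(1, S + 1)):
        return False
    # C3: two equal integers must lie in distinct rows and columns,
    # with '*' at both "crossing" positions, so each user can cancel
    # the interfering packet from its own cache.
    cells = [(f, k) for f in range(F) for k in range(K) if P[f][k] != '*']
    for (f1, k1), (f2, k2) in combinations(cells, 2):
        if P[f1][k1] == P[f2][k2]:
            if f1 == f2 or k1 == k2:
                return False
            if P[f1][k2] != '*' or P[f2][k1] != '*':
                return False
    return True
```

By Lemma 2, any array passing this check yields a scheme with memory ratio $Z/F$ and load $S/F$.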
When $t = \frac{KM}{N} \in [0, K)$ is an integer, the MN PDA corresponding to the MN scheme is defined as follows: for any $k \in [K]$ and any $\mathcal{T} \subseteq [K]$ with $|\mathcal{T}| = t$,
$$\mathbf{P}(\mathcal{T}, k) = \begin{cases} * & \text{if } k \in \mathcal{T}, \\ f_{t+1}(\mathcal{T} \cup \{k\}) & \text{if } k \notin \mathcal{T}, \end{cases}$$
where $f_{t+1}$ is a bijection from $\{\mathcal{S} \mid \mathcal{S} \subseteq [K], |\mathcal{S}| = t+1\}$ to $\left[\binom{K}{t+1}\right]$. For example, when $K = 4$ and $\frac{M}{N} = \frac{1}{2}$ (so $t = 2$), the MN PDA is as follows:
$$\mathbf{P} = \begin{pmatrix} * & * & 1 & 2 \\ * & 1 & * & 3 \\ * & 2 & 3 & * \\ 1 & * & * & 4 \\ 2 & * & 4 & * \\ 3 & 4 & * & * \end{pmatrix}.$$
Lemma 3
([7]). For any positive integers $K$ and $t$ with $t < K$, the MN PDA is a $(t+1)$-regular $\left(K, \binom{K}{t}, \binom{K-1}{t-1}, \binom{K}{t+1}\right)$ PDA.
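The MN PDA can be generated directly from the definition by indexing rows with $t$-subsets of $[K]$ and taking $f_{t+1}$ to be the index in lexicographic order. A Python sketch (our illustration):

```python
from itertools import combinations

def mn_pda(K, t):
    """Build the (t+1)-regular MN PDA, a (K, C(K,t), C(K-1,t-1), C(K,t+1)) PDA.

    Rows are indexed by the t-subsets T of [K] in lexicographic order;
    P(T, k) = '*' if k in T, else the index of T ∪ {k} under f_{t+1}.
    """
    # f_{t+1}: lexicographic bijection from (t+1)-subsets of [K] to [C(K,t+1)]
    f = {T: s + 1 for s, T in enumerate(combinations(range(1, K + 1), t + 1))}
    P = []
    for T in combinations(range(1, K + 1), t):
        P.append(['*' if k in T else f[tuple(sorted(set(T) | {k}))]
                  for k in range(1, K + 1)])
    return P
```

Running `mn_pda(4, 2)` reproduces the $6 \times 4$ array displayed above.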

3. Main Results

In this section, we propose a coded caching scheme for a two-layer hierarchical network using two MN PDAs. The first MN PDA is for all $K_1K_2$ users, which fully utilizes the cache memory of all mirrors and users to create multicast opportunities among all users. The second MN PDA is for each group of $K_2$ users connected to the same mirror, which fully utilizes user caches to create multicast opportunities within each group. The main result is as follows:
Theorem 1.
In the $(K_1, K_2, M_1, M_2, N)$ two-layer hierarchical network, let $K = K_1K_2$. For any $\mu \in [K_2, K]$ and $t \in [0, K_2]$, there exists a coded caching scheme with
$$\frac{M_1}{N} = \frac{\binom{K-K_2}{\mu-K_2}}{\binom{K}{\mu}}, \qquad \frac{M_2}{N} = \frac{\mu}{K} - \left(1 - \frac{t}{K_2}\right)\frac{\binom{K-K_2}{\mu-K_2}}{\binom{K}{\mu}},$$
$$R_1 = \frac{K-\mu}{\mu+1}, \qquad R_2 = \frac{\binom{K}{\mu+1} - \binom{K-K_2}{\mu+1}}{\binom{K}{\mu}} + \frac{\binom{K-K_2}{\mu-K_2}}{\binom{K}{\mu}} \cdot \frac{K_2-t}{t+1}.$$
Proof. 
For any $\mu \in [K_2, K]$ and $t \in [0, K_2]$, there exist a $(\mu+1)$-regular $(K, F, Z, S)$ MN PDA $\mathbf{G}$ and a $(t+1)$-regular $(K_2, F', Z', S')$ MN PDA $\mathbf{P}$, where $F = \binom{K}{\mu}$, $Z = \binom{K-1}{\mu-1}$, $S = \binom{K}{\mu+1}$, $F' = \binom{K_2}{t}$, $Z' = \binom{K_2-1}{t-1}$, and $S' = \binom{K_2}{t+1}$. The PDA $\mathbf{G}$ is divided into $K_1$ subarrays, i.e., $\mathbf{G} = [\mathbf{G}_1, \ldots, \mathbf{G}_{K_1}]$, where $\mathbf{G}_{k_1}$ contains $K_2$ columns for each $k_1 \in [K_1]$. The hierarchical coded caching scheme in Theorem 1 is generated based on the PDAs $\mathbf{G}$ and $\mathbf{P}$ as follows:
  • Placement Phase: Each file $W_i$ is split into $F$ packets, i.e., $W_i = \{W_{i,j} \mid j \in [F]\}$. The $j$-th packet of each file is cached by the $k_1$-th mirror if each element of the $j$-th row in the subarray $\mathbf{G}_{k_1}$ is "∗", i.e.,
    $$\mathcal{M}_{k_1} = \{W_{i,j} \mid \mathbf{G}_{k_1}(j, k_2) = * \ \forall k_2 \in [K_2],\ i \in [N]\}.$$
    Since the number of rows in each $\mathbf{G}_{k_1}$ consisting entirely of "∗"s is $\binom{K-K_2}{\mu-K_2}$, the memory ratio of each mirror is
    $$\frac{M_1}{N} = \frac{\binom{K-K_2}{\mu-K_2}}{F} = \frac{\binom{K-K_2}{\mu-K_2}}{\binom{K}{\mu}}.$$
    The cached content of user $(k_1, k_2)$ consists of two parts, i.e.,
    $$\mathcal{U}_{(k_1,k_2)} = \mathcal{U}^1_{(k_1,k_2)} \cup \mathcal{U}^2_{(k_1,k_2)},$$
    where $\mathcal{U}^1_{(k_1,k_2)}$ is not cached by the $k_1$-th mirror and $\mathcal{U}^2_{(k_1,k_2)}$ is a subset of the content cached by the $k_1$-th mirror. Specifically, the $j$-th packet of each file is cached by user $(k_1, k_2)$ if it is not cached by the $k_1$-th mirror and the element at the $j$-th row and $k_2$-th column of the subarray $\mathbf{G}_{k_1}$ is "∗", i.e.,
    $$\mathcal{U}^1_{(k_1,k_2)} = \{W_{i,j} \mid \mathbf{G}_{k_1}(j, k_2) = *,\ W_{i,j} \notin \mathcal{M}_{k_1},\ i \in [N]\}.$$
    Each packet cached by the $k_1$-th mirror is further divided into $F'$ subpackets, i.e., for any $W_{i,j} \in \mathcal{M}_{k_1}$, we have $W_{i,j} = \{W_{i,j,h} \mid h \in [F']\}$. The subpacket $W_{i,j,h}$ is cached by user $(k_1, k_2)$ if the corresponding packet $W_{i,j}$ is cached by the $k_1$-th mirror and the element at the $h$-th row and $k_2$-th column of $\mathbf{P}$ is "∗", i.e.,
    $$\mathcal{U}^2_{(k_1,k_2)} = \{W_{i,j,h} \mid W_{i,j} \in \mathcal{M}_{k_1},\ \mathbf{P}(h, k_2) = *,\ i \in [N]\}.$$
    Hence, the memory ratio of each user is
    $$\frac{M_2}{N} = \frac{Z - \binom{K-K_2}{\mu-K_2}}{F} + \frac{M_1}{N} \cdot \frac{Z'}{F'} = \frac{\mu}{K} - \left(1 - \frac{t}{K_2}\right)\frac{\binom{K-K_2}{\mu-K_2}}{\binom{K}{\mu}}.$$
  • Delivery Phase: Each user requests a file from the server; assume that user $(k_1, k_2)$ requests the file $W_{d_{(k_1,k_2)}}$. The transmission from the server to the mirrors is according to the PDA $\mathbf{G}$. Specifically, for any integer $s \in [S]$, the server sends
    $$X_s = \bigoplus_{\substack{\mathbf{G}(j,k) = s,\ j \in [F],\ k \in [K], \\ k_1 = \lceil k/K_2 \rceil,\ k_2 = k - (k_1-1)K_2}} W_{d_{(k_1,k_2)},\, j}$$
    to the mirrors. Therefore, the transmission load from the server to the mirrors is
    $$R_1 = \frac{S}{F} = \frac{K-\mu}{\mu+1}.$$
    The transmission from each mirror to the attached users consists of two parts. First, each mirror cancels useless packets (if any) in the received useful messages using its cached content, then forwards them to the attached users. The number of messages forwarded by each mirror is $\binom{K}{\mu+1} - \binom{K-K_2}{\mu+1}$. Second, each mirror transmits its cached contents to its attached users according to the PDA $\mathbf{P}$. Specifically, for any packet index $j$ such that the $j$-th packet of each file is cached by the $k_1$-th mirror, i.e., $W_{i,j} \in \mathcal{M}_{k_1}$ for $i \in [N]$, and for any integer $s' \in [S']$, the $k_1$-th mirror sends
    $$Y^{k_1}_{s',j} = \bigoplus_{\mathbf{P}(h,k_2) = s',\ h \in [F'],\ k_2 \in [K_2]} W_{d_{(k_1,k_2)},\, j,\, h}$$
    to all its attached users. Hence, the transmission load from each mirror to its attached users is
    $$R_2 = \frac{\binom{K}{\mu+1} - \binom{K-K_2}{\mu+1}}{F} + \frac{\binom{K-K_2}{\mu-K_2} S'}{F \cdot F'} = \frac{\binom{K}{\mu+1} - \binom{K-K_2}{\mu+1}}{\binom{K}{\mu}} + \frac{\binom{K-K_2}{\mu-K_2}}{\binom{K}{\mu}} \cdot \frac{K_2-t}{t+1}.$$
By using the messages forwarded by the k 1 -th mirror, user ( k 1 , k 2 ) can recover each requested packet not cached by the k 1 -th mirror, since G is a PDA. By using the messages in (8), user ( k 1 , k 2 ) can recover each requested packet cached by the k 1 -th mirror, since P is also a PDA. Hence, each user can recover its desired file. □
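The closed forms of Theorem 1 are easy to evaluate with exact binomial arithmetic. The following Python sketch is our illustration (function name ours); it reproduces the parameter point of the example in Section 4 ($K_1 = K_2 = 2$, $\mu = 2$, $t = 1$):

```python
from fractions import Fraction
from math import comb

def theorem1(K1, K2, mu, t):
    """Memory ratios M1/N, M2/N and loads R1, R2 of the Theorem 1 scheme.

    Requires mu in [K2, K] and t in [0, K2]; math.comb(n, k) returns 0
    when k > n, matching the binomial convention of the paper.
    """
    K = K1 * K2
    F = comb(K, mu)
    m1 = Fraction(comb(K - K2, mu - K2), F)            # M1/N
    m2 = Fraction(mu, K) - (1 - Fraction(t, K2)) * m1  # M2/N
    R1 = Fraction(K - mu, mu + 1)                      # = C(K,mu+1)/C(K,mu)
    R2 = (Fraction(comb(K, mu + 1) - comb(K - K2, mu + 1), F)
          + m1 * Fraction(K2 - t, t + 1))
    return m1, m2, R1, R2
```

For $K_1 = K_2 = 2$, $\mu = 2$, $t = 1$, this gives $M_1/N = 1/6$, $M_2/N = 5/12$, and $R_1 = 2/3$.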
The scheme in Theorem 1 can be extended to a heterogeneous scenario where the number of users connected to each mirror may vary. More precisely, there is a central server containing $N$ files connected to $A$ mirrors. For any $i \in [A]$, there are $L_i$ users connected to the $i$-th mirror. For any $j \in [L_i]$, the $j$-th user connected to the $i$-th mirror (denoted by user $(i,j)$) has a cache memory of size $M_{(i,j)}$ files, and the $i$-th mirror has a cache memory of size $M_i$ files. This scenario is called a heterogeneous two-layer hierarchical network.
For the heterogeneous two-layer hierarchical network, we choose a $(K, F, Z, S)$ MN PDA $\mathbf{G}$, where $K = \sum_{i=1}^{A} L_i$, $F = \binom{K}{\mu}$, $Z = \binom{K-1}{\mu-1}$, $S = \binom{K}{\mu+1}$, and $\mu \in [\max_{i \in [A]}(L_i), K]$. $\mathbf{G}$ is divided into $A$ subarrays, i.e., $\mathbf{G} = [\mathbf{G}_1, \ldots, \mathbf{G}_A]$, where $\mathbf{G}_i$ contains $L_i$ columns. For any $i \in [A]$, we choose an $(L_i, F_i, Z_i, S_i)$ MN PDA $\mathbf{P}_i$, where $F_i = \binom{L_i}{t_i}$, $Z_i = \binom{L_i-1}{t_i-1}$, $S_i = \binom{L_i}{t_i+1}$, and $t_i \in [0, L_i]$.
In the placement phase, each file $W_n$ is split into $F$ packets, i.e., $W_n = \{W_{n,f} \mid f \in [F]\}$. The $f$-th packet of each file is cached by the $i$-th mirror if the $f$-th row of the subarray $\mathbf{G}_i$ consists entirely of "∗"s. Then, the memory ratio of the $i$-th mirror is
$$\frac{M_i}{N} = \frac{\binom{K-L_i}{\mu-L_i}}{\binom{K}{\mu}}, \quad i \in [A].$$
The cached content of user $(i,j)$ includes two parts. The first part is not cached by the $i$-th mirror: the $f$-th packet of each file is cached by user $(i,j)$ if it is not cached by the $i$-th mirror and $\mathbf{G}_i(f,j) = *$. The second part is a subset of the content cached by the $i$-th mirror: each packet $W_{n,f}$ cached by the $i$-th mirror is further divided into $F_i$ subpackets, i.e., $W_{n,f} = \{W_{n,f,h} \mid h \in [F_i]\}$, and the $h$-th subpacket of $W_{n,f}$ is stored by user $(i,j)$ if $\mathbf{P}_i(h,j) = *$. Therefore, the total memory ratio of user $(i,j)$ is
$$\frac{M_{(i,j)}}{N} = \frac{\binom{K-1}{\mu-1} - \binom{K-L_i}{\mu-L_i}}{\binom{K}{\mu}} + \frac{\binom{K-L_i}{\mu-L_i}}{\binom{K}{\mu}} \cdot \frac{t_i}{L_i} = \frac{\mu}{K} - \left(1 - \frac{t_i}{L_i}\right)\frac{\binom{K-L_i}{\mu-L_i}}{\binom{K}{\mu}}.$$
In the delivery phase, the server broadcasts $S$ coded messages to the mirrors according to the PDA $\mathbf{G}$; thus, the transmission load from the server to the mirrors is
$$R_1 = \frac{K-\mu}{\mu+1}.$$
For any $i \in [A]$, the $i$-th mirror first cancels useless packets (if any) in the received useful messages by using its cached content and forwards them to the attached users. The number of messages forwarded by the $i$-th mirror is $\binom{K}{\mu+1} - \binom{K-L_i}{\mu+1}$. Then, the $i$-th mirror broadcasts $S_i$ coded subpackets to the attached users according to $\mathbf{P}_i$. Hence, the total transmission load from the $i$-th mirror to its connected users is
$$R_2^i = \frac{\binom{K}{\mu+1} - \binom{K-L_i}{\mu+1}}{\binom{K}{\mu}} + \frac{\binom{K-L_i}{\mu-L_i}}{\binom{K}{\mu}} \cdot \frac{L_i - t_i}{t_i+1}.$$
Since $\mathbf{G}$ and $\mathbf{P}_i$ ($i \in [A]$) are all PDAs, each user can decode its desired file. The following result is obtained:
Theorem 2.
For the heterogeneous two-layer hierarchical network, there exists a coded caching scheme with the memory ratio of the $i$-th mirror
$$\frac{M_i}{N} = \frac{\binom{K-L_i}{\mu-L_i}}{\binom{K}{\mu}},$$
the memory ratio of the $j$-th user connected to the $i$-th mirror
$$\frac{M_{(i,j)}}{N} = \frac{\mu}{K} - \left(1 - \frac{t_i}{L_i}\right)\frac{\binom{K-L_i}{\mu-L_i}}{\binom{K}{\mu}},$$
the transmission load from the server to the mirrors
$$R_1 = \frac{K-\mu}{\mu+1},$$
and the transmission load from the $i$-th mirror to the attached users
$$R_2^i = \frac{\binom{K}{\mu+1} - \binom{K-L_i}{\mu+1}}{\binom{K}{\mu}} + \frac{\binom{K-L_i}{\mu-L_i}}{\binom{K}{\mu}} \cdot \frac{L_i - t_i}{t_i+1},$$
where $i \in [A]$, $j \in [L_i]$, $K = \sum_{i=1}^{A} L_i$, $\mu \in [\max_{i \in [A]}(L_i), K]$, and $t_i \in [0, L_i]$.
Note that when $A = K_1$, $L_1 = L_2 = \cdots = L_A = K_2$, and $t_1 = t_2 = \cdots = t_A = t$, the scheme in Theorem 2 reduces to the scheme in Theorem 1.
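The per-mirror loads of Theorem 2, and the reduction to the homogeneous case just noted, can be checked numerically; a Python sketch (our illustration, with hypothetical function names):

```python
from fractions import Fraction
from math import comb

def theorem2_loads(L, mu, t_list):
    """Loads R1 and [R2_1, ..., R2_A] of Theorem 2 for mirror user
    counts L = [L_1, ..., L_A], with mu >= max(L) and t_i in [0, L_i]."""
    K = sum(L)
    F = comb(K, mu)
    R1 = Fraction(K - mu, mu + 1)
    R2 = [Fraction(comb(K, mu + 1) - comb(K - Li, mu + 1), F)
          + Fraction(comb(K - Li, mu - Li), F) * Fraction(Li - ti, ti + 1)
          for Li, ti in zip(L, t_list)]
    return R1, R2
```

With `L = [2, 2]`, `mu = 2`, and `t_list = [1, 1]` (the homogeneous point of the Section 4 example), every $R_2^i$ coincides with the single $R_2$ of Theorem 1.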

4. An Illustrative Example for Theorem 1

For the $(K_1, K_2, M_1, M_2, N)$ two-layer hierarchical network with $K_1 = 2$, $K_2 = 2$, $M_1/N = 1/6$, and $M_2/N = 5/12$, we have $K = K_1K_2 = 4$. By choosing $\mu = 2$, the $(K, F, Z, S)$ MN PDA $\mathbf{G}$ is shown in (9), where $F = 6$, $Z = 3$, and $S = 4$. $\mathbf{G}$ is divided equally into $K_1 = 2$ subarrays, each with $K_2 = 2$ columns, i.e.,
$$\mathbf{G} = [\mathbf{G}_1, \mathbf{G}_2] = \begin{pmatrix} * & * & 1 & 2 \\ * & 1 & * & 3 \\ * & 2 & 3 & * \\ 1 & * & * & 4 \\ 2 & * & 4 & * \\ 3 & 4 & * & * \end{pmatrix}.$$
By choosing $t = 1$, the $(K_2, F', Z', S')$ MN PDA $\mathbf{P}$ is shown in (10), where $F' = 2$, $Z' = 1$, and $S' = 1$:
$$\mathbf{P} = \begin{pmatrix} * & 1 \\ 1 & * \end{pmatrix}.$$
The placement and delivery phases of the hierarchical coded caching scheme are generated by the PDAs $\mathbf{G}$ in (9) and $\mathbf{P}$ in (10), as follows:
  • Placement Phase: Each file $W_i$ is split into $F = 6$ packets, i.e., $W_i = \{W_{i,j} \mid j \in [6]\}$. The $j$-th packet of each file is cached by the $k_1$-th mirror if each element of the $j$-th row in the subarray $\mathbf{G}_{k_1}$ is "∗". From (3) and (9), the cached content of each mirror is as follows:
    $$\mathcal{M}_1 = \{W_{i,1} \mid i \in [N]\}, \quad \mathcal{M}_2 = \{W_{i,6} \mid i \in [N]\}.$$
    Thus, the memory ratio of each mirror is $\frac{M_1}{N} = \frac{1}{6}$.
    The cached content of user $(k_1, k_2)$ consists of two parts, i.e., $\mathcal{U}_{(k_1,k_2)} = \mathcal{U}^1_{(k_1,k_2)} \cup \mathcal{U}^2_{(k_1,k_2)}$, where the first part $\mathcal{U}^1_{(k_1,k_2)}$ is not cached by the $k_1$-th mirror, and the second part $\mathcal{U}^2_{(k_1,k_2)}$ is a subset of the content cached by the $k_1$-th mirror. From (5) and (9), the first part of the cached content of each user is as follows:
    $$\mathcal{U}^1_{(1,1)} = \{W_{i,2}, W_{i,3} \mid i \in [N]\}, \quad \mathcal{U}^1_{(1,2)} = \{W_{i,4}, W_{i,5} \mid i \in [N]\},$$
    $$\mathcal{U}^1_{(2,1)} = \{W_{i,2}, W_{i,4} \mid i \in [N]\}, \quad \mathcal{U}^1_{(2,2)} = \{W_{i,3}, W_{i,5} \mid i \in [N]\}.$$
    Each packet cached by the $k_1$-th mirror is further divided into $F' = 2$ subpackets, i.e., for any $W_{i,j} \in \mathcal{M}_{k_1}$, we have $W_{i,j} = \{W_{i,j,h} \mid h \in [2]\}$. From (6), (10), and (11), the second part of the cached content of each user is as follows:
    $$\mathcal{U}^2_{(1,1)} = \{W_{i,1,1} \mid i \in [N]\}, \quad \mathcal{U}^2_{(1,2)} = \{W_{i,1,2} \mid i \in [N]\},$$
    $$\mathcal{U}^2_{(2,1)} = \{W_{i,6,1} \mid i \in [N]\}, \quad \mathcal{U}^2_{(2,2)} = \{W_{i,6,2} \mid i \in [N]\}.$$
    Hence, the memory ratio of each user is $\frac{M_2}{N} = \frac{2}{6} + \frac{1}{6} \cdot \frac{1}{2} = \frac{5}{12}$.
  • Delivery Phase: Each user requests a file from the server; assume the request vector is
    $$\mathbf{d} = (d_{(1,1)}, d_{(1,2)}, d_{(2,1)}, d_{(2,2)}) = (1, 2, 3, 4),$$
    i.e., users $(1,1)$, $(1,2)$, $(2,1)$, and $(2,2)$ request $W_1$, $W_2$, $W_3$, and $W_4$, respectively. The transmission from the server to the mirrors is according to the PDA $\mathbf{G}$ in (9). Specifically, from (7) and (9), the server sends
    $$X_1 = W_{1,4} \oplus W_{2,2} \oplus W_{3,1}, \quad X_2 = W_{1,5} \oplus W_{2,3} \oplus W_{4,1},$$
    $$X_3 = W_{1,6} \oplus W_{3,3} \oplus W_{4,2}, \quad X_4 = W_{2,6} \oplus W_{3,5} \oplus W_{4,4}$$
    to the mirrors. Therefore, the transmission load from the server to the mirrors is $R_1 = \frac{4}{6} = \frac{2}{3}$.
    The transmission from each mirror to the attached users consists of two parts. First, each mirror cancels useless packets (if any) in the received useful messages by using its cached content, then forwards them to the attached users. Specifically, the first mirror cancels $W_{3,1}$ in $X_1$ to obtain $X_1^1 = W_{1,4} \oplus W_{2,2}$ and cancels $W_{4,1}$ in $X_2$ to obtain $X_2^1 = W_{1,5} \oplus W_{2,3}$, then forwards $X_1^1, X_2^1, X_3, X_4$ to users $(1,1)$ and $(1,2)$. The second mirror cancels $W_{1,6}$ in $X_3$ to obtain $X_3^2 = W_{3,3} \oplus W_{4,2}$ and cancels $W_{2,6}$ in $X_4$ to obtain $X_4^2 = W_{3,5} \oplus W_{4,4}$, then forwards $X_1, X_2, X_3^2, X_4^2$ to users $(2,1)$ and $(2,2)$. Second, each mirror transmits coded subpackets to the attached users according to the PDA $\mathbf{P}$ in (10). Specifically, from (8), (10), and (11), the first mirror sends
    $$Y^1_{1,1} = W_{1,1,2} \oplus W_{2,1,1}$$
    to its attached users, and the second mirror sends
    $$Y^2_{1,6} = W_{3,6,2} \oplus W_{4,6,1}$$
    to its attached users. Thus, the transmission load from each mirror to its attached users is $R_2 = \frac{4}{6} + \frac{1}{12} = \frac{3}{4}$.
After receiving the messages from the connected mirror, each user can recover their desired file. For example, consider user $(1,1)$, who requests the file $W_1 = \{W_{1,j} \mid j \in [6]\}$. First, it can decode $W_{1,4}$ from $X_1^1$, since it caches $W_{2,2}$; it can decode $W_{1,5}$ from $X_2^1$, since it caches $W_{2,3}$; and it can decode $W_{1,6}$ from $X_3$, since it caches $W_{3,3}$ and $W_{4,2}$. Second, it can obtain the subpacket $W_{1,1,2}$ from $Y^1_{1,1}$, since it caches $W_{2,1,1}$, thus obtaining the packet $W_{1,1}$. The remaining packets $W_{1,2}$ and $W_{1,3}$ are already cached by user $(1,1)$. Thus, user $(1,1)$ can recover the desired file $W_1$. The decodability for the other users is similar.
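The decoding steps above can be simulated end-to-end with XORs over toy byte-valued packets. This is our illustration only: the byte model, variable names, and the modeling of subpackets as fresh random symbols are all assumptions for the sketch, not part of the paper:

```python
import random

random.seed(0)
# Four files, each split into six packets; packets are random 8-bit ints.
W = {(n, j): random.randrange(256) for n in range(1, 5) for j in range(1, 7)}
# Packets W_{n,1} (cached by mirror 1) and W_{n,6} (cached by mirror 2)
# are further split in two; we model each subpacket as a fresh symbol.
Wsub = {(n, j, h): random.randrange(256) for n in range(1, 5)
        for j in (1, 6) for h in (1, 2)}

# Cache of user (1,1): packets W_{n,2}, W_{n,3} and subpackets W_{n,1,1}.
cache = {(n, j): W[n, j] for n in range(1, 5) for j in (2, 3)}
subcache = {(n, 1, 1): Wsub[n, 1, 1] for n in range(1, 5)}

# Messages user (1,1) needs from mirror 1: X1^1, X2^1 (with W_{3,1},
# W_{4,1} cancelled), X3, and the coded subpacket Y^1_{1,1}.
X1_1 = W[1, 4] ^ W[2, 2]
X2_1 = W[1, 5] ^ W[2, 3]
X3 = W[1, 6] ^ W[3, 3] ^ W[4, 2]
Y = Wsub[1, 1, 2] ^ Wsub[2, 1, 1]

# User (1,1) decodes the packets of its requested file W_1.
dec = {4: X1_1 ^ cache[2, 2],
       5: X2_1 ^ cache[2, 3],
       6: X3 ^ cache[3, 3] ^ cache[4, 2]}
sub2 = Y ^ subcache[2, 1, 1]          # recovers W_{1,1,2}

assert all(dec[j] == W[1, j] for j in (4, 5, 6))
assert sub2 == Wsub[1, 1, 2]
```

The assertions hold for any packet values, since each XOR cancellation is exact; the seed only fixes the toy data.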

5. Performance Analysis

In this section, we compare the scheme in Theorem 1 with the schemes in [19,20,22], since the scheme in [24] does not provide an exact expression for $R_2$ and the scheme in [22] (which is consistent with the centralized scheme in [23]) outperforms the scheme in [21]. When $K_1 = 8 > K_2 = 3$, the tradeoffs of the coding delays $T_{\max}$ and $T_{\mathrm{sum}}$ with the memory ratios of each mirror and user, i.e., $M_1/N$ and $M_2/N$, are shown in Figure 2 and Figure 3, respectively. Note that the scheme in [20] only focuses on the maximum coding delay $T_{\max}$. The scheme in [19] is applicable to arbitrary memory ratios, while the scheme proposed in Theorem 1 and the schemes in [20,22] are applicable to the specific memory ratios shown in Figure 4. It can be seen from Figure 2 and Figure 3 that the scheme in Theorem 1 achieves lower coding delays $T_{\max}$ and $T_{\mathrm{sum}}$ than the schemes in [19,20,22]. To see this more clearly, we restrict the scheme in [19] to the memory ratio points applicable to the scheme in Theorem 1; the resulting tradeoffs of the coding delays $T_{\max}$ and $T_{\mathrm{sum}}$ with the sum of memory ratios $M_1/N + M_2/N$ are shown in Figure 5. It can be seen that, under the same sum of memory ratios, both coding delays of the scheme in Theorem 1 are lower than those of the schemes in [19,20,22]. Moreover, when the sum of memory ratios is relatively large or small, the two coding delays of the scheme in Theorem 1 are significantly reduced compared to the scheme in [19]. This is primarily because, in the first layer of communication, we fully leverage the caches of both the mirrors and the users to create as many multicast opportunities as possible among all users; in the second layer, we again make full use of the users' caches to maximize multicast opportunities within each group of users connected to the same mirror.
Similarly, when $K_1 = 3 < K_2 = 8$ or $K_1 = K_2 = 10$, by restricting the scheme in [19] to the memory ratio points compatible with the scheme in Theorem 1, the tradeoffs of the coding delays $T_{\max}$ and $T_{\mathrm{sum}}$ with the sum of memory ratios $M_1/N + M_2/N$ are as shown in Figure 6 and Figure 7, respectively. It can be seen that, regardless of the relative sizes of $K_1$ and $K_2$, the proposed scheme achieves reduced coding delays at the majority of memory ratio points.

6. Conclusions

In this paper, we proposed a novel coded caching scheme for the two-layer hierarchical network by utilizing two MN PDAs. The first MN PDA is for all users, which fully utilizes the cache memory of all mirrors and users to create as many multicast opportunities as possible. The second MN PDA is for each group of users connected to the same mirror, which fully utilizes the users' caches to create multicast opportunities within the group. Moreover, the proposed scheme was extended to a heterogeneous two-layer hierarchical network. Numerical comparisons showed that the proposed scheme achieved lower coding delays $T_{\max}$ and $T_{\mathrm{sum}}$ than the existing schemes in [19,20,22] at most memory ratio points.

Author Contributions

Conceptualization, M.C.; Methodology, M.C.; Validation, J.W.; Formal analysis, K.Z.; Data curation, J.W.; Writing—original draft, K.Z.; Writing—review and editing, J.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Guangxi Natural Science Foundation (2022GXNSFBA035616) and the National Natural Science Foundation of China (NSFC62361005).

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The data are available upon request from the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Maddah-Ali, M.A.; Niesen, U. Fundamental Limits of Caching. IEEE Trans. Inf. Theory 2014, 60, 2856–2867. [Google Scholar] [CrossRef]
  2. Wan, K.; Tuninetti, D.; Piantanida, P. On the optimality of uncoded cache placement. In Proceedings of the 2016 IEEE Information Theory Workshop (ITW), Cambridge, UK, 11–14 September 2016; pp. 161–165. [Google Scholar] [CrossRef]
  3. Wan, K.; Tuninetti, D.; Piantanida, P. An Index Coding Approach to Caching with Uncoded Cache Placement. IEEE Trans. Inf. Theory 2020, 66, 1318–1332. [Google Scholar] [CrossRef]
  4. Yu, Q.; Maddah-Ali, M.A.; Avestimehr, A.S. The Exact Rate-Memory Tradeoff for Caching with Uncoded Prefetching. IEEE Trans. Inf. Theory 2018, 64, 1281–1296. [Google Scholar] [CrossRef]
  5. Yu, Q.; Maddah-Ali, M.A.; Avestimehr, A.S. Characterizing the Rate-Memory Tradeoff in Cache Networks Within a Factor of 2. IEEE Trans. Inf. Theory 2019, 65, 647–663. [Google Scholar] [CrossRef]
  6. Shanmugam, K.; Ji, M.; Tulino, A.M.; Llorca, J.; Dimakis, A.G. Finite-Length Analysis of Caching-Aided Coded Multicasting. IEEE Trans. Inf. Theory 2016, 62, 5524–5537. [Google Scholar] [CrossRef]
  7. Yan, Q.; Cheng, M.; Tang, X.; Chen, Q. On the Placement Delivery Array Design for Centralized Coded Caching Scheme. IEEE Trans. Inf. Theory 2017, 63, 5821–5833. [Google Scholar] [CrossRef]
  8. Yan, Q.; Tang, X.; Chen, Q.; Cheng, M. Placement Delivery Array Design Through Strong Edge Coloring of Bipartite Graphs. IEEE Commun. Lett. 2018, 22, 236–239. [Google Scholar] [CrossRef]
  9. Cheng, M.; Wang, J.; Zhong, X.; Wang, Q. A Framework of Constructing Placement Delivery Arrays for Centralized Coded Caching. IEEE Trans. Inf. Theory 2021, 67, 7121–7131. [Google Scholar] [CrossRef]
  10. Wang, J.; Cheng, M.; Wan, K.; Caire, G. Placement Delivery Array Construction via Cartesian Product for Coded Caching. IEEE Trans. Inf. Theory 2023, 69, 7602–7626. [Google Scholar] [CrossRef]
  11. Aravind, V.R.; Sarvepalli, P.K.; Thangaraj, A. Lifting constructions of PDAs for coded caching with linear subpacketization. IEEE Trans. Commun. 2022, 70, 7817–7829. [Google Scholar] [CrossRef]
  12. Michel, J.; Wang, Q. Placement Delivery Arrays From Combinations of Strong Edge Colorings. IEEE Trans. Commun. 2020, 68, 5953–5964. [Google Scholar] [CrossRef]
  13. Tang, L.; Ramamoorthy, A. Coded Caching Schemes with Reduced Subpacketization from Linear Block Codes. IEEE Trans. Inf. Theory 2018, 64, 3099–3120. [Google Scholar] [CrossRef]
  14. Shangguan, C.; Zhang, Y.; Ge, G. Centralized Coded Caching Schemes: A Hypergraph Theoretical Approach. IEEE Trans. Inf. Theory 2018, 64, 5755–5766. [Google Scholar] [CrossRef]
  15. Shanmugam, K.; Tulino, A.M.; Dimakis, A.G. Coded caching with linear subpacketization is possible using Ruzsa-Szemerédi graphs. In Proceedings of the 2017 IEEE International Symposium on Information Theory (ISIT), Aachen, Germany, 25–30 June 2017; pp. 1237–1241. [Google Scholar] [CrossRef]
  16. Chittoor, H.H.S.; Krishnan, P.; Sree, K.V.S.; Mamillapalli, B. Subexponential and Linear Subpacketization Coded Caching via Projective Geometry. IEEE Trans. Inf. Theory 2021, 67, 6193–6222. [Google Scholar] [CrossRef]
  17. Agrawal, S.; Sushena Sree, K.V.; Krishnan, P. Coded Caching based on Combinatorial Designs. In Proceedings of the 2019 IEEE International Symposium on Information Theory (ISIT), Paris, France, 7–12 July 2019; pp. 1227–1231. [Google Scholar] [CrossRef]
  18. Xu, M.; Xu, Z.; Ge, G.; Liu, M.Q. A Rainbow Framework for Coded Caching and Its Applications. IEEE Trans. Inf. Theory 2024, 70, 1738–1752. [Google Scholar] [CrossRef]
  19. Karamchandani, N.; Niesen, U.; Maddah-Ali, M.A.; Diggavi, S.N. Hierarchical Coded Caching. IEEE Trans. Inf. Theory 2016, 62, 3212–3229. [Google Scholar] [CrossRef]
  20. Wang, K.; Wu, Y.; Chen, J.; Yin, H. Reduce Transmission Delay for Caching-Aided Two-Layer Networks. In Proceedings of the 2019 IEEE International Symposium on Information Theory (ISIT), Paris, France, 7–12 July 2019; pp. 2019–2023. [Google Scholar] [CrossRef]
  21. Zhang, L.; Wang, Z.; Xiao, M.; Wu, G.; Li, S. Centralized caching in two-layer networks: Algorithms and limits. In Proceedings of the 2016 IEEE 12th International Conference on Wireless and Mobile Computing, Networking and Communications (WiMob), New York, NY, USA, 17–19 October 2016; pp. 1–5. [Google Scholar] [CrossRef]
  22. Kong, Y.; Wu, Y.; Cheng, M. Combinatorial Designs for Coded Caching on Hierarchical Networks. In Proceedings of the 2023 IEEE Wireless Communications and Networking Conference (WCNC), Glasgow, UK, 26–29 March 2023; pp. 1–6. [Google Scholar] [CrossRef]
  23. Wu, Y.; Song, X.; Wang, K.; Cao, S.; Chen, J.; Ma, S. Communication-Efficient Centralized and Decentralized Coded Caching for Hierarchical Cache-Aided Networks. IEEE Trans. Cogn. Commun. Netw. 2025. early access. [Google Scholar] [CrossRef]
  24. Rashid Ummer N, T.; Sundar Rajan, B. Optimal Placement Delivery Arrays from t-Designs with Application to Hierarchical Coded Caching. arXiv 2024, arXiv:2402.07188. [Google Scholar]
  25. Pandey, R.; Rajput, C.; Rajan, B.S. Coded Caching for Hierarchical Two-Layer Networks with Coded Placement. In Proceedings of the 2024 IEEE International Symposium on Information Theory (ISIT), Athens, Greece, 7–12 July 2024; pp. 2222–2227. [Google Scholar] [CrossRef]
  26. Xu, F.; Tao, M.; Liu, K. Fundamental Tradeoff Between Storage and Latency in Cache-Aided Wireless Interference Networks. IEEE Trans. Inf. Theory 2017, 63, 7464–7491. [Google Scholar] [CrossRef]
  27. Sengupta, A.; Tandon, R.; Simeone, O. Fog-Aided Wireless Networks for Content Delivery: Fundamental Latency Tradeoffs. IEEE Trans. Inf. Theory 2017, 63, 6650–6678. [Google Scholar] [CrossRef]
Figure 1. The two-layer hierarchical network from [19].
Figure 2. The tradeoff of the coding delay T_max with the memory ratios of each mirror and user for the proposed scheme and the schemes in [19,20,22] when K_1 = 8, K_2 = 3.
Figure 3. The tradeoff of the coding delay T_sum with the memory ratios of each mirror and user for the proposed scheme and the schemes in [19,22] when K_1 = 8, K_2 = 3.
Figure 4. The applicable memory ratio points for the schemes in [20,22] and the scheme in Theorem 1 when K_1 = 8, K_2 = 3.
Figure 5. The tradeoffs of the coding delays with the sum of memory ratios for the proposed scheme and the schemes in [19,20,22] when K_1 = 8 and K_2 = 3.
Figure 6. The tradeoffs of the coding delays with the sum of memory ratios for the proposed scheme and the schemes in [19,20,22] when K_1 = 3 and K_2 = 8.
Figure 7. The tradeoffs of the coding delays with the sum of memory ratios for the proposed scheme and the schemes in [19,20,22] when K_1 = 10 and K_2 = 10.
Table 1. Table of parameters.

Notation or Acronym: Meaning
K_1: The number of mirrors
K_2: The number of users connected to each mirror
K = K_1 K_2: The total number of users
N: The number of files
M_1: The cache size of each mirror (in files)
M_2: The cache size of each user (in files)
R_1: The communication load from the server to the mirrors
R_2: The communication load from each mirror to the attached users
T_max = max(R_1, R_2): The maximum coding delay
T_sum = R_1 + R_2: The sum coding delay
W_i: The i-th file
W_{i,j}: The j-th packet of the i-th file
W_{i,j,h}: The h-th subpacket of the packet W_{i,j}
M_{k_1}: The cached content of the k_1-th mirror
(k_1, k_2): The k_2-th user connected to the k_1-th mirror
W_{d(k_1,k_2)}: The file requested by user (k_1, k_2)
U_{(k_1,k_2)}: The cached content of user (k_1, k_2)
U^1_{(k_1,k_2)}: The part of user (k_1, k_2)'s cached content not cached by the k_1-th mirror
U^2_{(k_1,k_2)}: The part of user (k_1, k_2)'s cached content cached by the k_1-th mirror
PDA: Placement delivery array
MN PDA: The PDA corresponding to the MN scheme in [1]
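The delay definitions in Table 1 can be illustrated with a short sketch. The per-layer loads R1 and R2 below are hypothetical values computed from the generic MN rate expression of [1], purely to show how T_max and T_sum are formed; they are not the proposed scheme's actual loads.

```python
def mn_rate(K, t):
    """Rate of the MN scheme [1] for K users at integer t = K*M/N:
    R = C(K, t+1) / C(K, t) = (K - t) / (t + 1)."""
    return (K - t) / (t + 1)

# Hypothetical per-layer loads for K_1 = 8 mirrors, K_2 = 3 users each:
R1 = mn_rate(K=24, t=6)   # first layer, treating all K = K_1*K_2 = 24 users
R2 = mn_rate(K=3, t=1)    # second layer, one mirror's K_2 = 3 users

T_max = max(R1, R2)       # maximum coding delay, per Table 1
T_sum = R1 + R2           # sum coding delay, per Table 1
```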

Share and Cite

MDPI and ACS Style

Zhao, K.; Wang, J.; Cheng, M. Centralized Hierarchical Coded Caching Scheme for Two-Layer Network. Entropy 2025, 27, 316. https://doi.org/10.3390/e27030316
