Article

Security, Privacy, and Linear Function Retrieval in Combinatorial Multi-Access Coded Caching with Private Caches †

by Mallikharjuna Chinnapadamala and B. Sundar Rajan *
Department of Electrical Communication Engineering, IISc, Bengaluru 560012, India
* Author to whom correspondence should be addressed.
Part of the content of this manuscript appears in proceedings of IEEE Information Theory Workshop (ITW), Shenzhen, China, 23–24 November 2024.
Entropy 2025, 27(10), 1033; https://doi.org/10.3390/e27101033
Submission received: 26 July 2025 / Revised: 24 September 2025 / Accepted: 25 September 2025 / Published: 1 October 2025

Abstract

We consider combinatorial multi-access coded caching with private caches, where users are connected to two types of caches: private caches and multi-access caches. Each user has its own private cache, while the multi-access caches are connected as caches are connected in a combinatorial topology. A scheme is proposed that satisfies the following three requirements simultaneously: (a) Linear Function Retrieval (LFR), (b) content security against an eavesdropper, and (c) demand privacy against a colluding set of users. It is shown that the private caches included in this work enable the proposed scheme to provide privacy against colluding users. For the same rate, our scheme requires less total memory accessed by each user and less total system memory than the existing scheme for the multi-access combinatorial topology (no private caches) in the literature. We derive a cut-set lower bound and prove optimality when $r \geq C - 1$. For $r < C - 1$, we show a constant gap of 5 under certain conditions. Finally, the proposed scheme is extended to a more general setup where different users are connected to different numbers of multi-access caches, and multiple users are connected to the same subset of multi-access caches.

1. Introduction

Coded caching is a promising technique to reduce network congestion during peak traffic hours by duplicating part of the contents at the users’ caches. The model consists of a single server connected to multiple users through an error-free shared link. The coded caching scheme was first proposed by Maddah-Ali and Niesen (MAN) [1], in which each user has its own dedicated/private cache. It operates in two phases: a placement phase, where each user’s cache is populated to its size, and a delivery phase, where users reveal their demands, and the server has to satisfy them. In this phase, the server exploits the content of the caches to reduce network traffic. It was shown that achieving “global caching gain” is possible by serving several users simultaneously with a single transmission. The number of files the server sends to satisfy the demands of all the users is called the rate of the system. In the MAN scheme, the subpacketization level, which refers to the number of subfiles each file is divided into, increases exponentially with the number of users. To overcome this limitation, coded caching schemes that require much lower subpacketization were proposed using Placement Delivery Arrays (PDAs) in [2].
Multi-access networks (networks where each user has access to multiple caches) are considered in [3,4,5,6,7]. A multi-access network where each user is assigned a unique set of $r$ caches out of $C$ caches was introduced in [3]. For every set of $r$ caches, there is a user. This network is referred to as combinatorial topology [8]. The scheme given in [3] was shown to be optimal under the assumption of uncoded placement in [8]. A setup generalizing the combinatorial topology was introduced in [8], where different users are connected to a different number $r \in [0:C]$ of caches, and any one set of $r$ caches is uniquely assigned to $K_r$ users, for every $r \in [0:C]$. A multi-access network with $K$ users and $K$ caches, with each user having access to $r$ caches in a cyclic wrap-around way, was considered in [4,5,6,7].
The problem of scalar linear function retrieval, where users are interested in retrieving a scalar linear function of the files and not just a single file, was considered in [9]. It was shown that the rate for linear function retrieval depends on the number of linearly independent functions demanded. When all the demanded functions are linearly independent, the rate achieved for linear function retrieval is the same as that of the MAN scheme. Schemes in which each user is allowed to demand an arbitrary file from the library are referred to as File Retrieval (FR) schemes [15], and schemes in which each user demands an arbitrary linear combination of the files are referred to as Linear Function Retrieval (LFR) schemes [15]. The concept of demand privacy, where a user should not gain any information about the index of the file demanded by another user, was considered for FR in [10,11,12]. LFR with demand privacy against colluding users, i.e., where any subset of users cannot obtain any information about the demands of the other users even if they exchange their cache contents, was considered in [13]. The key idea is to use privacy keys, which are formed as random linear combinations of the files. Content security for FR was studied in [14], where the system is required to protect the library content against an eavesdropper who observes the signal transmitted by the server during the delivery phase. To achieve this, security keys shared among the users are used to secure the transmitted signal from an external eavesdropper. The possibility of providing demand privacy and security simultaneously using the idea of key superposition was shown in [15]. Demand privacy for multi-access networks where users have access to $r$ caches in a cyclic wrap-around way [6] was considered in [16]. Security, privacy, and LFR for the combinatorial topology were studied in [17] using Shamir's secret sharing scheme [18].
In this paper, we extend our previous work [19], which addressed security and privacy in the combinatorial topology with private caches.
Edge caching has emerged as an essential solution for delivering content with low latency and high efficiency in IoT (Internet of Things) networks [20]. Integrating mobile edge computing with IoT applications has been shown to reduce energy consumption and latency by offloading computation tasks to network-edge servers [21]. Edge caching helps minimize service latency and satellite bandwidth usage in applications such as hybrid satellite-IoT networks [22]. In the network model considered in this work, each user with a private cache is also connected to multiple multi-access caches shared among multiple users, enabling efficient content delivery in distributed IoT environments such as smart cities and industrial automation systems. Multi-layered cache structures are commonly employed in various IoT environments and are discussed in [23,24,25]. Recently, medical IoT devices and Wireless Medical Sensor Networks (WMSNs) have become increasingly important. However, medical data are at high risk from malware and human tampering aimed at financial exploitation [26]. Work on WMSNs focuses on protecting the network from unauthorized users and eavesdroppers [27]. Key management and user authentication in IoT environments were explored in [28]. The protection of medical data in Implantable Medical Devices (IMDs), such as insulin pumps, was discussed in [29]. In this work, we consider content security and demand privacy, which can be applied in a medical IoT environment.
Our work is closely related to the information-theoretic foundations of distributed storage systems (DSS), which store data redundantly across multiple nodes to ensure reliability and efficiency [30]. In our model (Figure 1), the file library $W_{[N]}$ is stored at the server and, during the placement phase, is distributed across the caches. In a DSS, users access a subset of nodes to retrieve content, whereas in coded caching, users access a subset of the caches and retrieve content using the cache contents together with the transmissions made by the server. The concept of demand privacy in our model is analogous to private information retrieval (PIR) [31,32,33], where a user retrieves data without revealing its index to the servers. The security aspect considered in this work is similar to the security in the wiretap channel II [34].

1.1. System Model

The system model considered in this work is shown in Figure 1. We refer to it as the combinatorial topology with private caches. The model consists of a server connected to $K$ users through an error-free shared link. There are two types of caches: (1) private caches and (2) multi-access caches. Each user has a private cache and also has access to a unique set of $r$ out of $C$ multi-access caches. For every set of $r$ multi-access caches, there is a user. So, the total number of users is $K = \binom{C}{r}$. The size of each multi-access cache is $M_M$ files, and that of each private cache is $M_P$ files. An external eavesdropper E is assumed to observe the signals transmitted by the server.

1.2. Contributions and Paper Organization

We consider security, privacy, and LFR for the combinatorial topology with private caches model shown in Figure 1. For the setup without private caches, we reported security, demand privacy, and LFR in [17]. The technical contributions of this work are summarized as follows:
  • We propose a scheme for the combinatorial topology with private caches that provides security, privacy, and LFR.
  • The scheme in [17] does not provide privacy against colluding users, but the private caches included in this work enable the proposed scheme to provide privacy against colluding users.
  • It is shown that, to achieve the same rate as in [17], the proposed scheme requires less total memory accessed by each user.
  • Next, we compare the global cache memory, i.e., the total memory size of all the caches used in the entire system. The global cache memory requirement is also less for the system considered in this work than for that in [17].
  • We derive a lower bound on the rate using cut-set arguments, and we prove that the proposed scheme is optimal when $r \geq C - 1$.
  • When $r < C - 1$, at $M_M = N/C$ and $M_P = 1 - r/C$, the proposed scheme is within a multiplicative gap of 5 from the optimal when $K \geq 5(r+1)$.
  • As a special case, when r = 1 , the proposed scheme recovers the MAN-PDA-based Secure Private LFR (SP-LFR) scheme in [15].
  • The proposed scheme is extended to a more general setup where different users are connected to different numbers of multi-access caches, and multiple users are connected to the same subset of multi-access caches.
  • We show numerical plots to compare the performance of the proposed scheme with the SP-LFR scheme in [17] and the MAN-PDA-based SP-LFR scheme in [15].
The paper is organized as follows: Section 2 introduces the problem setup and presents some preliminaries. A motivating example and criteria on the minimum cache size for security are provided in Section 3. Section 4 contains the proposed scheme, and the extension to a more generalized setup is presented in Section 5. Section 6 contains the main results, and a discussion of our results using numerical evaluations is presented in Section 7. Finally, Section 8 concludes the paper.

1.3. Notations and Information-Theoretic Preliminaries

The set $\{1, 2, 3, \ldots, n\}$ is denoted as $[n]$. $|\mathcal{X}|$ denotes the cardinality of the set $\mathcal{X}$. For two non-negative integers $l, m$ such that $l \leq m$, we use $[l : m]$ to denote the set $\{l, l+1, \ldots, m\}$. $\lfloor x \rfloor$ denotes the largest integer less than or equal to $x$. A finite field of size $q$ is represented as $\mathbb{F}_q$. For a set $\mathcal{A}$, we define $X_{\mathcal{A}} := \{X_i : i \in \mathcal{A}\}$. For two sets $\mathcal{A}$ and $\mathcal{B}$, $\mathcal{A} \setminus \mathcal{B}$ denotes the elements in $\mathcal{A}$ that are not in $\mathcal{B}$. For two non-negative integers $n, m$, we have $\binom{n}{m} = \frac{n!}{(n-m)!\,m!}$ if $n \geq m$, and $\binom{n}{m} = 0$ if $n < m$.
In addition to the notations, we briefly recall the main information-theoretic tools used in this work. For a random variable $X$, $H(X)$ denotes its entropy, and for two random variables $X$ and $Y$, $I(X;Y)$ denotes their mutual information. The files in the library are assumed to be independent and uniformly distributed over $\mathbb{F}_2^F$.

2. Problem Setup and Preliminaries

In this section, we introduce the problem setup and discuss some preliminaries. We begin by formally defining the problem setting, followed by a brief overview of the MAN scheme for multi-access coded caching [3] and the generalized combinatorial topology framework [8].
Recall the system model shown in Figure 1. The server S contains a library of $N$ files, $W_{[N]} = \{W_1, W_2, \ldots, W_N\}$, each of size $F$ bits. Let $\mathcal{K}$ denote the set of $K$ users connected to the server through an error-free shared link. There are $C$ caches of size $M_M$ files, which we refer to as multi-access caches, and $K$ caches of size $M_P$ files, which we refer to as private caches. Each user is connected to a unique set of $r$ caches out of the $C$ multi-access caches. Let the set of all $r$-sized subsets of $[C]$ be denoted by
$$\Omega_r \triangleq \{G : G \subseteq [C], |G| = r\}.$$
For every set of $r$ multi-access caches, there is a user. So, we have $K = \binom{C}{r}$ users, represented as $U_G$, $G \in \Omega_r$. Each user is also connected to a unique private cache. The system operates in two phases: the placement phase and the delivery phase. Each phase is described in the following subsections.

2.1. Placement Phase

During this phase, all the multi-access and private caches are filled by the server. The server privately generates randomness $P$ from some finite alphabet $\mathcal{P}$ and fills the private cache of each user using the cache function
$$\psi_G : \mathcal{P} \times (\mathbb{F}_2)^{NF} \to (\mathbb{F}_2)^{M_P F}, \quad G \in \Omega_r.$$
The cached content of the private cache of user $U_G$, $G \in \Omega_r$, is denoted by
$$Z_G^P := \psi_G(P, W_{[N]}).$$
The cached content of multi-access cache $c \in [C]$ is denoted by
$$Z_c^M := \zeta_c(W_{[N]}),$$
where $\zeta_c : (\mathbb{F}_2)^{NF} \to (\mathbb{F}_2)^{M_M F}$.
A user $U_G$ has access to the multi-access cache content $Z_c^M$ if $c \in G$. The total multi-access cache content available to user $U_G$ is denoted as
$$Z_G^M := \bigcup_{i \in G} Z_i^M.$$
User $U_G$ also has access to the content of the private cache $Z_G^P$. The total cache content accessible to user $U_G$ is given by
$$Z_G := Z_G^M \cup Z_G^P.$$

2.2. Delivery Phase

The demand vector of a user $U_G$ is denoted by $d_G = (d_{G,1}, \ldots, d_{G,N}) \in \mathbb{F}_2^N$. Thus, user $U_G$ is interested in retrieving the linear combination
$$W_{d_G} \triangleq d_{G,1} W_1 \oplus \cdots \oplus d_{G,N} W_N.$$
In order to satisfy the demands of all the users, the server transmits
$$X := \phi(P, d_{\Omega_r}, W_{[N]}),$$
where $\phi : \mathcal{P} \times (\mathbb{F}_2)^{KN} \times (\mathbb{F}_2)^{NF} \to (\mathbb{F}_2)^{RF}$, and the quantity $R$ is referred to as the rate of the system.
The file library $W_{[N]}$, the randomness $P$, and the set of demands $\{d_G : G \in \Omega_r\}$ are mutually independent. The following conditions must be satisfied by the delivery scheme:
  • Correctness: Each user should be able to recover its demanded function:
    $$H(W_{d_G} \mid X, d_G, Z_G) = 0, \quad \forall G \in \Omega_r.$$
  • Security: An external eavesdropper observing the server's transmissions should learn nothing about the file library:
    $$I(W_{[N]}; X) = 0.$$
  • Privacy: Any set of colluding users should not know anything about the demands of the other users:
    $$I(d_{\Omega_r \setminus H}; X, d_H, Z_H) = 0, \quad \forall H \subset \Omega_r, \; H \neq \emptyset.$$
Definition 1.
We say that the triplet $(M_M, M_P, R)$ is achievable if there exists a scheme that satisfies the conditions in (8)–(10) with rate $R$ and memory pair $(M_M, M_P)$. The optimal rate for the given setting is defined as
$$R^* = \inf\{R : (M_M, M_P, R) \text{ is achievable}\}.$$
Now, we discuss some preliminaries in the following sub-sections.

2.3. MAN Scheme for Multi-Access Coded Caching [3]

A multi-access network consisting of C caches and K users, where each user has access to a unique set of r caches, was considered in [3]. For every distinct set of r caches, a user is associated. This setup supports a large number of users while maintaining low subpacketization levels. The network is referred to as combinatorial topology [8], and the scheme proposed in [3] was proven to be optimal under the assumption of uncoded placement in [8]. The widely known Maddah-Ali–Niesen (MAN) scheme is obtained as a special case of the scheme in [3] when r = 1 .
The problem setup is as follows: a server containing $N$ files, denoted by $W_1, W_2, \ldots, W_N$, each of size $F$ bits, is connected to $K$ users through an error-free shared link. Each user is connected to a unique set of $r$ out of the $C$ caches. Each cache is of size $M$ files, and the content of cache $c \in [C]$ is denoted by $Z_c$.
In the placement phase, each file is divided into $\binom{C}{t}$ subfiles, where $t = \frac{CM}{N}$. The $c$-th cache is filled as follows:
$$Z_c = \{W_{i,T} : c \in T, T \subseteq [C], |T| = t, i \in [N]\}.$$
Let $d_U$ denote the demand of the user connected to the set $U \subseteq [C]$ with $|U| = r$. For each subset $S \subseteq [C]$ such that $|S| = t + r$, the server makes the following transmission:
$$\bigoplus_{\substack{U \subseteq S \\ |U| = r}} W_{d_U, S \setminus U}.$$
The achieved rate is given by
$$R = \frac{\binom{C}{t+r}}{\binom{C}{t}}.$$
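The rate expression above is easy to evaluate numerically. The following sketch (ours; the function name `multi_access_rate` is an assumption, not from [3]) computes $R = \binom{C}{t+r}/\binom{C}{t}$ using `math.comb`, assuming $t = CM/N$ is an integer:

```python
from math import comb

def multi_access_rate(C, r, M, N):
    """Rate of the combinatorial-topology scheme of [3]:
    R = C(C, t+r) / C(C, t), with t = C*M/N assumed to be an integer."""
    t = C * M // N
    return comb(C, t + r) / comb(C, t)

# C = 3 caches, r = 2 caches per user, N = 3 files, M = 1 file per cache -> t = 1
print(multi_access_rate(3, 2, 1, 3))    # 1/3 of a file on the shared link
# C = 5, r = 3, M = 4, N = 10 -> t = 2 (the parameter choice of Example 2)
print(multi_access_rate(5, 3, 4, 10))   # 1/10 of a file
```

The first evaluation matches the rate $1/3$ achieved later in Example 1, and the second matches the rate $1/10$ of Example 2.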

2.4. Generalized Combinatorial Topology (GCT) [8]

A more generalized multi-access setup, known as Generalized Combinatorial Topology (GCT), was introduced in [8]. In this setup, every set of $r$ caches out of the $C$ caches is connected to $K_r$ users, for every $r \in [0:C]$. So, the total number of users in the system is
$$K = \sum_{r=0}^{C} K_r \binom{C}{r}.$$
In the generalized combinatorial topology, $K_r$ users are connected to any one set of $r$ caches, so there are $K_r \binom{C}{r}$ users connected to exactly $r$ caches. This topology can be described using a $(C+1)$-length vector $K_{GCT} = (K_0, K_1, \ldots, K_C)$. For example, in a GCT with $C = 4$ caches and $K_{GCT} = (0, 0, 2, 3, 0)$, every set of 2 caches is connected to 2 users, and every set of 3 caches is connected to 3 users. No user is connected to all 4 caches or to exactly one cache, and no user is disconnected from all the caches. As a special case, when $K_{GCT} = (0, \ldots, 0, 1, 0, \ldots, 0)$, where the 1 is in the $(r+1)$-th position, the GCT recovers the combinatorial topology. For $t \in [0:C]$, the placement for the GCT is the same as that of the combinatorial topology, and so is the delivery procedure: for every $r \in [0:C]$, the delivery procedure for the combinatorial topology given in [3] is repeated $K_r$ times. So, the rate achieved is
$$R_G = \sum_{r=0}^{C} K_r \frac{\binom{C}{t+r}}{\binom{C}{t}}.$$
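To make the GCT counting concrete, the sketch below (ours; function names are assumptions) evaluates the user count $K$ and the rate $R_G$ for the example vector $K_{GCT} = (0, 0, 2, 3, 0)$ with $C = 4$:

```python
from math import comb

def gct_users(K_gct):
    """Total number of users: K = sum_r K_r * C(C, r), K_gct = (K_0, ..., K_C)."""
    C = len(K_gct) - 1
    return sum(K_r * comb(C, r) for r, K_r in enumerate(K_gct))

def gct_rate(K_gct, t):
    """GCT rate: R_G = sum_r K_r * C(C, t+r) / C(C, t)."""
    C = len(K_gct) - 1
    return sum(K_r * comb(C, t + r) for r, K_r in enumerate(K_gct)) / comb(C, t)

K_gct = (0, 0, 2, 3, 0)       # C = 4: 2 users per 2-subset, 3 users per 3-subset
print(gct_users(K_gct))       # 2*C(4,2) + 3*C(4,3) = 24 users
print(gct_rate(K_gct, t=1))   # (2*C(4,3) + 3*C(4,4)) / C(4,1) = 11/4
```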

3. Motivating Example and Minimum Cache Size Criteria for Security

In this section, we present an example and derive the minimum cache size criteria required for security.
Example 1.
Consider $C = 3$, $r = 2$, $N = 3$. Each file is divided into 3 subfiles; thus, $W_i = \{W_{i,1}, W_{i,2}, W_{i,3}\}$, $\forall i \in [3]$. The number of users is $K = \binom{C}{r} = \binom{3}{2} = 3$, and the 3 users are $\{U_{\{1,2\}}, U_{\{1,3\}}, U_{\{2,3\}}\}$. The server generates a random variable $V_{\{1,2,3\}}$ independently and uniformly from $\mathbb{F}_2^{F/3}$. This is called a security key. Now, the server generates $K = 3$ random vectors as follows:
$$p_G \triangleq (p_{G,1}, p_{G,2}, p_{G,3})^T \sim \mathrm{Unif}\{\mathbb{F}_2^3\}, \quad G \in \Omega_2.$$
Using these three random vectors, the server generates privacy keys as follows:
$$W_{p_{\{1,2\}},3} = p_{\{1,2\},1} W_{1,3} \oplus p_{\{1,2\},2} W_{2,3} \oplus p_{\{1,2\},3} W_{3,3},$$
$$W_{p_{\{1,3\}},2} = p_{\{1,3\},1} W_{1,2} \oplus p_{\{1,3\},2} W_{2,2} \oplus p_{\{1,3\},3} W_{3,2},$$
$$W_{p_{\{2,3\}},1} = p_{\{2,3\},1} W_{1,1} \oplus p_{\{2,3\},2} W_{2,1} \oplus p_{\{2,3\},3} W_{3,1}.$$
Let
$$D_{\{1,2\},3} := V_{\{1,2,3\}} \oplus W_{p_{\{1,2\}},3}, \quad D_{\{1,3\},2} := V_{\{1,2,3\}} \oplus W_{p_{\{1,3\}},2}, \quad D_{\{2,3\},1} := V_{\{1,2,3\}} \oplus W_{p_{\{2,3\}},1}.$$
Now, let us look at the placement. The multi-access caches are filled as follows:
$$Z_1^M = \{W_{1,1}, W_{2,1}, W_{3,1}\}, \quad Z_2^M = \{W_{1,2}, W_{2,2}, W_{3,2}\}, \quad Z_3^M = \{W_{1,3}, W_{2,3}, W_{3,3}\}.$$
Based on the above placement, the size of each multi-access cache is 1. The private caches are filled as follows:
$$Z_{\{1,2\}}^P = \{D_{\{1,2\},3}\}, \quad Z_{\{1,3\}}^P = \{D_{\{1,3\},2}\}, \quad Z_{\{2,3\}}^P = \{D_{\{2,3\},1}\}.$$
Based on the above placement, the size of each private cache is $1/3$. Let user $U_{\{1,2\}}$ request $W_1$, user $U_{\{1,3\}}$ request $W_2$, and user $U_{\{2,3\}}$ request $W_3$. To satisfy these demands, the server transmits the following:
$$X = V_{\{1,2,3\}} \oplus W_{1,3} \oplus W_{p_{\{1,2\}},3} \oplus W_{2,2} \oplus W_{p_{\{1,3\}},2} \oplus W_{3,1} \oplus W_{p_{\{2,3\}},1}, \quad q_G = p_G \oplus d_G \; \forall G \in \Omega_2.$$
By the above transmissions, each user will be able to get its demand. Consider the user $U_{\{1,2\}}$. It has access to $D_{\{1,2\},3}$ from its private cache. It can compute $W_{2,2} \oplus W_{p_{\{1,3\}},2} = q_{\{1,3\},1} W_{1,2} \oplus q_{\{1,3\},2} W_{2,2} \oplus q_{\{1,3\},3} W_{3,2}$, as $q_{\{1,3\}} = (q_{\{1,3\},1}, q_{\{1,3\},2}, q_{\{1,3\},3})$ is sent by the server and $W_{1,2}, W_{2,2}, W_{3,2}$ are accessible to it from cache 2. Similarly, it can compute $W_{3,1} \oplus W_{p_{\{2,3\}},1}$ using cache 1. Thus, it can get the required subfile $W_{1,3}$. In the same way, the other users can also get the files they want. The transmission is protected by a security key, so it is secure from any external eavesdropper. Since $q_G$ for all $G \in \Omega_2$ is uniformly distributed over $\mathbb{F}_2^3$, no user can learn the demands of the other users. The rate achieved is $1/3$. This completes Example 1.
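The XOR bookkeeping in Example 1 can be checked mechanically. The following sketch (ours, not part of the original example) models each subfile as a random 8-bit string and $\oplus$ as integer XOR, reproduces the placement and the transmission, and asserts that every user recovers its missing subfile:

```python
import random

random.seed(1)
BITS = 8                                    # bits per subfile, stand-in for F/3
users = [(1, 2), (1, 3), (2, 3)]            # U_{1,2}, U_{1,3}, U_{2,3}
miss = {(1, 2): 3, (1, 3): 2, (2, 3): 1}    # cache index each user cannot reach

# Library: subfile W_{n,j} lives in multi-access cache j
W = {n: {j: random.getrandbits(BITS) for j in (1, 2, 3)} for n in (1, 2, 3)}
V = random.getrandbits(BITS)                # security key V_{1,2,3}
p = {G: [random.randint(0, 1) for _ in range(3)] for G in users}

def lin(a, j):                              # W_{a,j} = xor over n of a_n * W_{n,j}
    x = 0
    for n in (1, 2, 3):
        if a[n - 1]:
            x ^= W[n][j]
    return x

D = {G: V ^ lin(p[G], miss[G]) for G in users}   # private-cache keys

# Demands: U_{1,2} wants W_1, U_{1,3} wants W_2, U_{2,3} wants W_3
d = {(1, 2): [1, 0, 0], (1, 3): [0, 1, 0], (2, 3): [0, 0, 1]}
q = {G: [x ^ y for x, y in zip(d[G], p[G])] for G in users}

X = V                                       # the single coded transmission
for G in users:
    X ^= lin(q[G], miss[G])

for G in users:                             # decoding at each user
    got = X ^ D[G]                          # cancels V and its own privacy key
    for H in users:
        if H != G:
            got ^= lin(q[H], miss[H])       # miss[H] is one of U_G's caches
    assert got == lin(d[G], miss[G])        # the missing subfile of its demand
print("Example 1: all three users recover their missing subfile")
```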
The security keys are stored in the caches to provide security, so there is a minimum memory required to achieve security. Now, we show that for secure delivery, the multi-access cache memory size $M_M$ and the private cache memory size $M_P$ should satisfy the condition $CM_M + KM_P \geq K$. The cache memory sizes in Example 1 satisfy this condition.
Proposition 1.
For a combinatorial topology with private caches, the system satisfies the security condition in (9) when $N \geq K$, if the multi-access cache memory size $M_M$ and the private cache memory size $M_P$ satisfy the following inequality:
$$CM_M + KM_P \geq K.$$
Proof. 
The detailed proof is given in Appendix A. □
Remark 1.
If $M_M = 0$, then $M_P$ should be at least 1. This is the same condition given for the dedicated cache setup in [14]. If $M_P = 0$, then $M_M$ should be at least $K/C$. This is the condition for the combinatorial topology.
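As a quick numerical sanity check (ours), the Example 1 operating point and the two corner cases of Remark 1 all meet the bound $CM_M + KM_P \geq K$ of Proposition 1:

```python
# Proposition 1: for N >= K, secure delivery requires C*M_M + K*M_P >= K.
C, K = 3, 3                      # Example 1 parameters
M_M, M_P = 1, 1 / 3              # sizes achieved by the Example 1 placement
assert C * M_M + K * M_P >= K    # 3 + 1 = 4 >= 3: the condition holds

# Remark 1 corner cases
M_M, M_P = 0, 1                  # no multi-access memory: M_P >= 1 suffices [14]
assert C * M_M + K * M_P >= K
M_M, M_P = K / C, 0              # no private memory: M_M >= K/C
assert C * M_M + K * M_P >= K
print("Proposition 1 bound holds at all three points")
```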

4. The Proposed Scheme

In this section, we present a scheme that satisfies the conditions in (8)–(10). The procedure is given below.
The server divides the $N$ files $W_1, W_2, \ldots, W_N$ in the following way, for $t \in [0 : C - r]$:
$$W_i = \{W_{i,T} : T \in \Omega_t\}, \quad \forall i \in [N].$$
For any vector $a = (a_1, a_2, \ldots, a_N)$, we define the following:
$$W_{a,T} := \bigoplus_{n \in [N]} a_n W_{n,T}, \quad \forall T \in \Omega_t.$$

4.1. Placement Phase

During this phase, the server generates privacy and security keys to provide demand privacy and content security, respectively. The number of security keys generated is $\binom{C}{t+r}$. The security keys $\{V_S : S \in \Omega_{t+r}\}$ are generated independently and uniformly from $\mathbb{F}_2^{F/\binom{C}{t}}$. As there are $\binom{C}{r}$ users, the server generates $\binom{C}{r}$ random vectors $\{p_G : G \in \Omega_r\}$ as follows:
$$p_G \triangleq (p_{G,1}, \ldots, p_{G,N})^T \sim \mathrm{Unif}\{\mathbb{F}_2^N\}, \quad G \in \Omega_r.$$
Using the above random vectors, $\binom{C}{r}\binom{C-r}{t}$ privacy keys, denoted by $\{W_{p_G,T} : G \in \Omega_r, T \in \Omega_t, G \cap T = \emptyset\}$, are generated as
$$W_{p_G,T} = \bigoplus_{n \in [N]} p_{G,n} W_{n,T}.$$
Let us define $D_{G,T}$ as follows:
$$D_{G,T} \triangleq W_{p_G,T} \oplus V_{G \cup T}, \quad G \in \Omega_r, \; T \in \Omega_t, \; G \cap T = \emptyset.$$
The placement of the multi-access caches and the private caches is done as follows:
$$Z_c^M = \{W_{i,T} : c \in T, T \in \Omega_t, i \in [N]\},$$
$$Z_G^P = \{D_{G,T} : T \in \Omega_t, G \cap T = \emptyset\}.$$
Cache Memory: Based on the above placement, the cache sizes are given as follows:
$$M_M = \frac{Nt}{C}, \quad M_P = \frac{\binom{C-r}{t}}{\binom{C}{t}}.$$
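The two memory expressions follow from counting subfiles and keys. The sketch below (ours; the function name is an assumption, and it is valid for $t \geq 1$) evaluates them for the parameters used in the examples:

```python
from math import comb

def cache_sizes(C, r, t, N):
    """Cache sizes under the placement above (t >= 1):
    M_M = N * C(C-1, t-1) / C(C, t) = N*t/C  (subfiles W_{i,T} with c in T),
    M_P = C(C-r, t) / C(C, t)                (keys D_{G,T} with T disjoint from G)."""
    M_M = N * comb(C - 1, t - 1) / comb(C, t)
    M_P = comb(C - r, t) / comb(C, t)
    return M_M, M_P

print(cache_sizes(5, 3, 2, 10))   # (4.0, 0.1): the values in Example 2
print(cache_sizes(3, 2, 1, 3))    # (1.0, 0.333...): the values in Example 1
```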

4.2. Delivery Phase

During this phase, user $U_G$, $G \in \Omega_r$, demands $W_{d_G}$ for some $d_G \in \mathbb{F}_2^N$. In order to satisfy the demands, the server transmits $X = [\{Y_S\}_{S \in \Omega_{t+r}}, \{q_G\}_{G \in \Omega_r}]$, where
$$q_G = d_G \oplus p_G, \quad \forall G \in \Omega_r,$$
$$Y_S = V_S \oplus \bigoplus_{\substack{G \subseteq S \\ |G| = r}} W_{q_G, S \setminus G}, \quad \forall S \in \Omega_{t+r}.$$
Correctness: Each user should be able to retrieve its demanded function. Consider a transmission $Y_S$, $S \in \Omega_{t+r}$, and a user $U_B$ such that $B \subseteq S$. The transmission $Y_S$ can be written as
$$Y_S = V_S \oplus W_{d_B, S \setminus B} \oplus W_{p_B, S \setminus B} \oplus \bigoplus_{\substack{G \subseteq S, |G| = r \\ G \neq B}} \left( W_{d_G, S \setminus G} \oplus W_{p_G, S \setminus G} \right).$$
The user $U_B$ can get $D_{B, S \setminus B} = W_{p_B, S \setminus B} \oplus V_S$ from the content of its private cache $Z_B^P$. Now, consider the term $W_{d_G, S \setminus G} \oplus W_{p_G, S \setminus G}$; it can be written as
$$(p_{G,1} \oplus d_{G,1}) W_{1, S \setminus G} \oplus \cdots \oplus (p_{G,N} \oplus d_{G,N}) W_{N, S \setminus G}.$$
The coefficients of $W_{i, S \setminus G}$, $i \in [N]$, are known to the user, as $q_G$, $G \in \Omega_r$, are sent by the server, and the user $U_B$ has access to the subfiles $W_{i, S \setminus G}$, $\forall G \subseteq S$, $|G| = r$, $G \neq B$, $\forall i \in [N]$. So, it can calculate the entire term. Thus, the user will be able to get $W_{d_B, S \setminus B}$. From all the transmissions corresponding to such $(t+r)$-sized subsets $S$ with $B \subseteq S$, the user $U_B$ gets the missing subfiles of its demand. Similarly, any user $U_G$, $G \subseteq [C]$, $|G| = r$, can recover the missing subfiles of its demanded function.
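The placement and delivery described above can be simulated end to end. The following sketch (ours) models subfiles and keys as random bit strings over the small assumed parameters $C=4$, $r=2$, $t=1$, $N=5$, draws arbitrary linear-function demands, and asserts that every user recovers each missing subfile of its demanded function:

```python
import itertools
import random
from math import comb

random.seed(0)
C, r, t, N, BITS = 4, 2, 1, 5, 16   # BITS stands in for F / C(C, t) bits

def Om(k):                           # Omega_k: all k-subsets of [C]
    return list(itertools.combinations(range(1, C + 1), k))

# Subfiles W_{n,T} and security keys V_S, modeled as random bit strings
W = {(n, T): random.getrandbits(BITS) for n in range(1, N + 1) for T in Om(t)}
V = {S: random.getrandbits(BITS) for S in Om(t + r)}
p = {G: [random.randint(0, 1) for _ in range(N)] for G in Om(r)}

def lin(a, T):                       # W_{a,T} = xor over n of a_n * W_{n,T}
    x = 0
    for n in range(1, N + 1):
        if a[n - 1]:
            x ^= W[(n, T)]
    return x

# Placement: private cache of U_G holds D_{G,T} = W_{p_G,T} xor V_{G u T}
D = {(G, T): lin(p[G], T) ^ V[tuple(sorted(G + T))]
     for G in Om(r) for T in Om(t) if not set(G) & set(T)}

# Demands: arbitrary linear combinations (LFR); q_G = d_G xor p_G is broadcast
d = {G: [random.randint(0, 1) for _ in range(N)] for G in Om(r)}
q = {G: [x ^ y for x, y in zip(d[G], p[G])] for G in Om(r)}

# Delivery: Y_S = V_S xor (xor over r-subsets G of S of W_{q_G, S \ G})
Y = {}
for S in Om(t + r):
    y = V[S]
    for G in itertools.combinations(S, r):
        y ^= lin(q[G], tuple(x for x in S if x not in G))
    Y[S] = y

# Each user U_B recovers every subfile W_{d_B,T} it cannot read from a cache
for B in Om(r):
    for T in Om(t):
        if set(B) & set(T):
            continue                 # subfile already in an accessible cache
        S = tuple(sorted(B + T))
        got = Y[S] ^ D[(B, T)]       # strips V_S and the user's own privacy key
        for G in itertools.combinations(S, r):
            if G != B:
                TG = tuple(x for x in S if x not in G)
                assert set(B) & set(TG)   # needed subfiles lie in U_B's caches
                got ^= lin(q[G], TG)
        assert got == lin(d[B], T)
print("all", comb(C, r), "users decode their demanded linear function")
```

Security and privacy are visible structurally: every $Y_S$ is masked by an independent key $V_S$, and only the uniform vectors $q_G$, never the demands $d_G$, appear on the link.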
Privacy: We now prove the privacy condition given in (10). Each user $U_G$, $G \in \Omega_r$, has access to a private cache and $r$ multi-access caches. Let $Z_H$, $H \subset \Omega_r$, denote the cache contents of all colluding users. We have
$$I(d_{\Omega_r \setminus H}; X, d_H, Z_H)$$ (24a)
$$\leq I(d_{\Omega_r \setminus H}; X, d_H, Z_H, W_{[N]})$$ (24b)
$$= I(d_{\Omega_r \setminus H}; q_{\Omega_r}, \{Y_S\}_{S \in \Omega_{t+r}}, d_H, Z_H, W_{[N]})$$ (24c)
$$\leq I(d_{\Omega_r \setminus H}; V_{\Omega_{t+r}}, q_{\Omega_r}, d_H, Z_H, W_{[N]})$$ (24d)
$$= 0,$$ (24e)
where (24d) comes from the fact that $\{Y_S\}_{S \in \Omega_{t+r}}$ is determined by $q_{\Omega_r}, V_{\Omega_{t+r}}, W_{[N]}$, and (24e) comes from the fact that $d_{\Omega_r \setminus H} = q_{\Omega_r \setminus H} \oplus p_{\Omega_r \setminus H}$ is independent of $V_{\Omega_{t+r}}, q_{\Omega_r}, d_H, Z_H, W_{[N]}$, as $p_{\Omega_r \setminus H}$ is independently and uniformly distributed over $\mathbb{F}_2^N$.
Security: We prove the security condition in (9) as follows:
$$I(W_{[N]}; X) = I(W_{[N]}; q_{\Omega_r}, \{Y_S\}_{S \in \Omega_{t+r}})$$ (25a)
$$= I(W_{[N]}; q_{\Omega_r}) + I(W_{[N]}; \{Y_S\}_{S \in \Omega_{t+r}} \mid q_{\Omega_r})$$ (25b)
$$= 0,$$ (25c)
where (25c) comes from the fact that $q_{\Omega_r}$ is independent of $W_{[N]}$, and $\{Y_S\}_{S \in \Omega_{t+r}}$ is independent of $(W_{[N]}, q_{\Omega_r})$ because the random variables $V_{\Omega_{t+r}}$ are independently and uniformly generated from $\mathbb{F}_2^{F/\binom{C}{t}}$.
Remark 2
(Connection to Wiretap Channel II). The use of independent security keys in our scheme is closely related to the coset coding technique used in the Wiretap Channel II (WTC-II) model [34]. In WTC-II, the message is embedded in a coset of a linear code, and a random vector (key) selects the specific codeword within the coset. The legitimate receiver, who observes all transmitted symbols, can uniquely identify both the coset and the message, while an eavesdropper observing fewer than d symbols (where d is the minimum distance of the dual code) cannot distinguish between different cosets, thereby learning nothing about the message. Analogously, in our scheme, each multicast transmission is masked by an independent random key stored in the private caches of legitimate users. These keys serve the same role as coset vectors in WTC-II: they ensure that an eavesdropper, even if it observes the entire broadcast, obtains no information about the library, while legitimate users can recover the desired content by combining the transmissions with the keys stored in private caches.
Remark 3.
When $r = 1$, our setup reduces to a dedicated cache setting, and the proposed scheme coincides with the MAN-PDA-based SP-LFR scheme in [15]. Since the security condition cannot be satisfied when $M_M = 0$ and $M_P = 0$, the first achievable point, at $t = 0$, yields $M_M = 0$ and $M_P = 1$. However, if the security requirement is dropped, demand privacy alone can be ensured with $M_M = 0$ and $M_P = 0$ by simply transmitting all $N$ files.
Example 2.
Consider $C = 5$, $r = 3$, $t = 2$, and $N = 10$. The number of users is $K = \binom{5}{3} = 10$, and each file is divided into $\binom{5}{2} = 10$ subfiles.
The users are:
$$U_{\{1,2,3\}}, U_{\{1,2,4\}}, U_{\{1,2,5\}}, U_{\{1,3,4\}}, U_{\{1,3,5\}}, U_{\{1,4,5\}}, U_{\{2,3,4\}}, U_{\{2,3,5\}}, U_{\{2,4,5\}}, U_{\{3,4,5\}}.$$
Each file $W_i$, $i \in [10]$, is divided into the following subfiles:
$$W_{i,\{1,2\}}, W_{i,\{1,3\}}, W_{i,\{1,4\}}, W_{i,\{1,5\}}, W_{i,\{2,3\}}, W_{i,\{2,4\}}, W_{i,\{2,5\}}, W_{i,\{3,4\}}, W_{i,\{3,5\}}, W_{i,\{4,5\}}.$$
Placement Phase: The server first generates $\binom{5}{5} = 1$ security key $V_{\{1,2,3,4,5\}}$ independently and uniformly from $\mathbb{F}_2^{F/10}$. Then, it generates $K = 10$ random vectors as follows:
$$p_G = (p_{G,1}, \ldots, p_{G,10})^T \sim \mathrm{Unif}\{\mathbb{F}_2^{10}\}, \quad G \in \Omega_3.$$
The privacy keys, denoted by $\{W_{p_G,T} : G \in \Omega_3, T \in \Omega_2, G \cap T = \emptyset\}$, are generated as
$$W_{p_G,T} = \bigoplus_{n \in [10]} p_{G,n} W_{n,T}.$$
Let
$$D_{G,T} = W_{p_G,T} \oplus V_{G \cup T}, \quad G \in \Omega_3, \; T \in \Omega_2, \; G \cap T = \emptyset.$$
These are called keys. The placement stores subfiles in the multi-access caches and keys in the private caches, so that security and privacy are guaranteed in the delivery phase.
The multi-access cache placement is done as follows:
$$Z_1^M = \{W_{i,\{1,2\}}, W_{i,\{1,3\}}, W_{i,\{1,4\}}, W_{i,\{1,5\}} : i \in [10]\},$$
$$Z_2^M = \{W_{i,\{1,2\}}, W_{i,\{2,3\}}, W_{i,\{2,4\}}, W_{i,\{2,5\}} : i \in [10]\},$$
$$Z_3^M = \{W_{i,\{1,3\}}, W_{i,\{2,3\}}, W_{i,\{3,4\}}, W_{i,\{3,5\}} : i \in [10]\},$$
$$Z_4^M = \{W_{i,\{1,4\}}, W_{i,\{2,4\}}, W_{i,\{3,4\}}, W_{i,\{4,5\}} : i \in [10]\},$$
$$Z_5^M = \{W_{i,\{1,5\}}, W_{i,\{2,5\}}, W_{i,\{3,5\}}, W_{i,\{4,5\}} : i \in [10]\}.$$
The private cache placement is performed as follows:
$$Z_{\{1,2,3\}}^P = \{D_{\{1,2,3\},\{4,5\}}\}, \quad Z_{\{1,2,4\}}^P = \{D_{\{1,2,4\},\{3,5\}}\}, \quad Z_{\{1,2,5\}}^P = \{D_{\{1,2,5\},\{3,4\}}\}, \quad Z_{\{1,3,4\}}^P = \{D_{\{1,3,4\},\{2,5\}}\}, \quad Z_{\{1,3,5\}}^P = \{D_{\{1,3,5\},\{2,4\}}\}, \quad Z_{\{1,4,5\}}^P = \{D_{\{1,4,5\},\{2,3\}}\}, \quad Z_{\{2,3,4\}}^P = \{D_{\{2,3,4\},\{1,5\}}\}, \quad Z_{\{2,3,5\}}^P = \{D_{\{2,3,5\},\{1,4\}}\}, \quad Z_{\{2,4,5\}}^P = \{D_{\{2,4,5\},\{1,3\}}\}, \quad Z_{\{3,4,5\}}^P = \{D_{\{3,4,5\},\{1,2\}}\}.$$
The memory size of each multi-access cache is 4, and that of each private cache is $0.1$. These cache memory sizes satisfy the condition in Proposition 1.
Delivery Phase: After obtaining the demand vectors, the server transmits
$$X = [Y_{\{1,2,3,4,5\}}, \{q_G\}_{G \in \Omega_3}],$$
where
$$Y_{\{1,2,3,4,5\}} = V_{\{1,2,3,4,5\}} \oplus W_{q_{\{1,2,3\}},\{4,5\}} \oplus W_{q_{\{1,2,4\}},\{3,5\}} \oplus W_{q_{\{1,2,5\}},\{3,4\}} \oplus W_{q_{\{1,3,4\}},\{2,5\}} \oplus W_{q_{\{1,3,5\}},\{2,4\}} \oplus W_{q_{\{1,4,5\}},\{2,3\}} \oplus W_{q_{\{2,3,4\}},\{1,5\}} \oplus W_{q_{\{2,3,5\}},\{1,4\}} \oplus W_{q_{\{2,4,5\}},\{1,3\}} \oplus W_{q_{\{3,4,5\}},\{1,2\}},$$
and
$$q_G = p_G \oplus d_G, \quad G \in \Omega_3.$$
Here, the server constructs a single coded transmission $Y_{\{1,2,3,4,5\}}$, which contains linear combinations of subfiles and a security key $V_{\{1,2,3,4,5\}}$ that ensures security. The vectors $q_G$ are generated using $p_G$ and the demand vector $d_G$, and the linear combinations are constructed using $q_G$. By the above transmission, each user will be able to get its demand. Consider the user $U_{\{1,2,3\}}$. The term $W_{q_{\{1,2,4\}},\{3,5\}}$ can be written as $\bigoplus_{i \in [10]} q_{\{1,2,4\},i} W_{i,\{3,5\}}$. As the user knows $q_{\{1,2,4\}} = (q_{\{1,2,4\},1}, \ldots, q_{\{1,2,4\},10})$ and has access to the 3rd cache, it can calculate the term. Similarly, the user $U_{\{1,2,3\}}$ can calculate the terms $W_{q_G, \{1,2,3,4,5\} \setminus G}$, $\forall G \subset \{1,2,3,4,5\}$, $|G| = 3$, $G \neq \{1,2,3\}$. Moreover, the user $U_{\{1,2,3\}}$ can get the key $D_{\{1,2,3\},\{4,5\}} = V_{\{1,2,3,4,5\}} \oplus W_{p_{\{1,2,3\}},\{4,5\}}$ from its private cache $Z_{\{1,2,3\}}^P$. So, the user $U_{\{1,2,3\}}$ can decode $W_{d_{\{1,2,3\}},\{4,5\}}$. Similarly, all the other users can decode the functions demanded by them.
In summary, each user combines (i) the broadcast transmission, (ii) the subfiles stored in its accessible multi-access caches, and (iii) the key in its private cache to decode its demand. Security is ensured because the transmission includes the security key $V_{\{1,2,3,4,5\}}$, and demand privacy is guaranteed because the vectors $q_G$ are independent uniform random vectors. As there is only one broadcast transmission, the achieved rate is $1/10$.

5. Extension to a More Generalized Setup

In this section, we extend our proposed scheme to a more generalized setup.
In the combinatorial topology introduced in [3], each user is connected to r unique caches out of C caches. For every set of r caches, there is one user. A generalized setup, referred to as Generalized Combinatorial Topology (GCT), was introduced in [8]. In GCT, the server having N files is connected to multiple users through an error-free shared link. There are C caches, and different users are connected to a different number r [ 0 : C ] of caches. Every set of r caches is uniquely assigned to K r users, and this holds for every r [ 0 : C ] . So, the total number of users is K = r = 0 C K r C r . In the problem setup described in Section 2, out of C multi-access caches, every user is connected to r unique caches. For every set of r multi-access caches, there is a user. So, the multi-access caches are connected in the same way as the caches in the combinatorial topology. Now, we consider a more generalized setup where the multi-access caches are connected in the way caches are connected in the generalized combinatorial topology. So, there are users for every value of r [ 0 : C ] , and for the same set of r multi-access caches, there are K r users. Let us define the vector K G C T : = ( K 0 , K 1 , … , K C ) . In addition, every user has a private cache. So, we refer to this extended setup as GCT with private caches. The users are represented as { U r , G l : r [ 0 : C ] , G Ω r , l [ K r ] } . Let d r , G l be the demand vector of the user U r , G l and Z r , G l be the total content accessible to the user U r , G l . The size of each multi-access cache is M M . The size of the private cache of a user that is connected to r multi-access caches, where r [ 0 : C ] , is M P ( r ) . The combinatorial topology with private caches discussed in Section 4 is obtained as a special case of the GCT with private caches by choosing K r = 1 for some r [ 0 : C ] and K s = 0 for all s ≠ r .
The correctness, security, and privacy conditions for the generalized combinatorial topology with private caches are as follows:
  • Correctness: Each user should be able to retrieve its demanded function,
    H ( W d r , G l | X , d r , G l , Z r , G l ) = 0 , r [ 0 : C ] , l [ K r ] , G Ω r .
  • Security: An external eavesdropper who is observing the signals sent by the server should not know anything about the content of the file library,
    I ( W [ N ] ; X ) = 0 .
  • Privacy: Any set of colluding users should not know anything about the demands of the other users. Let D = { d r , G l : r [ 0 : C ] , G Ω r , l [ K r ] } represent the demand vectors of all the users, D ˜ D , D ˜ be the set that represents the demands of the colluding users and Z ˜ represent the content of all the colluding users. Then,
    I ( D D ˜ ; X , D ˜ , Z ˜ ) = 0 .
The proposed scheme in Section 4 is extended to the GCT with private caches in the following way.
For t [ 0 : C ] , each file is divided into C t subfiles as in (13). The server generates r = 0 C K r C t + r security keys { V r , S l : r [ 0 : C ] , l [ K r ] , S Ω t + r } independently and uniformly from F 2 F / C t . Then the server generates K random vectors { p r , G l : r [ 0 : C ] , l [ K r ] , G Ω r } as follows:
p r , G l = ( p r , G , 1 l , … , p r , G , N l ) T U n i f ( F 2 N ) , r [ 0 : C ] , l [ K r ] , G Ω r .
By using the above random vectors, the privacy keys { W p r , G l , T : r [ 0 : C ] , l [ K r ] , G Ω r , T Ω t , G T = } are generated as in (15). Let us define D r , G , T l as follows:
D r , G , T l W p r , G l , T V r , G T l , r [ 0 : C ] , l [ K r ] , G Ω r , T Ω t , G T = .
The content of multi-access cache c [ C ] is represented as Z c M . The content of the private cache of user U r , G l is denoted as Z r , l , G P . The placement of the caches is done as follows.
Z c M = { W i , T : c T , T Ω t i [ N ] } ,
Z r , l , G P = { D r , G , T l : G Ω r , T Ω t , G T = } .
Cache Memory: By the above placement, the sizes of the caches are as follows:
M M = N t C , M P ( r ) = C r t C t .
The size of a user’s private cache depends on r, the total number of multi-access caches accessible to the user, as the number of required keys depends on r. During the delivery phase, the users reveal their demand vectors. Let d r , G l be the demand vector of the user U r , G l where r [ 0 : C ] , l [ K r ] , G Ω r . So, in order to satisfy the demands, the server transmits, for all r [ 0 : C ] , l [ K r ] , G Ω r , S Ω t + r ,
q r , G l = d r , G l p r , G l ,
Y r , S l = V r , S l G S | G | = r W q r , G l , S G .
The transmissions given above are essentially the same transmissions made by the server in the scheme given in Section 4, repeated for every r [ 0 : C ] , l [ K r ] . Consider any user connected to j [ 0 : C ] caches, and let that user be the i-th user connected to j caches. So, the user is U j , H i , H Ω j . Thus, by using the transmissions Y j , E i , E Ω t + j , H ⊂ E , the user can reconstruct its demanded file. The security condition is satisfied, as each transmission is protected with a security key. The demand privacy condition is also satisfied, as the vectors q r , G l , r [ 0 : C ] , l [ K r ] , G Ω r are independently and uniformly distributed over F 2 N . So, correctness, security, and demand privacy are guaranteed. The rate achieved is R G = r = 0 C K r C t + r C t .
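The closed-form cache sizes and the rate R_G can be checked numerically. The helper below is an illustrative sketch (the function name `gct_params` is ours); the printed values match the parameters of Example 3 below.

```python
from math import comb

# Sketch: cache sizes and rate for GCT with private caches.
# K_gct[r] is the number of users per r-subset of the C multi-access
# caches; math.comb returns 0 when t + r > C, so oversized terms vanish.
def gct_params(C, t, N, K_gct):
    M_M = N * t / C                                               # multi-access cache size
    M_P = {r: comb(C - r, t) / comb(C, t) for r in range(C + 1)}  # private cache sizes
    R_G = sum(K_gct[r] * comb(C, t + r) for r in range(C + 1)) / comb(C, t)
    return M_M, M_P, R_G

# Example 3 parameters: C=5, t=2, N=20, K_GCT=(0,0,0,2,0,0).
M_M, M_P, R_G = gct_params(5, 2, 20, [0, 0, 0, 2, 0, 0])
print(M_M, M_P[3], R_G)   # 8.0 0.1 0.2
```

With K_GCT = (0,0,1,1,0,0) the same helper returns R_G = 6/10 and M_P(2) = 0.3, the values appearing in Example 4.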
Example 3.
Consider C = 5 , K GCT = ( 0 , 0 , 0 , 2 , 0 , 0 ) , t = 2 , and N = 20 . The total number of users is K = 2 C r = 2 5 3 = 20 , and each file is divided into C t = 5 2 = 10 subfiles.
The 20 users are denoted as
U 3 , { 1 , 2 , 3 } l , U 3 , { 1 , 2 , 4 } l , U 3 , { 1 , 2 , 5 } l , U 3 , { 1 , 3 , 4 } l , U 3 , { 1 , 3 , 5 } l ,
U 3 , { 1 , 4 , 5 } l , U 3 , { 2 , 3 , 4 } l , U 3 , { 2 , 3 , 5 } l , U 3 , { 2 , 4 , 5 } l , U 3 , { 3 , 4 , 5 } l , l [ 2 ] .
Each file W i , i [ 20 ] is split into the following subfiles:
W i , { 1 , 2 } , W i , { 1 , 3 } , W i , { 1 , 4 } , W i , { 1 , 5 } , W i , { 2 , 3 } , W i , { 2 , 4 } , W i , { 2 , 5 } , W i , { 3 , 4 } , W i , { 3 , 5 } , W i , { 4 , 5 } .
The server generates 2 5 5 = 2 security keys V 3 , { 1 , 2 , 3 , 4 , 5 } 1 and V 3 , { 1 , 2 , 3 , 4 , 5 } 2 , independently and uniformly from F 2 F / 10 . Then, it generates K = 20 random vectors as follows:
p 3 , G l = ( p 3 , G , 1 l , … , p 3 , G , 20 l ) T Unif ( F 2 20 ) , l [ 2 ] , G Ω 3 .
The privacy keys are generated as follows:
W p 3 , G l , T = n = 1 20 p 3 , G , n l · W n , T , l [ 2 ] , G Ω 3 , T Ω 2 , G T = .
Let,
D 3 , G , T l = W p 3 , G l , T V 3 , G T l , l [ 2 ] , G Ω 3 , T Ω 2 , G T = .
The placement for multi-access cache is done as follows:
Z 1 M = { W i , { 1 , 2 } , W i , { 1 , 3 } , W i , { 1 , 4 } , W i , { 1 , 5 } i [ 20 ] } , Z 2 M = { W i , { 1 , 2 } , W i , { 2 , 3 } , W i , { 2 , 4 } , W i , { 2 , 5 } i [ 20 ] } , Z 3 M = { W i , { 1 , 3 } , W i , { 2 , 3 } , W i , { 3 , 4 } , W i , { 3 , 5 } i [ 20 ] } , Z 4 M = { W i , { 1 , 4 } , W i , { 2 , 4 } , W i , { 3 , 4 } , W i , { 4 , 5 } i [ 20 ] } , Z 5 M = { W i , { 1 , 5 } , W i , { 2 , 5 } , W i , { 3 , 5 } , W i , { 4 , 5 } i [ 20 ] } .
The placement for the private caches is done as follows:
Z 3 , 1 , { 1 , 2 , 3 } P = { D 3 , { 1 , 2 , 3 } , { 4 , 5 } 1 } , Z 3 , 2 , { 1 , 2 , 3 } P = { D 3 , { 1 , 2 , 3 } , { 4 , 5 } 2 } , Z 3 , 1 , { 1 , 2 , 4 } P = { D 3 , { 1 , 2 , 4 } , { 3 , 5 } 1 } , Z 3 , 2 , { 1 , 2 , 4 } P = { D 3 , { 1 , 2 , 4 } , { 3 , 5 } 2 } , Z 3 , 1 , { 1 , 2 , 5 } P = { D 3 , { 1 , 2 , 5 } , { 3 , 4 } 1 } , Z 3 , 2 , { 1 , 2 , 5 } P = { D 3 , { 1 , 2 , 5 } , { 3 , 4 } 2 } , Z 3 , 1 , { 1 , 3 , 4 } P = { D 3 , { 1 , 3 , 4 } , { 2 , 5 } 1 } , Z 3 , 2 , { 1 , 3 , 4 } P = { D 3 , { 1 , 3 , 4 } , { 2 , 5 } 2 } , Z 3 , 1 , { 1 , 3 , 5 } P = { D 3 , { 1 , 3 , 5 } , { 2 , 4 } 1 } , Z 3 , 2 , { 1 , 3 , 5 } P = { D 3 , { 1 , 3 , 5 } , { 2 , 4 } 2 } , Z 3 , 1 , { 1 , 4 , 5 } P = { D 3 , { 1 , 4 , 5 } , { 2 , 3 } 1 } , Z 3 , 2 , { 1 , 4 , 5 } P = { D 3 , { 1 , 4 , 5 } , { 2 , 3 } 2 } , Z 3 , 1 , { 2 , 3 , 4 } P = { D 3 , { 2 , 3 , 4 } , { 1 , 5 } 1 } , Z 3 , 2 , { 2 , 3 , 4 } P = { D 3 , { 2 , 3 , 4 } , { 1 , 5 } 2 } , Z 3 , 1 , { 2 , 3 , 5 } P = { D 3 , { 2 , 3 , 5 } , { 1 , 4 } 1 } , Z 3 , 2 , { 2 , 3 , 5 } P = { D 3 , { 2 , 3 , 5 } , { 1 , 4 } 2 } , Z 3 , 1 , { 2 , 4 , 5 } P = { D 3 , { 2 , 4 , 5 } , { 1 , 3 } 1 } , Z 3 , 2 , { 2 , 4 , 5 } P = { D 3 , { 2 , 4 , 5 } , { 1 , 3 } 2 } , Z 3 , 1 , { 3 , 4 , 5 } P = { D 3 , { 3 , 4 , 5 } , { 1 , 2 } 1 } , Z 3 , 2 , { 3 , 4 , 5 } P = { D 3 , { 3 , 4 , 5 } , { 1 , 2 } 2 } .
The memory size of each multi-access cache is 8, and the size of each private cache is 0.1 . After receiving the demand vectors, the server transmits X = [ Y 3 , { 1 , 2 , 3 , 4 , 5 } l , q 3 , G l ] , l [ 2 ] , G Ω 3 where Y 3 , { 1 , 2 , 3 , 4 , 5 } 1 = V 3 , { 1 , 2 , 3 , 4 , 5 } 1 W q 3 , { 1 , 2 , 3 } 1 , { 4 , 5 } W q 3 , { 1 , 2 , 4 } 1 , { 3 , 5 } W q 3 , { 1 , 2 , 5 } 1 , { 3 , 4 } W q 3 , { 1 , 3 , 4 } 1 , { 2 , 5 } W q 3 , { 1 , 3 , 5 } 1 , { 2 , 4 } W q 3 , { 1 , 4 , 5 } 1 , { 2 , 3 } W q 3 , { 2 , 3 , 4 } 1 , { 1 , 5 } W q 3 , { 2 , 3 , 5 } 1 , { 1 , 4 } W q 3 , { 2 , 4 , 5 } 1 , { 1 , 3 } W q 3 , { 3 , 4 , 5 } 1 , { 1 , 2 } ,
Y 3 , { 1 , 2 , 3 , 4 , 5 } 2 = V 3 , { 1 , 2 , 3 , 4 , 5 } 2 W q 3 , { 1 , 2 , 3 } 2 , { 4 , 5 } W q 3 , { 1 , 2 , 4 } 2 , { 3 , 5 } W q 3 , { 1 , 2 , 5 } 2 , { 3 , 4 } W q 3 , { 1 , 3 , 4 } 2 , { 2 , 5 } W q 3 , { 1 , 3 , 5 } 2 , { 2 , 4 } W q 3 , { 1 , 4 , 5 } 2 , { 2 , 3 } W q 3 , { 2 , 3 , 4 } 2 , { 1 , 5 } W q 3 , { 2 , 3 , 5 } 2 , { 1 , 4 } W q 3 , { 2 , 4 , 5 } 2 , { 1 , 3 } W q 3 , { 3 , 4 , 5 } 2 , { 1 , 2 } ,
q 3 , G l = p 3 , G l d 3 , G l , l [ 2 ] , G Ω 3 .
By the above transmissions, each user will be able to get their demands. Consider the user U 3 , { 1 , 2 , 3 } 1 and the transmission Y 3 , { 1 , 2 , 3 , 4 , 5 } 1 . The term W q 3 , { 1 , 2 , 4 } 1 , { 3 , 5 } can be written as i [ 20 ] q 3 , { 1 , 2 , 4 } , i 1 W i , { 3 , 5 } . As the user knows q 3 , { 1 , 2 , 4 } 1 = ( q 3 , { 1 , 2 , 4 } , 1 1 , … , q 3 , { 1 , 2 , 4 } , 20 1 ) and has access to the 3rd cache, it can calculate the term. Similarly, the user U 3 , { 1 , 2 , 3 } 1 can calculate the terms W q 3 , G 1 , S G for all G ⊂ { 1 , 2 , 3 , 4 , 5 } with | G | = 3 , G ≠ { 1 , 2 , 3 } . Moreover, the user U 3 , { 1 , 2 , 3 } 1 can get the key V 3 , { 1 , 2 , 3 , 4 , 5 } 1 W p 3 , { 1 , 2 , 3 } 1 , { 4 , 5 } from its private cache Z 3 , 1 , { 1 , 2 , 3 } P . So, the user U 3 , { 1 , 2 , 3 } 1 can decode W d 3 , { 1 , 2 , 3 } 1 , { 4 , 5 } . Using the same procedure, the user U 3 , { 1 , 2 , 3 } 2 can decode W d 3 , { 1 , 2 , 3 } 2 , { 4 , 5 } from Y 3 , { 1 , 2 , 3 , 4 , 5 } 2 . Similarly, all the other users that are connected to the 3 multi-access caches can decode the files demanded by them. Now, let us see the demand privacy constraint. The server sends the vectors q 3 , G l , l [ 2 ] , G Ω 3 to all the users. Since the vectors are uniformly distributed, it is impossible to know anything about the demand vectors d 3 , G l , l [ 2 ] , G Ω 3 . Thus, privacy is guaranteed because each user does not know the key of any other user, and the vectors q 3 , G l , l [ 2 ] , G Ω 3 are random vectors independently and uniformly distributed over F 2 20 . Now, let us look into the aspect of security. Each signal sent by the server is protected by a security key. Therefore, an eavesdropper who continuously observes the signals sent by the server learns nothing about the file library. Since there are two transmissions, the achieved rate is 2 10 .
Example 4.
Consider C = 5 , K G C T = ( 0 , 0 , 1 , 1 , 0 , 0 ) , t = 2 , N = 20 . The total number of users is K = C 2 + C 3 = 5 2 + 5 3 = 10 + 10 = 20 , and each file is divided into C t = 5 2 = 10 subfiles. Ten users are connected to 2 multi-access caches, and the other ten users are connected to 3 multi-access caches. The ten users that are connected to 3 multi-access caches are represented as
U 3 , { 1 , 2 , 3 } 1 , U 3 , { 1 , 2 , 4 } 1 , U 3 , { 1 , 2 , 5 } 1 , U 3 , { 1 , 3 , 4 } 1 , U 3 , { 1 , 3 , 5 } 1 ,
U 3 , { 1 , 4 , 5 } 1 , U 3 , { 2 , 3 , 4 } 1 , U 3 , { 2 , 3 , 5 } 1 , U 3 , { 2 , 4 , 5 } 1 , U 3 , { 3 , 4 , 5 } 1 .
The other ten users that are connected to 2 multi-access caches are represented as
U 2 , { 1 , 2 } 1 , U 2 , { 1 , 3 } 1 , U 2 , { 1 , 4 } 1 , U 2 , { 1 , 5 } 1 , U 2 , { 2 , 3 } 1 , U 2 , { 2 , 4 } 1 , U 2 , { 2 , 5 } 1 , U 2 , { 3 , 4 } 1 , U 2 , { 3 , 5 } 1 , U 2 , { 4 , 5 } 1 .
The subfiles of W i , i [ 20 ] are
W i , { 1 , 2 } , W i , { 1 , 3 } , W i , { 1 , 4 } , W i , { 1 , 5 } , W i , { 2 , 3 } , W i , { 2 , 4 } , W i , { 2 , 5 } , W i , { 3 , 4 } , W i , { 3 , 5 } , W i , { 4 , 5 } .
The server generates 5 5 + 5 4 = 1 + 5 = 6 security keys { V r , S 1 : r { 2 , 3 } , S Ω 2 + r } independently and uniformly from F 2 F / 10 . Then, the server generates K = 20 random vectors as follows:
p r , G 1 = ( p r , G , 1 1 , … , p r , G , 20 1 ) T Unif ( F 2 20 ) , r { 2 , 3 } , G Ω r .
The privacy keys, denoted by
{ W p r , G 1 , T : r { 2 , 3 } , G Ω r , T Ω 2 , G T = } ,
are generated as in (15). Let,
D r , G , T 1 = W p r , G 1 , T V r , G T 1 , for r { 2 , 3 } , G Ω r , T Ω 2 , G T = .
The placement for the multi-access cache is done as follows:
Z 1 M = { W i , { 1 , 2 } , W i , { 1 , 3 } , W i , { 1 , 4 } , W i , { 1 , 5 } i [ 20 ] } , Z 2 M = { W i , { 1 , 2 } , W i , { 2 , 3 } , W i , { 2 , 4 } , W i , { 2 , 5 } i [ 20 ] } , Z 3 M = { W i , { 1 , 3 } , W i , { 2 , 3 } , W i , { 3 , 4 } , W i , { 3 , 5 } i [ 20 ] } , Z 4 M = { W i , { 1 , 4 } , W i , { 2 , 4 } , W i , { 3 , 4 } , W i , { 4 , 5 } i [ 20 ] } , Z 5 M = { W i , { 1 , 5 } , W i , { 2 , 5 } , W i , { 3 , 5 } , W i , { 4 , 5 } i [ 20 ] } .
The placement for the private caches is done as follows:
Z 3 , 1 , { 1 , 2 , 3 } P = { D 3 , { 1 , 2 , 3 } , { 4 , 5 } 1 } , Z 2 , 1 , { 1 , 2 } P = { D 2 , { 1 , 2 } , { 3 , 4 } 1 , D 2 , { 1 , 2 } , { 3 , 5 } 1 , D 2 , { 1 , 2 } , { 4 , 5 } 1 } , Z 3 , 1 , { 1 , 2 , 4 } P = { D 3 , { 1 , 2 , 4 } , { 3 , 5 } 1 } , Z 2 , 1 , { 1 , 3 } P = { D 2 , { 1 , 3 } , { 2 , 4 } 1 , D 2 , { 1 , 3 } , { 2 , 5 } 1 , D 2 , { 1 , 3 } , { 4 , 5 } 1 } , Z 3 , 1 , { 1 , 2 , 5 } P = { D 3 , { 1 , 2 , 5 } , { 3 , 4 } 1 } , Z 2 , 1 , { 1 , 4 } P = { D 2 , { 1 , 4 } , { 2 , 3 } 1 , D 2 , { 1 , 4 } , { 2 , 5 } 1 , D 2 , { 1 , 4 } , { 3 , 5 } 1 } , Z 3 , 1 , { 1 , 3 , 4 } P = { D 3 , { 1 , 3 , 4 } , { 2 , 5 } 1 } , Z 2 , 1 , { 1 , 5 } P = { D 2 , { 1 , 5 } , { 2 , 4 } 1 , D 2 , { 1 , 5 } , { 2 , 3 } 1 , D 2 , { 1 , 5 } , { 3 , 4 } 1 } , Z 3 , 1 , { 1 , 3 , 5 } P = { D 3 , { 1 , 3 , 5 } , { 2 , 4 } 1 } , Z 2 , 1 , { 2 , 3 } P = { D 2 , { 2 , 3 } , { 1 , 4 } 1 , D 2 , { 2 , 3 } , { 1 , 5 } 1 , D 2 , { 2 , 3 } , { 4 , 5 } 1 } , Z 3 , 1 , { 1 , 4 , 5 } P = { D 3 , { 1 , 4 , 5 } , { 2 , 3 } 1 } , Z 2 , 1 , { 2 , 4 } P = { D 2 , { 2 , 4 } , { 1 , 3 } 1 , D 2 , { 2 , 4 } , { 1 , 5 } 1 , D 2 , { 2 , 4 } , { 3 , 5 } 1 } , Z 3 , 1 , { 2 , 3 , 4 } P = { D 3 , { 2 , 3 , 4 } , { 1 , 5 } 1 } , Z 2 , 1 , { 2 , 5 } P = { D 2 , { 2 , 5 } , { 1 , 3 } 1 , D 2 , { 2 , 5 } , { 1 , 4 } 1 , D 2 , { 2 , 5 } , { 3 , 4 } 1 } , Z 3 , 1 , { 2 , 3 , 5 } P = { D 3 , { 2 , 3 , 5 } , { 1 , 4 } 1 } , Z 2 , 1 , { 3 , 4 } P = { D 2 , { 3 , 4 } , { 1 , 2 } 1 , D 2 , { 3 , 4 } , { 1 , 5 } 1 , D 2 , { 3 , 4 } , { 2 , 5 } 1 } , Z 3 , 1 , { 2 , 4 , 5 } P = { D 3 , { 2 , 4 , 5 } , { 1 , 3 } 1 } , Z 2 , 1 , { 3 , 5 } P = { D 2 , { 3 , 5 } , { 1 , 2 } 1 , D 2 , { 3 , 5 } , { 1 , 4 } 1 , D 2 , { 3 , 5 } , { 2 , 4 } 1 } , Z 3 , 1 , { 3 , 4 , 5 } P = { D 3 , { 3 , 4 , 5 } , { 1 , 2 } 1 } , Z 2 , 1 , { 4 , 5 } P = { D 2 , { 4 , 5 } , { 1 , 2 } 1 , D 2 , { 4 , 5 } , { 1 , 3 } 1 , D 2 , { 4 , 5 } , { 2 , 3 } 1 } .
The memory size of each multi-access cache is 8, and the sizes of the private caches are M P ( 2 ) = 0.3 and M P ( 3 ) = 0.1 . After receiving the demand vectors, the server transmits X = [ Y r , Ω 2 + r 1 , q r , G 1 ] , r { 2 , 3 } , G Ω r where
Y 3 , { 1 , 2 , 3 , 4 , 5 } 1 = V 3 , { 1 , 2 , 3 , 4 , 5 } 1 W q 3 , { 1 , 2 , 3 } 1 , { 4 , 5 } W q 3 , { 1 , 2 , 4 } 1 , { 3 , 5 } W q 3 , { 1 , 2 , 5 } 1 , { 3 , 4 } W q 3 , { 1 , 3 , 4 } 1 , { 2 , 5 } W q 3 , { 1 , 3 , 5 } 1 , { 2 , 4 } W q 3 , { 1 , 4 , 5 } 1 , { 2 , 3 } W q 3 , { 2 , 3 , 4 } 1 , { 1 , 5 } W q 3 , { 2 , 3 , 5 } 1 , { 1 , 4 } W q 3 , { 2 , 4 , 5 } 1 , { 1 , 3 } W q 3 , { 3 , 4 , 5 } 1 , { 1 , 2 } ,
Y 2 , { 1 , 2 , 3 , 4 } 1 = V 2 , { 1 , 2 , 3 , 4 } 1 W q 2 , { 1 , 2 } 1 , { 3 , 4 } W q 2 , { 1 , 3 } 1 , { 2 , 4 } W q 2 , { 1 , 4 } 1 , { 2 , 3 } W q 2 , { 2 , 3 } 1 , { 1 , 4 } W q 2 , { 2 , 4 } 1 , { 1 , 3 } W q 2 , { 3 , 4 } 1 , { 1 , 2 } ,
Y 2 , { 1 , 2 , 3 , 5 } 1 = V 2 , { 1 , 2 , 3 , 5 } 1 W q 2 , { 1 , 2 } 1 , { 3 , 5 } W q 2 , { 1 , 3 } 1 , { 2 , 5 } W q 2 , { 1 , 5 } 1 , { 2 , 3 } W q 2 , { 2 , 3 } 1 , { 1 , 5 } W q 2 , { 2 , 5 } 1 , { 1 , 3 } W q 2 , { 3 , 5 } 1 , { 1 , 2 } ,
Y 2 , { 1 , 2 , 4 , 5 } 1 = V 2 , { 1 , 2 , 4 , 5 } 1 W q 2 , { 1 , 2 } 1 , { 4 , 5 } W q 2 , { 1 , 4 } 1 , { 2 , 5 } W q 2 , { 1 , 5 } 1 , { 2 , 4 } W q 2 , { 2 , 4 } 1 , { 1 , 5 } W q 2 , { 2 , 5 } 1 , { 1 , 4 } W q 2 , { 4 , 5 } 1 , { 1 , 2 } ,
Y 2 , { 1 , 3 , 4 , 5 } 1 = V 2 , { 1 , 3 , 4 , 5 } 1 W q 2 , { 1 , 3 } 1 , { 4 , 5 } W q 2 , { 1 , 4 } 1 , { 3 , 5 } W q 2 , { 1 , 5 } 1 , { 3 , 4 } W q 2 , { 3 , 4 } 1 , { 1 , 5 } W q 2 , { 3 , 5 } 1 , { 1 , 4 } W q 2 , { 4 , 5 } 1 , { 1 , 3 } ,
Y 2 , { 2 , 3 , 4 , 5 } 1 = V 2 , { 2 , 3 , 4 , 5 } 1 W q 2 , { 2 , 3 } 1 , { 4 , 5 } W q 2 , { 2 , 4 } 1 , { 3 , 5 } W q 2 , { 2 , 5 } 1 , { 3 , 4 } W q 2 , { 3 , 4 } 1 , { 2 , 5 } W q 2 , { 3 , 5 } 1 , { 2 , 4 } W q 2 , { 4 , 5 } 1 , { 2 , 3 } ,
q r , G 1 = p r , G 1 d r , G 1 , r { 2 , 3 } , G Ω r .
By the above transmissions, each user will be able to get their demands. Consider the user U 3 , { 1 , 2 , 3 } 1 and the transmission Y 3 , { 1 , 2 , 3 , 4 , 5 } 1 . The term W q 3 , { 1 , 2 , 4 } 1 , { 3 , 5 } can be written as i [ 20 ] q 3 , { 1 , 2 , 4 } , i 1 W i , { 3 , 5 } . As the user knows q 3 , { 1 , 2 , 4 } 1 = ( q 3 , { 1 , 2 , 4 } , 1 1 , … , q 3 , { 1 , 2 , 4 } , 20 1 ) and has access to the 3rd cache, it can calculate the term. Similarly, the user U 3 , { 1 , 2 , 3 } 1 can calculate the terms W q 3 , G 1 , S G for all G ⊂ { 1 , 2 , 3 , 4 , 5 } with | G | = 3 , G ≠ { 1 , 2 , 3 } . Moreover, the user U 3 , { 1 , 2 , 3 } 1 can get the key V 3 , { 1 , 2 , 3 , 4 , 5 } 1 W p 3 , { 1 , 2 , 3 } 1 , { 4 , 5 } from its private cache Z 3 , 1 , { 1 , 2 , 3 } P . So, the user U 3 , { 1 , 2 , 3 } 1 can decode W d 3 , { 1 , 2 , 3 } 1 , { 4 , 5 } . Similarly, all the other users can decode the files demanded by them. The users that are connected to 2 caches make use of the transmissions done for every 4-sized subset of [ 5 ] . Now, let us see the demand privacy constraint. The server sends the vectors q r , G 1 , r { 2 , 3 } , G Ω r to all the users. Since the vectors are uniformly distributed, it is impossible to know anything about the demand vectors d r , G 1 , r { 2 , 3 } , G Ω r . So, demand privacy is guaranteed since each user does not know the key of any other user, and the vectors q r , G 1 , r { 2 , 3 } , G Ω r are random vectors independently and uniformly distributed over F 2 20 . Now, let us look into the aspect of security. Each signal sent by the server is protected by a security key. Therefore, an eavesdropper who continuously observes the signals sent by the server learns nothing about the file library. As there are six transmissions, the rate achieved is 6 10 .

6. Main Results

In this section, we present the main results of this paper.
Theorem 1.
Consider the combinatorial topology with private caches. For t [ 0 : C r ] , with M M = N t / C and M P = C r t / C t , there exists a scheme that satisfies the conditions in (8)–(10) with the rate
R = C t + r C t .
Proof. 
Consider the proposed scheme in Section 4. Each file is divided into C t subfiles. So, the size of each subfile is 1 C t . Each transmission also has a size of 1 C t . There is a transmission for every ( t + r ) -sized subset of [ C ] . Since the lengths of q G , G Ω r , are negligible compared to the file size, the total rate is R = C t + r C t . The scheme is given for integer values of t. For the other values of t, memory sharing between the neighbouring integer points can be used to achieve the corresponding rate. □
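The memory-sharing step mentioned at the end of the proof can be sketched as follows. This is our interpretation, following standard coded-caching memory sharing between the two neighbouring integer points t1 = ⌊t⌋ and t1 + 1; the paper does not spell out the construction, so the split below is an assumption.

```python
from math import comb, floor

# Hypothetical memory-sharing sketch: place a fraction a = t1 + 1 - t
# of every file according to the integer point t1 = floor(t), the rest
# according to t1 + 1, and add the two (scaled) rates.
def rate(C, r, t):
    t1 = floor(t)
    if t1 == t:                                   # integer point: Theorem 1 directly
        return comb(C, t1 + r) / comb(C, t1)
    a = t1 + 1 - t
    return (a * comb(C, t1 + r) / comb(C, t1)
            + (1 - a) * comb(C, t1 + 1 + r) / comb(C, t1 + 1))

print(rate(5, 3, 2))      # integer point: comb(5,5)/comb(5,2) = 0.1
print(rate(5, 3, 1.5))    # interpolation between t=1 and t=2
```

At t = 1.5 this gives the midpoint of the t = 1 and t = 2 rates, as expected from a time-sharing argument.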

Comparison with [17]

In this subsection, we compare the proposed scheme with the SP-LFR scheme in [17]. Security, privacy, and LFR were considered for combinatorial topology in [17]. The scheme in [17] does not involve private caches and it does not provide privacy against colluding users, whereas our model includes both. The comparison here is therefore not between the same problem settings, but rather at the level of memory usage for the same achievable rate R = C t + r / C t . Within this scope, the proposed scheme achieves lower total memory accessed by each user and global cache memory, while also ensuring a stronger privacy guarantee. Here, by total memory accessed by each user, we mean the total cache content available to each user. By global cache memory, we mean the total cache memory available across the entire system.
We first compare the total memory accessed by each user for the same rate in both schemes. The size of the total memory accessed by each user, M 1 , for the SP-LFR scheme in [17], is given by
M 1 = r N t C + r C r t C 1 r 1 C t .
The size of the total memory accessed by each user, M 2 , for the proposed scheme is
M 2 = r N t C + C r t C t .
The rate R for both the schemes is C t + r C t . By comparing (39) and (40), it is evident that, for the same rate, the total memory accessed by each user is lower in the proposed scheme.
Furthermore, we also compare the two schemes under the setting where the same total memory accessed by each user is considered for the two schemes. Suppose both schemes are operated at the same rate R 1 , requiring total memory accessed by each user M 2 in the proposed scheme and M 1 in [17], with M 2 < M 1 . If we fix that to M 1 in both the systems, the proposed scheme has more memory available than strictly required for R 1 , and hence can operate at a strictly smaller rate R 2 < R 1 , while the scheme in [17] remains at R 1 . This behavior is also visible in Figure 2, Figure 3 and Figure 4, where for any fixed total memory accessed by each user, the red curve (proposed scheme) lies below the blue curve [17].
Next, we compare the global cache, which is the total memory available in the entire system. The global cache size, M G ( 1 ) , for the system in [17], is given by
M G ( 1 ) = C N t C + C C 1 r 1 C r t C t .
The global cache size, M G ( 2 ) , for the system considered in this work is
M G ( 2 ) = C N t C + C r C r t C t .
Since C r ≤ C C 1 r 1 , with strict inequality for r > 1 , we have M G ( 2 ) < M G ( 1 ) whenever r > 1 . Thus, the global cache memory requirement is also lower for the system considered in this work.
Furthermore, we also compare the two schemes under the setting where the same global cache size is allocated. Suppose both schemes achieve the same rate R 1 with global cache sizes M G ( 2 ) (proposed scheme) and M G ( 1 ) [17], where M G ( 2 ) < M G ( 1 ) . If we fix the global cache size to M G ( 1 ) in both systems, the proposed scheme has surplus memory available beyond what is necessary for R 1 , enabling it to operate at a strictly smaller rate R 2 < R 1 , while the scheme in [17] remains at R 1 . This advantage is consistent with Figure 5 and Figure 6, where for any fixed global cache size the red curve (proposed scheme) lies below the blue curve [17], confirming that our scheme achieves a lower rate under equal global cache memory.
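The four memory expressions above are easy to check numerically. The sketch below (helper names are ours) evaluates (39)-(42) at C = 7, r = 3, t = 1, N = 100, an operating point at which both schemes achieve the rate 5, and reproduces the memory values quoted in Section 7.

```python
from math import comb

# Sketch evaluating per-user memories (39), (40) and global memories
# (41), (42) for the SP-LFR scheme in [17] versus the proposed scheme.
def per_user_memory(C, r, t, N):
    files = r * N * t / C                                     # file-storage term (common)
    M1 = files + r * comb(C - 1, r - 1) * comb(C - r, t) / comb(C, t)  # scheme in [17]
    M2 = files + comb(C - r, t) / comb(C, t)                           # proposed scheme
    return M1, M2

def global_memory(C, r, t, N):
    files = N * t                                             # C * (N t / C)
    MG1 = files + C * comb(C - 1, r - 1) * comb(C - r, t) / comb(C, t)  # [17]
    MG2 = files + comb(C, r) * comb(C - r, t) / comb(C, t)              # proposed
    return MG1, MG2

M1, M2 = per_user_memory(7, 3, 1, 100)     # both schemes run at rate C(7,4)/C(7,1) = 5
MG1, MG2 = global_memory(7, 3, 1, 100)
print(round(M1, 4), round(M2, 4), MG1, MG2)   # 68.5714 43.4286 160.0 120.0
```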
Theorem 2.
For the combinatorial topology with private caches,
$R^{*}(M_M, M_P) \ge \max_{l \in \{1, 2, \ldots, \min(N/2, K)\}} \dfrac{l \lfloor N/l \rfloor - \big( \min(l+r-1, C)\, M_M + l\, M_P \big)}{\lfloor N/l \rfloor - 1} .$
Proof. 
The detailed proof is given in Appendix B. □
Theorem 3.
For the combinatorial topology with private caches, the achievable rate R and the optimal rate R * satisfy
$\dfrac{R(M_M, M_P)}{R^{*}(M_M, M_P)} \le \begin{cases} 1, & r \ge C-1, \; N \ge 2K, \\ 5, & \dfrac{C M_M}{N} = 1, \; r < C-1, \; K \ge 5(r+1). \end{cases}$
Proof. 
The proof is presented in Appendix C. □
The scheme achieves optimality when r ≥ C − 1 because each user can access almost all caches, which maximizes the cached content available to each user and fully exploits coded multicasting opportunities. In this case, the achievable rate meets the converse bound (e.g., for r = C − 1 and t = 1 , only one transmission is required). For smaller values of r, there remains a non-trivial gap. Since deriving the gap for a general r is mathematically involved, we provide an example to briefly explain the variation in the gap for different values of r.
Example 5.
Consider C = 7 , N = 100 , and t = 1 .
Table 1 reports the achievable rate R and the lower bound R * (Theorem 2) for different values of r. For r = 1 , 2 , , 5 there is a non-trivial gap between the two, while for r = C 1 = 6 we have R = R * . This confirms that the proposed scheme is optimal at the threshold r = C 1 , as stated in Theorem 3.
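The entries of Table 1 can be regenerated with a short script. The sketch below uses our reading of the Theorem 2 bound, R* ≥ max_l [ l⌊N/l⌋ − (min(l+r−1, C) M_M + l M_P) ] / (⌊N/l⌋ − 1), together with the achievable rate of Theorem 1 at t = 1; the helper names are ours.

```python
from math import comb

# Sketch regenerating Table 1 for C=7, N=100, t=1.
def achievable_rate(C, r, t):
    return comb(C, t + r) / comb(C, t)

def cutset_bound(C, r, t, N):
    K = comb(C, r)                        # number of users
    M_M = N * t / C                       # multi-access cache size
    M_P = comb(C - r, t) / comb(C, t)     # private cache size
    best = 0.0
    for l in range(1, min(N // 2, K) + 1):
        accessed = min(l + r - 1, C) * M_M + l * M_P
        best = max(best, (l * (N // l) - accessed) / (N // l - 1))
    return best

C, N, t = 7, 100, 1
for r in range(1, C):
    print(r, achievable_rate(C, r, t), cutset_bound(C, r, t, N))
# At r = C - 1 = 6 the two values coincide (R = R* = 1/7), matching Theorem 3.
```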
Theorem 4.
Consider the GCT with private caches. For t [ 0 : C ] , r [ 0 : C ] , with M M = N t C and M P ( r ) = C r t C t , there exists a scheme that satisfies the conditions in (26)–(28) with the rate, R G given by
R G = r = 0 C K r C t + r C t .
Proof. 
Consider the scheme given in Section 5. Each file is divided into C t subfiles. So, the size of each subfile is 1 / C t . The total number of transmissions made is r = 0 C K r C t + r , and each transmission has a size of 1 / C t . So, the rate is R G = r = 0 C K r C t + r C t . □
Theorem 5.
For the GCT with private caches, the optimal rate R G * satisfies
$R_G^{*} \ge \max_{s \in [C]} \dfrac{\bar{K}_s \Big( N - \big( s M_M + \sum_{r=0}^{s} \binom{s}{r} K_r M_P(r) \big) \Big)}{N - 1},$
where $\bar{K}_s = \sum_{r=0}^{s} \binom{s}{r} K_r$.
Proof. 
Refer Appendix D for the detailed proof. □

7. Discussion

In this section, we discuss the schemes presented in this paper through numerical evaluations. We compare the proposed scheme in Section 4 with the SP-LFR scheme in [17] and also with the MAN-PDA-based SP-LFR scheme in [15]. We also compare the performance of the scheme for GCT with private caches under different K GCT vectors. The comparison is done in the following way:
  • First, we compare the proposed scheme with the SP-LFR scheme in [17]. The system model in [17] is the multi-access combinatorial topology without private caches. However, the system model considered in this work is the combinatorial topology with private caches. To compare two different systems, we make use of two parameters. One is the total memory accessed per user, which is the total amount of memory available to each user. The other is the global cache size, which is the total amount of memory available in the entire system. Both systems support the same number of users for a given r.
  • Next, we compare the proposed scheme with the scheme developed for the dedicated cache setup in [15]. The proposed scheme reduces to the dedicated cache setup in [15] when r = 1 . We first consider the case where both setups have the same number of caches. In the dedicated cache setup, the number of users is equal to the number of caches. However, in the combinatorial topology with private caches, the total number of users exceeds the number of caches. To enable a fair comparison under this setting, we use the metric of rate per user, defined as the rate divided by the total number of users. This metric was previously used in [3,7]. Next, we consider a comparison in which both the dedicated cache setup and the combinatorial topology with private caches serve the same number of users, and the total global cache size is also kept equal across both setups.
  • Lastly, we compare the performance of the scheme for GCT with private caches for different values of the vector K G C T with the SP-LFR scheme in [17] and the MAN-PDA-based SP-LFR scheme in [15].

7.1. Comparison with the SP-LFR Scheme in [17]

The performance comparison between the proposed scheme and the SP-LFR scheme in [17] is given in Figure 2 for C = 7 , r = 2 , N = 100 , t [ 1 : 5 ] . Normalised total memory accessed per user, which is the total memory accessed by each user normalised with the total number of files, is used for comparison. The scheme in [17] does not include private caches and privacy against colluding users, while our model incorporates both. The comparison is therefore not between the same problem settings, but serves to illustrate the impact of introducing private caches. Since both schemes achieve the same rate expression, the curves can be meaningfully compared in terms of memory usage at equal rates. The proposed scheme performs better than the SP-LFR scheme in [17]. The performance comparison for C = 7 , r = 3 , N = 100 is shown in Figure 3. The lower bound plotted in Figure 2 and Figure 3 corresponds to the information-theoretic lower bound derived in Theorem 2 using cut-set arguments. The gap between the proposed scheme and this lower bound is larger when the cache size is small because only limited coded multicasting opportunities exist in that regime. As the cache size grows, the multicasting gain increases, and the performance of the proposed scheme moves closer to the bound, which explains why the gap decreases. To achieve the rate of 5, the total memory accessed by each user in the SP-LFR scheme in [17] is 68.5714 , whereas the proposed scheme requires 43.4286 . The gap between the proposed scheme and the SP-LFR scheme in [17] increased when r increased from 2 to 3. This is because the keys that are required for C 1 r 1 users are stored in each cache in the SP-LFR scheme in [17], which leads to an increase in the cache memory size. However, in the proposed scheme, due to the presence of a private cache, the keys that are useful to a particular user are stored in that user’s private cache. The performance comparison for C = 7 , r = 3 , N = 60 is shown in Figure 4. 
As N increases, the gap between the curves decreases. The gap between the proposed scheme and the scheme in [17] increases with r and decreases with N because of the behavior of the second term in the total memory accessed per user expressions in (39) and (40). Specifically, both schemes share the same first term r N t C , which represents the storage related to files, but differ in the key-storage term (second term). Our scheme stores each user’s keys exclusively in their private cache, whereas the scheme in [17] redundantly stores keys across the multi-access caches (to compensate for the absence of private caches). As N grows, the key-storage term becomes relatively less significant, so the gap shrinks. On the other hand, as r increases, the combinatorial factor in the second term grows, which increases the difference and hence the gap. The normalised global cache (global cache/N) vs. rate curves are plotted in Figure 5 for C = 7 , r = 3 , N = 100 . The proposed scheme requires a global cache memory of 120 to achieve the rate of 5, whereas the SP-LFR scheme in [17] requires a global memory of 160. Similarly, the comparison curves for C = 7 , r = 3 , N = 60 are shown in Figure 6. The performance gap widens as r increases and narrows as N increases. This can be explained by analyzing the global cache expressions in (41) and (42). The difference arises entirely from the key storage term (second term), since the file storage term N t is identical. As r increases, the combinatorial factor C C 1 r 1 grows faster than C r . As N increases, the key storage term becomes negligible relative to N t , causing the relative gap to shrink.

7.2. Comparison with the MAN-PDA-Based SP-LFR Scheme in [15]

The curves for r = 1 show the performance of the MAN-PDA-based SP-LFR Scheme in [15]. As the number of users changes with r, to be able to compare the performances for different values of r, we use the rate per user instead of the rate. Let M D represent the size of cache memory in the dedicated cache setup. Now, we compare our scheme for different values of r when the total memory accessed by each user is the same. In the combinatorial topology with private caches, the total memory accessed by each user is r M M + M P . To make the users access the same amount of memory in both setups, we set M D = r M M + M P . The curves for different values of r are shown in Figure 7 when C = 7 , N = 30 and in Figure 8 when C = 7 , N = 50 . The performance of the dedicated cache setup is better than that of our setup. This is because some subfiles are repeated in two or more caches that are accessible to a user, making the effective number of subfiles smaller in our setup when the total memory accessed by each user is the same. In this comparison, our setup is at a disadvantage because the total system memory is larger in the dedicated cache setup. For example, consider C = 7 , r = 2 , N = 50 . When M M = 50 / 7 and M P = 5 / 7 , the global cache of the dedicated cache setup is 105, whereas for our setup, it is 65. Now, we compare the performance for different values of r, keeping the global cache memory of the system the same. This is shown in Figure 9 and Figure 10 for N = 30 and N = 50 , respectively. Clearly, as the value of r increases, the performance improves. So, the performance of our scheme is better than that of the scheme given in [15] for the dedicated cache setup. Now, we consider both the dedicated cache setup and the combinatorial topology with private caches to have the same number of users. Let the number of users in the dedicated cache setup be K D . For a given r, the combinatorial topology with private caches supports C r users.
So, we set $K_D = \binom{C}{r}$ and, for each value of r, consider the dedicated cache setup with the corresponding number of users. Let $M_G$ denote the global cache memory. The performance curves are shown in Figure 11 for $C = 7$, $r = 2$. Clearly, the combinatorial topology with private caches outperforms the dedicated cache setup. The curves for all values of r are shown in Figure 12, where the dotted lines represent the dedicated cache setup and the solid lines represent the combinatorial topology with private caches. Figure 12 compares the performance of the proposed scheme for the combinatorial topology with private caches with the dedicated cache model from [15], under the condition that both systems serve the same number of users $K = \binom{C}{r}$. This is a crucial comparison because the combinatorial topology inherently supports $K = \binom{C}{r}$ users, far more than the $K_D = C$ users supported by the dedicated cache setup. To enable a fair comparison, we scale the dedicated cache system to also serve $K_D = \binom{C}{r}$ users. The x-axis shows the normalized global cache size ($M_G/N$), and the y-axis shows the delivery rate $R$. For $r = 1$, the two models coincide, while for every $r > 1$, the proposed scheme achieves a strictly lower rate than the scheme in [15] for the same $M_G/N$. This clearly shows that the proposed scheme outperforms the scheme in [15].
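The comparison curves above can be regenerated from the closed-form expressions used throughout this paper: $R = \binom{C}{t+r}/\binom{C}{t}$ with $M_M = Nt/C$, $M_P = \binom{C-r}{t}/\binom{C}{t}$, and $K = \binom{C}{r}$. The following sketch (the helper name `rate_point` is ours, not from the paper) tabulates the global cache memory, the memory accessed per user, and the rate per user at each memory point:

```python
from math import comb

def rate_point(C, r, t, N):
    """Memory/rate point of the proposed scheme for integer t:
    returns (M_M, M_P, R) in units of files, as in Theorem 3."""
    M_M = N * t / C
    M_P = comb(C - r, t) / comb(C, t)
    R = comb(C, t + r) / comb(C, t)
    return M_M, M_P, R

C, N = 7, 50
for r in (1, 2, 3):
    K = comb(C, r)  # number of users in the combinatorial topology
    for t in range(C - r + 1):
        M_M, M_P, R = rate_point(C, r, t, N)
        global_mem = C * M_M + K * M_P  # total system memory M_G
        per_user = r * M_M + M_P        # total memory accessed by a user
        print(f"r={r} t={t}: M_G={global_mem:.2f} "
              f"M_user={per_user:.2f} rate/user={R / K:.4f}")
```

Plotting rate per user against $M_G/N$ for these points reproduces the qualitative behavior of Figures 9–12: for a fixed global memory, larger r yields a lower rate per user.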

7.3. Plots for GCT with Private Caches

We now show the plots for the extended setup. Since each user accesses a different number of multi-access caches in this setup, we choose the global cache memory as the metric and show the plots for different values of the vector $K_{GCT}$. When $K_{GCT} = (0, 1, 0, 0, 0, 0)$, the setup reduces to the dedicated cache setup. The performance curves for $K_{GCT} = (0, 0, 0, 1, 0, 0)$, $K_{GCT} = (0, 0, 2, 2, 0, 0)$, and $K_{GCT} = (0, 0, 0, 2, 2, 0)$ are shown in Figure 13 for $N = 80$ and in Figure 14 for $N = 50$. When $t = 0$, the size of the multi-access caches is zero, so connectivities that serve more users require more global memory to achieve a rate per user of one. Hence, $K_{GCT} = (0, 0, 0, 1, 0, 0)$ requires less global cache than $K_{GCT} = (0, 0, 2, 2, 0, 0)$ and $K_{GCT} = (0, 0, 0, 2, 2, 0)$. The curve labeled $r = 1$ represents the dedicated cache setup. Clearly, the dedicated cache setup performs worse than the curves for GCT with private caches. The curve with $K_{GCT} = (0, 0, 0, 2, 2, 0)$ lies below the curve with $K_{GCT} = (0, 0, 0, 1, 0, 0)$ because the contribution of the users connected to four multi-access caches improves the overall rate per user. Similarly, the curve with $K_{GCT} = (0, 0, 2, 2, 0, 0)$ lies above the curve with $K_{GCT} = (0, 0, 0, 1, 0, 0)$.
We show the performance of the achievable scheme for GCT with private caches for multiple K GCT values along with the lower bound given in Theorem 5 in Figure 15. It is evident that the gap between the achievable scheme and the lower bound is larger in the low-memory region, while it decreases as the global cache memory increases. The reason is that with more global cache memory, the number of coded multicasting opportunities grows, which in turn reduces the gap to the lower bound. This trend is shown in Table 2 for K GCT = ( 0 , 0 , 2 , 2 , 0 , 0 ) , C = 5 , N = 80 .

8. Conclusions

We explored security, demand privacy, and Linear Function Retrieval (LFR) for the combinatorial topology with private caches and proposed a scheme that satisfies the security, privacy, and LFR conditions simultaneously. The private caches included in this work enable the proposed scheme to provide privacy against colluding users. We showed that, to achieve the same rate as the scheme in [17], the total memory accessed by each user is smaller in the proposed scheme; our scheme also performs better than the SP-LFR scheme in [17] in terms of global cache memory. As a special case, when $r = 1$, the proposed scheme recovers the MAN-PDA-based SP-LFR scheme in [15]. We derived a lower bound and proved that the scheme is optimal when $r \ge C - 1$. Finally, we showed that the proposed scheme extends to a more general setup.

Author Contributions

Conceptualization, B.S.R.; Methodology, M.C.; Validation, B.S.R.; Formal analysis, M.C.; Investigation, M.C.; Resources, B.S.R.; Data curation, M.C.; Writing— original draft, M.C.; Writing—review & editing, B.S.R.; Visualization, B.S.R.; Supervision, B.S.R.; Project administration, B.S.R.; Funding acquisition, B.S.R. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported partly by the Science and Engineering Research Board (SERB) of the Department of Science and Technology (DST), Government of India, through J.C. Bose National Fellowship to B. Sundar Rajan.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Acknowledgments

Part of the content of this manuscript appears in proceedings of IEEE Information Theory Workshop (ITW), Shenzhen, China, 23–24 November 2024 [19].

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Appendix A. Proof of Proposition 1

We prove the minimum cache condition for security given in Proposition 1.
As shown in Figure 1, the server containing $N$ files is connected to $K$ users via an error-free shared link. Each user has access to a unique set of $r$ caches out of $C$ multi-access caches, each of size $M_M$, and also to a private cache of size $M_P$. The condition is proved for single file retrieval, and the result holds for LFR as well. In this system, the total number of users is $K = \binom{C}{r}$. Since each user is associated with an $r$-sized subset, arrange all the users in the lexicographic order of their associated subsets. Let the $i$-th user demand the file $W_{d_i}$. Let
$$\bar{W} = \{W_{d_1}, W_{d_2}, \ldots, W_{d_K}\}, \qquad \bar{Z} = \{Z_1^M, Z_2^M, \ldots, Z_C^M\} \cup \{Z_1^P, \ldots, Z_K^P\}.$$
By the correctness condition in (8) and the security condition in (9), we can write
$$H(W_{d_1}, W_{d_2}, \ldots, W_{d_K} \mid X, \bar{Z}) = 0, \tag{A1}$$
$$I(W_{d_1}, W_{d_2}, \ldots, W_{d_K}; X) = 0. \tag{A2}$$
We have,
$$\begin{aligned}
\binom{C}{r} F &= H(\bar{W}) && \text{(A3a)}\\
&= I(\bar{W}; X, \bar{Z}) + H(\bar{W} \mid X, \bar{Z}) && \text{(A3b)}\\
&= I(\bar{W}; X, \bar{Z}) && \text{(A3c)}\\
&= I(\bar{W}; X) + I(\bar{W}; \bar{Z} \mid X) && \text{(A3d)}\\
&= I(\bar{W}; \bar{Z} \mid X) && \text{(A3e)}\\
&\le H(\bar{Z}) && \text{(A3f)}\\
&\le \sum_{c=1}^{C} H(Z_c^M) + \sum_{k=1}^{K} H(Z_k^P) && \text{(A3g)}\\
&\le C M_M F + K M_P F. && \text{(A3h)}
\end{aligned}$$
This implies $CM_M + KM_P \ge \binom{C}{r} = K$. Here, (A3a) follows because there are $\binom{C}{r}$ users, each demanding a file of size $F$ bits, and the files are assumed independent and uniformly distributed; therefore the total entropy is $\binom{C}{r}F$. Then, (A3b) follows from the definition of mutual information: for any two random variables $A$ and $B$, $H(A) = I(A;B) + H(A \mid B)$; here $A = \bar{W}$ and $B = (X, \bar{Z})$, so we obtain $H(\bar{W}) = I(\bar{W}; X, \bar{Z}) + H(\bar{W} \mid X, \bar{Z})$. Finally, (A3c) and (A3e) follow from the correctness condition in (A1) and the security condition in (A2), respectively. This completes the proof of Proposition 1.
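As a sanity check, the necessary condition $CM_M + KM_P \ge K$ can be verified numerically at the memory points used by the proposed scheme. The helper below is illustrative (the name is ours); it assumes $N \ge K$, under which the condition holds for every valid $t$, with equality at $t = 0$:

```python
from math import comb

def security_bound_holds(C, r, t, N):
    """Check the necessary condition C*M_M + K*M_P >= K of
    Proposition 1 (file-size units) at the scheme's memory point."""
    K = comb(C, r)
    M_M = N * t / C                    # multi-access cache size
    M_P = comb(C - r, t) / comb(C, t)  # private cache size
    return C * M_M + K * M_P >= K - 1e-9

# N >= K here, so the condition holds for all r and all valid t.
C, N = 7, 50
assert all(security_bound_holds(C, r, t, N)
           for r in range(1, C + 1)
           for t in range(C - r + 1))
```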

Appendix B. Proof of Theorem 2

We prove the cut-set based lower bound given in Theorem 2.
Consider the combinatorial topology with private caches. Each user is connected to a unique set of $r$ multi-access caches out of $C$, each of size $M_M$, and a private cache of size $M_P$. Since each user is associated with an $r$-sized subset, arrange all the users in the lexicographic order of their associated subsets. The content of a multi-access cache is denoted $Z_i^M$, $i \in [C]$, and the content of a private cache is denoted $Z_j^P$, $j \in [K]$. Each user is connected to $r$ out of $C$ multi-access caches and a private cache; hence, if the first $l$ users are considered, together they access the first $z = \min(l + r - 1, C)$ multi-access caches and $l$ private caches. The server consists of a library of $N$ files, $W_{[N]} = \{W_1, W_2, \ldots, W_N\}$, each of size $F$ bits. Let the first $l$ users request $W_1, \ldots, W_l$, respectively; the remaining users can request arbitrary files. We consider single file retrieval here; the lower bound is also valid for linear function retrieval. Let the server make the transmission $X_1$. Using $X_1$, the content of the first $z$ multi-access caches, and the $l$ private caches, the first $l$ users can decode $W_1, \ldots, W_l$. Now, let the first $l$ users request $W_{l+1}, W_{l+2}, \ldots, W_{2l}$, respectively, and let the server send the transmission $X_2$; using $X_2$, the first $z$ multi-access caches, and the $l$ private caches, the first $l$ users can decode $W_{l+1}, \ldots, W_{2l}$. Continuing in this way up to the transmission $X_{\lfloor N/l \rfloor}$, the first $l$ users can decode $W_{(\lfloor N/l \rfloor - 1)l + 1}, \ldots, W_{\lfloor N/l \rfloor l}$. Considering all the transmissions $X_1, X_2, \ldots, X_{\lfloor N/l \rfloor}$ together with the first $z$ multi-access caches and the first $l$ private caches, the users must be able to decode $W_1, W_2, \ldots, W_{l \lfloor N/l \rfloor}$. Let
$$\tilde{W} = \{W_1, W_2, \ldots, W_{l\lfloor N/l\rfloor}\}, \quad \tilde{X} = \{X_1, X_2, \ldots, X_{\lfloor N/l\rfloor}\},$$
$$\tilde{X}_{/s} = \{X_1, \ldots, X_{s-1}, X_{s+1}, \ldots, X_{\lfloor N/l\rfloor}\}, \quad \tilde{Z} = \{Z_1^M, \ldots, Z_z^M\} \cup \{Z_1^P, \ldots, Z_l^P\}.$$
By the conditions in (8) and (9),
$$H(\tilde{W} \mid \tilde{X}, \tilde{Z}) = 0, \tag{A4}$$
$$I(\tilde{W}; X_s) = 0, \quad s = 1, 2, \ldots, \lfloor N/l \rfloor. \tag{A5}$$
By using the transmissions in X ˜ , the first l users can decode the files in W ˜ using the cache content in Z ˜ . We have,
$$\begin{aligned}
l\lfloor N/l\rfloor F &= H(\tilde{W}) && \text{(A6a)}\\
&= I(\tilde{W}; \tilde{X}, \tilde{Z}) + H(\tilde{W} \mid \tilde{X}, \tilde{Z}) && \text{(A6b)}\\
&= I(\tilde{W}; \tilde{X}, \tilde{Z}) && \text{(A6c)}\\
&= I(\tilde{W}; \{X_1, X_2, \ldots, X_{\lfloor N/l\rfloor}\}, \tilde{Z}) && \text{(A6d)}\\
&= I(\tilde{W}; X_s) + I(\tilde{W}; \tilde{X}_{/s}, \tilde{Z} \mid X_s) && \text{(A6e)}\\
&= I(\tilde{W}; \tilde{X}_{/s}, \tilde{Z} \mid X_s) && \text{(A6f)}\\
&\le H(\tilde{X}_{/s}, \tilde{Z}) && \text{(A6g)}\\
&\le \sum_{i=1, i\ne s}^{\lfloor N/l\rfloor} H(X_i) + H(\tilde{Z}) && \text{(A6h)}\\
&\le (\lfloor N/l\rfloor - 1)R^* F + zM_M F + lM_P F, && \text{(A6i)}
\end{aligned}$$
where (A6a) follows because the set $\tilde{W}$ contains $l\lfloor N/l\rfloor$ independent files of size $F$ each, giving total entropy $H(\tilde{W}) = l\lfloor N/l\rfloor F$, and (A6b) follows from the definition of mutual information: for any random variables $A$ and $B$, $H(A) = I(A;B) + H(A \mid B)$; here $A = \tilde{W}$ and $B = (\tilde{X}, \tilde{Z})$, where $\tilde{X}$ denotes the transmissions and $\tilde{Z}$ the relevant cache contents. Now, (A6c) comes from the correctness condition in (A4) and (A6f) comes from the security condition in (A5). By rearranging the terms, we get
$$R^* \ge \frac{l\lfloor N/l\rfloor - (zM_M + lM_P)}{\lfloor N/l\rfloor - 1}.$$
Optimizing over all possible values of l, we obtain
$$R^* \ge \max_{l \in \{1, 2, \ldots, \min(\lfloor N/2\rfloor, K)\}} \frac{l\lfloor N/l\rfloor - (zM_M + lM_P)}{\lfloor N/l\rfloor - 1}.$$
This completes the proof.
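The bound of Theorem 2 is straightforward to evaluate numerically; a possible sketch (the function name is ours) is:

```python
from math import comb, floor

def cut_set_lower_bound(C, r, N, M_M, M_P):
    """Cut-set lower bound of Theorem 2:
    R* >= max_l [l*floor(N/l) - (z*M_M + l*M_P)] / (floor(N/l) - 1),
    with z = min(l + r - 1, C) and l in {1, ..., min(floor(N/2), K)}."""
    K = comb(C, r)
    best = 0.0
    for l in range(1, min(N // 2, K) + 1):
        z = min(l + r - 1, C)
        denom = floor(N / l) - 1  # >= 1 since l <= N/2
        val = (l * floor(N / l) - (z * M_M + l * M_P)) / denom
        best = max(best, val)
    return best
```

For instance, at the memory point $M_M = 0$, $M_P = 1$ (the $t = 0$ point of the scheme), the bound evaluates to $K$, matching the achievable rate $R(0,1) = K$ used in Appendix C.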

Appendix C. Proof of Theorem 3

Consider the combinatorial topology with private caches. The server has $N$ files, each of size $F$ bits, and it is connected to $K$ users through an error-free shared link. Each user has access to a unique set of $r$ out of $C$ multi-access caches and a private cache; for every set of $r$ multi-access caches, there is a user. The rate of the scheme with $M_M = Nt/C$ and $M_P = \binom{C-r}{t}/\binom{C}{t}$ that satisfies the conditions in (8)–(10) is given by
$$R(M_P, M_M) = \frac{\binom{C}{t+r}}{\binom{C}{t}}. \tag{A7}$$
The optimal rate $R^*$ is lower bounded by
$$R^* \ge \max_{l \in \{1, 2, \ldots, \min(\lfloor N/2\rfloor, K)\}} \frac{l\lfloor N/l\rfloor - (zM_M + lM_P)}{\lfloor N/l\rfloor - 1}, \tag{A8}$$
where $z = \min(l + r - 1, C)$. Now, we prove the optimality results using the inequality in (A8). First, we consider the case $r \ge C - 1$. When $r = C - 1$, the possible values of $t$ are $t = 0$ and $t = 1$. Consider $t = 0$: the multi-access cache size is $M_M = Nt/C = 0$ and the private cache size is $M_P = 1$. Now, consider (A8) for $l = K$; then $z = C$, and the bound for $l = K$ becomes
$$\begin{aligned}
R^* &\ge \frac{K\lfloor N/K\rfloor - (CM_M + KM_P)}{\lfloor N/K\rfloor - 1} && \text{(A9a)}\\
&= \frac{K\lfloor N/K\rfloor - K}{\lfloor N/K\rfloor - 1} && \text{(A9b)}\\
&= \frac{K(\lfloor N/K\rfloor - 1)}{\lfloor N/K\rfloor - 1} && \text{(A9c)}\\
&= K, && \text{(A9d)}
\end{aligned}$$
where (A9b) follows by substituting $M_M = 0$ and $M_P = 1$.
For $t = 0$, the achievable rate given in (A7) is
$$R(0, 1) = K. \tag{A10}$$
Thus, optimality is proved for $t = 0$. Now consider the case $t = 1$, $r = C - 1$. The multi-access cache size $M_M$ and the private cache size $M_P$ are
$$M_M = \frac{Nt}{C} = \frac{N}{C}, \qquad M_P = \frac{\binom{C-r}{t}}{\binom{C}{t}} = \frac{1}{C}.$$
Let us calculate the rate. By substituting the values of $t$ and $r$ in (A7), we get
$$R\!\left(\frac{N}{C}, \frac{1}{C}\right) = \frac{\binom{C}{1 + C - 1}}{\binom{C}{1}} = \frac{1}{C}.$$
Now, let us calculate the total memory accessed by each user, $M = rM_M + M_P$, as follows:
$$M = rM_M + M_P = \frac{Nr}{C} + \frac{1}{C} = \frac{N(C-1)}{C} + \frac{1}{C} = N(1 - R) + R.$$
By rearranging the above terms, we get
$$R = \frac{N - M}{N - 1}. \tag{A11}$$
Now, consider the lower bound in (A8) with $l = 1$ and substitute the values of $M_M$ and $M_P$:
$$R^* \ge \frac{N - (rM_M + M_P)}{N - 1} = \frac{N - M}{N - 1}.$$
Comparing the above inequality with (A11) completes the proof for $t = 1$, $r = C - 1$. If $r = C$, the only possible value of $t$ is $t = 0$; the achievable rate and the lower bound for $t = 0$ are given in (A10) and (A9d), respectively, which proves optimality.
Now, let us consider the case $t = 1$, $r < C - 1$. The values of $M_M$ and $M_P$ for $t = 1$ are
$$M_M = \frac{N}{C}, \qquad M_P = \frac{C - r}{C}.$$
Now, consider the lower bound in (A8) with $l = 1$ and substitute the values of $M_M$ and $M_P$:
$$R^* \ge \frac{N - (rM_M + M_P)}{N - 1} = \frac{N - \left(\frac{Nr}{C} + \frac{C - r}{C}\right)}{N - 1} = \frac{\frac{N(C-r)}{C} - \frac{C-r}{C}}{N - 1} = \frac{\left(1 - \frac{r}{C}\right)(N - 1)}{N - 1} = 1 - \frac{r}{C}.$$
The achievable rate $R$ given in (A7) for $t = 1$ is
$$R = \frac{\binom{C}{r+1}}{C}.$$
The above equation can be simplified as follows:
$$R = \frac{\binom{C}{r+1}}{C} = \frac{C!}{(C - r - 1)!\,(r+1)!\,C} = \frac{\binom{C}{r}(C - r)}{(r+1)\,C} = \left(1 - \frac{r}{C}\right)\frac{\binom{C}{r}}{r+1}.$$
Therefore, at $t = 1$,
$$\frac{R}{R^*} \le \frac{\left(1 - \frac{r}{C}\right)\frac{\binom{C}{r}}{r+1}}{1 - \frac{r}{C}} = \frac{\binom{C}{r}}{r+1} \le 5, \quad \text{if } K \le 5(r+1).$$
This completes the proof.
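The algebraic steps above can be checked numerically. The sketch below (names ours) verifies the binomial identity used at $t = 1$ and evaluates the resulting gap $R/R^*$ against the $l = 1$ cut-set bound:

```python
from math import comb

# Identity used at t = 1:  binom(C, r+1)/C == (1 - r/C) * binom(C, r)/(r+1)
for C in range(2, 20):
    for r in range(1, C):
        lhs = comb(C, r + 1) / C
        rhs = (1 - r / C) * comb(C, r) / (r + 1)
        assert abs(lhs - rhs) < 1e-9

def gap_at_t1(C, r):
    """Ratio of the achievable rate at t = 1 to the l = 1 cut-set bound."""
    R = comb(C, r + 1) / C  # achievable rate (A7) at t = 1
    R_star = 1 - r / C      # lower bound (A8) evaluated at l = 1
    return R / R_star       # equals binom(C, r)/(r + 1)

# The gap is at most 5 whenever K = binom(C, r) <= 5(r + 1).
assert gap_at_t1(5, 2) <= 5
```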

Appendix D. Proof of Theorem 5

We prove the cut-set based lower bound for GCT with private caches given in Theorem 5.
We consider GCT with private caches, with $C$ multi-access caches and $K$ private caches in the system. Assume that the users are arranged as follows: the users connected to $r$ multi-access caches are followed by the users connected to $r + 1$ multi-access caches, and users connected to the same number of multi-access caches are arranged in lexicographic order. Consider the first $s$ multi-access caches, where $s \in [1:C]$. The number of users who access content only from these $s$ caches is $K_s = \sum_{i=0}^{s}\binom{s}{i}K_i$. Let those $K_s$ users demand $W_1, \ldots, W_{K_s}$, and let the remaining users demand arbitrary files from $W_{[N]}$. Let the server send the transmission $X_1$ to satisfy the demands. Using $X_1$, the first $s$ multi-access caches, and their $K_s$ private caches, the considered $K_s$ users can decode the first $K_s$ files. Let $\tilde{Z}_i$ denote the total cache content accessible to the $i$-th user. Now, consider the case where the $K_s$ users request $W_{K_s+1}, W_{K_s+2}, \ldots, W_{2K_s}$, and let the server send $X_2$; using $X_2$, the first $s$ multi-access caches, and the $K_s$ private caches, the considered $K_s$ users can decode $W_{K_s+1}, \ldots, W_{2K_s}$. If we consider $L = \lceil N/K_s \rceil$ such demand vectors and the corresponding transmissions, then all $N$ files can be decoded by the $K_s$ users. The decodability and security conditions are as follows:
$$H(W_{[N]} \mid X_{[L]}, \tilde{Z}_{[K_s]}) = 0, \tag{A12}$$
$$I(W_{[N]}; X_L) = 0. \tag{A13}$$
We have,
$$\begin{aligned}
N &= H(W_{[N]}) && \text{(A14a)}\\
&= I(W_{[N]}; X_{[L]}, \tilde{Z}_{[K_s]}) + H(W_{[N]} \mid X_{[L]}, \tilde{Z}_{[K_s]}) && \text{(A14b)}\\
&= I(W_{[N]}; X_{[L]}, \tilde{Z}_{[K_s]}) && \text{(A14c)}\\
&= I(W_{[N]}; X_L) + I(W_{[N]}; X_{[L-1]}, \tilde{Z}_{[K_s]} \mid X_L) && \text{(A14d)}\\
&= I(W_{[N]}; X_{[L-1]}, \tilde{Z}_{[K_s]} \mid X_L) && \text{(A14e)}\\
&\le \sum_{l=1}^{L-1} H(X_l) + H(\tilde{Z}_{[K_s]}) && \text{(A14f)}\\
&\le (L-1)R_G^* + sM_M + \sum_{r=0}^{s}\binom{s}{r}K_r M_P(r), && \text{(A14g)}
\end{aligned}$$
where (A14a) holds because the files are independent and uniformly distributed (with the file size normalized to one unit), and (A14b) follows from the definition of mutual information. Then, (A14c) and (A14e) follow from the conditions in (A12) and (A13), respectively. Since $\lceil N/K_s\rceil \le \frac{N}{K_s} + \frac{K_s - 1}{K_s}$,
$$N \le \left(\frac{N}{K_s} + \frac{K_s - 1}{K_s} - 1\right)R_G^* + sM_M + \sum_{r=0}^{s}\binom{s}{r}K_r M_P(r) = \frac{N - 1}{K_s}R_G^* + sM_M + \sum_{r=0}^{s}\binom{s}{r}K_r M_P(r).$$
By rearranging the terms, we get
$$R_G^* \ge \frac{K_s\left(N - \left(sM_M + \sum_{r=0}^{s}\binom{s}{r}K_r M_P(r)\right)\right)}{N - 1}.$$
By maximizing over all possible values of $s \in [1:C]$, we get
$$R_G^* \ge \max_{s \in [C]} \frac{K_s\left(N - \left(sM_M + \sum_{r=0}^{s}\binom{s}{r}K_r M_P(r)\right)\right)}{N - 1}.$$
This completes the proof.
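The bound of Theorem 5 can likewise be evaluated numerically. In the sketch below (names ours), `K_gct[i]` is the number of users attached to each $i$-subset of the $C$ multi-access caches, and `M_P[i]` is the private-cache size $M_P(i)$ of those users:

```python
from math import comb

def gct_lower_bound(C, K_gct, N, M_M, M_P):
    """Cut-set lower bound of Theorem 5 for GCT with private caches."""
    best = 0.0
    for s in range(1, C + 1):
        # K_s: users served entirely by the first s multi-access caches
        K_s = sum(comb(s, i) * K_gct[i] for i in range(s + 1))
        if K_s == 0:
            continue
        priv = sum(comb(s, i) * K_gct[i] * M_P[i] for i in range(s + 1))
        val = K_s * (N - (s * M_M + priv)) / (N - 1)
        best = max(best, val)
    return best

# Example: K_GCT = (0, 0, 0, 1, 0, 0), C = 5, N = 80, t = 0
# (multi-access caches empty, unit private caches).
bound = gct_lower_bound(5, [0, 0, 0, 1, 0, 0], 80, 0.0, [1.0] * 6)
```

Here the maximizing cut is $s = C = 5$, which covers all $K_5 = \binom{5}{3} = 10$ users.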

References

  1. Maddah-Ali, M.A.; Niesen, U. Fundamental limits of caching. IEEE Trans. Inf. Theory 2014, 60, 2856–2867. [Google Scholar] [CrossRef]
  2. Yan, Q.; Cheng, M.; Tang, X.; Chen, Q. On the Placement Delivery Array Design for Centralized Coded Caching Scheme. IEEE Trans. Inf. Theory 2017, 63, 5821–5833. [Google Scholar] [CrossRef]
  3. Muralidhar, P.N.; Katyal, D.; Rajan, B.S. Maddah-Ali-Niesen Scheme for Multi-access Coded Caching. In Proceedings of the IEEE Information Theory Workshop, (ITW2021), Kanazawa, Japan, 17–21 October 2021. [Google Scholar]
  4. Reddy, K.S.; Karamchandani, N. On the Exact Rate-Memory Trade-off for Multi-access Coded Caching with Uncoded Placement. In Proceedings of the 2018 International Conference on Signal Processing and Communications (SPCOM), Bangalore, India, 16–19 July 2018; pp. 1–5. [Google Scholar]
  5. Serbetci, B.; Parrinello, E.; Elia, P. Multi-access coded caching: Gains beyond cache-redundancy. In Proceedings of the IEEE Information Theory Workshop, Visby, Sweden, 25–28 August 2019. [Google Scholar]
  6. Hachem, J.; Karamchandani, N.; Diggavi, S.N. Coded caching for multi-level popularity and access. IEEE Trans. Inf. Theory 2017, 63, 3108–3141. [Google Scholar] [CrossRef]
  7. Katyal, D.; Muralidhar, P.N.; Rajan, B.S. Multi-Access Coded Caching Schemes From Cross Resolvable Designs. IEEE Trans. Commun. 2021, 69, 2997–3010. [Google Scholar] [CrossRef]
  8. Brunero, F.; Elia, P. Fundamental Limits of Combinatorial Multi-Access Caching. IEEE Trans. Inf. Theory 2023, 69, 1037–1056. [Google Scholar] [CrossRef]
  9. Wan, K.; Sun, H.; Ji, M.; Tuninetti, D.; Caire, G. On the Optimal Load-Memory Tradeoff of Cache-Aided Scalar Linear Function Retrieval. IEEE Trans. Inf. Theory 2021, 67, 4001–4018. [Google Scholar] [CrossRef]
  10. Wan, K.; Caire, G. On Coded Caching with Private Demands. IEEE Trans. Inf. Theory 2021, 67, 358–372. [Google Scholar] [CrossRef]
  11. Kamath, S. Demand private coded caching. arXiv 2019, arXiv:1909.03324. [Google Scholar] [CrossRef]
  12. Aravind, V.R.; Sarvepalli, P.K.; Thangaraj, A. Subpacketization in Coded Caching with Demand Privacy. In Proceedings of the 2020 National Conference on Communications (NCC), Kharagpur, India, 21–23 February 2020; pp. 1–6. [Google Scholar]
  13. Yan, Q.; Tuninetti, D. Fundamental Limits of Caching for Demand Privacy Against Colluding Users. IEEE J. Sel. Areas Inf. Theory 2021, 2, 192–207. [Google Scholar] [CrossRef]
  14. Sengupta, A.; Tandon, R.; Clancy, T.C. Fundamental limits of caching with secure delivery. IEEE Trans. Inf. Forensics Secur. 2015, 10, 355–370. [Google Scholar] [CrossRef]
  15. Yan, Q.; Tuninetti, D. Key Superposition Simultaneously Achieves Security and Privacy in Cache-Aided Linear Function Retrieval. IEEE Trans. Inf. Forensics Secur. 2021, 16, 5250–5263. [Google Scholar] [CrossRef]
  16. Namboodiri, K.K.K.; Rajan, B.S. Multi-Access Coded Caching with Demand Privacy. In Proceedings of the IEEE Wireless Communications and Networking Conference, Austin, TX, USA, 10–13 April 2022; pp. 2280–2285. [Google Scholar]
  17. Chinnapadamala, M.; Rajan, B.S. Security and Privacy in Cache-Aided Linear Function Retrieval for Multi-Access Coded Caching. In Proceedings of the 2022 IEEE Information Theory Workshop (ITW), Mumbai, India, 1–9 November 2022; pp. 690–695. [Google Scholar]
  18. Shamir, A. How to share a secret. Commun. ACM 1979, 22, 612–613. [Google Scholar] [CrossRef]
  19. Chinnapadamala, M.; Rajan, B.S. Security, Privacy and Linear Function Retrieval in Multi-Access Combinatorial Topology with Private Cache. In Proceedings of the 2024 IEEE Information Theory Workshop (ITW), Shenzhen, China, 24–28 November 2024. [Google Scholar]
  20. Yao, J.; Han, T.; Ansari, N. On Mobile Edge Caching. IEEE Commun. Surv. Tutor. 2019, 21, 2525–2553. [Google Scholar] [CrossRef]
  21. Zahed, M.I.A.; Ahmad, I.; Habibi, D.; Phung, Q.V. Green and Secure Computation Offloading for Cache-Enabled IoT Networks. IEEE Access 2020, 8, 63840–63855. [Google Scholar] [CrossRef]
  22. Ngo, Q.T.; Phan, K.T.; Xiang, W.; Mahmood, A.; Slay, J. On Edge Caching in Satellite—IoT Networks. IEEE Internet Things Mag. 2021, 4, 107–112. [Google Scholar] [CrossRef]
  23. Gu, S.; Tan, Y.; Zhang, N.; Zhang, Q. Energy-Efficient Content Placement with Coded Transmission in Cache-Enabled Hierarchical Industrial Internet of Things Networks. IEEE Trans. Ind. Inform. 2021, 17, 5699–5708. [Google Scholar] [CrossRef]
  24. Zhou, H.; Jiang, K.; He, S.; Min, G.; Wu, J. Distributed Deep Multi-Agent Reinforcement Learning for Cooperative Edge Caching in Internet-of-Vehicles. IEEE Trans. Wirel. Commun. 2023, 22, 9595–9609. [Google Scholar] [CrossRef]
  25. Coutinho, R.W.L.; Boukerche, A. Modeling and Analysis of a Shared Edge Caching System for Connected Cars and Industrial IoT-Based Applications. IEEE Trans. Ind. Inform. 2020, 16, 2003–2012. [Google Scholar] [CrossRef]
  26. Aruba Networks. IoT Heading for Mass Adoption by 2019 Driven by Better-Than-Expected Business Results. arubanetworks.com. 2017. Available online: https://news.arubanetworks.com/press-release/arubanetworks/iot-heading-mass-adoption-2019-driven-better-expected-business-results (accessed on 12 August 2025).
  27. Karunarathne, S.M.; Saxena, N.; Khan, M.K. Security and Privacy in IoT Smart Healthcare. IEEE Internet Comput. 2021, 25, 37–48. [Google Scholar] [CrossRef]
  28. Wazid, M.; Das, A.K.; Kumar, N.; Vasilakos, A.V. Design of secure key management and user authentication scheme for fog computing services. Futur. Gener. Comput. Syst. 2019, 91, 475–492. [Google Scholar] [CrossRef]
  29. Wazid, M.; Das, A.K.; Kumar, N.; Conti, M.; Vasilakos, A.V. A Novel Authentication and Key Agreement Scheme for Implantable Medical Devices Deployment. IEEE J. Biomed. Health Inform. 2018, 22, 1299–1300. [Google Scholar] [CrossRef]
  30. Kruglik, S.; Frolov, A. An Information-Theoretic Approach for Reliable Distributed Storage Systems. J. Commun. Technol. Electron. 2020, 65, 1505–1516. [Google Scholar] [CrossRef]
  31. Chor, B.; Kushilevitz, E.; Goldreich, O.; Sudan, M. Private information retrieval. J. ACM 1998, 45, 965–981. [Google Scholar] [CrossRef]
  32. Sun, H.; Jafar, S.A. The capacity of private information retrieval. IEEE Trans. Inf. Theory 2017, 63, 4075–4088. [Google Scholar] [CrossRef]
  33. Wang, Q.; Skoglund, M. Secure private information retrieval from colluding databases with eavesdroppers. In Proceedings of the 2018 IEEE International Symposium on Information Theory (ISIT), Vail, CO, USA, 17–22 June 2018; pp. 2456–2460. [Google Scholar]
  34. Ozarow, L.H.; Wyner, A.D. Wiretap Channel II. Bell Syst. Tech. J. 1984, 63, 2135–2157. [Google Scholar]
Figure 1. Combinatorial topology with private caches.
Figure 2. Performance comparison of our proposed scheme with the SP-LFR scheme in [17] when C = 7 , r = 2 , N = 100 .
Figure 3. Performance comparison of our proposed scheme with the SP-LFR scheme in [17] when C = 7 , r = 3 , N = 100 .
Figure 4. Performance comparison of our proposed scheme with the SP-LFR scheme in [17] when C = 7 , r = 3 , N = 60 .
Figure 5. Performance comparison of our proposed scheme with the SP-LFR scheme in [17] considering the global cache when C = 7 , r = 3 , N = 100 .
Figure 6. Performance comparison of our proposed scheme with the SP-LFR scheme in [17] considering the global cache when C = 7 , r = 3 , N = 60 .
Figure 7. Performance comparison of our proposed scheme with [15] for different values of r considering the total memory accessed per user when C = 7 , N = 30 .
Figure 8. Performance comparison of our proposed scheme with [15] for different values of r considering the total memory accessed per user when C = 7 , N = 50 .
Figure 9. Performance comparison of our proposed scheme with [15] for different values of r considering the global cache when C = 7 , N = 30 .
Figure 10. Performance comparison of our proposed scheme with [15] for different values of r considering the global cache when C = 7 , N = 50 .
Figure 11. Performance comparison of our proposed scheme with [15] considering the same number of users when C = 7 , r = 2 , N = 50 .
Figure 12. Performance comparison of our proposed scheme with [15] for different values of r considering the same number of users when C = 7 , N = 50 . The dotted lines represent the dedicated cache, and the solid lines represent the combinatorial topology with private caches.
Figure 13. Performance comparison of our scheme for GCT with private caches with [15,17] considering different connectivities when C = 5 , N = 80 .
Figure 14. Performance comparison of our scheme for GCT with private cache with [15,17] considering different connectivities when C = 5 , N = 50 .
Figure 15. Plots showing the achievable scheme in Theorem 4 and the lower bound in Theorem 5 for multiple K G C T values.
Table 1. Illustrative comparison for C = 7 , t = 1 , N = 100 .
| $r$ | $K = \binom{C}{r}$ | $M_M$ | $M_P$ | $R$ | Gap $R/R^*$ |
|---|---|---|---|---|---|
| 1 | 7 | 100/7 ≈ 14.2857 | 6/7 ≈ 0.8571 | 3.0 | 1.79 |
| 2 | 21 | 100/7 ≈ 14.2857 | 5/7 ≈ 0.7143 | 5.0 | 4.03 |
| 3 | 35 | 100/7 ≈ 14.2857 | 4/7 ≈ 0.5714 | 5.0 | 5.87 |
| 4 | 35 | 100/7 ≈ 14.2857 | 3/7 ≈ 0.4286 | 3.0 | 5.30 |
| 5 | 21 | 100/7 ≈ 14.2857 | 2/7 ≈ 0.2857 | 1.0 | 3.50 |
| 6 | 7 | 100/7 ≈ 14.2857 | 1/7 ≈ 0.1429 | 1/7 ≈ 0.1429 | 1.00 |
The Gap column shows the ratio $R/R^*$. At $r = C - 1 = 6$, the scheme is optimal.
Table 2. Rate per user for K GCT = ( 0 , 0 , 2 , 2 , 0 , 0 ) , C = 5 , N = 80 .
| $t$ | Achievable | Lower Bound | Gap ($R/R^*$) | Observation |
|---|---|---|---|---|
| 1 | 0.15 | 0.024 | 6.25 | Large gap |
| 2 | 0.03 | 0.005 | 6.0 | Gap shrinking |
| 3 | 0.005 | 0.001 | 5.0 | Gap continues to shrink |