Cryptography in Hierarchical Coded Caching: System Model and Cost Analysis

The idea behind network caching is to reduce network traffic during peak hours by transmitting frequently-requested content items to end users during off-peak hours. However, due to limited cache sizes and unpredictable access patterns, caching alone cannot entirely eliminate the need for data transmission during peak hours. Coded caching was introduced to further reduce peak-hour traffic. It is based on sending coded content that can be decoded in different ways by different users, which allows the server to service multiple requests by transmitting a single content item. Research on coded caching has traditionally adopted a simple network topology consisting of a single server, a single hub, a shared link connecting the server to the hub, and private links connecting the users to the hub. Building on the results of Sengupta et al. (IEEE Trans. Inf. Forensics Secur., 2015), we propose and evaluate a more complex system model that takes both throughput and security into consideration by combining the ideas above. We demonstrate that the achievable rates in the proposed model are within a constant multiplicative and additive gap of the minimum secure rates.


Introduction
Coded caching, proposed by Maddah-Ali and Niesen [1], is an augmented variant of caching. It follows two strategies during two transmission phases in order to avoid a traffic bottleneck in the network. The first transmission phase, referred to as the placement phase, takes place during off-peak hours. During this phase the system attempts to place frequently-demanded content items in the local memories of the corresponding interested users in order to avoid unnecessary transmission during peak time. This helps mitigate both the over-utilization of network bandwidth during peak intervals and its under-utilization during off-peak intervals. An effective placement strategy should account for the statistical and probabilistic nature of the users' access patterns. The second phase, i.e., the delivery phase, manages the transmission during peak hours. The ideal goal in this phase is to send only a single coded content item which is a function of the originally-requested content items. Each user, in the ideal case, should be able to recover its own demanded item from the transmitted item. The closer the system comes to this goal, the smaller the amount of transmission required during the delivery phase (the rate).
The authors in [1] made several simplifying assumptions when establishing the first system model for a coded caching scheme. They assumed a simple network based on a star topology which provides one-way content transmission from a single server storing N files, each of size F bits, to K users, each having a cache of size MF bits. Each user requests a single file during the delivery phase. Every file sent by the server passes through a single shared link and arrives at the hub, where it is duplicated and transmitted to each user through a private link.
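As a concrete illustration of this model, consider the smallest interesting instance: N = 2 files and K = 2 users with M = 1. The sketch below (file names and the half-file split are illustrative) shows how a single coded broadcast serves both users at once:

```python
# Illustrative two-user example: N = 2 files (A, B), K = 2 users, M = 1
# (each cache holds half of each file, i.e. MF = F bits in total).
A = (b"A1", b"A2")   # file A split into halves A1, A2
B = (b"B1", b"B2")   # file B split into halves B1, B2

# Placement (off-peak): user 1 caches (A1, B1), user 2 caches (A2, B2).
cache1 = {"A": A[0], "B": B[0]}
cache2 = {"A": A[1], "B": B[1]}

def xor(x: bytes, y: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(x, y))

# Delivery (peak): user 1 wants file A, user 2 wants file B.
# The server broadcasts a single coded message: A2 XOR B1.
coded = xor(A[1], B[0])

# Each user XORs out the half it already caches to recover its missing half.
A2_decoded = xor(coded, cache1["B"])  # user 1 recovers A2
B1_decoded = xor(coded, cache2["A"])  # user 2 recovers B1
```

One transmission of half a file thus serves both demands, whereas uncoded delivery would need two half-file transmissions.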
This system model is clearly not realistic, as it ignores many considerations of a real-world network, among which we focus on scalability and security in this paper. Various security-related issues, such as confidentiality, privacy, and distributed denial-of-service (DDoS) attack protection, need to be addressed in coded caching. Among these, confidentiality has received the most attention in recent years [2,3]. Previous works in this area have augmented the coded caching system model by adding an adversary with access to the shared link only during the delivery phase. The space required to store the cryptographic keys in the server memory and user caches, as well as the extra traffic caused by key exchange mechanisms, are obvious costs of this variant of coded caching.
To address the scalability issue, some researchers have augmented the coded caching system model in another way by proposing a hierarchical network topology [4]. In this topology, the main server is mirrored in each cluster of users. This allows part of the traffic to be handled locally in user clusters, which improves scalability. The improvement is achieved at the cost of redundant servers and links.
Although scalability and security have been examined separately in previous research, the literature lacks a study on the possibility, and the costs, of addressing both issues at the same time. This paper addresses both issues by considering confidential content transmission over a hierarchical network. This goal is achieved by further augmenting the coded caching system model and analyzing the related costs. In our proposed system model, the adversary can eavesdrop on the shared links at each hierarchy level during the peak interval.
The costs of scalability have already been analyzed in previous research [4]. We compare the results of our mathematical cost evaluations with those obtained in [4] to analyze the extra cost imposed by confidentiality considerations. The key contribution of this paper is the result that although the achievable rates are within a constant multiplicative and additive gap of the corresponding lower bounds in both schemes, confidentiality causes the constants to grow larger.
The rest of this paper is organized as follows. Section 2 defines the problem we tackle: it first reviews relevant works and presents some preliminaries, and then discusses the shortcomings of previous works which motivate this paper. Section 3 explains the secure hierarchical coded caching scheme and describes the system model and configuration. The fundamental limits and costs are analyzed in Section 4, where the secure achievable rates, memory requirements, and lower bounds on the rates are calculated. A gap analysis between the secure achievable rates and the corresponding lower bounds is presented in Section 5. Section 6 concludes the paper and suggests further research topics.

Problem Statement
In this section, we first present some preliminaries regarding coded caching, review the related literature, and then highlight the shortcomings in related works which motivate us to propose the secure hierarchical coded caching scheme.

Related Works
Caching is a solution to the problem of temporally non-uniform access to content stored on servers, which may cause the network bandwidth to be underutilized during the off-peak interval while creating a bottleneck during the peak interval. This technique helps achieve more uniform network traffic and alleviate the bottleneck problem by allowing the system to store frequently-accessed content items in local caches during off-peak time.
Examining the above problems has led to different variants of caching schemes. In this paper, we focus on coded caching schemes, which try to service as many user requests as possible by transmitting a single coded data item during peak time. Coded caching schemes can be classified into the following categories with respect to their behavior in the placement phase.

Centralized Schemes
In these schemes, the server decides which data items are to be stored in user caches during the placement phase [1,39-43]. It has been shown that a multiplicative factor of 1/(1 + KM/N) reduces the rate in centralized coded caching. This factor is referred to as the global caching gain. As shown in [1], the centralized coded caching rate R_C(M) is given by (1),

R_C(M) = K (1 − M/N) · 1/(1 + KM/N).    (1)
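Equation (1) can be evaluated numerically; the following sketch (parameter values are illustrative) shows the global caching gain at work:

```python
def centralized_rate(N, K, M):
    """Centralized coded caching rate, Eq. (1):
    R_C(M) = K * (1 - M/N) * 1/(1 + K*M/N)."""
    local_gain = 1 - M / N             # fraction of each file NOT cached locally
    global_gain = 1 / (1 + K * M / N)  # rate reduction from coded multicasting
    return K * local_gain * global_gain

# With N = 4 files, K = 4 users, and M = 1, uncoded delivery would need
# rate K*(1 - M/N) = 3; coding cuts it by the factor 1/(1 + K*M/N) = 1/2.
print(centralized_rate(N=4, K=4, M=1))  # -> 1.5
```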

Decentralized Schemes
In these schemes, users are allowed to store random data in their caches. It was shown in [44] that the rate in decentralized coded caching can be obtained from (2),

R_D(M) = K (1 − M/N) · (N/(KM)) · (1 − (1 − M/N)^K).    (2)

An important point to note here is that the term "decentralized" does not refer to the underlying network; the network topology adopted in [44,45] is the same as the one considered in [1].
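Equation (2) can be evaluated the same way; as expected, the decentralized (random-placement) rate is never below the centralized one at the sample points below (the helper functions and parameters are illustrative):

```python
def decentralized_rate(N, K, M):
    """Decentralized coded caching rate, Eq. (2), for 0 < M <= N:
    R_D(M) = K*(1 - M/N) * (N/(K*M)) * (1 - (1 - M/N)**K)."""
    p = M / N  # probability that a given bit is cached by a given user
    return K * (1 - p) * (N / (K * M)) * (1 - (1 - p) ** K)

def centralized_rate(N, K, M):
    return K * (1 - M / N) / (1 + K * M / N)   # Eq. (1)

# Random placement pays a modest penalty relative to coordinated placement.
N, K = 4, 4
for M in (1, 2, 3):
    assert decentralized_rate(N, K, M) >= centralized_rate(N, K, M)
```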

Hierarchical Coded Caching Scheme
The scheme introduced in [4] is a hierarchical coded caching scheme in which the content stored in the main server can be mirrored by intermediate servers at different levels of the hierarchy before being placed in end-user caches. In this scheme, the requests issued by each end user are first forwarded to the closest intermediate server. If not serviced, the request is then forwarded to the next-higher hierarchy level. This implies the existence of different peak and off-peak intervals at different hierarchy levels.
Two different caching schemes are proposed in [4]. The first, referred to as Scheme A, allows simultaneous coded multicasting at both hierarchy levels: each mirror first downloads the content items requested by its corresponding users from the main server, and the items are then coded and forwarded to the users. In Scheme B, mirrors act as memoryless routers: they receive the items from the main server and forward them without storing or coding them. It has been demonstrated that each scheme on its own performs sub-optimally [4]. The authors argued that, because Schemes A and B operate on disjoint portions of the content, the rate of each link is the sum of the individual rates induced by the two schemes. They proposed a hybrid scheme, named the generalized coded caching scheme, which incorporates a proper combination of Scheme A and Scheme B in order to approximately minimize the overall rate.

Secure Coded Caching Scheme
The scheme presented in [3] argues that the shared link may be eavesdropped by an adversary since it is publicly accessible as a broadcast medium. The authors therefore propose a one-time pad (OTP) cryptosystem [46] to preserve the confidentiality of the data items exchanged over this link. In their scheme, the keys are placed in the user caches along with the data during the placement phase. These confidentiality measures can be applied in both centralized and decentralized coded caching systems.
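A minimal sketch of the OTP primitive this scheme relies on (the message content is illustrative; in the actual scheme the keys are distributed to the cache key regions during the placement phase):

```python
import secrets

def otp(data: bytes, key: bytes) -> bytes:
    """One-time pad: XOR with an equal-length, uniformly random key.
    XOR is an involution, so the same function encrypts and decrypts."""
    assert len(key) == len(data), "OTP key must match the message length"
    return bytes(d ^ k for d, k in zip(data, key))

subfile = b"coded subfile bits"
key = secrets.token_bytes(len(subfile))  # a fresh key for every transmission

ciphertext = otp(subfile, key)    # what the adversary sees on the shared link
recovered = otp(ciphertext, key)  # what a user holding the key recovers
```

Because the key is uniform and used only once, the ciphertext is statistically independent of the plaintext, which is exactly the confidentiality guarantee the scheme needs on the shared link.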
It is demonstrated in [3] that the secure rates for the centralized and decentralized schemes can be obtained by replacing M/N with (M − 1)/(N − 1) in Equations (1) and (2), respectively. The authors of [4] argued that the overall rate of the hierarchical network is the sum of the individual rates at the different levels of the hierarchy. Thus, if the overall rate is to be minimized, both levels should operate at their minimum rates.
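The substitution M/N → (M − 1)/(N − 1) can be checked numerically; the helper below (with illustrative parameters) shows the rate penalty paid for confidentiality in the centralized case:

```python
def rate(m, K):
    # Single-level coded caching rate K(1 - m)/(1 + K*m) as a function of
    # the normalized cache size m, cf. Eq. (1).
    return K * (1 - m) / (1 + K * m)

def secure_centralized_rate(N, K, M):
    # Secure rate of [3]: one file-size unit of each cache holds keys,
    # so the effective cache ratio shrinks from M/N to (M - 1)/(N - 1).
    return rate((M - 1) / (N - 1), K)

N, K, M = 4, 4, 2
insecure = rate(M / N, K)                  # non-secure rate, about 0.667
secure = secure_centralized_rate(N, K, M)  # secure rate, about 1.143
assert secure > insecure                   # confidentiality costs rate
```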
The relationship between the goals pursued by the mentioned schemes motivates our work in this paper. Moreover, we compare our results with those obtained in [4] as a reference.

Motivations
The researchers who proposed the idea of coded caching made several simplifying assumptions regarding the system model [1]. These assumptions made the core idea more manageable in its early days. However, several aspects of the original system model need to be revisited in order for the scheme to be applicable to real-world networks. Scalability and security are two such aspects considered by other researchers [3,4]. Still, several related issues can motivate further research: for example, confidentiality (addressed in [3]) is not the only aspect of security, and the network topology (studied in [4]) is not the only factor affecting scalability. What motivates the work in this paper, however, is the lack of research on a system model which is both scalable and secure.
Achieving the confidentiality promised in [3] together with the scalability of the network studied in [4] by combining the two ideas looks like a natural and enticing goal. However, combining these ideas can introduce new problems. In fact, the traffic and memory-space overhead caused by secure coded caching works against the scalability aimed at by the hierarchical network. Key transmission occupies network bandwidth, which adversely affects scalability. This problem becomes more prominent when we consider that OTP requires a fresh key for every single transmission. On the other hand, storing the keys in user caches prevents some frequently-requested data items from being stored during the placement phase because of the limited cache sizes. This affects the peak-time rate and may, consequently, overshadow the scalability of the underlying hierarchical network. Thus, any research addressing confidentiality and scalability simultaneously should consider the trade-off between the two. This trade-off appears as an extra cost, induced by the security-related constraints, which must be tolerated by the hierarchical coded caching scheme.
In this paper, we first present an extended coded caching scheme which incorporates OTP confidentiality provisions and a hierarchical network topology in the system model. We then analyze the extra cost induced by confidentiality by comparing the rate bounds to the case of non-secure hierarchical coded caching.

Secure Hierarchical Coded Caching
In this section, we present our secure hierarchical coded caching scheme and the related system model. The system model needs to be defined in two respects: we first introduce the topology and resources of the underlying network, and then discuss the caching scheme, which describes the transmissions in the placement and delivery phases. Finally, we discuss the security-related considerations.
In terms of topology, we adopt the two-level hierarchical topology described in [4]. At the top level of the hierarchy, the main server is connected to a hub via a shared link, and then to mirror servers via separate links. In each cluster at the next hierarchy level, a shared link connects the mirror server to a hub, while the end users are connected to the same hub via separate links. We assume there are K_1 clusters, each of which connects K_2 end users.
As for the resources, the main server is assumed to store N files, denoted W_1 through W_N, each of size F bits. We assume that the bits in a file are independent and uniformly distributed. The cache sizes at the mirrors and the end users are assumed to be M_1 F and M_2 F bits, respectively. The main and mirror servers are assumed to have unlimited processing power.
With respect to the caching scheme, we assume the generalized caching scheme presented in [4], which is a combination of Scheme A and Scheme B. We follow the same procedure to find the most efficient combination of the two schemes.
During the delivery phase, each user makes exactly one demand. The local demands in each cluster are collected by the corresponding mirror server and then forwarded to the main server. The demand issued by U_(i,j) is represented by the element d_(i,j) in the demand matrix D. According to the demands, the main server encodes the proper content along with the orthogonal keys and transmits it within a file X_D of size R_S1 F bits to all mirrors. Then, each mirror re-encodes (Scheme A) or forwards (Scheme B) the data requested by its corresponding users along with the related keys and transmits it within a file Y_D of size R_S2 F bits. Security-related constraints are imposed in order to keep the transferred content confidential from an external adversary assumed to have access to every shared link. In order to add confidentiality to our caching scheme, we adopt the security constraints proposed in [2,3]. Adopting the orthogonal key scheme proposed in [3], the user caches, as well as the mirror server memories, are partitioned into Data and Key regions in order to reserve space for storing the keys in the placement phase. Figure 1 shows the access model of the adversary as well as the security-related configuration. The mentioned security constraints guarantee that I(X_D; W_1, W_2, . . ., W_N) = ε_1 and I(Y_D; W_1, W_2, . . ., W_N) = ε_2, where ε_1 → 0 and ε_2 → 0, which states that the external adversary cannot reveal any information regarding the files W_1, W_2, . . ., W_N by eavesdropping on the shared links without access to the users' and mirrors' caches. Note that ε_1 → 0 and ε_2 → 0 hold when the file size is sufficiently large, i.e., as F → ∞. The minimum number of mirrors or users that need to be compromised in order to break the security was discussed in [3].
Adopting the security constraints from [3] requires an extra assumption regarding the placement phase in Scheme A. Since the users cannot establish direct communication with the main server, we assume a delegation between the main server and the mirror servers in the placement phase of Scheme A. That is, the mirror servers are trusted and granted access to the keys, because they need to decrypt and re-encrypt the contents before and after re-encoding them.
Another assumption adopted from [3] in our system model is that every user is interested in no more than one file in the delivery phase and that the demanded files are mutually distinct. The system cannot allocate resources, such as private links, network bandwidth, and cache space, to a user with no demands in the delivery phase; we therefore suppose every user makes exactly one request in this phase. These assumptions result in N ≥ K_1 K_2 as a criterion for the server to be able to answer all user requests. Moreover, since it is not reasonable to store files which will never be demanded, we assume that N = K_1 K_2. Throughout, we assume that the placement phase is secure and that the links are error-free.
Let R_S1 denote the secure rate at the top hierarchy level and R_S2 the second-level secure rate. For a demand matrix D and a large-enough file size F, a tuple (M_1, M_2, R_S1, R_S2) is said to be feasible for D if each user U_(i,j) is able to recover its requested file d_(i,j) securely with probability arbitrarily close to one. Moreover, (M_1, M_2, R_S1, R_S2) is feasible if it is feasible for all possible demand matrices D. Throughout, we restrict our analysis to the feasible rate region.

Fundamental Limits and Cost Analysis
The procedure we follow in this section can be described as follows. In order to minimize the secure achievable rate in the generalized scheme, we try to find the most effective combination of Schemes A and B. To do this, we first parameterize the combination: we assume that a fraction α of each file residing in the server (as well as of the transmissions at the top hierarchy level) is governed by Scheme A, and the remaining fraction (1 − α) is transmitted on the basis of Scheme B. The corresponding fractions in the user caches (as well as in the transmissions at the second hierarchy level) are denoted β and 1 − β, respectively. We then find the values of α and β that result in the most effective combination, denoted α* and β*. In the next step, we calculate the secure achievable rate for the generalized scheme by calculating the rates for both Schemes A and B and combining the results with α = α* and β = β*. We calculate the lower bounds on the rates through a similar procedure and then analyze the gap between the achievable rates and the lower bounds. A comparison between our results and those obtained in [4] highlights the cost of security in hierarchical network caching.
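The α/β optimization described above can be sketched as a simple grid search. In the sketch below, the two rate callables are placeholder stand-ins (assumptions for illustration only), to be replaced by the scheme's actual normalized rate expressions:

```python
def optimize_split(rate1, rate2, weight=1.0, steps=200):
    """Grid-search alpha, beta in (0, 1] minimizing rate1 + weight*rate2,
    where rate1/rate2 map (alpha, beta) to the normalized link rates."""
    grid = [i / steps for i in range(1, steps + 1)]  # points in (0, 1]
    best = (grid[0], grid[0], float("inf"))
    for a in grid:
        for b in grid:
            total = rate1(a, b) + weight * rate2(a, b)
            if total < best[2]:
                best = (a, b, total)
    return best

# Toy placeholder rates (NOT the paper's expressions): the level-1 rate
# falls as the Scheme-A share alpha grows; the level-2 rate falls with beta.
r1 = lambda a, b: a * 1.0 + (1 - a) * 2.0
r2 = lambda a, b: b * 0.5 + (1 - b) * 1.0
alpha_star, beta_star, r_total = optimize_split(r1, r2)
```

In the paper the minimization is done analytically per regime; the grid search merely makes the role of α* and β* concrete.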

Preliminary Discussions
While analyzing the rates in each scheme, we separately consider each of the three regimes proposed in [4]; this makes it possible to compare our results to those obtained in [4]. The regimes are characterized, in terms of M_1 and M_2, as shown in Equation (3). The results in [4] to which we compare our own are as follows. The optimum values of α and β for the mentioned regimes in the non-secure hierarchical coded caching scheme are given in [4], and the corresponding non-secure achievable rates for Scheme A and Scheme B have been calculated as functions of α* and β* in [4]. See Figure 2 for the different regimes of M_1, M_2 for α* and β*. In (4) and (5), the approximation is within a constant additive and multiplicative gap as given by (7) and (8).

Secure Achievable Rates
Before developing our mathematical models, let us introduce some notation. We refer to the jth user (j ∈ {1, 2, . . ., K_2}) in the ith cluster (i ∈ {1, 2, . . ., K_1}) as U_(i,j), and to the corresponding cache as C_(i,j). We denote the coded content items transmitted at the first and second levels of the hierarchy by X_D and Y_D, respectively, where D is the demand matrix. Furthermore, let R_S1 denote the secure rate at the top hierarchy level and R_S2 the second-level secure rate. We begin the derivation by calculating R_S1 and R_S2 for Scheme A, for N files and K_1 mirrors each with a cache of size M_1 F bits, where r(·, ·) denotes the single-level coded caching rate, cf. Equation (1). Similarly, R_S1 and R_S2 for Scheme B can be calculated. Let us normalize the total file size, mirror memory size, and user cache size involved in Scheme A as shown in (14) and (15), respectively, and normalize the user cache size involved in Scheme B as shown in (16). Thus, the secure rates induced by Scheme A and Scheme B can be normalized with respect to the file size as given by (17) and (18). In the next step, we calculate α* and β* for each of the regimes such that both R_S1(α, β) and R_S2(α, β) are minimized. Let us begin with regime 1. According to (17b) and (18b), for α = β it holds that R_S2(α, α) = r((M_2 − 1)/(N − 1), K_2). It can be verified that α = M_1/N results in a near-optimal value for R_S1(α, α) in regime 1. Thus, we choose α* = M_1/N and β* = M_1/N in this regime. Choosing α* = M_1/N allows each mirror to store the first part of each of the N files in the first transmission. Thus, there is no need for further transmission between the server and the mirrors in the placement phase, or for key memory space at the mirrors. Now let us proceed with regime 2.
In this regime, it can be verified from Equation (20b) that M_2 < N/K_2, which means that the cache size M_2 is very small. Thus, R_S2(α, β) is of order K_2 for any choice of α and β. Therefore, we only need to choose α and β such that R_S1(α, β) is minimized; the optimized values of α and β for this regime can then be obtained. In regime 3 (as in regime 1), the choice α = β = M_1/N is preferable. However, a large value of β leads to an unacceptably large value of R_S1(α, β); thus, a minimum threshold of β* = M_1/M_2 N should be imposed. As in regime 1, no extra transmission between the server and the mirrors in the placement phase, and no key area in the cache, is required in this regime.
After determining the proper choices of α* and β*, let us calculate R_S1(α*, β*) and R_S2(α*, β*) for the generalized scheme as a combination of the secure rates of Schemes A and B.
Theorem 1. We have the following conditions on R_S1(α*, β*) and R_S2(α*, β*).

Proof. The normalized achievable secure rates for the generalized scheme can be expressed as functions of α and β, as in (20) and (21). With the proper choice of α*, β*, we proceed to calculate the secure achievable rates R_S1(α*, β*) and R_S2(α*, β*). As we observe, the secure achievable rates for the generalized caching scheme are functions of r(·, ·), as introduced in Equation (1). Now let us calculate the secure achievable rates for each of the regimes of M_1 and M_2, beginning with regime 1. According to (5), (20), and (21), the secure achievable rates in this regime can be upper bounded as shown in inequalities (22a) and (22b). Through similar reasoning, the upper bound on the secure achievable rate R_S1(α*, β*) in regime 2 can be obtained from inequality (23a) (the form of the equations differs from regime 1).
Furthermore, according to (20) and (21), the reader can verify that R_S2(α*, β*) in regime 2 is upper bounded accordingly. Additionally, analogous bounds hold for regime 3. Summarizing the above results for the generalized scheme yields the bounds of the theorem. The proof is now complete.

Memory Requirements
An information-theoretic analysis reveals some minimum requirements on the cache sizes at the mirrors and the users. Consider a caching system in which two files, denoted A and B, reside in the main server (N = 2). Consider K_1 = 2 mirrors, denoted m_1 and m_2, each with cache size M_1, and let Z_m1, Z_m2 denote the contents cached in the placement phase. Assume that mirror m_1 demands a content item A_α in the delivery phase, which is part of file A, and that mirror m_2 demands part of file B, denoted B_α. Both demanded items are assumed to be of size αF, i.e., a fraction α of the size of file A or B. These demands can be represented by a demand vector (d_1, d_2) = (A_α, B_α).
In this setup, the main server transmits X_(A_α,B_α) to the mirrors, which should be capable of regenerating the items A_α and B_α when combining it with Z_m1 and Z_m2. From an information-theoretic point of view, the criterion stated by inequality (26) should hold for this goal to be achievable. For the security constraint between the server and the mirrors, inequality (27) should hold in order to keep the delivery-phase transmissions confidential. Using (26) and (27) we obtain (28), from which (29) follows immediately. When δ and ε approach zero, inequality (29) reduces to M_1 ≥ α. Next, note that the mirrors should be able to recover both A_α and B_α from a single cached item Z_m1 together with the two transmissions X_(A_α,B_α) and X_(B_α,A_α) sent in response to the demand vectors (d_1, d_2) = (A_α, B_α) and (d_1, d_2) = (B_α, A_α), respectively. This requirement leads to further inequalities which, through similar reasoning, yield R*_s + M_1 ≥ 2α, where R*_s is the minimum rate between the server and the mirrors.
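A hedged reconstruction of the chain of inequalities (26)-(29) (the slack terms δ, from Fano's inequality, and ε, from the security constraint, vanish as F → ∞):

```latex
\begin{align*}
2\alpha F &= H(A_\alpha, B_\alpha)\\
 &\le I\bigl(A_\alpha, B_\alpha;\, X_{(A_\alpha,B_\alpha)}, Z_{m_1}, Z_{m_2}\bigr) + \delta F
   && \text{(decodability + Fano)}\\
 &\le I\bigl(A_\alpha, B_\alpha;\, X_{(A_\alpha,B_\alpha)}\bigr)
   + H(Z_{m_1}) + H(Z_{m_2}) + \delta F\\
 &\le \epsilon F + 2 M_1 F + \delta F
   && \text{(security constraint)} ,
\end{align*}
```

which gives M_1 ≥ α − (ε + δ)/2, and hence M_1 ≥ α as ε, δ → 0.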

Lower Bounds
Now let us derive the lower bounds on the secure rates R_S1 and R_S2 for different values of M_1, M_2, given the feasibility of (M_1, M_2, R_S1, R_S2). To do this, we follow the approach taken in [3] for the secure non-hierarchical scheme and extend its arguments and results to the secure hierarchical network.
Theorem 2. We have the lower bounds on R_S1 and R_S2 stated below.

Proof. Let us begin with the lower bound on R_S1. For s_1 ∈ {1, 2, . . ., K_1} and s_2 ∈ {1, 2, . . ., K_2}, suppose the first s_1 mirrors store Z_m1, Z_m2, . . ., Z_ms_1. Furthermore, assume that for i ∈ {1, 2, . . ., s_1} and j ∈ {1, 2, . . ., s_2}, the user caches are Z_u(i,1), Z_u(i,2), . . ., Z_u(i,s_2). Suppose the mentioned users issue the demand matrix D_1 defined by d^1_(i,j) = (i − 1)s_2 + j, which requests the first s_1 s_2 files residing in the main server; let X_1 denote the corresponding transmission by the main server. Similarly, consider a different demand matrix D in which user U_(i,j) demands d_(i,j) = s_1 s_2 + (i − 1)s_2 + j, i.e., the next s_1 s_2 files. The transmission X_2, along with the mirror caches Z_m1, . . ., Z_ms_1 and the user caches Z_u(i,1), . . ., Z_u(i,s_2), must allow decoding of the files W_(s_1 s_2 + 1), W_(s_1 s_2 + 2), . . ., W_(2 s_1 s_2). Likewise, considering all ⌊N/(s_1 s_2)⌋ demand matrices, the multicast transmissions X_1, . . ., X_⌊N/(s_1 s_2)⌋, along with the mirror caches Z_m1, Z_m2, . . ., Z_ms_1 and the user caches Z_u(i,1), Z_u(i,2), . . ., Z_u(i,s_2), must allow recovery of the files W_1, . . ., W_(s_1 s_2 ⌊N/(s_1 s_2)⌋). Another point implied by the feasibility of (M_1, M_2, R_S1, R_S2) in our system model is that the external adversary should not be able to retrieve any information regarding the contents transmitted in the delivery phase; this criterion is formally described by inequalities (32) and (33). Consider the information flow consisting of the multicast transmissions X_1, . . ., X_⌊N/(s_1 s_2)⌋, the mirror caches Z_1, Z_2, . . ., Z_s_1, and the user caches Z_(i,1), . . ., Z_(i,s_2) for decoding the files W_1, . . ., W_(s_1 s_2 ⌊N/(s_1 s_2)⌋). Combining the decodability and security constraints, and solving and optimizing over all possible values of s_1 and s_2, we obtain (36). We can obtain an alternate lower bound by using ⌈N/(s_1 s_2)⌉ transmissions instead of ⌊N/(s_1 s_2)⌋ in (35), giving (37). The inequalities (36) and (37) hold for any value of s_1 ∈ {1, 2, . .
., K_1} and s_2 ∈ {1, 2, . . ., K_2}. We thus obtain the lower bound on R_S1 for the tuple (M_1, M_2, R_S1, R_S2) to be feasible. Having calculated the lower bound for R_S1, let us proceed with that of R_S2, again assuming the feasibility of (M_1, M_2, R_S1, R_S2). Let t ∈ {1, 2, . . ., K_2}. Consider the caches of the first t users U_(1,j), j ∈ {1, 2, . . ., t}, denoted Z_u(1,1), Z_u(1,2), . . ., Z_u(1,t). Consider the demand matrix D with demands d_(1,j) = j, i.e., requesting the first t files from the server. The transmission Y_1 = Y_(d_(1,1),...,d_(1,t)), along with the user caches Z_u(1,1), Z_u(1,2), . . ., Z_u(1,t), must allow decoding of the files W_1, . . ., W_t. Similarly, for a different demand matrix D in which the users demand d_(1,j) = t + j, i.e., another t files, the transmission Y_2 along with the same user caches must allow decoding of the files W_(t+1), . . ., W_(2t). Likewise, considering all ⌊N/t⌋ demand matrices, the multicast transmissions Y_1, . . ., Y_⌊N/t⌋, along with the user caches Z_u(1,1), Z_u(1,2), . . ., Z_u(1,t), must allow recovery of the files W_1, . . ., W_N. Let Y_l denote the information leaked to the external adversary through the link connecting the mirror to its corresponding users.
The file recovery and security constraints can then be stated as in the single-layer secure scheme. Consider the information flow consisting of the multicast transmissions Y_1, . . ., Y_⌊N/t⌋ and the user caches Z_u(1,1), Z_u(1,2), . . ., Z_u(1,t) for decoding the files W_1, W_2, . . ., W_N. Solving and optimizing over all values of t, we obtain the lower bound R^lb_S2(M_1, M_2).

Gap Analysis
In this section, we analyze the gap between the secure achievable rates and the corresponding lower bounds.

5.1. R_S1(α*, β*) against R^lb_S1(M_1, M_2)

Theorem 3. R_S1(α*, β*) lies within a constant multiplicative and a constant additive gap of R^lb_S1(M_1, M_2).

Proof. The values of α* and β*, and consequently R_S1(α*, β*), depend on the regime characterized by M_1 and M_2, which makes it necessary to examine each regime. We begin with regime 1. Inequalities (25a) and (38) give the achievable secure rate R_S1(α*, β*) as well as the lower bound R^lb_S1(M_1, M_2) for regime 1.
In order to make the margin of the gap more manageable, we further divide this regime into three sub-regimes. For sub-regime 1.A, let us substitute s_1 = 1 and s_2 = ⌊N/(2M_2)⌋ (a valid choice because ⌊z⌋ ≥ z/2 for any z ≥ 1) into (38), which gives (44). Combining (44) and (22a), we obtain (45). For sub-regime 1.B, the choice of s_1 and s_2 depends on the relation between M_1 and M_2.
Note that for M_1 ≥ M_2 we obtain one bound, and for M_1 < M_2 another. Finally, substituting the chosen values of s_1, s_2 into (38), we obtain (46). Combining (46) and (22a), we obtain (47). Similarly, in sub-regime 1.C, we obtain (48); combining (48) and (22a) gives (49). Our analysis for sub-regimes 1.A, 1.B, and 1.C demonstrates that the secure achievable rate is within a constant multiplicative and additive gap of R^lb_S1(M_1, M_2) in regime 1.
As for regime 2, we further divide it into sub-regimes 2.A and 2.B. For sub-regime 2.A, we take s_1 = ⌊K_1/3⌋ and s_2 = K_2. Using ⌊z⌋ ≥ z/2 for any z ≥ 1, this is a valid choice of s_1, s_2, since K_1 ≥ 4 and thus K_1/3 ≥ 1. Substituting these values of s_1, s_2 into (38), we obtain the corresponding bound. The results obtained for sub-regimes 3.A and 3.B show that the gap analysis for regime 3 is similar to that of regimes 1 and 2. Moreover, regimes 1, 2, and 3 cover the entire (M_1, M_2) plane. We thus conclude that in each sub-regime R_S1(α*, β*) and R^lb_S1(M_1, M_2) are within a constant multiplicative and additive gap, which yields the unified result for all the studied regimes. Furthermore, the lower bound on R_S2(α*, β*) can be obtained from (43). In the remainder of the discussion we partition the (M_1, M_2) plane into two cases: (1) M_2 < N/4 and (2) N/4 ≤ M_2 ≤ N. We examine these cases in order to improve the margin of the gap. (1) For the first case, we substitute a suitable value of t into (60); this is a valid choice since K_2 ≥ 4. Using ⌊z⌋ ≥ z/2 for all z ≥ 1 together with N ≥ K_1 K_2 and K_1 ≥ 4, we obtain (61). From (61) and (25b) we obtain (62). (2) For N/4 ≤ M_2 ≤ N, the bound (63) holds directly. The entire (M_1, M_2) plane is covered by cases (1) and (2). Thus, R_S2(α*, β*) and R^lb_S2(M_1, M_2) are shown to lie within constant additive and multiplicative gaps, as shown by (64), which is derived by combining (62) and (63).

Conclusions and Further Works
In this paper, we further developed the system model of a coded caching scheme by simultaneously assuming a hierarchical network and adversaries tapping the shared links at peak time. We calculated the secure achievable rates of each link in the proposed scheme. The parameters considered in the previously-proposed hierarchical scheme were reconsidered here to obtain approximately minimal achievable rates. Furthermore, we calculated lower bounds on the feasible rates. We showed that the secure achievable rates are within a constant multiplicative and additive gap of the corresponding lower bounds. These results are similar to those obtained for the non-secure hierarchical scheme, but the cost of security appears in the form of larger constants. Our work can be continued by proposing and evaluating yet more complex system models, e.g., models in which the adversary has access to the shared links during the placement phase, or in which users may issue more than one request in the delivery phase.

Figure 1. A hierarchical caching system with external adversaries acting over all shared links.