# Efficient Algorithms for Coded Multicasting in Heterogeneous Caching Networks

^{1} Department of Electrical Engineering and Information Technology (DIETI), Università di Napoli Federico II, 80138 Napoli, Italy

^{2} Department of Electrical and Computer Engineering (ECE), University of Utah, Salt Lake City, UT 84112, USA

^{3} IBM Research, New York, NY 10598, USA

^{4} Department of Math and Algorithms, Nokia Bell Labs, Murray Hill, NJ 07738, USA

^{5} Faculty of Electrical Engineering and Computer Science (EECS), Technical University of Berlin, 10587 Berlin, Germany

^{*} Author to whom correspondence should be addressed.

Received: 14 January 2019 / Revised: 12 March 2019 / Accepted: 13 March 2019 / Published: 25 March 2019

(This article belongs to the Special Issue Information Theory for Data Communications and Processing)

Coded multicasting has been shown to be a promising approach to significantly improve the performance of content delivery networks with multiple caches downstream of a common multicast link. However, the schemes that have been shown to achieve order-optimal performance require content items to be partitioned into a number of packets that grows exponentially with the number of caches, leading to codes of exponential complexity that jeopardize their promising performance benefits. In this paper, we address this crucial performance-complexity tradeoff in a heterogeneous caching network setting, where edge caches with possibly different storage capacities collect multiple content requests that may follow distinct demand distributions. We extend the asymptotic (in the number of packets per file) analysis of shared link caching networks to heterogeneous network settings, and present novel coded multicast schemes, based on local graph coloring, that exhibit polynomial-time complexity in all the system parameters, while preserving the asymptotically proven multiplicative caching gain even for finite file packetization. We further demonstrate that the packetization order (the number of packets each file is split into) can be traded off against the number of requests collected by each cache, while preserving the same multiplicative caching gain. Simulation results confirm the superiority of the proposed schemes and illustrate the interesting tradeoff between request aggregation and packetization order in several practical settings. Our results provide a compelling step towards the practical achievability of the promising multiplicative caching gain in next-generation access networks.
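To give a flavor of why graph coloring yields polynomial-time coded multicast schedules, the following is a minimal illustrative sketch (not the paper's actual scheme): requested packets form the vertices of a conflict graph, an edge joins two packets that cannot share one coded (e.g., XOR) transmission, and a greedy coloring groups packets into color classes, each corresponding to one multicast transmission. All names and the toy conflict graph below are hypothetical.

```python
def greedy_coloring(vertices, edges):
    """Assign each vertex the smallest color not used by its neighbors.

    Runs in time polynomial in the graph size (O(V + E) with these
    adjacency sets), unlike exact minimum coloring, which is NP-hard.
    """
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    color = {}
    for v in vertices:
        taken = {color[n] for n in adj[v] if n in color}
        c = 0
        while c in taken:
            c += 1
        color[v] = c
    return color

# Toy example: four requested packets whose conflicts form a path
# a - b - c - d; two colors (i.e., two multicast transmissions) suffice.
coloring = greedy_coloring(["a", "b", "c", "d"],
                           [("a", "b"), ("b", "c"), ("c", "d")])
print(max(coloring.values()) + 1)  # number of coded transmissions → 2
```

The number of colors upper-bounds the number of coded transmissions, so better (e.g., local) coloring heuristics translate directly into fewer transmissions on the shared link.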