
Search Results (61)

Search Parameters:
Keywords = caching placement

23 pages, 1907 KB  
Article
Intelligent Hybrid Caching for Sustainable Big Data Processing: Leveraging NVM to Enable Green Digital Transformation
by Lei Tong, Qing Shen and Zhenqiang Xie
Sustainability 2026, 18(5), 2601; https://doi.org/10.3390/su18052601 - 6 Mar 2026
Viewed by 305
Abstract
Apache Spark has gained widespread adoption for large-scale data processing. However, conventional caching methods inadequately address the dual challenges of performance bottlenecks and escalating energy consumption in data-intensive workloads. This paper introduces a sustainable computing framework that integrates Directed Acyclic Graph (DAG) dependency analysis with garbage collection (GC) behavior monitoring to optimize data placement between DRAM and non-volatile memory (NVM). The proposed Intelligent Hybrid Caching Management Framework (IHCMF) dynamically predicts data access patterns and migrates cache blocks based on cost–benefit analysis, achieving a 37.5% execution time reduction over default Spark configurations in SparkBench evaluations. By improving throughput-per-watt and projecting potential benefits from NVM’s near-zero idle power and extended hardware lifespan, IHCMF provides a scalable, cost-effective caching solution for resource-constrained edge computing environments. This work demonstrates that high-performance computing can be reconciled with environmental sustainability through intelligent memory management. Full article
(This article belongs to the Topic Green Technology Innovation and Economic Growth)
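The abstract above describes cost–benefit-driven migration of cache blocks between DRAM and NVM but gives no formula. The sketch below is a hypothetical illustration of that idea, not IHCMF itself: all class fields, latency numbers, and the greedy benefit-per-MB heuristic are assumptions.

```python
from dataclasses import dataclass

@dataclass
class CacheBlock:
    block_id: str
    size_mb: float
    predicted_accesses: int   # e.g., estimated from DAG dependency analysis
    dram_latency_ms: float    # per-access latency if the block sits in DRAM
    nvm_latency_ms: float     # per-access latency if the block sits in NVM

def migration_benefit(block: CacheBlock, migration_cost_ms: float = 5.0) -> float:
    """Net benefit (ms saved) of promoting a block from NVM to DRAM:
    latency saved over predicted accesses minus a one-time migration cost."""
    saved = block.predicted_accesses * (block.nvm_latency_ms - block.dram_latency_ms)
    return saved - migration_cost_ms

def pick_blocks_for_dram(blocks, dram_budget_mb, migration_cost_ms=5.0):
    """Greedily promote the blocks with the highest benefit per MB
    until the DRAM budget is exhausted; negative-benefit blocks stay in NVM."""
    ranked = sorted(blocks,
                    key=lambda b: migration_benefit(b, migration_cost_ms) / b.size_mb,
                    reverse=True)
    chosen, used = [], 0.0
    for b in ranked:
        if migration_benefit(b, migration_cost_ms) > 0 and used + b.size_mb <= dram_budget_mb:
            chosen.append(b.block_id)
            used += b.size_mb
    return chosen
```

A frequently re-read block with a large DRAM/NVM latency gap wins promotion first; rarely re-read blocks never justify the migration cost, which mirrors the paper's stated goal of keeping cold data on near-zero-idle-power NVM.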

18 pages, 1460 KB  
Article
Combination Network with Multiaccess Caching
by Bowen Zheng, Yifei Huang and Dianhua Wu
Entropy 2026, 28(2), 220; https://doi.org/10.3390/e28020220 - 13 Feb 2026
Viewed by 239
Abstract
In the traditional (H, r, M, N) combination network, a central server storing N files communicates with K = (H choose r) users through H cache-less relays. Each user has a local cache of size M files and is connected to a distinct subset of r relays. This paper studies the (H, r, L, Λ, M, N) combination network with multi-access caching, where Λ cache nodes (each of size M files) are available and each user can access L cache nodes. We show that in the regime H ≥ Λ and r ≥ L, an achievable design can be obtained via a group-wise operation, which reduces the scheme design within each group to an effective (Λ, L, L, Λ, M, N) instance. For the case Λ = H and L = r, we further propose an explicit coded caching scheme constructed via two array-based representations (a cache-node placement array and a user-retrieve array) and a derived combinatorial placement delivery array (CPDA) based on the Maddah-Ali–Niesen (MN) placement strategy. Numerical comparisons using the user-retrievable cache ratio as the evaluation metric indicate that the proposed scheme approaches the converse bound of the traditional combination network, and the performance gap diminishes as the cache ratio increases. Full article
(This article belongs to the Special Issue Network Information Theory and Its Applications)
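The combinatorial structure described above is easy to make concrete: in an (H, r, M, N) combination network, each user corresponds to one r-subset of the H relays, so the user count is the binomial coefficient. A small illustration (the numbers are arbitrary examples, not from the paper):

```python
from itertools import combinations
from math import comb

H, r = 4, 2          # H cache-less relays; each user connects to r of them
K = comb(H, r)       # number of users in the (H, r, M, N) combination network
print(K)             # 6

# Each user is identified with the r-subset of relays it connects to;
# e.g., user (0, 1) receives its requested file only through relays 0 and 1.
users = list(combinations(range(H), r))
assert len(users) == K
```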

36 pages, 834 KB  
Review
An Overview of Technical Aspects and Challenges in Designing Edge-Cloud Systems
by Mohammadsadeq Garshasbi Herabad, Javid Taheri, Bestoun S. Ahmed and Calin Curescu
Appl. Sci. 2026, 16(3), 1454; https://doi.org/10.3390/app16031454 - 31 Jan 2026
Viewed by 413
Abstract
Edge–cloud computing has emerged as a key enabling paradigm for augmented and virtual reality (AR/VR) systems because of the stringent computational and ultra-low-latency requirements of AR/VR workloads. Designing efficient edge–cloud systems for such workloads involves multiple technical aspects, including communication technologies, service placement, task offloading and caching, service migration, and security and privacy. This paper provides a structured and technical analysis of these aspects from an AR/VR perspective. We adopt a two-stage literature analysis, in which Google Scholar is used to identify fundamental technical aspects and solution approaches, followed by a focused analysis of recent research trends and future directions using academic databases (e.g., IEEE Xplore, ACM Digital Library, and ScienceDirect). We present an organized classification of the core technical aspects and investigate existing solution approaches, including heuristic, metaheuristic, learning-based, and hybrid strategies. Rather than introducing application-specific designs, the analysis focuses on workload-driven challenges and trade-offs that arise in AR/VR systems. Based on this classification, we analyze recent research trends, identify underexplored technical areas, and highlight key research gaps that hinder the efficient deployment of AR/VR services over edge–cloud infrastructures. The findings of this study provide practical insights for researchers and system designers and help guide future research toward more responsive, scalable, and reliable edge–cloud AR/VR systems. Full article
(This article belongs to the Special Issue Edge Computing and Cloud Computing: Latest Advances and Prospects)

16 pages, 1309 KB  
Article
Ant Colony Optimization for CMOS Physical Design: Reducing Layout Area and Improving Aspect Ratio in VLSI Circuits
by Arnab A. Purkayastha, Jay Tharwani and Shobhit Aggarwal
Electronics 2025, 14(24), 4825; https://doi.org/10.3390/electronics14244825 - 8 Dec 2025
Viewed by 633
Abstract
This paper presents an enhanced Ant Colony Optimization (ACO) algorithm tailored for optimizing CMOS physical design in VLSI circuits. As device complexity escalates, traditional placement techniques struggle with multi-objective goals such as minimizing layout area and wirelength while achieving effective aspect ratios. The proposed ACO framework simulates artificial ant colonies exploring layout configurations and reinforcing promising solutions through a pheromone-guided heuristic. Evaluated on a benchmark containing ten typical logic blocks—Adder, Multiplier, Shifter, MUX, Register, ALU, Decoder, Control, Cache, and Buffer—the ACO method achieves a maximum layout area reduction of 27.27% (from 1760 to 1280 units²) and improves the aspect ratio from 3.64 to 5.0 compared to traditional layouts. The mean area reduction observed across different parameter settings is approximately 20%. The system also includes a fully configurable and modular automation tool designed for flexible parameter tuning and rapid benchmarking of the ACO algorithm. This tool enables users to easily adjust key parameters such as the number of ants, iteration count, pheromone evaporation rate, and heuristic influence, allowing for a comprehensive exploration of the optimization space. Experimental results demonstrate ACO’s scalability, adaptability, and effectiveness, establishing it as a viable approach for automation in complex physical designs. Future work will focus on hybrid algorithms and multi-objective optimization extensions. Full article
(This article belongs to the Special Issue Recent Advances in AI Hardware Design)
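To make the pheromone-guided heuristic concrete, here is a toy ACO loop for ordering placement items. It is a generic textbook-style sketch, not the paper's algorithm: the cost model, parameters, and roulette-wheel rule are all assumptions.

```python
import random

def aco_order(costs, n_ants=20, n_iters=50, rho=0.1, seed=1):
    """Toy pheromone-guided search for a low-cost ordering of n items.
    costs[i][j] = cost of placing item j immediately after item i."""
    random.seed(seed)
    n = len(costs)
    tau = [[1.0] * n for _ in range(n)]          # pheromone matrix
    best, best_cost = None, float("inf")
    for _ in range(n_iters):
        for _ in range(n_ants):
            cur, tour = 0, [0]
            remaining = set(range(1, n))
            while remaining:
                # roulette-wheel choice biased by pheromone and low cost
                weights = [(j, tau[cur][j] / (1 + costs[cur][j])) for j in remaining]
                total = sum(w for _, w in weights)
                pick, acc = weights[-1][0], random.random() * total
                for j, w in weights:
                    acc -= w
                    if acc <= 0:
                        pick = j
                        break
                tour.append(pick)
                remaining.discard(pick)
                cur = pick
            cost = sum(costs[a][b] for a, b in zip(tour, tour[1:]))
            if cost < best_cost:
                best, best_cost = tour, cost
        # evaporate, then reinforce edges of the best tour found so far
        tau = [[(1 - rho) * t for t in row] for row in tau]
        for a, b in zip(best, best[1:]):
            tau[a][b] += 1.0 / best_cost
    return best, best_cost
```

Evaporation (rho) keeps the colony from locking onto an early solution, while reinforcement concentrates later ants on the best edges found so far.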

23 pages, 3828 KB  
Article
SARAC4N: Socially and Resource-Aware Caching in Clustered Content-Centric Networks
by Amir Raza Khan, Umar Shoaib and Hannan Bin Liaqat
Future Internet 2025, 17(8), 341; https://doi.org/10.3390/fi17080341 - 29 Jul 2025
Cited by 1 | Viewed by 1751
Abstract
The Content-Centric Network (CCN) presents an alternative to the conventional TCP/IP network, where IP is fundamental for communication between source and destination. Instead of relying on IP addresses, CCN emphasizes content itself to enable efficient data distribution through caching and delivery. The increasing demand for graphics-intensive applications requires minimal response time and optimized resource utilization, and CCN plays a vital role here due to its efficient architecture and content-management approach. To reduce data retrieval delays in CCNs, traditional methods improve caching mechanisms through clustering. However, these methods do not address the optimal use of resources, including CPU, memory, storage, and available links, nor do they incorporate social awareness. This study proposes SARAC4N, a Socially and Resource-Aware Caching framework for Clustered Content-Centric Networks that integrates dual-head clustering and popularity-driven content placement. By optimally utilizing resources and positioning content with social awareness within each cluster, SARAC4N enhances caching efficiency, reduces retrieval delays, and improves resource utilization across heterogeneous network topologies, helping resolve congestion while lowering error rates and ensuring efficient content delivery. Furthermore, it improves metrics such as data retrieval time, reduces computation and memory usage, minimizes data redundancy, and optimizes network and storage usage, all while maintaining a very low error rate. Full article
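The abstract does not spell out how resource state and social awareness combine when choosing a caching node. As a purely hypothetical sketch (the field names, 0.5/0.5 weighting, and multiplicative popularity boost are all invented for illustration), a node-scoring rule could look like:

```python
def place_content(nodes, content_popularity):
    """Pick the caching node with the best combined resource/social score.
    nodes: dicts with hypothetical fields 'free_ratio' (spare cache, 0..1),
    'cpu_idle' (0..1), and 'social_degree' (normalized contact count, 0..1).
    Popular content is steered toward socially central nodes; unpopular
    content simply lands on the most resource-rich node."""
    def score(n):
        resource = 0.5 * n["free_ratio"] + 0.5 * n["cpu_idle"]
        return resource * (1 + content_popularity * n["social_degree"])
    return max(nodes, key=score)["name"]
```

With this rule, a highly popular item prefers a well-connected node even if a quieter node has more free resources, which is the intuition behind socially aware placement.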

15 pages, 548 KB  
Article
Centralized Hierarchical Coded Caching Scheme for Two-Layer Network
by Kun Zhao, Jinyu Wang and Minquan Cheng
Entropy 2025, 27(3), 316; https://doi.org/10.3390/e27030316 - 18 Mar 2025
Viewed by 1004
Abstract
This paper considers a two-layer hierarchical network, where a server containing N files is connected to K1 mirrors and each mirror is connected to K2 users. Each mirror and each user has a cache memory of size M1 and M2 files, respectively. The server can only broadcast to the mirrors, and each mirror can only broadcast to its connected users. For such a network, we propose a novel coded caching scheme based on two known placement delivery arrays (PDAs). To fully utilize the cache memory of both the mirrors and the users, we first treat the mirrors and users as cache nodes of the same type; i.e., the cache memory of each mirror is regarded as an additional part of the connected users’ cache. The server then broadcasts messages to all mirrors according to a (K1·K2)-user PDA in the first layer. In the second layer, each mirror first cancels useless file packets (if any) in the received messages and forwards them to its connected users, so that each user can decode the requested packets not cached by the mirror; the mirror then broadcasts coded subpackets to its connected users according to a K2-user PDA, so that each user can decode the requested packets cached by the mirror. The proposed scheme is extended to a heterogeneous two-layer hierarchical network, where the number of users connected to different mirrors may differ. Numerical comparisons show that the proposed scheme achieves lower coding delays than existing hierarchical coded caching schemes at most memory-ratio points. Full article
(This article belongs to the Special Issue Network Information Theory and Its Applications)

27 pages, 628 KB  
Article
Long-Term Energy Consumption Minimization Based on UAV Joint Content Fetching and Trajectory Design
by Elhadj Moustapha Diallo, Rong Chai, Abuzar B. M. Adam, Gezahegn Abdissa Bayessa, Chengchao Liang and Qianbin Chen
Sensors 2025, 25(3), 898; https://doi.org/10.3390/s25030898 - 2 Feb 2025
Cited by 2 | Viewed by 1434
Abstract
Caching contents on unmanned aerial vehicles (UAVs) can significantly improve the content-fetching performance of requesting users (RUs). In this paper, we study UAV trajectory design, content fetching, power allocation, and content placement in multi-UAV-aided networks, where multiple UAVs can transmit contents to their assigned RUs. To minimize the energy consumption of the system, we formulate a constrained optimization problem that jointly designs UAV trajectory, power allocation, content fetching, and content placement. Since the original minimization problem is a mixed-integer nonlinear programming (MINLP) problem that is difficult to solve, we first transform it into a semi-Markov decision process (SMDP). We then develop a new technique to solve the joint optimization problem: option-based hierarchical deep reinforcement learning (OHDRL). We define UAV trajectory planning and power allocation as the low-level action space, and content placement and content fetching as the high-level option space. Stochastic optimization can be handled with this strategy, where the agent makes high-level option selections and the low-level action is carried out based on the chosen option’s policy. Numerical results show that, compared to the existing technique, the proposed approach produces more consistent learning performance and lower energy consumption. Full article
(This article belongs to the Section Communications)

27 pages, 4401 KB  
Article
An Efficient Multipath-Based Caching Strategy for Information-Centric Networks
by Wancai Zhang and Rui Han
Electronics 2025, 14(3), 439; https://doi.org/10.3390/electronics14030439 - 22 Jan 2025
Cited by 3 | Viewed by 1715
Abstract
The growing demand for large-scale data distribution and sharing presents significant challenges to content transmission within the current TCP/IP network architecture. To address these challenges, Information-Centric Networking (ICN) has emerged as a promising alternative, offering inherent support for multipath forwarding and in-network caching to improve data transmission performance. However, most existing ICN caching strategies primarily focus on utilizing resources along the default transmission path and its neighboring nodes, without fully exploiting the additional resources provided by multipath forwarding. To address this gap, we propose an efficient multipath-based caching strategy that optimizes cache placement by decomposing the problem into two steps, multipath selection and cache node selection along the paths. First, multipath selection considers both transmission and caching resources across multiple paths, prioritizing the caching of popular content while efficiently transmitting less popular content. Next, along the selected paths, cache node selection evaluates cache load based on cache utilization and available capacity, prioritizing nodes with the lowest cache load. Extensive simulations across diverse topologies demonstrate that the proposed strategy reduces data transmission latency by at least 12.22%, improves cache hit rate by at least 16.44%, and enhances cache node load balancing by at least 18.77%, compared to the neighborhood collaborative caching strategies. Full article
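The two-step decomposition described above (path selection, then cache-node selection along the path) can be sketched in a few lines. This is an illustrative stand-in, not the paper's strategy: the popularity threshold, the spare-capacity criterion, and the utilization-based load metric are assumptions.

```python
def choose_path(paths, popularity, pop_threshold=0.5):
    """Step 1 (hypothetical rule): popular content prefers the path with the
    most spare cache capacity; less popular content takes the lowest-latency
    path and is simply transmitted rather than cached aggressively."""
    if popularity >= pop_threshold:
        return max(paths,
                   key=lambda p: sum(n["capacity"] - n["used"] for n in p["nodes"]))
    return min(paths, key=lambda p: p["latency_ms"])

def choose_cache_node(path):
    """Step 2: along the chosen path, cache at the least-loaded node,
    modeling cache load as utilization used/capacity."""
    return min(path["nodes"], key=lambda n: n["used"] / n["capacity"])["node_id"]
```

Splitting the decision this way keeps each step cheap: path selection only needs aggregate per-path statistics, and node selection only compares loads along one path.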

23 pages, 2715 KB  
Article
A Hierarchical Cache Architecture-Oriented Cache Management Scheme for Information-Centric Networking
by Yichao Chao and Rui Han
Future Internet 2025, 17(1), 17; https://doi.org/10.3390/fi17010017 - 5 Jan 2025
Cited by 4 | Viewed by 3100
Abstract
Information-Centric Networking (ICN) typically utilizes DRAM (Dynamic Random Access Memory) to build in-network cache components due to its high data transfer rate and low latency. However, DRAM faces significant limitations in terms of cost and capacity, making it challenging to meet the growing demands for cache scalability required by increasing Internet traffic. Combining high-speed but expensive memory (e.g., DRAM) with large-capacity, low-cost storage (e.g., SSD) to construct a hierarchical cache architecture has emerged as an effective solution to this problem. However, how to perform efficient cache management in such architectures to realize the expected cache performance remains challenging. This paper proposes a cache management scheme for hierarchical cache architectures in ICN, which introduces a differentiated replica replacement policy to accommodate the varying request access patterns at different cache layers, thereby enhancing overall cache performance. Additionally, a probabilistic insertion-based SSD cache admission filtering mechanism is designed to control the SSD write load, addressing the issue of balancing SSD lifespan and space utilization. Extensive simulation results demonstrate that the proposed scheme exhibits superior cache performance and lower SSD write load under various workloads and replica placement strategies, highlighting its broad applicability to different application scenarios. Additionally, it maintains stable performance improvements across different cache capacity settings, further reflecting its good scalability. Full article
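The probabilistic SSD admission filter mentioned above has a simple core mechanic: admit each DRAM-evicted replica to the SSD layer only with probability p, so the expected SSD write load shrinks by a factor of p. A minimal sketch (class name and counters are hypothetical; the paper's actual mechanism may adapt p dynamically):

```python
import random

class ProbabilisticSsdFilter:
    """On eviction from the DRAM layer, admit a replica into the SSD layer
    with probability p. Objects requested often enough will eventually pass
    the filter, while one-hit wonders rarely cost an SSD write."""
    def __init__(self, p, seed=0):
        self.p = p
        self.rng = random.Random(seed)
        self.offered = 0   # replicas evicted from DRAM
        self.written = 0   # replicas actually written to SSD
    def admit(self, obj_id):
        self.offered += 1
        if self.rng.random() < self.p:
            self.written += 1
            return True
        return False

f = ProbabilisticSsdFilter(p=0.25, seed=42)
for oid in range(10000):
    f.admit(oid)
print(f.written / f.offered)   # ≈ 0.25, i.e., a 4x reduction in SSD writes
```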

28 pages, 1238 KB  
Article
Resource Allocation in UAV-D2D Networks: A Scalable Heterogeneous Multi-Agent Deep Reinforcement Learning Approach
by Huayuan Wang, Hui Li, Xin Wang, Shilin Xia, Tao Liu and Ruonan Wang
Electronics 2024, 13(22), 4401; https://doi.org/10.3390/electronics13224401 - 10 Nov 2024
Cited by 2 | Viewed by 2547
Abstract
In unmanned aerial vehicle (UAV)-assisted device-to-device (D2D) caching networks, the uncertainty from unpredictable content demands and variable user positions poses a significant challenge for traditional optimization methods, often making them impractical. Multi-agent deep reinforcement learning (MADRL) offers significant advantages in optimizing multi-agent system decisions and serves as an effective and practical alternative. However, its application in large-scale dynamic environments is severely limited by the curse of dimensionality and communication overhead. To resolve this problem, we develop a scalable heterogeneous multi-agent mean-field actor-critic (SH-MAMFAC) framework. The framework treats ground users (GUs) and UAVs as distinct agents and designs cooperative rewards to convert the resource allocation problem into a fully cooperative game, enhancing global network performance. We also implement a mixed-action mapping strategy to handle discrete and continuous action spaces. A mean-field MADRL framework is introduced to minimize individual agent training loads while enhancing total cache hit probability (CHP). The simulation results show that our algorithm improves CHP and reduces transmission delay. A comparative analysis with existing mainstream deep reinforcement learning (DRL) algorithms shows that SH-MAMFAC significantly reduces training time and maintains high CHP as GU count grows. Additionally, by comparing with SH-MAMFAC variants that do not include trajectory optimization or power control, the proposed joint design scheme significantly reduces transmission delay. Full article

25 pages, 1162 KB  
Article
Task Partition-Based Computation Offloading and Content Caching for Cloud–Edge Cooperation Networks
by Jingjing Huang, Xiaoping Yang, Jinyi Chen, Jiabao Chen, Zhaoming Hu, Jie Zhang, Zhuwei Wang and Chao Fang
Symmetry 2024, 16(7), 906; https://doi.org/10.3390/sym16070906 - 16 Jul 2024
Cited by 4 | Viewed by 2671
Abstract
With the increasing complexity of applications, many delay-sensitive and compute-intensive services pose significant challenges to mobile devices. How to efficiently allocate heterogeneous network resources to meet the computing and delay requirements of terminal services is a pressing issue. In this paper, a new cooperative twin delayed deep deterministic policy gradient and deep Q-network (TD3-DQN) algorithm is introduced to minimize system latency by asynchronously optimizing computation offloading and caching placement. Specifically, a task-partitioning technique divides computing tasks into multiple subtasks, reducing response latency. A DQN-based algorithm optimizes the offloading path to edge servers by perceiving network resource status, while a TD3-based approach optimizes the content cached on the edge servers, ensuring that dynamically popular content requirements are met without excessive offloading decisions. Simulation results demonstrate that the proposed model achieves lower latency and quicker convergence in asymmetrical cloud–edge collaborative networks compared to other benchmark algorithms. Full article
(This article belongs to the Section Computer)

25 pages, 5648 KB  
Article
RMBCC: A Replica Migration-Based Cooperative Caching Scheme for Information-Centric Networks
by Yichao Chao, Hong Ni and Rui Han
Electronics 2024, 13(13), 2636; https://doi.org/10.3390/electronics13132636 - 4 Jul 2024
Cited by 1 | Viewed by 1028
Abstract
How to maximize the advantages of in-network caching under limited cache space has always been a key issue in information-centric networking (ICN). Replica placement strategies aim to fully utilize cache resources by optimizing the location and quantity distribution of replicas in the network, thereby improving the performance of the cache system. However, existing research primarily focuses on optimizing the placement of replicas along the content delivery path, which cannot avoid the inherent drawback of not being able to leverage off-path cache resources. The proposals for off-path caching cannot effectively solve this problem as they introduce excessive complexity and cooperation costs. In this paper, we address the trade-off between cache resource utilization and cooperation costs by introducing a mechanism complementary to replica placement. Instead of redesigning a new caching strategy from scratch, we propose a proactive cooperative caching mechanism (called RMBCC) that involves an independent replica migration process, through which we proactively relocate replicas evicted from the local cache to neighboring nodes with sufficient cache resources. The cooperation costs are effectively controlled through migration replica filtering, migration distance limitation, as well as hop-by-hop migration request propagation. Extensive simulation experiments show that RMBCC can be efficiently integrated with different on-path caching strategies. Compared with representative caching schemes, RMBCC achieves significant improvements in evaluation metrics such as cache hit ratio and content retrieval time, while only introducing negligible cooperation overhead. Full article
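The migration idea above (filter evicted replicas, cap the migration distance, push to a neighbor with spare space) can be sketched as a single decision function. This is an illustrative reading of RMBCC, not its actual code; the thresholds, field names, and nearest-neighbor tie-breaking are assumptions.

```python
def migrate_on_eviction(replica, neighbors, max_hops=2, min_popularity=0.3):
    """Instead of discarding a replica evicted from the local cache, relocate
    it to the nearest neighboring node with enough free cache space.
    - Filtering: replicas below a popularity threshold are not worth migrating.
    - Distance limit: only neighbors within max_hops are considered,
      bounding the cooperation cost.
    Returns the chosen node id, or None if the replica is simply dropped."""
    if replica["popularity"] < min_popularity:
        return None
    candidates = [n for n in neighbors
                  if n["hops"] <= max_hops and n["free"] >= replica["size"]]
    if not candidates:
        return None
    return min(candidates, key=lambda n: n["hops"])["node_id"]
```

Because migration only triggers on eviction and only consults nearby nodes, it layers on top of any on-path caching strategy without changing how replicas are placed in the first place, matching the complementary-mechanism framing in the abstract.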

17 pages, 608 KB  
Article
Optimized Two-Tier Caching with Hybrid Millimeter-Wave and Microwave Communications for 6G Networks
by Muhammad Sheraz, Teong Chee Chuah, Mardeni Bin Roslee, Manzoor Ahmed, Amjad Iqbal and Ala’a Al-Habashna
Appl. Sci. 2024, 14(6), 2589; https://doi.org/10.3390/app14062589 - 20 Mar 2024
Cited by 6 | Viewed by 1918
Abstract
Data caching is a promising technique for alleviating the data traffic burden on the backhaul and minimizing data access delay. However, the cache capacity constraint poses a significant challenge to serving content from cache resources, which degrades caching performance. In this paper, we propose a novel two-tier caching mechanism for data caching at the mobile user equipment (UE) and small base station (SBS) levels in ultra-dense 6G heterogeneous networks, reducing data access failures via cache resources. The two-tier design enables users to retrieve their desired content from cache resources through device-to-device (D2D) communications with neighboring users or from the serving SBS. Cache-enabled UE exploits millimeter-wave (mmWave) D2D communications, utilizing line-of-sight (LoS) links for high-speed data transmission to content-demanding mobile UE within a limited connection time. In the event of D2D communication failure, a dual-mode hybrid system combining mmWave and microwave (μWave) technologies ensures effective data transmission between the SBS and UE to fulfill users’ data demands. In the proposed framework, data transmission speed is optimized through mmWave signals under LoS conditions; in non-LoS scenarios, the system switches to μWave mode for obstacle-penetrating transmission. Subsequently, we propose a reinforcement learning (RL) approach that optimizes cache decisions by approximating the Q action-value function. The proposed technique learns iteratively, adapting to dynamic network conditions to improve the content placement policy and minimize delay. Extensive simulations demonstrate that our approach significantly reduces network delay compared with benchmark schemes. Full article
(This article belongs to the Section Electrical, Electronics and Communications Engineering)
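The paper approximates the Q action-value function; as a minimal stand-in, the tabular Q-learning update below shows the learning step such an approach builds on. The state/action encoding and hyperparameters here are illustrative assumptions, not the paper's design.

```python
def q_update(Q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """One tabular Q-learning step for a cache-decision agent.
    Hypothetical encoding: action 1 = cache the content, 0 = skip/evict;
    reward could be +1 for a subsequent cache hit, negative for added delay.
    Q is a dict mapping (state, action) -> value."""
    best_next = max(Q.get((next_state, a), 0.0) for a in (0, 1))
    key = (state, action)
    Q[key] = Q.get(key, 0.0) + alpha * (reward + gamma * best_next - Q.get(key, 0.0))
    return Q[key]
```

Repeatedly applying this update as requests arrive lets the placement policy track shifting content popularity, which is what "iterative learning, adapting to dynamic network conditions" amounts to at the algorithmic core.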

33 pages, 506 KB  
Article
Fundamental Limits of Coded Caching in Request-Robust D2D Communication Networks
by Wuqu Wang, Zhe Tao, Nan Liu and Wei Kang
Entropy 2024, 26(3), 250; https://doi.org/10.3390/e26030250 - 12 Mar 2024
Cited by 1 | Viewed by 2323
Abstract
D2D coded caching, originally introduced by Ji, Caire, and Molisch, significantly improves communication efficiency by applying the multi-cast technology proposed by Maddah-Ali and Niesen to the D2D network. Most prior works on D2D coded caching are based on the assumption that all users will request content at the beginning of the delivery phase. However, in practice, this is often not the case. Motivated by this consideration, this paper formulates a new problem called request-robust D2D coded caching. The considered problem includes K users and a content server with access to N files. Only r users, known as requesters, request a file each at the beginning of the delivery phase. The objective is to minimize the average and worst-case delivery rate, i.e., the average and worst-case number of broadcast bits from all users among all possible demands. For this novel D2D coded caching problem, we propose a scheme based on uncoded cache placement and exploiting common demands and one-shot delivery. We also propose information-theoretic converse results under the assumption of uncoded cache placement. Furthermore, we adapt the scheme proposed by Yapar et al. for uncoded cache placement and one-shot delivery to the request-robust D2D coded caching problem and prove that the performance of the adapted scheme is order optimal within a factor of two under uncoded cache placement and within a factor of four in general. Finally, through numerical evaluations, we show that the proposed scheme outperforms known D2D coded caching schemes applied to the request-robust scenario for most cache size ranges. Full article
(This article belongs to the Special Issue Information Theory and Network Coding II)

11 pages, 443 KB  
Article
Data Placement Using a Classifier for SLC/QLC Hybrid SSDs
by Heeseong Cho and Taeseok Kim
Appl. Sci. 2024, 14(4), 1648; https://doi.org/10.3390/app14041648 - 18 Feb 2024
Cited by 2 | Viewed by 3195
Abstract
In hybrid SSDs (solid-state drives) consisting of SLC (single-level cell) and QLC (quad-level cell), efficiently using the limited SLC cache space is crucial. In this paper, we present a practical data placement scheme, which determines the placement location of incoming write requests using a lightweight machine-learning model. It leverages information about I/O workload characteristics and SSD status to identify cold data that does not need to be stored in the SLC cache with high accuracy. By strategically bypassing the SLC cache for cold data, our scheme significantly reduces unnecessary data movements between the SLC and QLC regions, improving the overall efficiency of the SSD. Through simulation-based studies using real-world workloads, we demonstrate that our scheme outperforms existing approaches by up to 44%. Full article
(This article belongs to the Special Issue Resource Management for Emerging Computing Systems)
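A "lightweight machine-learning model" for cold-data detection could be as small as a linear classifier over a few workload features. The sketch below is entirely hypothetical: the features, weights, and threshold are invented for illustration and are not the paper's trained model.

```python
import math

def is_cold(write, w=(-0.8, 0.5, 0.6), bias=0.2):
    """Tiny logistic classifier over assumed workload features:
    'recent_freq'   - recent access frequency (high -> hot data),
    'sequential'    - degree of sequentiality (streams tend to be cold),
    'slc_pressure'  - current SLC cache fullness (pressure favors bypass).
    Returns True if the write is predicted cold."""
    features = (write["recent_freq"], write["sequential"], write["slc_pressure"])
    z = bias + sum(wi * fi for wi, fi in zip(w, features))
    return 1 / (1 + math.exp(-z)) > 0.5

def placement(write):
    """Cold data bypasses the SLC cache and goes straight to QLC,
    avoiding a later SLC-to-QLC migration."""
    return "QLC" if is_cold(write) else "SLC"
```

The benefit claimed in the abstract comes precisely from this bypass: every correctly identified cold write saves one write into the scarce SLC region plus one eviction-time copy into QLC.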
