Open Access Article
Electronics 2018, 7(9), 172;

Access Adaptive and Thread-Aware Cache Partitioning in Multicore Systems

Institute of VLSI Design, Zhejiang University, Hangzhou 310027, China
School of Information & Electronic Engineering, Zhejiang Gongshang University, Hangzhou 310018, China
Author to whom correspondence should be addressed.
Received: 20 July 2018 / Revised: 12 August 2018 / Accepted: 27 August 2018 / Published: 1 September 2018
(This article belongs to the Special Issue Distributed Computing and Storage)


Cache partitioning is a successful technique for reducing the energy consumption of a shared cache, and existing studies focus on multi-program workloads running in multicore systems. In this paper, we are motivated by the fact that a multi-thread application generally executes faster than its single-thread counterpart and that its cache access behavior is quite different. Based on this observation, we study applications running in multi-thread mode and classify the data of multi-thread applications into shared and private categories, which reduces the interference between shared and private data and enables a more efficient cache partitioning scheme. We also propose a hardware structure to support these operations. We then propose an access adaptive and thread-aware cache partitioning (ATCP) scheme, which assigns separate cache portions to shared and private data, avoiding the evictions caused by conflicts between data of different categories in the shared cache. ATCP achieves lower energy consumption while improving application performance compared with the least-recently-used (LRU) managed, core-based even partitioning (EVEN), and utility-based cache partitioning (UCP) schemes. The experimental results show that ATCP achieves 29.6% and 19.9% average energy savings over the LRU and UCP schemes, respectively, in a quad-core system. Moreover, the average speedup of multi-thread ATCP with respect to single-thread LRU is 1.89.
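The core idea in the abstract — assigning shared and private data disjoint portions of a set-associative cache so that one category cannot evict the other's lines — can be illustrated with a small sketch. This is a toy model written for illustration only, not the paper's actual hardware design; the class and method names, and the idea of expressing each category's partition as a per-set way mask (loosely mirroring the paper's "way access permission registers"), are assumptions made here.

```python
class PartitionedSet:
    """Toy model of one set in a set-associative cache where each access
    category (e.g., 'shared' vs. 'private' data) may only allocate into
    the ways listed in its permission mask, so conflicts between
    categories cannot cause cross-category evictions."""

    def __init__(self, num_ways, way_mask):
        # way_mask maps a category name to the set of way indices it may use.
        self.ways = [None] * num_ways   # tag stored in each way (None = empty)
        self.lru = []                   # way indices ordered LRU-first
        self.way_mask = way_mask

    def access(self, tag, category):
        """Return True on a hit; on a miss, allocate only within the
        category's permitted ways and return False."""
        allowed = self.way_mask[category]
        for w in allowed:
            if self.ways[w] == tag:     # hit: move way to MRU position
                self.lru.remove(w)
                self.lru.append(w)
                return True
        # Miss: prefer a free permitted way, else evict the LRU permitted way.
        free = [w for w in allowed if self.ways[w] is None]
        if free:
            victim = free[0]
        else:
            victim = next(w for w in self.lru if w in allowed)
            self.lru.remove(victim)
        self.ways[victim] = tag
        self.lru.append(victim)
        return False
```

For example, in a 4-way set with shared data restricted to ways {0, 1} and private data to ways {2, 3}, a burst of private-data misses can never evict a resident shared line — which is the interference the scheme is designed to remove.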
Keywords: shared cache partitioning; thread-aware; access type classification; way access permission registers; thread-aware cache monitor; MILP


This is an open access article distributed under the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

MDPI and ACS Style

Huang, K.; Wang, K.; Zheng, D.; Zhang, X.; Yan, X. Access Adaptive and Thread-Aware Cache Partitioning in Multicore Systems. Electronics 2018, 7, 172.



Electronics EISSN 2079-9292, Published by MDPI AG, Basel, Switzerland