Energy-efficient and Scalable Embedded Memories for Future Technologies

A special issue of Journal of Low Power Electronics and Applications (ISSN 2079-9268).

Deadline for manuscript submissions: closed (5 December 2015) | Viewed by 26900

Special Issue Editor


Dr. Swaroop Ghosh
Guest Editor
Computer Science and Engineering, University of South Florida, USA
Interests: low-power, energy-efficient and robust circuit/system design and digital testing for nanometer technologies

Special Issue Information

Dear Colleagues,

Conventional CMOS memory, i.e., Static Random Access Memory (SRAM), has been the popular choice for embedded memory applications for the last several decades. However, SRAM appears to be approaching a brick wall. On the one hand, process variability and leakage power pose severe obstacles to scaling SRAM to future technology nodes; on the other hand, emerging energy-constrained and bandwidth-hungry electronic gadgets demand larger and more energy-efficient on-chip caches than SRAM can provide. To address the changing landscape of the consumer market, the design paradigm must change accordingly. What is really needed is a memory technology that is at least 50–100X denser and 1000X more energy-efficient than SRAM, while being just as fast.

This Special Issue will explore the latest trends in the embedded memory segment and discuss the most promising emerging memory technologies, whose storage elements are based on charge, spin, or resistance. The specific focus will be on the operating principles, design challenges, and solutions of non-volatile memories (NVM), including, but not limited to, Spin Transfer Torque RAM (STTRAM), Domain Wall Memory (DWM), Resistive RAM (RRAM), and Ferroelectric RAM (FeRAM). New uses of these emerging memory technologies in exciting applications, such as memory-based computation, hardware security, and neuromorphic computing, will also be explored.

Dr. Swaroop Ghosh
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Journal of Low Power Electronics and Applications is an international peer-reviewed open access quarterly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • low-power
  • embedded memory
  • spintronic memory
  • resistive memory
  • non-volatile memory
  • hardware security
  • neuromorphic computing
  • memory-based computing
  • cloud computing

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (3 papers)


Research

Article
A Sub-Threshold 8T SRAM Macro with 12.29 nW/KB Standby Power and 6.24 pJ/access for Battery-Less IoT SoCs
by Farah B. Yahya, Harsh N. Patel, James Boley, Arijit Banerjee and Benton H. Calhoun
J. Low Power Electron. Appl. 2016, 6(2), 8; https://doi.org/10.3390/jlpea6020008 - 24 May 2016
Cited by 12 | Viewed by 8537
Abstract
We present an ultra-low power (ULP) 1 KB SRAM macro for Internet of Things (IoT) battery-less systems-on-chip (SoCs) operating under varying energy harvesting conditions. The unique combination of features within this array allows battery-less SoCs to retain important information for a significantly longer period of time when energy harvesting conditions are poor. The array uses 8T high-threshold (high-VT) static random access memory (SRAM) cells with word line boosting to eliminate write failures, coupled with a read-before-write scheme to address read-disturb in half-selected cells. Due to the reduced on-current in high-VT devices, read word line boosting is implemented to improve the drive strength of the read buffer and to eliminate read failures. Leakage currents through the unselected cells during a read operation are addressed by boosting the footer virtual VSS (VVSS) of the read port to the supply voltage (VDD). To reduce the power consumption of instruction memories in battery-less SoCs, two features were utilized in this array: a read burst mode is used when reading consecutive addresses to reduce the read energy, and instructions with higher percentages of “1” data are defined since reading a “1” is less costly than reading a “0” in 8T cells. The proposed array can operate over a wide range of supply voltages (350–700 mV) and has two ULP modes: standby with retention (1.5 pW/bit) and shutdown without retention (0.13 pW/bit). Aggressive power gating of all peripherals during the standby state reduces the array power consumption down to 12.29 nW/KB at 320 mV with data retention. Compared to previously published 8T arrays, the proposed design provides the lowest standby power. The complete shutdown of the array allows further reduction down to 1.09 nW/KB and is suitable for reducing the power consumption of data memories in battery-less SoCs. The measured results from a commercial 130 nm chip show that the proposed array consumes a minimum of 6.24 pJ/access with a 17.16 nW standby power at 400 mV. The read burst mode allows up to a 22% reduction in energy/access at 400 mV.
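
The per-kilobyte standby figures quoted above follow directly from the per-bit retention numbers. As a quick sanity check, the short Python sketch below reproduces the conversion under the assumption of 1 KB = 8192 bits; the small mismatch on the shutdown figure is simply rounding of the quoted per-bit value.

```python
# Consistency check of the standby/shutdown figures quoted in the abstract.
# Assumption: 1 KB = 8 * 1024 = 8192 bits.
BITS_PER_KB = 8 * 1024

standby_pw_per_bit = 1.5    # pW/bit, standby with retention (from the abstract)
shutdown_pw_per_bit = 0.13  # pW/bit, shutdown without retention (from the abstract)

standby_nw_per_kb = standby_pw_per_bit * BITS_PER_KB / 1000    # pW -> nW
shutdown_nw_per_kb = shutdown_pw_per_bit * BITS_PER_KB / 1000  # pW -> nW

print(f"standby:  {standby_nw_per_kb:.2f} nW/KB")   # ~12.29 nW/KB, matching the quoted figure
print(f"shutdown: {shutdown_nw_per_kb:.2f} nW/KB")  # ~1.06 nW/KB, close to the quoted 1.09 nW/KB
```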

Article
A Survey of Cache Bypassing Techniques
by Sparsh Mittal
J. Low Power Electron. Appl. 2016, 6(2), 5; https://doi.org/10.3390/jlpea6020005 - 28 Apr 2016
Cited by 27 | Viewed by 10595
Abstract
With increasing core counts, the cache demand of modern processors has also increased. However, due to strict area/power budgets and the presence of workloads with poor data locality, blindly scaling cache capacity is both infeasible and ineffective. Cache bypassing is a promising technique to increase effective cache capacity without incurring the power/area costs of a larger cache. However, injudicious use of cache bypassing can lead to bandwidth congestion and an increased miss rate; hence, intelligent techniques are required to harness its full potential. This paper presents a survey of cache bypassing techniques for CPUs, GPUs and CPU-GPU heterogeneous systems, and for caches designed with SRAM, non-volatile memory (NVM) and die-stacked DRAM. By classifying the techniques based on key parameters, it underscores their differences and similarities. We hope that this paper will provide insights into cache bypassing techniques and the associated tradeoffs, and will be useful for computer architects, system designers and other researchers.
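
To make the core idea concrete, the minimal Python sketch below models one generic flavor of bypassing: a per-PC confidence counter, trained on whether inserted blocks are reused before eviction, decides whether a missing block is allocated in the cache or returned to the requester without allocation. This is our own simplified illustration of the concept under assumed parameters, not a specific scheme from the survey.

```python
# Generic sketch of reuse-predictor-based cache bypassing (illustrative only).
# A per-PC counter is rewarded when an inserted block is reused and penalized
# when a block is evicted dead; PCs with low confidence have their misses bypassed.
from collections import OrderedDict, defaultdict

class BypassingCache:
    def __init__(self, capacity=4, threshold=0):
        self.capacity = capacity
        self.threshold = threshold
        self.lines = OrderedDict()          # addr -> [data, inserting_pc, reused_flag], LRU order
        self.confidence = defaultdict(int)  # PC -> saturating confidence counter

    def access(self, pc, addr, fetch):
        if addr in self.lines:              # hit: mark reuse and reward the inserting PC
            entry = self.lines[addr]
            entry[2] = True
            self.lines.move_to_end(addr)
            self.confidence[entry[1]] = min(self.confidence[entry[1]] + 1, 3)
            return entry[0]
        data = fetch(addr)                  # miss: fetch from the next level
        if self.confidence[pc] >= self.threshold:
            if len(self.lines) >= self.capacity:
                _, (vdata, vpc, reused) = self.lines.popitem(last=False)
                if not reused:              # evicted without reuse: penalize its inserting PC
                    self.confidence[vpc] = max(self.confidence[vpc] - 1, -3)
            self.lines[addr] = [data, pc, False]
        # else: bypass the cache entirely and hand the data straight to the core
        return data
```

The design choice here is that allocation is the default and bypassing only kicks in after a PC repeatedly inserts blocks that die without reuse, which avoids the cold-start problem of never caching anything.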

Article
A Technique for Improving Lifetime of Non-Volatile Caches Using Write-Minimization
by Sparsh Mittal and Jeffrey Vetter
J. Low Power Electron. Appl. 2016, 6(1), 1; https://doi.org/10.3390/jlpea6010001 - 18 Jan 2016
Cited by 11 | Viewed by 7166
Abstract
While non-volatile memories (NVMs) provide high density and low leakage, they also have low write endurance. This, along with the write variation introduced by cache management policies, can lead to a very small cache lifetime. In this paper, we propose ENLIVE, a technique for ENhancing the LIfetime of non-Volatile cachEs. Our technique uses a small SRAM (static random access memory) storage, called the HotStore. ENLIVE detects frequently written blocks and transfers them to the HotStore so that they can be accessed with lower latency and energy. This also reduces the number of writes to the NVM cache, which improves its lifetime. We present microarchitectural schemes for managing the HotStore. Simulations have been performed using an x86-64 simulator and benchmarks from the SPEC2006 suite. We observe that ENLIVE provides a higher improvement in lifetime and better performance and energy efficiency than two state-of-the-art techniques for improving NVM cache lifetime. ENLIVE provides 8.47×, 14.67× and 15.79× improvement in lifetime for two-, four- and eight-core systems, respectively. In addition, it works well over a range of system and algorithm parameters and incurs only a small overhead.
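
Behaviorally, the write-minimization idea can be sketched as follows: count writes per block, and once a block crosses a hotness threshold, service it from a small SRAM HotStore instead of the NVM array. The Python model below is a minimal sketch of that idea under assumed parameters (HotStore size, threshold, FIFO eviction); it is not the microarchitectural scheme presented in the paper.

```python
# Behavioral sketch of write-minimization with a small SRAM "HotStore" in front of
# an NVM cache (simplified model; thresholds and eviction policy are assumptions).
from collections import OrderedDict, defaultdict

class HotStoreCache:
    def __init__(self, hotstore_entries=8, hot_threshold=4):
        self.hotstore = OrderedDict()         # addr -> data, held in SRAM
        self.hotstore_entries = hotstore_entries
        self.hot_threshold = hot_threshold
        self.nvm = {}                         # addr -> data, the NVM cache array
        self.write_count = defaultdict(int)   # per-block write counters
        self.nvm_writes = 0                   # endurance-relevant metric

    def write(self, addr, data):
        if addr in self.hotstore:             # hot block: absorb the write in SRAM
            self.hotstore[addr] = data
            self.hotstore.move_to_end(addr)
            return
        self.write_count[addr] += 1
        if self.write_count[addr] >= self.hot_threshold:
            if len(self.hotstore) >= self.hotstore_entries:
                old_addr, old_data = self.hotstore.popitem(last=False)
                self.nvm[old_addr] = old_data  # write back the evicted hot block
                self.nvm_writes += 1
                self.write_count[old_addr] = 0
            self.hotstore[addr] = data         # promote the frequently written block
            self.nvm.pop(addr, None)
        else:
            self.nvm[addr] = data              # cold block: write goes to NVM
            self.nvm_writes += 1

    def read(self, addr):
        if addr in self.hotstore:
            return self.hotstore[addr]
        return self.nvm.get(addr)
```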
