Novel Device for Computing-In Memory

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Microelectronics".

Deadline for manuscript submissions: closed (30 April 2023) | Viewed by 3864

Special Issue Editors


Guest Editor
Fraunhofer Institute for Photonic Microsystems IPMS, 01109 Dresden, Germany
Interests: semiconductor memory design; SRAM; DRAM; flash and ferroelectric memory cells; analog-to-digital converters; time-to-digital converters; computing-in-memory architecture design; device noise investigation and mitigation; process-variation-aware digital design; hardware implementation of soft-computing algorithms; CNN accelerator design; neuromorphic computing; low-power integrated circuit design

Guest Editor
Department of Electrical and Computer Engineering, Illinois Institute of Technology, Chicago, IL 60616, USA
Interests: DFP (Design for Power) and DFM (Design for Manufacturing) for ultra-low-power VLSI chip design and automation; circuit design for nanometer-scale devices such as carbon nanotube FETs

Special Issue Information

Dear Colleagues,

Today's computing is no longer limited to mapping an application graph of the desired work onto available hardware and then starting execution; it is increasingly application-specific and data-centric. The processor should be able to predict future machine cycles and improve performance through data-centric learning. This type of architecture requires high data-bus bandwidth and speed. The von Neumann architecture is widely used for its flexible design space, but its performance is limited by data movement between the processor and memory, which depends on memory speed and bandwidth. Computing in memory (CIM) is an alternative processor design technique in which processing is carried out inside the memory itself. It mainly attracts data-centric applications such as neuromorphic computing, machine learning, and soft computing. CIM may use traditional memories, such as SRAM, DRAM, and flash, along with memories based on emerging devices, such as the ferroelectric field-effect transistor (FeFET), spin-transfer torque magnetic random-access memory (STT-MRAM), and phase-change memory (PCM). Research on CIM focuses on different aspects of computing engine design, such as the device, circuit, architecture, accelerators, processor design, and algorithms. This Special Issue covers all topics related to the full CIM stack: devices, fabrication, measurement, data analysis, circuit design, architecture, algorithms, accelerator design, and applications.
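As one concrete illustration of the CIM principle described above (a generic sketch, not the method of any particular paper in this Issue): a resistive crossbar performs a matrix–vector multiply in place. Input voltages drive the rows, cell conductances encode the weights, and each column's bit-line current is the analog sum of products, so no operand ever moves to a separate processor. The function name and values below are hypothetical, chosen only to model that analog operation numerically:

```python
def crossbar_mac(voltages, conductances):
    """Model one analog multiply-accumulate step of a resistive CIM crossbar.

    voltages     -- input voltages applied to the rows (word lines)
    conductances -- matrix of cell conductances, conductances[row][col]

    Each cell contributes I = V * G (Ohm's law); each column sums its cell
    currents on the bit line (Kirchhoff's current law), yielding one
    dot product per column without moving data to a processor.
    """
    return [sum(v * g for v, g in zip(voltages, col))
            for col in zip(*conductances)]

# Two inputs driving a 2x2 crossbar (illustrative numbers only):
currents = crossbar_mac([1.0, 0.5], [[2.0, 1.0],
                                     [4.0, 2.0]])
# currents == [4.0, 2.0]
```

In a real device the column currents would then be digitized by per-column analog-to-digital converters; this sketch only captures the multiply-accumulate arithmetic that the array performs physically.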

Dr. Nandakishor Yadav
Dr. Ken Choi
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • computing in memory (CIM)
  • in-memory computing (IMC)
  • processing in memory (PIM)
  • computational modeling
  • central processing unit
  • accelerator design
  • image processing
  • artificial intelligence
  • machine learning
  • data-centric hardware
  • neuromorphic engineering
  • deep neural network (DNN)
  • SRAM
  • DRAM
  • flash
  • FeFET
  • MRAM
  • RRAM
  • TCAM
  • FPGA
  • ASIC

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (1 paper)


Research

10 pages, 1461 KiB  
Article
A Novel 8T XNOR-SRAM: Computing-in-Memory Design for Binary/Ternary Deep Neural Networks
by Nader Alnatsheh, Youngbae Kim, Jaeik Cho and Kyuwon Ken Choi
Electronics 2023, 12(4), 877; https://doi.org/10.3390/electronics12040877 - 9 Feb 2023
Cited by 3 | Viewed by 3266
Abstract
Deep neural networks (DNNs) and convolutional neural networks (CNNs) have improved accuracy in many artificial intelligence (AI) applications. Some of these applications are recognition and detection tasks, such as speech recognition, facial recognition, and object detection. On the other hand, CNN computation requires complex arithmetic and a lot of memory access time; thus, designing new hardware that increases efficiency and throughput without increasing the hardware cost is critical. This area of hardware design is very active and will continue to be in the near future. In this paper, we propose a novel 8T XNOR-SRAM design for binary/ternary DNNs (TBNs), directly supporting the XNOR-Network and TBN DNNs. The proposed SRAM computing-in-memory (CIM) design can operate in two modes: the first is conventional 6T SRAM, and the second is the XNOR mode. By adding two extra transistors to the conventional 6T structure, our proposed CIM showed improvements of up to 98% in power consumption and 90% in delay compared to the existing state-of-the-art XNOR-CIM.
(This article belongs to the Special Issue Novel Device for Computing-In Memory)
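The XNOR mode in the abstract above rests on a standard identity of binarized networks (general background, not a detail taken from the paper): when weights and activations are constrained to ±1 and encoded as bits (1 for +1, 0 for −1), a dot product reduces to a bitwise XNOR followed by a population count. A minimal sketch of that arithmetic, with a hypothetical helper name and encoding:

```python
def xnor_dot(a_bits, b_bits, n):
    """Binarized dot product via XNOR + popcount.

    a_bits, b_bits -- n-bit integers; bit value 1 encodes +1, 0 encodes -1.

    A bit position contributes +1 to the dot product when the two operands
    match (XNOR gives 1) and -1 when they differ, so with `matches` agreeing
    positions the result is matches - (n - matches) = 2*matches - n.
    """
    mask = (1 << n) - 1                          # keep only the low n bits
    matches = bin(~(a_bits ^ b_bits) & mask).count("1")
    return 2 * matches - n

# (+1, -1, +1, +1) . (+1, -1, -1, +1) = 1 + 1 - 1 + 1 = 2
assert xnor_dot(0b1011, 0b1001, 4) == 2
```

An XNOR-SRAM cell evaluates the XNOR of a stored weight bit and an input bit directly in the array, so the expensive multiply of a conventional MAC collapses into this cheap logic-plus-count, which is the source of the power and delay savings such designs report.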
