Search Results (46)

Search Parameters:
Keywords = Von Neumann’s bottleneck

12 pages, 2513 KiB  
Article
Optoelectronic Memristor Based on ZnO/Cu2O for Artificial Synapses and Visual System
by Chen Meng, Hongxin Liu, Tong Li, Jin Luo and Sijie Zhang
Electronics 2025, 14(12), 2490; https://doi.org/10.3390/electronics14122490 - 19 Jun 2025
Viewed by 421
Abstract
The development of artificial intelligence has posed significant challenges to conventional von Neumann architectures, including the separation of storage and computation and the resulting power consumption bottlenecks. The new generation of brain-like devices is evolving rapidly toward high-density integration and integrated sensing, storage, and computing. The structural and information-transmission similarity between memristors and biological synapses gives them unique potential in sensing and memory, making memristors strong candidates for neural devices. In this paper, we design an optoelectronic memristor based on a ZnO/Cu2O structure that achieves synaptic behavior through the modulation of electrical signals, and we demonstrate dataset recognition with a neural network. Furthermore, optical synaptic functions such as short-term/long-term potentiation and learn-forget-relearn behavior, as well as advanced optoelectronically modulated synaptic behavior, are successfully simulated. The light-induced conductance enhancement is explained by the barrier change at the interface. This work explores a new pathway for constructing next-generation optoelectronic synaptic devices and lays the foundation for future brain-like visual chips and intelligent perceptual devices.

11 pages, 5145 KiB  
Article
Island-like Perovskite Photoelectric Synaptic Transistor with ZnO Channel Layer Deposited by Low-Temperature Atomic Layer Deposition
by Jiahui Liu, Yuliang Ye and Zunxian Yang
Materials 2025, 18(12), 2879; https://doi.org/10.3390/ma18122879 - 18 Jun 2025
Viewed by 348
Abstract
Artificial photoelectric synapses exhibit great potential for overcoming the Von Neumann bottleneck in computational systems. All-inorganic halide perovskites hold considerable promise for photoelectric synapses due to their superior photon-harvesting efficiency. In this study, a novel wavy-structured CsPbBr3/ZnO hybrid film was realized by depositing zinc oxide (ZnO) onto an island-like CsPbBr3 film via atomic layer deposition (ALD) at 70 °C. Because ALD can grow high-quality films over small surface areas, a dense, thin ZnO film filled the gaps between the island-shaped CsPbBr3 grains, reducing light-absorption losses and enabling efficient charge transport between the CsPbBr3 light absorber and the ZnO electron-transport layer. This ZnO/island-like CsPbBr3 hybrid synaptic transistor could operate at a drain-source voltage of 1.0 V and a gate-source voltage of 0 V, triggered by green light (500 nm) pulses with a low light intensity of 0.035 mW/cm². The device exhibited a quiescent current of ~0.5 nA. Notably, after patterning, it achieved a significantly reduced off-state current of 10⁻¹¹ A and a quiescent current of 0.02 nA. In addition, this transistor was able to mimic fundamental synaptic behaviors, including excitatory postsynaptic currents (EPSCs), paired-pulse facilitation (PPF), short-term to long-term plasticity (STP to LTP) transitions, and learning-experience behaviors. This straightforward strategy demonstrates the feasibility of neuromorphic synaptic devices operating under low-voltage and weak-light conditions.
(This article belongs to the Section Electronic Materials)

24 pages, 3425 KiB  
Article
A Neural Network Compiler for Efficient Data Storage Optimization in ReRAM-Based DNN Accelerators
by Hsu-Yu Kao, Liang-Ying Su, Shih-Hsu Huang and Wei-Kai Cheng
Electronics 2025, 14(12), 2352; https://doi.org/10.3390/electronics14122352 - 8 Jun 2025
Cited by 1 | Viewed by 492
Abstract
ReRAM-based DNN accelerators have emerged as a promising solution to mitigate the von Neumann bottleneck. While prior research has introduced tools for simulating the hardware behavior of ReRAM’s non-linear characteristics, there remains a notable gap in high-level design automation tools capable of efficiently deploying DNN models onto ReRAM-based accelerators with simultaneous optimization of execution time and memory usage. In this paper, we propose a neural network compiler built on the open-source TVM framework to address this challenge. The compiler incorporates both layer fusion and model partitioning techniques to enhance data storage efficiency. The core contribution of our work is an algorithm that determines the optimal mapping strategy by jointly considering layer fusion and model partitioning under hardware resource constraints. Experimental evaluations demonstrate that the proposed compiler adapts effectively to varying hardware resource limitations, enabling efficient storage optimization and supporting early-stage design space exploration.
(This article belongs to the Special Issue Research on Key Technologies for Hardware Acceleration)
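As a rough illustration of what "model partitioning under hardware resource constraints" entails, the sketch below greedily groups consecutive layers so that each group's weight footprint fits a per-tile memory budget. This is not the paper's algorithm, and the layer names, sizes, and budget are hypothetical.

```python
# Illustrative sketch only: greedily group consecutive DNN layers into partitions
# whose combined weight footprint fits a hypothetical per-tile memory budget.
# Layer names, sizes, and the budget are made-up values, not figures from the paper.

def partition_layers(layer_weight_bytes, tile_budget_bytes):
    """Return lists of consecutive layer names whose total weights fit one tile."""
    partitions, current, used = [], [], 0
    for name, size in layer_weight_bytes:
        if size > tile_budget_bytes:
            raise ValueError(f"layer {name} alone exceeds the tile budget")
        if used + size > tile_budget_bytes:   # close the current partition
            partitions.append(current)
            current, used = [], 0
        current.append(name)
        used += size
    if current:
        partitions.append(current)
    return partitions

layers = [("conv1", 64_000), ("conv2", 256_000), ("conv3", 256_000),
          ("fc1", 512_000), ("fc2", 128_000)]
print(partition_layers(layers, tile_budget_bytes=600_000))
# [['conv1', 'conv2', 'conv3'], ['fc1'], ['fc2']]
```

A real compiler would additionally weigh layer fusion against partitioning and optimize execution time, as the abstract describes.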

30 pages, 2809 KiB  
Review
A Survey on Computing-in-Memory (CiM) and Emerging Nonvolatile Memory (NVM) Simulators
by John Taylor Maurer, Ahmed Mamdouh Mohamed Ahmed, Parsa Khorrami, Sabrina Hassan Moon and Dayane Alfenas Reis
Chips 2025, 4(2), 19; https://doi.org/10.3390/chips4020019 - 3 May 2025
Viewed by 1737
Abstract
Modern computer applications have become highly data-intensive, giving rise to an increase in data traffic between the processor and memory units. Computing-in-Memory (CiM) has shown great promise as a solution to this aptly named von Neumann bottleneck problem by enabling computation within the memory unit and thus reducing data traffic. Many simulation tools in the literature have been proposed to enable the design space exploration (DSE) of these novel computer architectures as researchers are in need of these tools to test their designs prior to fabrication. This paper presents a collection of classical nonvolatile memory (NVM) and CiM simulation tools to showcase their capabilities, as presented in their respective analyses. We provide an in-depth overview of DSE, emerging NVM device technologies, and popular CiM architectures. We organize the simulation tools by design-level scopes with respect to their focus on the devices, circuits, architectures, systems/algorithms, and applications they support. We conclude this work by identifying the gaps within the simulation space.
(This article belongs to the Special Issue Magnetoresistive Random-Access Memory (MRAM): Present and Future)

10 pages, 4044 KiB  
Article
Photonic–Electronic Modulated a-IGZO Synaptic Transistor with High Linearity Conductance Modulation and Energy-Efficient Multimodal Learning
by Zhidong Hou, Jinrong Shen, Yiming Zhong and Dongping Wu
Micromachines 2025, 16(5), 517; https://doi.org/10.3390/mi16050517 - 28 Apr 2025
Viewed by 696
Abstract
Brain-inspired neuromorphic computing is expected to overcome the von Neumann bottleneck by eliminating the memory wall between processing and memory units. Nevertheless, critical challenges persist in synaptic device implementation, particularly regarding nonlinear/asymmetric conductance modulation and multilevel conductance states, which substantially impede the realization of high-performance neuromorphic hardware. This study advances photonic–electronic modulated synaptic devices through the development of an amorphous indium–gallium–zinc oxide (a-IGZO) synaptic transistor. The device exhibits biological synaptic functionalities, including excitatory/inhibitory post-synaptic currents (EPSCs/IPSCs) and spike-timing-dependent plasticity, while achieving excellent conductance modulation characteristics (nonlinearity of 0.0095/−0.0115 and asymmetric ratio of 0.247) and successfully implementing Pavlovian associative learning paradigms. Notably, systematic neural network simulations employing the experimental parameters reveal a 93.8% recognition accuracy on the MNIST handwritten digit dataset. The a-IGZO synaptic transistor with photonic–electronic co-modulation serves as a potential critical building block for constructing neuromorphic architectures with human-brain efficiency.
(This article belongs to the Section D1: Semiconductor Devices)
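For context on the nonlinearity figures quoted above, neural-network simulations of synaptic devices commonly fit the measured conductance response to an exponential pulse-update model. The sketch below shows one widely used form of such a model; it is an assumption for illustration, not necessarily the model or parameter definition used in this paper, and all numbers are made up.

```python
# Hedged sketch: a commonly used empirical potentiation curve for synaptic-device
# simulations. nu is the nonlinearity parameter (nu -> 0 gives a linear update);
# g_min, g_max, and the pulse count are illustrative values only.
import numpy as np

def conductance_curve(num_pulses, g_min, g_max, nu):
    """Conductance reached after each of num_pulses identical potentiation pulses."""
    n = np.arange(1, num_pulses + 1)
    if abs(nu) < 1e-6:                      # linear limit
        return g_min + (g_max - g_min) * n / num_pulses
    b = (g_max - g_min) / (1 - np.exp(-nu))
    return g_min + b * (1 - np.exp(-nu * n / num_pulses))

g = conductance_curve(num_pulses=64, g_min=1e-9, g_max=1e-7, nu=0.5)
print(g[0], g[-1])   # first and last conductance states, in siemens
```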

20 pages, 6272 KiB  
Review
Flash Memory for Synaptic Plasticity in Neuromorphic Computing: A Review
by Jisung Im, Sangyeon Pak, Sung-Yun Woo, Wonjun Shin and Sung-Tae Lee
Biomimetics 2025, 10(2), 121; https://doi.org/10.3390/biomimetics10020121 - 18 Feb 2025
Viewed by 1817
Abstract
The rapid expansion of data has made global access easier, but it also demands increasing amounts of energy for data storage and processing. In response, neuromorphic electronics, inspired by the functionality of biological neurons and synapses, have emerged as a growing area of research. These devices enable in-memory computing, helping to overcome the “von Neumann bottleneck”, a limitation caused by the separation of memory and processing units in traditional von Neumann architecture. By leveraging multi-bit non-volatility, biologically inspired features, and Ohm’s law, synaptic devices show great potential for reducing energy consumption in multiplication and accumulation operations. Within the various non-volatile memory technologies available, flash memory stands out as a highly competitive option for storing large volumes of data. This review highlights recent advancements in neuromorphic computing that utilize NOR, AND, and NAND flash memory. This review also delves into the array architecture, operational methods, and electrical properties of NOR, AND, and NAND flash memory, emphasizing its application in different neural network designs. By providing a detailed overview of flash memory-based neuromorphic computing, this review offers valuable insights into optimizing its use across diverse applications.
(This article belongs to the Section Biomimetic Design, Constructions and Devices)
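The Ohm's-law multiply-and-accumulate mentioned in this abstract can be pictured with a few lines of NumPy: stored weights behave as cell conductances, inputs as word-line voltages, and each bit-line current is the resulting dot product. The array size and values below are illustrative only and ignore quantization and read noise.

```python
# Illustrative analog MAC via Ohm's law and Kirchhoff's current law.
# G[i, j] is the conductance of the cell at word line i, bit line j (siemens);
# V[i] is the voltage applied to word line i (volts). All values are made up.
import numpy as np

rng = np.random.default_rng(0)
G = rng.uniform(1e-7, 1e-5, size=(4, 3))   # 4 word lines x 3 bit lines
V = np.array([0.0, 0.2, 0.2, 0.1])         # input voltages

I_bitline = V @ G                           # I_j = sum_i G[i, j] * V[i]
print(I_bitline)                            # one accumulated current per bit line
```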

16 pages, 4008 KiB  
Article
Low-Power 8T SRAM Compute-in-Memory Macro for Edge AI Processors
by Hye-Ju Shin and Sung-Hun Jo
Appl. Sci. 2024, 14(23), 10924; https://doi.org/10.3390/app142310924 - 25 Nov 2024
Viewed by 1978
Abstract
The traditional Von Neumann architecture creates bottlenecks due to data movement. The compute-in-memory (CIM) architecture performs computations within memory bit-cell arrays, enhancing computational performance. Edge devices utilizing artificial intelligence (AI) address real-time problems and have established themselves as groundbreaking technology. The 8T structure proposed in this paper has strengths over other existing structures in that it better withstands environmental changes within the SRAM and consumes lower power during memory operation. This structure minimizes reliance on complex ADCs, instead utilizing a simplified voltage differential approach for multiply-and-accumulate (MAC) operations, which enhances both power efficiency and stability. Based on these strengths, it can achieve higher battery efficiency in AI edge devices and improve system performance. The proposed integrated circuit was simulated in a 90 nm CMOS process and operated on a 1 V supply voltage.

21 pages, 20669 KiB  
Article
Logic-Compatible Embedded DRAM Architecture for Multifunctional Digital Storage and Compute-in-Memory
by Taehoon Kim and Yeonbae Chung
Appl. Sci. 2024, 14(21), 9749; https://doi.org/10.3390/app14219749 - 25 Oct 2024
Viewed by 2185
Abstract
Compute-in-memory (CIM), which embeds computation inside memory, is an attractive scheme to circumvent von Neumann bottlenecks. This study proposes a logic-compatible embedded DRAM architecture that supports data storage as well as versatile digital computations. The proposed configurable memory unit operates in three modes: (1) memory mode, in which it works as a normal dynamic memory; (2) logic–arithmetic mode, in which it performs bit-wise Boolean logic and full-adder operations on two words stored within the memory array; and (3) convolution mode, in which it executes digital XNOR-and-accumulate (XAC) operations for binarized neural networks. A 1.0-V 4096-word × 8-bit computational DRAM implemented in a 45-nanometer CMOS technology performs memory, logic, and arithmetic operations at 241, 229, and 224 MHz while consuming 7.92, 8.09, and 8.19 pJ/cycle, respectively. Compared with conventional digital computing, it reduces the energy and latency of the arithmetic operation by at least 47% and 46%, respectively. At VDD = 1.0 V, the proposed CIM unit performs two 128-input XAC operations at 292 MHz with an energy consumption of 20.8 pJ/cycle, achieving 24.6 TOPS/W. This marks at least 11.9× better energy efficiency and 38.8× lower delay, yielding at least 461× better energy-delay product than traditional 8-bit-wide computing hardware.
(This article belongs to the Section Electrical, Electronics and Communications Engineering)
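Two details of this abstract are easy to check or illustrate. First, the quoted 461× energy-delay-product gain is consistent with the component figures, since 11.9 × 38.8 ≈ 461. Second, the XNOR-and-accumulate (XAC) primitive it relies on is the standard binarized-neural-network identity sketched below; the operands are random 128-element vectors for illustration, and the code is not a model of the paper's circuit.

```python
# Sketch of XNOR-and-accumulate (XAC): with weights and activations in {-1, +1},
# a dot product equals 2 * popcount(XNOR(w, x)) - N. Operands are illustrative.
import random

N = 128                                     # matches the 128-input XAC above
w = [random.choice([-1, 1]) for _ in range(N)]
x = [random.choice([-1, 1]) for _ in range(N)]

ref = sum(wi * xi for wi, xi in zip(w, x))  # reference +/-1 dot product

wb = [(wi + 1) // 2 for wi in w]            # encode +1 -> 1, -1 -> 0
xb = [(xi + 1) // 2 for xi in x]
matches = sum(1 for a, b in zip(wb, xb) if a == b)   # popcount of XNOR
xac = 2 * matches - N                       # map the match count back to a signed sum

assert xac == ref
print(xac)
```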

27 pages, 7049 KiB  
Review
Quantum Dots for Resistive Switching Memory and Artificial Synapse
by Gyeongpyo Kim, Seoyoung Park and Sungjun Kim
Nanomaterials 2024, 14(19), 1575; https://doi.org/10.3390/nano14191575 - 29 Sep 2024
Cited by 3 | Viewed by 2863
Abstract
Memristor devices for resistive-switching memory and artificial synapses have emerged as promising solutions for overcoming the technological challenges associated with the von Neumann bottleneck. Recently, due to their unique optoelectronic properties, solution processability, fast switching speeds, and low operating voltages, quantum dots (QDs) have drawn substantial research attention as candidate materials for memristors and artificial synapses. This review covers recent advancements in QD-based resistive random-access memory (RRAM) for resistive memory devices and artificial synapses. Following a brief introduction to QDs, the fundamental principles of the switching mechanism in RRAM are introduced. Then, the RRAM materials, synthesis techniques, and device performance are summarized for a relative comparison of RRAM materials. Finally, we introduce QD-based RRAM and discuss the challenges associated with its implementation in memristors and artificial synapses.
(This article belongs to the Special Issue Nanostructured Materials for Electric Applications)

13 pages, 20047 KiB  
Article
Bi-Directional and Operand-Controllable In-Memory Computing for Boolean Logic and Search Operations with Row and Column Directional SRAM (RC-SRAM)
by Han Xiao, Ruiyong Zhao, Yulan Liu, Yuanzhen Liu and Jing Chen
Micromachines 2024, 15(8), 1056; https://doi.org/10.3390/mi15081056 - 22 Aug 2024
Cited by 1 | Viewed by 1375
Abstract
The von Neumann architecture is no longer sufficient for handling large-scale data. In-memory computing has emerged as a potent method for breaking through the memory bottleneck. A new 10T SRAM bitcell with row and column control lines, called RC-SRAM, is proposed in this article. The architecture based on RC-SRAM can achieve bi-directional and operand-controllable logic-in-memory and search operations through different signal configurations, allowing it to address a wide range of application scenarios and requirements. Moreover, we propose threshold-controlled logic gates for sensing, which effectively reduce the circuit area and improve accuracy. We validate the RC-SRAM in a 28 nm CMOS technology, and the results show that the circuits are not only full-featured and flexible for customization but also achieve a significant increase in working frequency. At VDD = 0.9 V and T = 25 °C, the bi-directional search frequencies reach up to 775 MHz and 567 MHz, and the speeds for row and column Boolean logic reach 759 MHz and 683 MHz.
(This article belongs to the Special Issue Emerging Memory Materials and Devices)

11 pages, 2934 KiB  
Article
Efficient Processing-in-Memory System Based on RISC-V Instruction Set Architecture
by Jihwan Lim, Jeonghun Son and Hoyoung Yoo
Electronics 2024, 13(15), 2971; https://doi.org/10.3390/electronics13152971 - 27 Jul 2024
Cited by 1 | Viewed by 3323
Abstract
Extensive research on deep learning and big data has produced efficient methods for processing large volumes of data and for conserving computing resources. Particularly in domains like the IoT (Internet of Things), where computing power is constrained, efficiently processing large volumes of data to conserve resources is crucial. The processing-in-memory (PIM) architecture was introduced as a method for efficient large-scale data processing. However, PIM research focuses on changes within the memory itself rather than addressing the needs of low-cost solutions such as the IoT. This paper proposes a new approach using the PIM architecture to effectively overcome memory bottlenecks in domains with computing performance constraints. We adopt the RISC-V instruction set architecture for our proposed PIM system’s design, implementation, and comprehensive performance evaluation. Our proposal aims to make efficient use of low-spec systems such as IoT devices by minimizing core modifications and introducing PIM instructions at the ISA level, enabling solutions that leverage PIM capabilities. We evaluate the performance of our proposed architecture by comparing it with existing structures using convolution operations, the fundamental unit of deep-learning and big data computations. The experimental results show our proposed structure achieves a 34.4% improvement in processing speed and an 18% improvement in power consumption compared to conventional von Neumann-based architectures. This substantiates its effectiveness at the application level, extending to fields such as deep learning and big data.
(This article belongs to the Special Issue Embedded Systems for Neural Network Applications)

19 pages, 6009 KiB  
Article
Efficient Data Transfer and Multi-Bit Multiplier Design in Processing in Memory
by Jingru Sun, Zerui Li, Meiqi Jiang and Yichuang Sun
Micromachines 2024, 15(6), 770; https://doi.org/10.3390/mi15060770 - 9 Jun 2024
Cited by 3 | Viewed by 1690
Abstract
Processing in Memory based on memristors is considered the most effective solution to overcome the Von Neumann bottleneck issue and has become a hot research topic. The execution efficiency of logical computation and in-memory data transmission is crucial for Processing in Memory. This paper presents a design scheme for data transmission and multi-bit multipliers within MAT (a data storage set in MPU) based on the memristive alternating crossbar array structure. Firstly, to improve the data transfer efficiency, we reserve the edge row and column of the array as assistant cells for OR AND (OA) and AND data transmission logic operations to reduce the data transfer steps. Furthermore, we convert the multipliers into multi-bit addition operations via Multiple Input Multiple Output (MIMO) logical operations, which effectively improves the execution efficiency of multipliers. PSpice simulation shows that the proposed data transmission and multi-bit multiplier solution has lower latency and power consumption and higher efficiency and flexibility.
(This article belongs to the Section E: Engineering and Technology)
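As background for the idea of converting a multiplier into addition operations, the sketch below shows the plain shift-and-add decomposition of a multi-bit multiply. It is only the arithmetic identity; the paper's MIMO in-memory logic scheme is not reproduced here, and the 8-bit operand width is an arbitrary choice.

```python
# Illustrative shift-and-add decomposition: a multi-bit multiplication realized
# entirely with shifts and additions, the arithmetic that in-memory adders exploit.

def multiply_by_additions(a: int, b: int, width: int = 8) -> int:
    """Multiply two unsigned integers using only shifts and additions."""
    product = 0
    for bit in range(width):
        if (b >> bit) & 1:            # if this bit of the multiplier is set,
            product += a << bit       # add the correspondingly shifted multiplicand
    return product

assert multiply_by_additions(13, 11) == 13 * 11
print(multiply_by_additions(200, 37))   # 7400
```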

11 pages, 3764 KiB  
Article
Enhancing the Uniformity of a Memristor Using a Bilayer Dielectric Structure
by Yulin Liu, Qilai Chen, Yanbo Guo, Bingjie Guo, Gang Liu, Yanchao Liu, Lei He, Yutong Li, Jingyan He and Minghua Tang
Micromachines 2024, 15(5), 605; https://doi.org/10.3390/mi15050605 - 30 Apr 2024
Cited by 3 | Viewed by 1744
Abstract
Resistive random access memory (RRAM) holds great promise for in-memory computing, which is considered the most promising strategy for solving the von Neumann bottleneck. However, significant problems remain in its application due to the non-uniform performance of RRAM devices. In this work, a bilayer-dielectric memristor was designed based on the difference in the Gibbs free energy of the oxides. We fabricated Au/Ta2O5/HfO2/Ta/Pt (S3) devices with excellent uniformity. Compared with Au/HfO2/Pt (S1) and Au/Ta2O5/Pt (S2) devices, the S3 device has a low reset-voltage fluctuation of 2.44%, and the coefficients of variation of its resistance are 13.12% and 3.84% in the HRS and LRS, respectively, over 200 cycles. Moreover, the bilayer device has better linearity and more conductance states in multi-state regulation. We also analyze the physical mechanism of the bilayer device and provide a physical model of ion migration. This work provides a new approach for designing and fabricating resistive devices with stable performance.

14 pages, 6733 KiB  
Article
Analyzing Various Structural and Temperature Characteristics of Floating Gate Field Effect Transistors Applicable to Fine-Grain Logic-in-Memory Devices
by Sangki Cho, Sueyeon Kim, Myounggon Kang, Seungjae Baik and Jongwook Jeon
Micromachines 2024, 15(4), 450; https://doi.org/10.3390/mi15040450 - 27 Mar 2024
Cited by 2 | Viewed by 1642
Abstract
Although computing systems based on the von Neumann architecture have been used for a long time, their limitations in data processing, energy consumption, and other areas have led to research on various devices and circuit systems suitable for logic-in-memory (LiM) computing applications. In this paper, we analyze the temperature-dependent device and circuit characteristics of the floating-gate field-effect transistor (FGFET) source-drain barrier (SDB) and FGFET central shallow barrier (CSB) identified in previous papers and specifically confirm their applicability to LiM applications. These FGFETs have the advantage of being much more compatible with existing silicon-based complementary metal oxide semiconductor (CMOS) processes than devices using new materials, such as ferroelectrics, for LiM computing. Using the 32 nm technology node, the leading-edge node at which the planar metal oxide semiconductor field effect transistor structure is applied, FGFET devices were analyzed in TCAD, and an environment for analyzing circuits in HSPICE was established. To seamlessly connect FGFET-based device and circuit analyses, compact models of the FGFET-SDB and -CSB were developed and applied to the design of ternary content-addressable memory (TCAM) and full adder (FA) circuits for LiM. In addition, the depression characteristics of FGFET devices and their potential for application to neural networks were analyzed. The temperature-dependent characteristics of the TCAM and FA circuits with FGFETs were analyzed in terms of energy and delay time, indicating that an appropriate number of CSBs should be used.
(This article belongs to the Section D1: Semiconductor Devices)

16 pages, 1251 KiB  
Article
MeMPA: A Memory Mapped M-SIMD Co-Processor to Cope with the Memory Wall Issue
by Angela Guastamacchia, Andrea Coluccio, Fabrizio Riente, Giovanna Turvani, Mariagrazia Graziano, Maurizio Zamboni and Marco Vacca
Electronics 2024, 13(5), 854; https://doi.org/10.3390/electronics13050854 - 23 Feb 2024
Cited by 1 | Viewed by 13796
Abstract
The remarkable development of transistor technology has been the main driving force behind modern electronics. Over time, this progress has slowed, introducing performance bottlenecks in data-intensive applications. A main cause is the classical von Neumann architecture, which entails constant data exchanges between processing units and data memory, wasting time and power. As a possible alternative, the Beyond von Neumann approach is now rapidly spreading. Although architectures following this paradigm vary widely in layout and functioning, they all share the same principle: bringing computing elements as near as possible to memory and inserting customized processing elements capable of processing more data. Thus, power and time are saved through parallel execution and the use of processing components with local memory elements optimized for running data-intensive algorithms. Here, a new memory-mapped co-processor (MeMPA) is presented to boost system performance. MeMPA relies on a programmable matrix of fully interconnected processing blocks, each provided with memory elements, following the Multiple-Single Instruction Multiple Data model. Specifically, MeMPA can perform up to three different instructions, each on a different data block, concurrently. Hence, MeMPA efficiently processes data-crunching algorithms, achieving energy and time savings of up to 81.2% and 68.9%, respectively, compared with a RISC-V-based system.
(This article belongs to the Special Issue Advanced Memory Devices and Their Latest Applications)
