Systematic Review

A Systematic Review and Classification of HPC-Related Emerging Computing Technologies

1
ICT Research Institute, Tehran 14155-3961, Iran
2
Department of Software and IT Engineering, École de Technologie Supérieure, University of Quebec, Montreal, QC H3C 1K3, Canada
3
Department of Computer Engineering, Amirkabir University of Technology, Hafez, Tehran 15875-4413, Iran
*
Authors to whom correspondence should be addressed.
Electronics 2025, 14(12), 2476; https://doi.org/10.3390/electronics14122476
Submission received: 10 May 2025 / Revised: 13 June 2025 / Accepted: 16 June 2025 / Published: 18 June 2025

Abstract

In recent decades, access to powerful computational resources has brought about a major transformation in science, with supercomputers drawing significant attention from academia, industry, and governments. Among these resources, high-performance computing (HPC) has emerged as one of the most critical processing infrastructures, providing a suitable platform for evaluating and implementing novel technologies. In this context, the development of emerging computing technologies has opened up new horizons in information processing and the delivery of computing services. Accordingly, this paper systematically reviews and classifies emerging HPC-related computing technologies, including quantum computing, nanocomputing, in-memory architectures, neuromorphic systems, serverless paradigms, adiabatic technology, and biological solutions. Within the scope of this research, 142 studies, mostly published between 2018 and 2025, are analyzed, and relevant hardware solutions, domain-specific programming languages, frameworks, development tools, and simulation platforms are examined. The primary objective of this study is to identify the software and hardware dimensions of these technologies and analyze their roles in improving the performance, scalability, and efficiency of HPC systems. To this end, in addition to a literature review, statistical analysis methods are employed to assess the practical applicability and impact of these technologies across various domains, including scientific simulation, artificial intelligence, big data analytics, and cloud computing. The findings indicate that emerging HPC-related computing technologies can serve as complements or alternatives to classical computing architectures, driving substantial transformations in the design, implementation, and operation of high-performance computing infrastructures.
This article concludes by identifying existing challenges and future research directions in this rapidly evolving field.

1. Introduction

The emergence and expansion of high-performance computing (HPC) systems in recent decades have played a pivotal role in advancing the frontiers of science and technology [1]. These systems, with their immense computational power, serve as critical infrastructure for complex simulations, big data analytics, advanced system design, and the development of machine learning algorithms—playing an essential role across various industries and fundamental research domains [2].
HPC, as a driver of technological innovation, encompasses a wide range of applications from classical domains, such as modeling natural phenomena, fluid dynamics, computational biology, and pharmaceutical research, to emerging fields such as artificial intelligence, cryptography, and materials design [3]. HPC systems typically consist of a large number of processors and accelerators organized into a unified cluster that is accessible by multiple users or research groups. These systems, which can comprise hundreds of thousands of compute nodes, are capable of performing trillions of floating-point operations per second. As such, HPC is considered one of the most advanced and complex domains within information technology, with highly diverse and significant applications in science, economics, and engineering [4].
The historical roots of this field trace back to the development of the first supercomputers by Seymour Cray in the 1970s, marking the beginning of systems capable of extremely high computational power [5]. Over time, with the rise of parallel, clustered, and accelerator-based architectures (such as GPUs and FPGAs), HPC has evolved into a highly sophisticated field. Benchmarking initiatives like the TOP500 and Green500 provide authoritative performance and energy efficiency evaluations of these systems, primarily using tests such as Linpack [6]. These systems are built upon fundamental computational principles introduced by Alan Turing and John von Neumann. Turing’s model—known as the Turing Machine—underpins our understanding of the theoretical limits of computation, while von Neumann’s architecture, introducing the stored-program concept, became the standard for modern computer systems.
These foundational principles remain embedded in contemporary computing systems, with most computational advancements driven by hardware improvements adhering to these models. As articulated by Moore’s Law, the number of transistors in a chip roughly doubles every two years, leading to consistent growth in computing capabilities [7]. However, as we approach the physical and economic limitations of silicon-based technologies, the efficiency of traditional computing architectures—grounded in the Turing model and von Neumann’s architecture—is increasingly challenged. Key factors necessitating a paradigm shift in computing include
  • The diminishing returns of Moore’s Law;
  • Rising energy consumption costs;
  • Scalability constraints and memory management complexities;
  • The exponential growth of data and the demand for real-time processing in applications such as the Internet of Things, precision medicine, climate modeling, and robotics [8].
In response, the scientific community has increasingly turned to the development and evaluation of emerging computing technologies. These technologies aim to transcend the limitations of classical computing by leveraging principles from physics, biology, nanoscale systems, or nature-inspired models. Notable examples include quantum computing, neuromorphic computing, biocomputing, nanocomputing, adiabatic computing, in-memory computing, and serverless architectures—each offering unique capabilities to redefine future computational architectures and models.
Despite the rapid growth of research in this area, existing reviews are often limited to a few technologies or lack structured, analytical insights into their applicability within HPC systems. No study has examined these technologies in terms of their impact on the future of HPC architecture, software implementation feasibility, programming models, and emerging research trajectories. This gap underscores the need for a systematic and multidimensional review [9]. Accordingly, the primary objective of this paper is to provide a structured and systematic review of emerging computing technologies, with a particular emphasis on their potential within the high-performance computing domain.
Through a statistical analysis of 142 research documents published in recent years, this study presents both conceptual and technical insights, classifying these emerging technologies based on criteria such as technological maturity, interdisciplinarity, potential impact, innovation, and potential adoption levels within HPC architectures.
Based on the above, the key contributions of the current study are as follows:
  • Comparing the emerging HPC-related technologies [10,11,12,13,14] based on metrics such as innovation level, global research focus, potential impact, scientific challenge level, maturity level, interdisciplinarity level, etc.
  • Classifying each HPC-related emerging technology based on practical software tools such as the required framework and programming languages, simulators, analyzers, and solvers.
  • Determining the main challenges, providers, benefits, applications, and research gaps in each specific and emerging HPC-related technology.
  • Determining the practical use cases of each different emerging HPC-related technology.
  • Proposing a holistic perspective of seven emerging HPC-related computing technologies (serverless, quantum, adiabatic, nano, biological, in-memory, and neuromorphic) and introducing the future complementary research directions in the field (green HPC, AI-HPC integration, GPU cloud computing, edge-based high-performance computing, exascale computing and beyond, etc.).
The remainder of this paper is structured as follows: Section 2 presents the related work and compares the present study with the state of the art. Section 3 introduces the proposed survey and classification methodology. Section 4 provides an overview of emerging HPC-related computing technologies and recent trends, focusing on the comparative analysis and software-based classification of each technology in relation to HPC. Section 5 presents a statistical analysis of HPC-related emerging computing technologies and explores future research directions, development challenges, and opportunities. Finally, Section 6 summarizes the findings, key takeaways, and suggestions for future work.

2. Related Work

Sepúlveda et al. [15] presented a systematic review of requirements engineering in quantum computing, highlighting insights and future research directions. The main challenge in quantum computing is balancing the speed of operations with the duration for which a qubit’s quantum state can be maintained before decoherence. Decoherence is the loss of a qubit’s quantum properties due to interactions with its environment, which shortens the time available for useful computations. This trade-off between speed and decoherence time is also addressed in [15].
One of the key challenges in quantum computer design is the efficient implementation of quantum algorithms. Additionally, the difficulty of achieving reliable and fault-tolerant quantum computing due to the complexity of quantum error correction is noted. Another application of quantum computing is the reduction in complexity in robotic mechanisms by replacing graph searches with quantum random walks. Computer security, biomedicine, new material development, and economics are among the fields that benefit from advances in this domain [15].
Article [16] investigates energy-aware techniques and environments in HPC. The authors examine various systems in four categories, namely single device, cluster, grid, and cloud computing, and consider the types of devices including CPU, GPU, multiprocessor, and hybrid systems. Energy/power management tools and APIs, as well as tools and environments for simulating energy consumption in modern HPC systems, have also been analyzed. For instance, CloudSim is used for cloud environments, Green Cloud for data centers, and SimGrid for grid environments. Finally, several metrics such as execution time, energy consumption, and temperature have been considered as optimization targets.
The authors of [17] present a comprehensive review of communication performance models in HPC. According to this review, communication performance models have three main goals: (1) the accurate prediction of communication time, (2) reliable guidance for designing and implementing HPC algorithms, and (3) inspiration for designing novel architectures. Despite achieving these goals, designing a model with minimal communication parameters such as latency, bandwidth, and software overhead remains challenging. Furthermore, energy and performance constraints are two key design challenges in HPC; however, the emergence of software-defined networks (SDNs) has enabled more programmability and configurability, aiding the design of new models for exascale computing.
In article [18], the authors investigate and categorize research in the field of storage-based computing. They highlight challenges and open issues in architecture, applications, and tools. They also examine the characteristics of microarchitecture-independent programs in near-memory computing (NMC), compiler frameworks for offloading NMC kernels to the target platform, and an analytical model to evaluate the potential of NMC. The main motivation of this research is to distinguish it from in-memory computing based on new non-volatile memories such as memristors and phase-change memories. The study of existing NMC systems has also identified challenges such as improving 3D stacking in power-aware solutions.
The author of article [19] presents a comprehensive review of quantum cryptography aimed at securing channels using the principles of quantum mechanics. The main contribution of this study is the examination of fundamental concepts of quantum computing, including the qubit as the core component of this type of computing. It is also mentioned that the most important part of quantum cryptography is quantum key distribution (QKD), which can potentially break many public-key cryptographic algorithms. Parameters such as fiber distance, secure key rate, qubit error rate, and security are introduced for evaluation. Additionally, QKD protocols such as BB84, E91, BB92, and SARG04 are categorized. The main challenges of this research include syntactic and semantic analysis at the quantum programming language level for design purposes.

Summary and Comparison

In this part, we discuss emerging HPC-related technologies such as quantum, adiabatic, biological, nano, neuromorphic, serverless, and in-memory computing. Table 1 presents a comparison among 10 key studies from recent years. This comparison covers the type of emerging technology, the use of a taxonomy map for categorizing the different technologies, the classification of the different emerging technologies, and the research gaps highlighted.
By classification we mean grouping emerging HPC-related technologies into seven categories, which are quantum computing, nanocomputing, serverless computing, biological computing, neuromorphic computing, in-memory computing, and adiabatic computing, and discussing each technology in detail.
By classification/categorization mapping we mean using taxonomy-based mapping for presenting the software environments including frameworks, tools, and testbed experiments used in implementing each specific technology.
Unlike many studies that focus solely on a specific technology, the present study aims to provide a more comprehensive classification by identifying converging trends among these technologies. According to our research, only a few review and research articles exist in this field that offer a holistic perspective on next-generation computing.
As shown in Table 1, Aslanpour et al. [12] and Sepúlveda et al. [15] proposed classifications in their works. However, only Gill et al. succeeded in mapping the latest advancements to the proposed classification. Similarly, only the work of Gill et al. and our study have considered all three elements: classification, classification mapping, and future research directions.
Li et al. have investigated high-performance computing in healthcare in an automatic literature analysis perspective [20]. The results emphasize the shift in the adoption of high-performance computing (HPC) within the healthcare sector, moving away from conventional numerical simulations and surgical visualizations towards new areas including drug discovery, AI-enhanced medical image analysis, and genomic studies, along with the relationships and interdisciplinary links between various application domains.
The authors in [21] have investigated the emerging applications and challenges in quantum computing. They provide a thorough overview of the latest developments in quantum computing, focusing on unconventional architectures, their applications in material sciences, and the convergence of quantum and classical machine learning models. Additionally, the survey investigates quantum algorithms tailored for real-time systems, delves into cryptographic protocols that extend beyond Shor’s algorithm, and addresses resource management within cloud computing environments. The principal findings underscore notable advancements in algorithm efficiency and practical uses in two-dimensional materials and topological insulators, as well as the integration of quantum and classical models.
Kudithipudi et al. described the main features of neuromorphic computing in [22]. They outlined methods for developing scalable neuromorphic architectures and highlighted essential characteristics. They also explored possible applications that could gain from scaling and the primary challenges that must be tackled. Additionally, they analyzed a thorough ecosystem required to support growth and the emerging opportunities that arise when scaling neuromorphic systems.
Duarte et al. have investigated quantum-assisted machine learning by means of adiabatic quantum computing (AQC) [23]. The authors of [24] have surveyed neuromorphic architectures for running artificial intelligence algorithms.
All these emerging technologies are potentially applicable to HPC, as illustrated in Table 1. While they share similarities at the application level and all aim at addressing high-performance computing problems, their hardware implementations can vary significantly.

3. Methodology

In this section, we introduce the research methodology used in the current study.

3.1. Research Objective

This study investigates and analyzes emerging HPC-related computing technologies, including quantum computing, nanocomputing, in-memory computing, neuromorphic computing, serverless paradigms, adiabatic technology, and biologically inspired solutions. The main objective is to assess the impact of these technologies on high-performance computing (HPC) and provide various classifications of existing approaches from a software perspective.

3.2. Research Questions

The following questions need to be addressed in this study:
-
What are the future research directions in each emerging HPC-related technology?
-
What are the research challenges associated with emerging HPC-related technologies?
-
What are the testbed experiments, frameworks, and tools associated with each emerging HPC-related technology?
-
What are the potential impact, scientific challenge level, maturity level, and interdisciplinarity level of each emerging HPC-related technology?
-
What are the main providers, benefits, and application areas of each emerging HPC-related technology?

3.3. General Methodology

This research adopts an analytical and comparative approach to examine trends in HPC-related emerging computing technologies. It specifically focuses on scientific articles and reputable sources from recent years to analyze the current technologies and tools in the field.

3.4. Data Collection

To conduct this study, a systematic literature review was carried out. Scientific articles and industrial reports were extracted and reviewed from reputable databases such as IEEE Xplore, Springer, ScienceDirect, MDPI, and other publishers. The selected articles were chosen based on the following criteria:
  • Focus on emerging HPC-related computing technologies.
  • Inclusion of practical and hardware-related solutions in relevant domains.
  • Inclusion of practical and empirical analyses related to high-performance computing.
The number of documents that were investigated in this research from each publisher is depicted in Figure 1.

3.5. Statistical Analysis and Research Directions

To evaluate the practical capabilities of these technologies in recent years, statistical analyses were conducted, including trend analysis, adoption rates in industry and academia, and a comparative analysis of technologies. These analyses contributed to identifying future research directions.

3.6. Limitations

The limitations of this study include restricted access to certain commercial data and internal industry reports, which may have affected some of the analyses and comparisons.

3.7. Compliance with PRISMA

The proposed systematic review complies with the PRISMA guidelines for writing systematic surveys. The PRISMA flow diagram associated with this research can be found in Figure 2. It must be mentioned that for the risk-of-bias assessment, we used the AMSTAR checklist and found that the proposed survey has a very low risk of bias.

3.8. Inclusion/Exclusion Criteria

In Table 2, we list the main inclusion/exclusion criteria in investigating the research body.
The study screening and filtration process was based on Table 2. We filtered out every study that failed at least one of the inclusion criteria. The search strings used to download the studies were “survey adiabatic computing”, “survey neuromorphic computing”, “survey nano computing”, “survey serverless computing”, “survey in-memory computing”, “survey quantum computing”, “survey bio-inspired computing”, “research challenges in nano computing”, etc. The flow diagram of the filtering process is shown in Figure 3. As can be seen, we first identified papers from popular databases and registers (IEEE Xplore, ScienceDirect, Springer, MDPI, arXiv, etc.) and then removed duplicate records. In the third stage, we performed title and abstract screening based on the inclusion/exclusion criteria shown in Table 2. In the next stage, we performed full-text screening based on the same criteria, and finally we selected the appropriate research articles for inclusion in the current survey.
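The screening stages above can be sketched programmatically. The following is a minimal, hypothetical illustration of the deduplication and keyword-based filtering steps; the record fields and term lists are invented examples, not the actual corpus or criteria of Table 2.

```python
# Illustrative sketch of a screening pipeline: deduplicate by title, then
# apply keyword-based inclusion/exclusion. Hypothetical data, not the survey's.

def screen(records, include_terms, exclude_terms):
    """Keep unique records whose title/abstract match at least one
    inclusion term and no exclusion term."""
    seen, selected = set(), []
    for rec in records:
        title = rec["title"].strip().lower()
        if title in seen:  # stage 2: remove duplicate records
            continue
        seen.add(title)
        text = (rec["title"] + " " + rec["abstract"]).lower()
        if not any(t in text for t in include_terms):  # stage 3: inclusion
            continue
        if any(t in text for t in exclude_terms):      # stage 3: exclusion
            continue
        selected.append(rec)
    return selected

papers = [
    {"title": "Survey on Neuromorphic Computing", "abstract": "HPC accelerators ..."},
    {"title": "Survey on Neuromorphic Computing", "abstract": "HPC accelerators ..."},
    {"title": "Unrelated Report", "abstract": "not about computing at all"},
]
kept = screen(papers, include_terms=["neuromorphic", "quantum"], exclude_terms=["patent"])
print(len(kept))  # duplicate and off-topic records removed
```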

4. Emerging HPC-Related Technologies

Emerging computational technologies refer to a category of technologies that are in the early stages of their life cycle and have not yet been widely adopted in industry but possess high potential to transform existing computing architectures. These technologies often arise from the convergence of multiple scientific disciplines (including physics, biology, nanotechnology, neuroscience, and computer engineering) and can revolutionize traditional patterns of data processing, storage, and analysis.
Gyongyosi et al. [25] argue that emerging computational technologies share the following key characteristics (Table 3):
-
Fundamental innovation: The technology must be disruptive and based on principles different from those of conventional technologies, such as quantum computing.
-
Low maturity level (low TRL): A technology that has not yet reached widespread application and remains primarily at the research or experimental stage.
-
High impact potential: A technology that has the capacity to transform processing speed, scalability, security, or efficiency.
-
Interdisciplinarity: A technology that has emerged from the convergence of diverse scientific fields.
-
Scientific and implementation challenges: The presence of open questions and complex technical challenges indicates the technology’s emerging status.
-
High global research focus: A technology that attracts the attention of research institutions and appears in current scientific publications.
Based on the above criteria, technologies such as quantum computing, nanocomputing, in-memory architectures, neuromorphic systems, serverless paradigms, adiabatic technologies, and biology-inspired solutions were selected as prominent examples of emerging computational technologies. These technologies not only operate at the frontiers of computing knowledge but will also play a crucial role in shaping the future of information-processing technologies.
It is important to note that for the key feature comparison of the various technologies in Table 3, we employed the well-known Likert scale. The scale has five subjective levels (very low, low, medium, high, and very high), allowing respondents to indicate the strength of their agreement or disagreement with a particular statement. For this table, we averaged the Likert scores of 20 different experts for each computing technology.
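The averaging of expert ratings can be sketched as follows; the ratings below are invented placeholders, not the actual survey data behind Table 3.

```python
# Averaging 1-5 Likert ratings and mapping the mean back to a verbal label.
# The ratings list is a hypothetical example, not the survey's real data.

LIKERT = {1: "very low", 2: "low", 3: "medium", 4: "high", 5: "very high"}

def mean_label(scores):
    """Average a list of 1-5 Likert scores and map to the nearest label."""
    mean = sum(scores) / len(scores)
    return mean, LIKERT[round(mean)]

# e.g., 20 hypothetical expert ratings of "maturity level" for one technology
ratings = [4, 5, 3, 4, 4, 5, 4, 3, 4, 4, 5, 4, 3, 4, 4, 4, 5, 4, 3, 4]
mean, label = mean_label(ratings)
print(f"{mean:.2f} -> {label}")
```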
In Figure 4, we depict the classification of emerging computing technologies related to HPC which were investigated in the current paper.

4.1. Quantum Computing

Quantum computing is not only a fusion of quantum physics, computer science, and information theory but also involves a broader range of disciplines, including engineering, mathematics, chemistry, and more. It is an emerging paradigm with very high potential, capable of accelerating computations by exploiting quantum–mechanical principles such as entanglement and superposition [26,27].

4.1.1. Definition of Quantum Computing

Quantum computing is a form of information processing that, instead of classical bits (0 and 1), uses quantum bits—or qubits. Qubits can exist simultaneously in multiple states (superposition) and be correlated with one another (entanglement), enabling quantum computers to perform calculations that would be prohibitively time-consuming on classical machines.

4.1.2. Key Quantum–Mechanical Concepts in Quantum Computing

-
Superposition: In classical computing, a bit can be either zero or one. A qubit, by contrast, can occupy both the 0 and 1 states simultaneously until it is measured [15].
-
Entanglement: Two or more qubits can become linked so that the state of one depends on the state of another, even when separated by large distances. This property underpins much of quantum computing’s power [27].
-
Interference: Quantum algorithms use interference to amplify the probability of correct outcomes and cancel out incorrect ones.
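The first two concepts above can be made concrete with a small statevector calculation in plain NumPy (no quantum SDK assumed): applying a Hadamard gate to |0⟩ yields an equal superposition, and a Hadamard followed by a CNOT produces the entangled Bell state (|00⟩ + |11⟩)/√2.

```python
# Statevector illustration of superposition and entanglement using NumPy.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])                # flips target when control is 1

# Superposition: H|0> = (|0> + |1>)/sqrt(2); measurement gives 0 or 1 with p = 0.5.
ket0 = np.array([1, 0])
plus = H @ ket0
probs = np.abs(plus) ** 2

# Entanglement: CNOT (H tensor I)|00> yields the Bell state (|00> + |11>)/sqrt(2);
# the two qubits' measurement outcomes are perfectly correlated.
ket00 = np.kron(ket0, ket0)
bell = CNOT @ np.kron(H, np.eye(2)) @ ket00
bell_probs = np.abs(bell) ** 2

print(probs, bell_probs)
```

Note the Bell state assigns probability 0.5 to outcomes 00 and 11 and zero to 01 and 10, which is exactly the correlation the entanglement bullet describes.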

4.1.3. Key Differences from Classical Computing

We describe in Table 4 the key differences between these technologies.

4.1.4. Important Algorithms in Quantum Computing

-
Shor’s Algorithm: Efficiently factors integers and can be used to break RSA encryption.
-
Grover’s Algorithm: Accelerates unstructured database search.
-
Quantum Fourier Transform (QFT): The quantum analog of the discrete Fourier transform.
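For the QFT, the correspondence with the classical transform can be checked directly: on n qubits it is the N × N unitary with entries F[j, k] = e^{2πijk/N}/√N, N = 2^n. The sketch below builds this matrix explicitly and verifies unitarity and agreement with NumPy’s inverse DFT convention; real hardware would instead implement it with O(n²) Hadamard and controlled-phase gates.

```python
# The n-qubit QFT as an explicit N x N unitary (N = 2**n), built from
# F[j, k] = exp(2*pi*i*j*k/N) / sqrt(N). Illustrative only.
import numpy as np

def qft_matrix(n_qubits):
    N = 2 ** n_qubits
    j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    return np.exp(2j * np.pi * j * k / N) / np.sqrt(N)

F = qft_matrix(3)  # 8 x 8 QFT on 3 qubits

# Unitarity: F @ F^dagger is the identity.
assert np.allclose(F @ F.conj().T, np.eye(8))

# Agreement with the classical inverse DFT (NumPy's sign convention),
# up to the sqrt(N) normalization difference.
x = np.arange(8, dtype=complex)
assert np.allclose(F @ x, np.fft.ifft(x) * np.sqrt(8))
```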

4.1.5. Frameworks and Programming Languages

Green et al. [28] introduced the quantum programming language Quipper, a high-level, scalable, expressive functional language. It is designed to solve real-world problems, unlike current quantum programming languages, which target only simple or experimental tasks. Quipper is an embedded language hosted in Haskell. It is also universal, usable for quantum circuits, algorithms, and circuit transformations.
The Q# language [29] is a domain-specific quantum language designed specifically to represent quantum algorithms correctly. Unlike earlier quantum languages, Q# is standalone, offering high-level abstractions, information-error reporting, and a strongly typed design to ensure type safety. It has also supported the development of quantum libraries such as Shor’s algorithm for modular arithmetic, integer factorization, elliptic-curve protocols, and Hamiltonian simulation.
Another quantum language, LIQUi|⟩ [30], provides a software architecture and toolkit for quantum computing. Its suite includes programming languages, optimization and scheduling algorithms, and quantum simulators. Like Quipper, it is an embedded language; its host is F#.
QWire is a programming language with two domains of application: describing quantum circuits and manipulating them within any chosen classical host language as an interface [31]. Its two notable features are that it has only five instructions and is modular.
Khammassi et al. [32] introduced OpenQL, a portable quantum programming framework for quantum accelerators that run on classical computers to speed up specific computations. This language also provides an API for executing quantum algorithms on both classical and quantum hardware.
Finally, the authors of [33] introduced ProjectQ, an open-source software framework for quantum computing. ProjectQ allows testing quantum algorithms via simulations and enables execution on real quantum hardware.
In addition to these languages, several well-known frameworks have gained significant attention in both industry and academia:
-
Qiskit (IBM) version 2.0: An open-source quantum software development framework providing tools to create and manipulate quantum programs and run them on physical devices and simulators [34]. Qiskit includes modules for quantum circuits, algorithms, and applications, supporting research and development across quantum computing domains.
-
PennyLane (Xanadu) version 0.41.1: A Python library for differentiable quantum programming that integrates with major machine learning libraries such as TensorFlow v.2.16.1 and PyTorch v.2.7.0 [35]. PennyLane enables training purely quantum and hybrid quantum–classical models, advancing quantum machine learning.
-
Cirq (Google) (https://quantumai.google/cirq/start/install, last accessed 15 June 2025): A Python library for designing, simulating, and running quantum circuits on Google’s quantum processors [36]. Cirq provides tools for developing quantum algorithms, optimizing circuits, and benchmarking quantum hardware performance, making it indispensable for researchers and developers.
These frameworks supply extensive libraries, tools, and support for quantum-computing research and applications, playing a pivotal role in advancing the field.
In summary, important quantum computing frameworks include [28,29,30,31,32,33,34,35,36] the following:
-
Q#: A domain-specific quantum language designed specifically to represent quantum algorithms correctly.
-
Quipper: An embedded language hosted in Haskell. It is also universal and usable for quantum circuits, algorithms, and circuit transformations.
-
LIQUi|>: A software architecture and toolkit for quantum computing. Its suite includes programming languages, optimization and scheduling algorithms, and quantum simulators.
-
QWIRE: A programming language with two domains of application: describing quantum circuits and manipulating them within any chosen classical host language as an interface.
-
OPENQL: A portable quantum programming framework for quantum accelerators that run on classical computers to speed up specific computations.
-
ProjectQ: It allows for the testing of quantum algorithms via simulations and enables execution on real quantum hardware.

4.1.6. Tools

In this part, we introduce selected tools for quantum computing:

Simulators

In the scientific literature, several quantum programming platforms have been introduced to meet programmers’ needs for running quantum algorithms. The first was QCL, whose host language is C++, presented by Ömer [37] in 1998. Subsequent languages followed. In 2018, ref. [38] introduced the quantum programming environment Q|SI⟩, a .NET-based embedded platform that extends a quantum while-language. Q|SI⟩ includes the embedded quantum while-language, a quantum simulator, and tools for the analysis and verification of quantum programs.
QuantumOptics.jl is a numerical simulator presented by [39] for research in quantum optics and quantum information. In [40], HpQC (high-performance quantum computing) was examined; it can simulate quantum computing in parallel on a single-node multicore processor.
Another important simulator for quantum computing is CUDA Quantum (CUDA-Q).
In [41], the authors introduced a new library called SQC|pp⟩ that is capable of simulating quantum algorithms; for example, they simulated Grover’s algorithm on a search space of up to n = 20 qubits. Gheorghiu introduced the Quantum++ library [42], a multithreaded, general-purpose quantum computing library written in C++11. It is not limited to qubit systems or specific quantum information tasks and can simulate arbitrary quantum processes. Quantum++ can rapidly simulate 25 pure-state qubits or 12 mixed-state qubits.
In summary, the main simulators and libraries are as follows:
-
Q|SI>: An embedded .NET-based language that extends into a quantum while language.
-
QuantumOptics.jl: A numerical simulator for research in quantum optics and quantum information.
-
HpQC: It simulates quantum computing in parallel on a single-node multicore processor.
-
CUDA-Q: It is an open-source quantum development platform orchestrating the hardware and software needed to run useful, large-scale quantum computing applications.
-
SQC|pp>: A library capable of simulating quantum algorithms, e.g., Grover’s search.

Compilers

Distributed quantum computing demands a new generation of compilers to map each quantum algorithm onto distributed architectures. Several compilers have been studied in this context. Javadi-Abhari et al. [43] introduced ScaffCC, a scalable compiler for large-scale quantum programs. Also, ref. [44] presented t|ket⟩, a retargetable compiler for Noisy Intermediate-Scale Quantum (NISQ) devices. Other mainstream compilers include Qiskit Transpiler and BQSKit.
In summary, the main compilers are
-
ScaffCC: A scalable compiler for large-scale quantum programs.
-
T|ket>: A retargetable compiler for Noisy Intermediate-Scale Quantum (NISQ) devices.
-
Qiskit Transpiler: It is used to write new circuit transformations (known as transpiler passes) and combine them with other existing passes, greatly reducing the depth and complexity of quantum circuits.
-
BQSKit: A powerful and portable quantum compiler framework. It can be used with ease to compile quantum programs to efficient physical circuits for any quantum processing unit (QPU).
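To illustrate the kind of transformation a transpiler pass performs, the toy pass below cancels adjacent self-inverse gates acting on the same qubits, reducing circuit depth. This is a hypothetical, simplified sketch, not the API of Qiskit Transpiler or BQSKit.

```python
# Toy "gate cancellation" pass: two identical self-inverse gates (H, X, Z,
# CNOT) that are adjacent and act on the same qubits multiply to the identity
# and can be removed from the circuit.
SELF_INVERSE = {"h", "x", "z", "cx"}

def cancel_adjacent(circuit):
    """circuit: list of (gate_name, qubit_tuple). Returns a simplified list."""
    out = []
    for gate in circuit:
        if out and out[-1] == gate and gate[0] in SELF_INVERSE:
            out.pop()          # the adjacent pair cancels to the identity
        else:
            out.append(gate)
    return out

circ = [("h", (0,)), ("h", (0,)), ("cx", (0, 1)),
        ("x", (1,)), ("x", (1,)), ("cx", (0, 1))]
print(cancel_adjacent(circ))   # every gate cancels in pairs
```

Real compiler passes apply far richer rewrite rules (commutation, gate fusion, routing to hardware topology), but the principle of repeatedly rewriting the circuit toward lower depth is the same.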

Solvers

A solver is mathematical software that solves mathematical problems, either as a standalone program or as a software library. The authors of [45] released TRIQS/CTHYB, a continuous-time hybridization-expansion quantum Monte Carlo solver for the quantum impurity problem. This solver is implemented in C++ with a high-level Python interface. Kawamura et al. [46] released a package named Hϕ; it is based on a specialized Lanczos-type solver suitable for various quantum lattice models. Unlike existing packages, it supports finite temperature calculations using the TPQ (thermal pure quantum) method. They also reported benchmarks on supercomputers such as the K computer and SGI ICE XA (Sekirei).
In summary, the main solvers are [45,46]
-
Hϕ: A specialized Lanczos-type solver suitable for various quantum lattice models.
-
TRIQS/CTHYB: A continuous-time hybridization-expansion quantum Monte Carlo impurity solver.

Testbed Experiments

The authors of [47] introduced the PQNI (NIST Quantum Network Innovation Testbed) platform to accelerate the integration of quantum systems with active, real-world networks. This platform enables the evaluation of quantum components—such as single-photon sources, detectors, memories, and interfaces.
Clark et al. [48] introduced the QSCOUT (Quantum Scientific Computing Open User Testbed), a trapped-ion-based system for assessing quantum hardware capabilities in scientific applications. This testbed provides quantum hardware to researchers so they can run quantum algorithms and explore new ideas that may benefit more powerful future systems.
Other important testbed experiments for quantum computing include the WACQT quantum technology testbed at Chalmers University, seven quantum computing testbeds developed by the National Quantum Computing Centre (NQCC), the CTIC quantum testbed (QUTE), and the Advanced Quantum Testbed (AQT) at Lawrence Berkeley National Laboratory.
In summary, the main testbeds are as follows:
-
QSCOUT: A quantum computing testbed based on trapped ions that is available to the research community as an open platform for a range of quantum computing applications.
-
PQNI: A platform to accelerate the integration of quantum systems with active, real-world networks.
-
WACQT: A testbed facility at Chalmers University designed to support the development and testing of quantum algorithms and hardware.
-
QUTE: A general-purpose quantum computing simulator that, when deployed on the ISAAC supercomputing infrastructure, allows for the simulation of quantum circuits. It is easy to use and completes complicated quantum simulations that would take hours or even days on an ordinary computer in a few minutes.
-
AQT: It explores and defines the future of superconducting quantum computers from end to end with a full-stack platform for collaborative research and development.

4.1.7. Use Cases

Some of the practical use cases of quantum computing are
-
Drug discovery and development;
-
Financial modeling;
-
Fraud detection;
-
Credit scoring;
-
Materials science simulation;
-
Quantum key distribution;
-
Logistics and supply chain management;
-
Climate change modeling.
Figure 5 shows the proposed classification for quantum computing. Based on the challenges in emerging computational technologies, this taxonomy organizes frameworks, programming languages, tools, and testbed experiments. Simulators, compilers, libraries, and solvers are regarded as software tools.

4.2. Adiabatic Computing

Adiabatic technologies refer to a class of computational methods and systems that use thermodynamic adiabatic principles to perform logic operations with very low energy dissipation—or, in theory, approaching zero dissipation. The term “adiabatic” comes from thermodynamics, meaning “without exchange of heat.” The adiabatic theorem in quantum mechanics offers new perspectives in quantum computing and yields novel algorithms. It is also used to find the ground state of a complex Hamiltonian H by evolving a time-dependent Hamiltonian H(t). In quantum mechanics, a system’s Hamiltonian represents its total energy, including the kinetic and potential components.
In computing, the concept refers to a method in which
-
Very little energy is consumed when changing logical states.
-
Energy that would normally be lost as heat in classical digital circuits is here recovered or retained.
The conventional CMOS (complementary metal–oxide–semiconductor) logic design has been the standard choice for implementing low-power systems. However, one of its weaknesses is its energy requirement—which is addressed by adiabatic logic [49]. In other words, adiabatic computing dissipates less energy during charging.
Let us illustrate the adiabatic principle with a simple example: Figure 2 shows the waveform applied to a circuit. During the rise and fall of the power supply voltage, the transistors remain in their previous states [50]. An adiabatic circuit charges node X to the same voltage as a CMOS circuit but transfers the charge over a much longer time. Because of the slow rise and fall processes, this clock is called a “power clock.”
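The energy argument can be made quantitative with the standard first-order estimates: conventional switching dissipates E = CV²/2 per transition regardless of speed, while ramped (adiabatic) charging through a resistance R over a time T dissipates roughly (RC/T)·CV², which vanishes as the power clock slows. The sketch below uses illustrative component values, not measured device data.

```python
# Energy dissipated when charging a node capacitance C to voltage V:
#   conventional CMOS:  E_conv  = 1/2 * C * V^2       (independent of speed)
#   adiabatic ramp:     E_adia ~ (R*C / T) * C * V^2  (vanishes as T grows)
# R, C, V below are illustrative assumptions.
R, C, V = 1e3, 1e-15, 1.0          # 1 kOhm, 1 fF, 1 V

E_conv = 0.5 * C * V**2
for T in (1e-12, 1e-10, 1e-8):     # ramp time of the "power clock"
    E_adia = (R * C / T) * C * V**2
    print(f"T = {T:.0e} s  ->  E_adia / E_conv = {E_adia / E_conv:.4f}")
```

Stretching the ramp from the RC time constant (1 ps here) to 10 ns reduces the dissipated energy by four orders of magnitude, which is why the slow “power clock” is central to adiabatic circuits.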
Based on energy performance, adiabatic circuits can be classified into three types:
-
Fully adiabatic circuits, in which charging is performed extremely slowly and very little energy is dissipated per operation.
-
Quasi-adiabatic circuits, in which charging occurs with a reduced potential drop and part of the energy is recovered.
-
Non-adiabatic circuits, which make no attempt to reduce potential drop or recover transferred energy [50].
D-Wave Systems Inc., a pioneer in adiabatic quantum computing, introduced in 2011 the first commercial quantum computer based on quantum annealing. D-Wave systems are designed to solve specific optimization problems by evolving the quantum system’s Hamiltonian from a simple initial ground state to a final Hamiltonian whose ground state encodes the problem’s solution [51]. This approach has been adopted in both industry and academia, where researchers and organizations employ D-Wave quantum computers to tackle various challenges in optimization and machine learning. In Figure 6, we depict a simple example of an adiabatic CMOS circuit.
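The annealing workflow above can be illustrated with a toy example: a D-Wave-style machine minimizes a quadratic unconstrained binary optimization (QUBO) objective whose ground state encodes the answer. The sketch below simply brute-forces the ground state of a hypothetical three-variable QUBO; the coefficients are invented for illustration.

```python
from itertools import product

# A quantum annealer minimizes a QUBO objective E(x) = sum_ij Q[i,j] x_i x_j
# over binary variables x_i in {0, 1}; the final Hamiltonian's ground state
# encodes the optimum.  For a toy 3-variable instance we can find that
# "ground state" classically by brute force (coefficients are illustrative).
Q = {(0, 0): -1.0, (1, 1): -1.0, (2, 2): -1.0,   # linear terms (diagonal)
     (0, 1):  2.0, (1, 2):  2.0}                 # pairwise couplings

def energy(x):
    return sum(coeff * x[i] * x[j] for (i, j), coeff in Q.items())

ground = min(product((0, 1), repeat=3), key=energy)
print(ground, energy(ground))   # -> (1, 0, 1) -2.0
```

The positive couplings penalize selecting neighboring variables together, so the lowest-energy assignment activates the two uncoupled variables; an annealer reaches the same state by slow Hamiltonian evolution rather than enumeration.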

4.2.1. Frameworks and Programming Languages

Important adiabatic computing frameworks are [50,51]
-
Jade: An integrated development environment for adiabatic quantum computing.
-
CFD: In CFD simulations, an adiabatic condition generally indicates that the system or surface being modeled does not permit heat transfer or exchange with its environment, signifying that no heat is either added or removed.
-
EDA: A framework tool for adiabatic computing.

4.2.2. Tools

Simulators

Important adiabatic computing simulators are [50,51]
-
Adiabatic computing;
-
SPICE.

Solvers

Important adiabatic computing solvers are [50,51]
-
3-SAT: It explores the use of quantum adiabatic algorithms to solve the 3-satisfiability problem, a classic NP-complete problem in computer science.
-
QLS: A solver for adiabatic quantum computing.
-
Fast Solver.

4.2.3. Use Cases

Some of the practical use cases of adiabatic computing are as follows:
-
Drug discovery and development;
-
Combinatorial optimization;
-
Materials science simulation;
-
Training neural networks;
-
Feature selection;
-
Quantum simulation;
-
Portfolio optimization;
-
Risk analysis;
-
Satellite image analysis;
-
Election forecasting.
Figure 7 shows the proposed classification for adiabatic computing. Based on the challenges in emerging computational technologies, this taxonomy organizes frameworks, programming languages, tools, and testbed experiments. Simulators, compilers, libraries, and solvers are regarded as software tools.

4.3. Biological Computing

Biological computation pertains to the use of biological macromolecules for information processing. These macromolecules primarily consist of DNA, RNA, and proteins, leading to the categorization of biological computation into DNA computation, RNA computation, and protein computation. Due to the constraints of biochemical operation technology, current research in biological computation predominantly emphasizes DNA computation. This study centers on DNA computation while also providing an introduction to RNA computation and protein computation. This section outlines the background surrounding the emergence of biological computation, its research significance, and the advancements made in this field.
Biological computing or biocomputing has been defined in various ways from different perspectives. Some of the most common definitions are as follows:
-
Biological computing is a reflexive engineering paradigm that deals with programmable and non-programmable information-processing systems; these systems evolve algorithms in response to their environmental needs [52].
-
The practical use of biological components—such as proteins, enzymes, and bacteria—for computation.
Biocomputing is still at an early research stage but with high future potential in molecular computing and medical applications.

4.3.1. Frameworks and Programming Languages

-
PySB (Lopez et al. [53]) is a framework for building mathematical models for biochemical systems. It provides a library of macros encoding standard biochemical actions—binding, catalysis, and polymerization—allowing model construction in a high-level, action-based vocabulary. This increases model clarity, reusability, and accuracy. Note that PySB is primarily a mathematical modeling framework for biological networks, not a hardware biocomputing platform.
-
Biotite ([54]) is a Python-based framework for handling biological structural and sequence data using NumPy arrays. It serves two user groups: novices—who enjoy easy access to Biotite—and experts—who leverage its high performance and extensibility.
-
A new modeling–programming paradigm, Python for biological systems (Lubak et al. [55]), introduces software-engineering best practices with a focus on Python. It offers modularity, testability, and automated documentation generation.
-
pyFOOMB ([55]) is an object-oriented modeling framework for biological processes. It enables the implementation of models via ordinary differential equation (ODE) systems in a guided, flexible manner. pyFOOMB also supports model-based integration and data analysis—ranging from classical lab experiments to high-throughput biological screens.
These frameworks are especially important for modeling complex biological systems and analyzing biological data in the fields of bioengineering, biotechnology, and biological computing.
In summary, important biological computing frameworks are [53,54,55,56]
-
pyFOOMB: pyFOOMB (Python Framework for Object-Oriented Modeling of Biological Models) is a Python package developed for the purpose of modeling and simulating biological systems. It enables users to construct, simulate, and analyze intricate biological models in a flexible and modular manner by utilizing Python’s object-oriented paradigm.
-
PySB: A framework for building the mathematical models of biochemical systems as Python programs.
-
Biotite: A Python library that offers tools for sequence and structural bioinformatics. It provides a unified and accessible framework for analyzing, modeling, and simulating biological data.
-
Python for biological systems (BioPython v1.85).
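As a flavor of the ODE-based modeling these frameworks support, the sketch below integrates a logistic growth model of a microbial culture with a plain explicit Euler scheme. The parameter values are illustrative; frameworks such as pyFOOMB wrap this kind of model in a declarative, object-oriented interface instead of hand-rolled loops.

```python
# Minimal ODE model of microbial growth (logistic equation), integrated with
# a simple explicit Euler scheme -- the kind of model that biological modeling
# frameworks let users define declaratively.  mu, K, x0, dt are illustrative.
mu, K = 0.5, 10.0        # specific growth rate [1/h], carrying capacity [g/L]
x, dt = 0.1, 0.01        # initial biomass [g/L], time step [h]

trajectory = []
for step in range(int(30 / dt)):          # simulate 30 h of cultivation
    x += dt * mu * x * (1.0 - x / K)      # dx/dt = mu * x * (1 - x/K)
    trajectory.append(x)

print(round(trajectory[-1], 2))           # biomass approaches K
```

The same model expressed in a framework gains parameter estimation, unit handling, and reproducible simulation setups for free, which is the main argument for using them over ad hoc scripts.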

4.3.2. Tools

Simulators

-
WebStoch ([56,57]): A high-performance computing service from the StochSoCs research project; it was designed for large-scale parallel stochastic simulation of biological networks. Accessible via the Internet, it lets scientists run models without HPC expertise.
-
Snoopy Hybrid Simulator ([58]): A platform-independent tool offering advanced hybrid simulation algorithms for building and simulating hybrid biological models with accuracy and efficiency.
In summary, important biological computing simulators include [56,57,58]
-
Adaptive parallel simulators;
-
StochSoCs: A stochastic simulation tool for large-scale biochemical reaction networks.
-
Snoopy: A software application mainly utilized for the modeling and simulation of biological systems, particularly those that encompass biomolecular networks. It offers a cohesive Petri net framework that facilitates various modeling paradigms, such as qualitative, stochastic, and continuous simulations.

Analyzers

Important biological computing analyzers include [59]
-
miRNA Time-Series Analyzer (Cer et al. [59]): An open-source tool written in Perl and R that runs on Linux, macOS, and Windows. It helps scientists detect differential miRNA expression and offers advantages in simplicity, reliability, performance, and broad applicability over existing time-series tools.
-
Cytoscape plug-in network v2.7.x.

Compilers

An important biological computing compiler is
-
Medley et al.’s Compiler ([60]): It transforms standard representations of chemical-reaction networks and circuits into hardware configurations for cell-morphic specialized hardware simulation. It supports a wide range of models—including mass-action kinetics, classical enzyme dynamics (Michaelis–Menten, Briggs–Haldane, and Boz–Morales models), and genetic inhibitor kinetics—and has been validated on MAP kinase models, showing that rule-based models suit this approach.

Libraries

Important biological computing libraries include
-
JGraphT ([61]): A Java library offering efficient, generic graph data structures and a rich set of advanced algorithms. Its natural modeling of nodes and edges supports transport, social, and biological networks. Benchmarks show that JGraphT competes with NetworkX and the Boost Graph Library.
-
libRoadRunner v1.1.16 ([62]): An open-source, high-performance, cross-platform library for simulating and analyzing SBML (Systems Biology Markup Language) models. Focused on biochemical networks, it enables both large models and many small models to run quickly and integrates easily with existing simulation frameworks. It also provides a Python API for seamless integration.

4.3.3. Experimental Testbeds

Important biological computing experimental testbeds include [62]
-
Molecular Communication (MC): A testbed paradigm in which information is exchanged between engineered biological components via biochemical signals.
-
BIONIC: A high-performance systems biology markup language (SBML) simulation and analysis tool.

4.3.4. Use Cases

Some of the practical use cases of biological computing include the following:
-
Molecular data storage;
-
Biological logic circuits;
-
Smart drug delivery systems;
-
Biological sensors;
-
Synthetic biology and genetic circuits;
-
Parallel computing;
-
Bio-inspired algorithms;
-
Security and cryptography.
Figure 8 shows the proposed classification for biological computing. Based on the challenges in emerging computational technologies, this taxonomy organizes frameworks, programming languages, tools, and testbed experiments. Simulators, compilers, libraries, and solvers are regarded as software tools.

4.4. Nanocomputing

Physicist Richard Feynman is regarded as the “father of nanotechnology.” Although he did not coin the term, his famous 1959 talk “There’s Plenty of Room at the Bottom” suggested that scientists could manipulate atoms and molecules individually [63,64]. Nanotechnology deals with materials, tools, and structures at the nanometer scale, enabling the design and fabrication of electronic components and devices that make smaller, faster, more reliable computers possible—ultimately improving the quality of life [64].
According to [64], nanocomputing comprises four generations:
-
Passive Nanostructures (2000–2005): They include dispersed structures (aerosols and colloids) and contact structures (nanocomposites and metals) [65].
-
Active Nanostructures (2005–2010): Unlike passive structures with stable behavior, active nanostructures exhibit variable or hybrid behavior. They add bioactive features (targeted drug delivery and bio-sensors) and physico-chemical activity (amplifiers, actuators, and adaptive structures).
-
Systems of Nanosystems (2010–2015): The integration of 3D nanosystems into larger platforms via methods such as biologically driven self-organization and robotics with emergent behavior.
-
Molecular Nanosystems (2015–2020): The first integrated nanosystems emerged. Fourth-generation nanomachines have heterogeneous architectures in which each molecule has a bespoke design and performs diverse functions [65,66,67,68,69,70,71,72,73,74,75,76,77,78,79].
Nanocomputing (nanoscale computing) refers to using nanotechnology to build, develop, and employ computing systems at the nanometer scale. At the intersection of nanotechnology, physics, materials science, and computer science, its goal is to create processors and computers with extremely high performance, low power consumption, and an ultra-small size.

4.4.1. Definition

Nanocomputing uses nanometer-scale structures and devices (typically < 100 nm) to perform computation, aiming to transcend the physical limits of traditional silicon transistors.

4.4.2. Fundamental Principles

-
Moore’s Law: As silicon approaches its scaling limits, Moore’s Law slows; nanocomputing is proposed as a future alternative to sustain computational progress.
-
Quantum Effects: At the nanoscale, quantum phenomena—tunneling, interference, and superposition—become significant.

4.4.3. Core Technologies and Architectures

-
Carbon Nanotubes (CNTs): They are used as transistors, interconnects, or electron channels.
-
Nanocrystals and Quantum Dots: They are employed for data storage or computation.
-
Nano-transistors: They are transistors just a few nanometers in size and are made from silicon alternatives.
-
DNA Computing: It leverages biological structures (DNA) for computational operations.
-
Switching Molecules: They are molecules that toggle between states and serve in computation applications.
Similar to previous technologies, Figure 9 presents the taxonomy of nanocomputing technology.

4.4.4. Frameworks and Programming Languages

In summary, the main nanocomputing frameworks are [65,66,67,68,69,70,71,72,73,74,75,76,77,78,79]
-
Fiction: A design tool for field-coupled nanocomputing;
-
ToPoliNano: A design tool for field-coupled nanocomputing;
-
Auto-based BDEC Computational Modeling.

4.4.5. Tools

Simulators

In summary, the main nanocomputing simulators are [65,66,67,68,69,70,71,72,73,74,75,76,77,78,79]
-
QCADesigner: QCADesigner serves as a simulation and layout development tool specifically for quantum-dot cellular automata (QCA). It represents a significant advancement in nanotechnology, offering a viable alternative to existing CMOS IC technology. This tool enables the production of more densely integrated circuits that are capable of low power consumption while functioning at elevated frequencies.
-
NMLSim: A new simulation technology based on the magnetization of nanometric magnets.
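The logic realized in QCA layouts rests on two primitives, the inverter and the three-input majority voter M(a, b, c) = ab + bc + ca; fixing one majority input to 0 yields AND, and fixing it to 1 yields OR. The pure-Python sketch below illustrates this composition independently of any layout tool such as QCADesigner.

```python
# QCA circuits are built from two primitives: the inverter and the
# three-input majority voter M(a, b, c) = ab + bc + ca.  Tying one input to a
# constant turns the majority voter into a conventional Boolean gate.
def majority(a, b, c):
    return int(a + b + c >= 2)

def qca_and(a, b):
    return majority(a, b, 0)    # majority with one input fixed to 0 is AND

def qca_or(a, b):
    return majority(a, b, 1)    # majority with one input fixed to 1 is OR

for a in (0, 1):
    for b in (0, 1):
        print(a, b, qca_and(a, b), qca_or(a, b))
```

Layout tools place and clock cells so that the electrostatic interaction between quantum dots physically evaluates exactly this majority function.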

Analyzers and Solvers

The main nanocomputing analyzers and solvers are
-
Field-coupled Nanocomputing Energy: A software for reading a field-coupled nanocomputing (FCN) layout design; it recognizes the logic gates based on a standard cell library and builds a graph that represents its netlist and then calculates the energy losses according to two different methods.
-
MARINA Risk Assessment: A flexible risk assessment strategic analyzer for nanocomputing applications.
-
PoisSolver: A tool for modeling silicon dangling bond clocking networks.

4.4.6. Use Cases

Some of the practical use cases of nanocomputing include
-
Ultra-small, energy-efficient processors;
-
Quantum computing hardware;
-
Medical nanodevices and bio-nanosensors;
-
Wearable and implantable devices;
-
Environmental monitoring;
-
Security and authentication;
-
Space and military applications;
-
High-density data storage.
In Figure 9, we depict the nanocomputing taxonomy.

4.5. Neuromorphic Computing

Neuromorphic computing systems are contrasted with traditional von Neumann architectures. A neuromorphic system emulates the structure of biological neurons and synapses in the human brain using electronic or photonic circuits, with the goals of
  • Non-linear data processing;
  • Real-time learning;
  • Extremely low energy consumption.
One of the key differences between conventional (von Neumann) computing and neuromorphic systems lies in how computation is carried out. Traditional computing devices—even those that support parallelism (such as GPUs and FPGAs)—fundamentally rely on the von Neumann architecture. This means that each compute unit (core, thread, etc.) executes instructions sequentially, even if the overall system can perform operations in parallel. By contrast, a neuromorphic system can perform computation in parallel exactly where the data reside, dramatically reducing both latency and energy use [66]. Moreover, this approach allows artificial neurons and synapses to be highly interconnected, facilitating the modeling of neuroscience theories and the solution of machine learning problems. In other words, these systems mimic the brain-like ability to learn and adapt.
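The spiking, event-driven style of computation described above can be sketched with the simplest neuron model used on such platforms, the leaky integrate-and-fire (LIF) neuron. All constants below are illustrative and are not taken from any particular chip.

```python
# Minimal leaky integrate-and-fire (LIF) neuron, the basic unit simulated by
# neuromorphic platforms.  The membrane potential v leaks toward its resting
# value, integrates the input current, and emits a spike (then resets) when
# it crosses a threshold.  All parameter values are illustrative.
tau, v_rest, v_thresh, v_reset, dt = 10.0, 0.0, 1.0, 0.0, 0.1  # ms units

v, spikes = v_rest, []
for step in range(1000):                    # 100 ms of simulated time
    i_in = 0.15                             # constant input current (toy)
    v += dt * (-(v - v_rest) / tau + i_in)  # leaky integration (Euler step)
    if v >= v_thresh:
        spikes.append(step * dt)            # record spike time [ms]
        v = v_reset                         # reset after the spike
print(len(spikes))
```

Communication between such neurons happens only at spike times, which is why neuromorphic hardware can be event-driven and extremely power-efficient compared with clocked von Neumann designs.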
Despite the focus on neuromorphic principles, it is important to mention some of the well-known platforms that have played a significant role in this field:
-
SpiNNaker v4.2.0.46: Developed at the University of Manchester, SpiNNaker stands for spiking neural network architecture. This platform is designed for the real-time, large-scale simulation of spiking neural networks [80]. Its massively parallel architecture and low power consumption make SpiNNaker well suited for brain-inspired algorithms such as robotic control, cognitive modeling, and neuroscience research.
-
IBM TrueNorth: Developed by IBM Research as part of the DARPA SyNAPSE program, TrueNorth is a neuromorphic computing platform built around a non-von Neumann network of neuro-synaptic cores. It enables the efficient, parallel processing of spiking neural networks. TrueNorth chips deliver high performance on tasks such as pattern recognition, sensor data processing, and cognitive computing, demonstrating the real-world potential of neuromorphic computing.

4.5.1. Frameworks and Programming Languages

Shuman et al. [81] introduced a software framework—implemented using emerging technologies—that enables the exploration of neuromorphic computing systems. They presented the design of this framework and its use for programming memristor-oxide-based neuromorphic hardware. This programming framework proposes a method for evaluating new neuromorphic devices and makes it easy to compare multiple neuromorphic systems. Finally, they discuss how the framework can be extended to neuromorphic architectures built from a variety of novel components and materials.
The main framework for neuromorphic computing is [81]
-
Neuromorphic framework.

4.5.2. Tools

Simulators

In summary, the main neuromorphic simulators are [82,83,84,85]
-
NEST: A simulator for spiking neural network models that focuses on the dynamics, size, and structure of neural systems rather than on exact neuron morphology.
-
Cortex: A specialized computing system created to replicate the architecture and operations of the brain’s cortex, especially its spiking neural networks. These simulators play a vital role in the field of computational neuroscience and in the advancement of sophisticated artificial intelligence.
-
System-level simulator.
-
MASTISK: An open-source versatile and flexible tool developed in MATLAB R2023b for the design exploration of dedicated neuromorphic hardware using nanodevices.
-
Xnet Event-Driven: A software simulator for memristive nanodevice-based neuromorphic hardware.
-
NeMo: It is a high-performance spiking neural network simulator which simulates the networks of Izhikevich neurons on CUDA-enabled GPUs.

Libraries

The main neuromorphic library is
-
Neko [86]: A modular, extensible, open-source Python library with backends for PyTorch and TensorFlow. Neko focuses on designing innovative learning algorithms in three areas: online local learning, probabilistic learning, and in-memory analog learning. Results show that Neko outperforms state-of-the-art algorithms in both accuracy and speed. It also provides tools for comparing gradients to facilitate the development of new algorithmic variants.

Testbed Experiments

The main neuromorphic testing environments are as follows (https://arxiv.org/html/2407.02353v1 access date: 15 June 2025):
-
SpiNNaker Platform: An open, massively parallel platform developed at the University of Manchester for the real-time, large-scale simulation of spiking neural networks.
-
BrainScaleS Platform: It utilizes physical silicon neurons produced on complete 8-inch silicon wafers, interconnecting 20 of these wafers within a cabinet alongside 48 FPGA-based communication modules. It runs computations at roughly 10,000 times real-time speed and employs spike-timing-dependent plastic synapses. Each wafer can accommodate around 200,000 neurons and 44 million synapses.
-
IBM TrueNorth: It can host one million very simple neurons or be reconfigured to trade off the number of neurons against neuron model complexity.
-
Intel Loihi: The most advanced neuromorphic chip for neuromorphic computing tests.

4.5.3. Use Cases

Some of the practical use cases of neuromorphic computing include the following:
-
Low-power edge AI;
-
Real-time pattern recognition;
-
Adaptive robotics;
-
Brain–computer interfaces (BCIs);
-
Cognitive computing systems;
-
Cybersecurity and anomaly detection;
-
Event-based vision (dynamic vision sensors);
-
Neuroscience and brain simulation;
-
Energy-efficient data centers.
The taxonomy of neuromorphic computing is shown in Figure 10.

4.6. In-Memory Computing

The modern computer design—known as the von Neumann architecture—consists of three components: the memory, processor, and bus. However, data transfers between the memory and the processor are often time- and energy-intensive, a problem exacerbated by the rapid rise in data-intensive AI workloads. These applications demand non-von Neumann approaches such as in-memory computing, in which many operations are performed in situ within the memory itself, using the physical properties of memory devices. In-memory computing leverages devices with intrinsic computational capabilities—such as memristors, phase-change memory (PCM), spin-transfer torque RAM (STT-RAM), and resistive RAM (RRAM) [87,88].
In-memory architectures aim to overcome the “memory bottleneck” by storing and processing data in the same place rather than shuttling it back and forth between the memory and the processor.

4.6.1. Definition

In-memory computing refers to an architectural paradigm in which data is kept wholly or partially in memory and computations are performed directly within that memory, with the goal of reducing data transfer latency between processor and memory.
For a clearer understanding, the comparison of traditional computing vs. in-memory architectures is depicted in Table 5.

4.6.2. Types of In-Memory Architectures

(a)
Software-level In-Memory Computing
-
Data reside in RAM and are processed directly (e.g., SAP HANA);
-
In-memory data stores such as Redis, MemSQL, and Apache Ignite.
(b)
Hardware-level In-Memory Computing
-
Processing-in-Memory (PIM):
Processing units are embedded within the memory chip.
Examples: UPMEM, Samsung PIM.
-
Near-Memory Computing (NMC):
Processors are located close to, but not within, the memory.
Lower energy consumption than traditional architectures, but higher than PIM.
-
Compute Express Link (CXL):
A new low-latency interface between the memory and processor.
Well suited to hybrid memory–processor architectures.
-
Hardware-Related Technologies:
HMC (Hybrid Memory Cube) and HBM (High Bandwidth Memory): 3D-stacked memories with high bandwidth for in-memory computing.
ReRAM, MRAM, and PCM: Non-volatile memories that can perform both storage and computation.
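The analog flavor of such in-memory computation can be sketched as follows: in a resistive crossbar, each cell stores a conductance, and applying row voltages makes each column collect the current I_j = Σ_i G_ij · V_i, so the array evaluates a matrix-vector product exactly where the data reside. An idealized NumPy sketch, with illustrative conductance values:

```python
import numpy as np

# Idealized resistive crossbar: each crosspoint stores a conductance G[i][j];
# applying voltages V on the rows makes column j collect the current
# I_j = sum_i G[i][j] * V_i  (Ohm's law + Kirchhoff's current law).
# The array thus computes a matrix-vector product "in place", in memory.
G = np.array([[0.5, 1.0, 0.2],
              [0.1, 0.4, 0.9]])     # conductances stored in the memory cells
V = np.array([1.0, 2.0])            # voltages applied on the word lines

I = V @ G                           # column currents = the MVM result
print(I)                            # -> [0.7 1.8 2.0]
```

Because the multiply-accumulate happens in the physics of the array, no operands cross the memory bus, which is precisely the von Neumann bottleneck this paradigm avoids.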

4.6.3. Frameworks and Programming Languages

Accurate and fast weather forecasting is a key challenge in high-performance computing (HPC). Jayant and Sumathi [89] focused on in-memory weather prediction using Apache Spark, chosen over Hadoop for its superior processing capability. First, they ran a Spark instance in an IPython notebook, then downloaded weather datasets from relevant sites into the notebook. ClimateSpark [90] is another distributed, in-memory framework designed to facilitate complex big data analyses and time-consuming computational tasks. It leverages Spark SQL and Apache Zeppelin to build a web portal that allows climate scientists to interact with climate data, analyses, and compute resources. The authors compared ClimateSpark with SciSpark and vanilla Spark, demonstrating that ClimateSpark effectively handles multidimensional, array-based data.
In summary, the main frameworks of in-memory computing are [89,90]
-
Spark;
-
ClimateSpark: An in-memory distributed computing framework for big climate data analytics.

4.6.4. Tools

Simulators

The main simulators of in-memory computing are
-
PIMSim [91]: A highly configurable platform for circuit-, architecture-, and system-level studies. It offers three implementation modes, trading speed for accuracy. PIMSim enables the detailed modeling of performance and energy for PIM instructions, compilers, in-memory processing logic, various storage devices, and memory coherence in PIM. Experimental results show acceptable accuracy compared to state-of-the-art PIM designs.
-
CIMSIM [92]: An open-source SystemC simulator that allows for the functional modeling of in-memory architectures and defines a set of nano-instructions independent of technology.

Analyzers

Zhu et al. [93] examined the impact of I/O on the performance of modern big data applications running on in-memory cluster computing frameworks such as Apache Spark. They selected the Genome Analysis Toolkit 4 (GATK4)—a Spark-based genomic analysis tool—and measured I/O effects using various HDD and SSD configurations while also varying the number of CPU cores to improve computational and I/O decisions. They claim to be the first to propose an I/O-aware analytical model that quantitatively captures I/O effects on application performance in the Spark in-memory computing framework.
The main analyzer of in-memory computing is [93]
-
Doppio.

4.6.5. Use Cases

Some of the practical use cases of in-memory computing are as follows:
-
Real-time data analytics;
-
Artificial intelligence and machine learning;
-
Big data processing;
-
Internet of Things (IoT);
-
Financial services;
-
Healthcare and genomics;
-
Real-time personalization;
-
Supply chain optimization;
-
Simulation and scientific computing;
-
Gaming and AR/VR.
Figure 11 presents the taxonomy of in-memory computing technology.

4.7. Serverless Computing

Serverless computing is a cloud computing execution paradigm in which developers build and run applications without managing servers. The cloud provider handles server provisioning, scaling, and maintenance, so developers can concentrate on business logic and pay only for the resources they actually use. The model is frequently described as event-driven: code execution is triggered by specific events and occurs only when required.
More formally, serverless computing is a cloud deployment paradigm in which the provider allocates machine resources on demand and manages the servers on behalf of the customer [94]. Resources are not held in volatile memory between invocations; instead, computations execute in short-lived bursts, results are written to disk, and compute resources are released when the application is idle. Billing is based on the resources the application actually consumes.
Serverless application designers do not plan for the capacity, configuration, management, maintenance, fault tolerance, or scaling of containers, virtual machines, or physical servers. Amazon Lambda, launched in 2014, is often credited with popularizing serverless architectures, though it was not the first implementation of the concept—for example, Google App Engine (2008) provided a similar platform for building and deploying apps without managing the underlying infrastructure. Serverless computing adds an extra layer of abstraction to cloud-computing paradigms, removing server-side management from developers’ concerns and letting them focus solely on application logic [95].
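The pay-per-use billing described above can be made concrete with a toy calculator modeled loosely on the GB-second scheme used by commercial FaaS platforms. The rates below are hypothetical illustrations, not any provider's actual prices.

```python
# Toy pay-per-use billing sketch for a FaaS platform. The rates are
# HYPOTHETICAL illustrations of the GB-second model, not real pricing.

PRICE_PER_GB_SECOND = 0.0000166667   # assumed compute rate
PRICE_PER_REQUEST = 0.0000002        # assumed per-invocation rate

def monthly_cost(invocations, avg_duration_ms, memory_mb):
    # GB-seconds = invocations x duration (s) x allocated memory (GB)
    gb_seconds = invocations * (avg_duration_ms / 1000) * (memory_mb / 1024)
    return gb_seconds * PRICE_PER_GB_SECOND + invocations * PRICE_PER_REQUEST

# 1 million invocations of a 200 ms, 512 MB function:
cost = monthly_cost(1_000_000, 200, 512)
print(f"${cost:.2f}")   # -> $1.87 under these assumed rates

# An idle application costs nothing -- the defining property of the model:
print(monthly_cost(0, 200, 512))   # -> 0.0
```

The second call is the key contrast with server-based deployment: with zero invocations, the bill is zero, because no resources are reserved between bursts.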

4.7.1. Key Concepts in Serverless Computing

-
Function-as-a-Service (FaaS): Developers upload small, stateless functions that execute in response to events such as HTTP requests, file uploads, or queue messages.
-
Event-Driven: The code executes only when a specific event occurs; once execution is completed, resources are freed.
-
Automated Resource Management: Users do not manage the CPU, RAM, scaling, or replication—these are entirely the provider’s responsibility.
-
Execution Unit (Function): Lightweight, stateless functions invoked in response to individual events.
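The FaaS and event-driven concepts above can be sketched as a tiny in-process dispatcher. This is plain Python, not a real serverless runtime: actual platforms wire events to functions in the same way, but across managed, auto-scaled infrastructure.

```python
# Minimal sketch of the event-driven FaaS model: stateless functions are
# registered against event types and run only when a matching event fires.
# An in-process toy, not a real serverless platform.

handlers = {}

def on(event_type):
    """Decorator registering a stateless function for an event type."""
    def register(fn):
        handlers.setdefault(event_type, []).append(fn)
        return fn
    return register

def fire(event_type, payload):
    """Invoke every handler bound to the event; no state survives the call."""
    return [fn(payload) for fn in handlers.get(event_type, [])]

@on("http.request")
def hello(payload):
    return {"status": 200, "body": f"Hello, {payload['name']}!"}

@on("file.uploaded")
def thumbnail(payload):
    return f"thumbnail queued for {payload['key']}"

print(fire("http.request", {"name": "HPC"}))
# -> [{'status': 200, 'body': 'Hello, HPC!'}]
print(fire("file.uploaded", {"key": "report.pdf"}))
# -> ['thumbnail queued for report.pdf']
```

The event names and payload fields are invented for illustration; the essential points are that each function is stateless, runs only when its event occurs, and holds no resources in between.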

4.7.2. Frameworks and Programming Languages

All major cloud providers—Microsoft, Google, and Amazon—offer serverless computing services in their public cloud portfolios. Serverless computing relies on programming frameworks that hide deployment complexity and simplify writing applications, automating tasks, and sharding data, while the underlying framework handles scheduling and fault tolerance.
The main frameworks of serverless computing are as follows:
-
Ripple [96], which allows single-machine applications to exploit serverless task parallelism.
-
Fission [97], an open-source serverless framework for Kubernetes focused on developer productivity and performance. Its core is written in Go, but it supports runtimes for Python, Node.js, Ruby, Bash, and PHP.
-
Kubeless [98], which lets developers deploy small code snippets without worrying about the underlying infrastructure.
-
Luna+Serverless [99], a study integrating the Luna language with a serverless model, extending its standard library and leveraging language features to provide a serverless API.
-
Kappa [100], a serverless programming framework that enables developers to write standard Python code, which Kappa transforms and runs in parallel via Lambda functions on the serverless platform.
-
OpenWhisk [100,101,102,103], an open-source project originally developed by IBM and later contributed to the Apache Incubator. Its programming model is built around three primitives: Action (stateless functions), Trigger (classes of events from various sources), and Rule (a link from a Trigger to an Action). The OpenWhisk controller automatically scales functions in response to demand.
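OpenWhisk's three primitives can be sketched in a few lines of plain Python. This is a conceptual model only; the real system exposes Actions, Triggers, and Rules through its REST API and CLI, not through classes like the hypothetical one below.

```python
# Conceptual sketch of OpenWhisk's programming model: an Action is a
# stateless function, a Trigger names a class of events, and a Rule links
# a Trigger to an Action so that the event causes the function to run.
# This illustrates the concepts only -- it is not the OpenWhisk API.

class Whisk:
    def __init__(self):
        self.actions = {}   # name -> stateless function (Action)
        self.rules = []     # (trigger_name, action_name) pairs (Rules)

    def action(self, name, fn):
        self.actions[name] = fn

    def rule(self, trigger, action):
        self.rules.append((trigger, action))

    def fire(self, trigger, event):
        # run every Action linked to this Trigger by a Rule
        return [self.actions[a](event)
                for (t, a) in self.rules if t == trigger]

wsk = Whisk()
wsk.action("greet", lambda ev: f"hello {ev['user']}")
wsk.rule("user.signup", "greet")
print(wsk.fire("user.signup", {"user": "ada"}))   # -> ['hello ada']
```

Decoupling Triggers from Actions via Rules is what lets one event class drive several functions, or one function serve several event classes, without changing the function code.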

4.7.3. Tools

Simulators

The authors of [104] designed an open-source simulation service that enables serverless application developers to optimize their Function-as-a-Service programs for cost and performance.
In 2020, the authors of [105] proposed OpenDC Serverless, the first open-source, trace-driven, configurable serverless simulator.
In summary, the main simulators of serverless computing are as follows [104,105]:
-
SimFaaS: A simulation platform that helps serverless application developers build optimized Function-as-a-Service applications.
-
OpenDC Serverless: The first simulator to integrate serverless and machine learning execution, both emerging services already offered by all major cloud providers.

Analyzers

Serverless computing analyzers, commonly referred to as Function-as-a-Service (FaaS) analyzers, assist developers in comprehending and enhancing their serverless applications by delivering insights regarding their performance and behavior. These tools provide a range of features, including code analysis, monitoring, debugging, and performance profiling, which empower developers to pinpoint bottlenecks, optimize resource utilization, and guarantee the reliability and efficiency of their serverless functions.
In summary, the main analyzers of serverless computing are as follows:
-
Amazon CloudWatch: A comprehensive monitoring service for collecting and tracking metrics, monitoring logs, setting alarms, and reacting to changes in AWS resources and applications.
-
Lumigo: A microservice monitoring and troubleshooting platform for serverless computing.
-
Epsagon: An open and composable observability and data visualization platform.

Testbed Experiments

The main testbed experiments of serverless computing are as follows:
-
CAPTAIN: A testbed for the co-simulation of sustainable and scalable serverless computing environments for AIoT-enabled systems.
-
SCOPE: A testbed for performance testing for serverless computing scenarios.

4.7.4. Use Cases

Some of the practical use cases of serverless computing are as follows:
-
Web and mobile backend services;
-
API backend and microservices;
-
Real-time file or data processing;
-
Real-time stream processing;
-
Chatbots and voice assistants;
-
Automation and scheduled tasks;
-
Continuous integration/continuous deployment (CI/CD);
-
IoT backend;
-
Scalable event-driven applications;
-
Proof of concept (PoC)/minimum viable products (MVP) development.
Figure 12 presents the taxonomy of serverless computing.

5. Statistical Analysis of Emerging HPC-Related Computing Technologies

As shown in Table 6, we analyzed the documents (journal articles, conference papers, books, and book chapters) indexed for each emerging computing technology in the Scopus database to see how these technologies have been addressed in recent years. We searched for the name of each technology in quotation marks, restricted the subject areas to computer science, engineering, physics and astronomy, chemistry, and materials science, and filtered by document type; the document counts in Table 6 refer to these subjects and types.
For example, up to 2025, we found only 45 journal articles addressing adiabatic computing, compared with 4953 on quantum computing. Among conference papers, quantum computing again topped the list with 2110 papers. In terms of books, nanocomputing, serverless computing, neuromorphic computing, adiabatic computing, and biological computing had few or no documents, while quantum computing had 136 up to 2025.
Finally, quantum computing had 26 book chapters up to 2025, while biological computing, nanocomputing, serverless computing, and adiabatic computing had none. Overall, up to 2025, in-memory computing, quantum computing, neuromorphic computing, and serverless computing received significantly more attention than the other emerging computing technologies. As further shown, these technologies are applied in areas such as computer science, engineering, physics and astronomy, chemistry, and materials science; among them, neuromorphic, biological, quantum, in-memory, and nanocomputing are used across all of the defined fields.

Open Research Challenges and Opportunities

This section addresses the opportunities and challenges associated with the adoption of emerging computing technologies.
Various research areas in the field of quantum computing deserve attention. One prominent challenge is optimal energy management in powerful supercomputers and cloud data centers, given their high energy consumption when solving complex global problems. With ongoing technological advances in the IT industry, energy consumption and greenhouse gas (GHG) emissions are rising dramatically, posing a serious threat to the environment [106]. Quantum machine learning, cybersecurity, and quantum chemistry are among the topics researchers can explore. Moreover, a transition from classical to quantum computing is needed to design the quantum Internet, which aims to enable the secure transmission of large amounts of data over very long distances [15,107].
Similarly, in nanocomputing, several issues require attention. Interconnection poses two main challenges: minimizing contact resistance between nanostructures and the external world, and managing the large number of wires required to connect such complex devices. In addition, integrating nanostructures into computer-aided design (CAD) tools requires the development of circuit models for these nanostructures [108].
Serverless computing currently faces various challenges, and several literature reviews have addressed them [109,110,111,112,113]. Some of the identified challenges are cross-domain, such as ensuring security and privacy in serverless applications; others are domain-specific, such as scheduling, pricing, caching, provider management, and function invocation. Since serverless computing is still in its early stages, existing development tools, ideas, and models are inadequate, which is a serious issue for programmers. On the other hand, serverless computing has many advantages, such as being more user-friendly for clients by eliminating deployment complexities. Additionally, these services are offered in some areas of cloud computing at reasonable prices, and new markets are emerging around them, indicating the rise of new business opportunities [114].
Although many simple biocomputers have already been built, their capabilities remain very limited compared with what advanced biocomputers are envisioned to achieve. Many researchers believe in the vast potential of biocomputers, but much work remains to realize it. One of the major challenges in biocomputing is hardware implementation: the current state of designing specialized (silicon-based) hardware suited to biocomputing must be evaluated against the ideal state, and implementing such systems is essential [115]. DNA computing is still in its early stages, and its future applications are expected to include treatments using nanorobotic systems, endogenous DNA information processing, and big data storage systems.
However, past and ongoing studies indicate that, despite some drawbacks, biological computing can greatly contribute to realizing ultra-fast supercomputers at the exascale or beyond. In particular, its impact on reducing energy consumption, protecting the environment, producing compact high-capacity hardware, and storing large volumes of data will be significant. Another emerging computing technology we addressed is in-memory computing. Emerging memory technologies play a key role in its development because traditional volatile memories do not meet the needs of this computing model. Key emerging non-volatile technologies include resistive RAM (ReRAM), phase-change memory (PCM), and magnetic RAMs such as spin-transfer torque MRAM (STT-MRAM) and spin-orbit torque MRAM (SOT-MRAM) [116].
Ref. [117] classified the main research challenges in neuromorphic computing into five domains: applications, algorithms, software, devices, and materials. They noted that all researchers must collaborate with materials scientists to customize innovative materials for various use cases.
We compared and summarized the main providers, challenges, benefits, applications, and future research directions of different HPC-related emerging technologies, as shown in Table 7.

6. Concluding Remarks and Future Research Areas

This paper presented a comprehensive review of the advancements, innovations, challenges, and opportunities associated with emerging HPC-related computing technologies. Technologies such as quantum computing, in-memory architectures, neuromorphic systems, nanoscale computing, adiabatic technologies, serverless computing, and biologically inspired approaches were analyzed in terms of their capabilities, architectures, application potential, and technological maturity.
Through this analysis, the potential benefits of these technologies—such as enhanced performance, reduced energy consumption, improved scalability, and real-time processing capabilities—were identified. At the same time, key limitations and challenges including high error rates, infrastructure costs, a lack of mature development tools, scalability issues, and compatibility with production environments were critically examined.
Although the main focus of this study was on hardware-related aspects, software components—including domain-specific programming languages, development frameworks, libraries, compilers, simulation tools, and experimental environments—were also reviewed and categorized to offer a holistic perspective for researchers. Another key feature of this study is the systematic and comparative analysis of the practical and research applicability of these technologies in domains such as artificial intelligence, scientific simulation, big data processing, cloud computing, and edge computing, thus providing a foundation for technological decision-making in the HPC domain.
Based on the findings of this study, the following directions in Table 8 are suggested as priorities for future research on emerging computing technologies in the HPC context:

Author Contributions

Conceptualization, E.A. and N.G. (Niloofar Gholipour); methodology, E.A. and D.M.; validation, N.G. (Niloofar Gholipour), D.M. and P.G.; formal analysis, N.G. (Niloofar Gholipour) and E.A.; investigation, P.G. and D.M.; resources, E.A., N.G. (Niloofar Gholipour), P.G. and N.G. (Neda Ghorbani); writing—original draft preparation, A.S., N.G. (Neda Ghorbani) and D.M.; writing—review and editing, E.A. and P.G.; supervision, E.A.; project administration, E.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Not applicable. The study does not report any data.

Acknowledgments

The authors thank the ICT Research Institute (ITRC) for its financial support during this research.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AGI: Artificial General Intelligence
API: Application Programming Interface
AR/VR: Augmented Reality/Virtual Reality
AWS: Amazon Web Services
BCI: Brain–Computer Interface
CAD: Computer-Aided Design
CI/CD: Continuous Integration/Continuous Delivery
CNT: Carbon Nanotube
CPU: Central Processing Unit
CXL: Compute Express Link
DNA: Deoxyribonucleic acid
FaaS: Function as a Service
GHG: Greenhouse Gas
GPU: Graphics Processing Unit
HBM: High Bandwidth Memory
HDD: Hard Disk Drive
HMC: Hybrid Memory Cube
HPC: High-Performance Computing
HPQC: High-Performance Quantum Computing
IoT: Internet of Things
IT: Information Technology
ML: Machine Learning
MRAM: Magneto-resistive Random Access Memory
MVP: Minimum Viable Product
NISQ: Noisy Intermediate-Scale Quantum
NMC: Near-Memory Computing
ODE: Ordinary Differential Equation
PCM: Phase-Change Memory
PIM: Processing In Memory
PoC: Proof of Concept
QFT: Quantum Fourier Transform
QKD: Quantum Key Distribution
QPU: Quantum Processing Unit
ReRAM: Resistive Random Access Memory
RNA: Ribonucleic Acid
SBML: Systems Biology Markup Language
SDNs: Software-Defined Networks
SOT-MRAM: Spin–Orbit Torque MRAM
SSD: Solid-State Drive
STT-MRAM: Spin-Transfer Torque MRAM
TRL: Technology Readiness Level

References

  1. Sterling, T.; Brodowicz, M.; Anderson, M. High Performance Computing: Modern Systems and Practices; Morgan Kaufmann: Burlington, MA, USA, 2017. [Google Scholar]
  2. Li, L. High Performance Computing Applied to Cloud Computing. Ph.D. Thesis, Finland, 2015. Available online: https://www.theseus.fi/handle/10024/95096 (accessed on 15 June 2025).
  3. Abas, M.F.b.; Singh, B.; Ahmad, K.A. High Performance Computing and Its Application in Computational Biomimetics. In High Performance Computing in Biomimetics: Modeling, Architecture and Applications; Springer: Berlin/Heidelberg, Germany, 2024; pp. 21–46. [Google Scholar]
  4. Yin, F.; Shi, F. A comparative survey of big data computing and HPC: From a parallel programming model to a cluster architecture. Int. J. Parallel Program. 2022, 50, 27–64. [Google Scholar] [CrossRef]
  5. Raj, R.K.; Romanowski, C.J.; Impagliazzo, J.; Aly, S.G.; Becker, B.A.; Chen, J.; Ghafoor, S.; Giacaman, N.; Gordon, S.I.; Izu, C.; et al. High performance computing education: Current challenges and future directions. In Proceedings of the Working Group Reports on Innovation and Technology in Computer Science Education, Trondheim, Norway, 25 December 2020; pp. 51–74. [Google Scholar]
  6. Karanikolaou, E.; Bekakos, M. Action: A New Metric for Evaluating the Energy Efficiency on High Performance Computing Platforms (ranked on Green500 List). WSEAS Trans. Comput. 2022, 21, 23–30. [Google Scholar] [CrossRef]
  7. Al-Hashimi, H.M. Turing, von Neumann, and the computational architecture of biological machines. Proc. Natl. Acad. Sci. USA 2023, 120, e2220022120. [Google Scholar] [CrossRef] [PubMed]
  8. Eberbach, E.; Goldin, D.; Wegner, P. Turing’s ideas and models of computation. In Alan Turing: Life and Legacy of a Great Thinker; Springer: Berlin/Heidelberg, Germany, 2004; pp. 159–194. [Google Scholar]
  9. Gill, S.S.; Kumar, A.; Singh, H.; Singh, M.; Kaur, K.; Usman, M.; Buyya, R. Quantum computing: A taxonomy, systematic review and future directions. Softw. Pract. Exp. 2022, 52, 66–114. [Google Scholar] [CrossRef]
  10. Perrier, E. Ethical quantum computing: A roadmap. arXiv 2021, arXiv:2102.00759. [Google Scholar]
  11. Gill, S.S. Quantum and blockchain based Serverless edge computing: A vision, model, new trends and future directions. Internet Technol. Lett. 2024, 7, e275. [Google Scholar] [CrossRef]
  12. Aslanpour, M.S.; Toosi, A.N.; Cicconetti, C.; Javadi, B.; Sbarski, P.; Taibi, D.; Assuncao, M.; Gill, S.S.; Gaire, R.; Dustdar, S. Serverless edge computing: Vision and challenges. In Proceedings of the 2021 Australasian Computer Science Week Multiconference, Dunedin, New Zealand, 1–5 February 2021. [Google Scholar]
  13. Dagdia, Z.C.; Avdeyev, P.; Bayzid, M.S. Biological computation and computational biology: Survey, challenges, and discussion. Artif. Intell. Rev. 2021, 54, 4169–4235. [Google Scholar] [CrossRef]
  14. Staudigl, F.; Merchant, F.; Leupers, R. A survey of neuromorphic computing-in-memory: Architectures, simulators, and security. IEEE Des. Test 2021, 39, 90–99. [Google Scholar] [CrossRef]
  15. Sepúlveda, S.; Cravero, A.; Fonseca, G.; Antonelli, L. Systematic review on requirements engineering in quantum computing: Insights and future directions. Electronics 2024, 13, 2989. [Google Scholar] [CrossRef]
  16. Czarnul, P.; Proficz, J.; Krzywaniak, A. Energy-aware high-performance computing: Survey of state-of-the-art tools, techniques, and environments. Sci. Program. 2019, 2019, 8348791. [Google Scholar] [CrossRef]
  17. Rico-Gallego, J.A.; Díaz-Martín, J.C.; Manumachu, R.R.; Lastovetsky, A.L. A survey of communication performance models for high-performance computing. ACM Comput. Surv. (CSUR) 2019, 51, 1–36. [Google Scholar] [CrossRef]
  18. Singh, G.; Chelini, L.; Corda, S.; Awan, A.J.; Stuijk, S.; Jordans, R.; Corporaal, H.; Boonstra, A.J. Near-memory computing: Past, present, and future. Microprocess. Microsyst. 2019, 71, 102868. [Google Scholar] [CrossRef]
  19. Li, J.; Li, N.; Zhang, Y.; Wen, S.; Du, W.; Chen, W.; Ma, W. A survey on quantum cryptography. Chin. J. Electron. 2018, 27, 223–228. [Google Scholar] [CrossRef]
  20. Li, J.; Wang, S.; Rudinac, S.; Osseyran, A. High-performance computing in healthcare: An automatic literature analysis perspective. J. Big Data 2024, 11, 61. [Google Scholar] [CrossRef]
  21. Garewal, I.K.; Mahamuni, C.V.; Jha, S. Emerging Applications and Challenges in Quantum Computing: A Literature Survey. In Proceedings of the 2024 International Conference on Artificial Intelligence, Big Data, Computing and Data Communication Systems (icABCD), Port Louis, Mauritius, 1–2 August 2024; pp. 1–12. [Google Scholar] [CrossRef]
  22. Kudithipudi, D.; Schuman, C.; Vineyard, C.M.; Pandit, T.; Merkel, C.; Kubendran, R.; Aimone, J.B.; Orchard, G.; Mayr, C.; Benosman, R.; et al. Neuromorphic computing at scale. Nature 2025, 637, 801–812. [Google Scholar] [CrossRef]
  23. Duarte, L.T.; Deville, Y. Quantum-Assisted Machine Learning by Means of Adiabatic Quantum Computing. In Proceedings of the 2024 IEEE Mediterranean and Middle-East Geoscience and Remote Sensing Symposium (M2GARSS), Oran, Algeria, 15–17 April 2024; pp. 371–375. [Google Scholar] [CrossRef]
  24. Al Abdul Wahid, S.; Asad, A.; Mohammadi, F. A Survey on Neuromorphic Architectures for Running Artificial Intelligence Algorithms. Electronics 2024, 13, 2963. [Google Scholar] [CrossRef]
  25. Gyongyosi, L.; Imre, S. A survey on quantum computing technology. Comput. Sci. Rev. 2019, 31, 51–71. [Google Scholar] [CrossRef]
  26. Combarro, E.F.; Vallecorsa, S.; Rodríguez-Muñiz, L.J.; AguilarGonzález, Á.; Ranilla, J.; Di Meglio, A. A report on teaching a series of online lectures on quantum computing from cern. J. Supercomput. 2021, 77, 14405–14435. [Google Scholar] [CrossRef]
  27. Outeiral, C.; Strahm, M.; Shi, J.; Morris, G.M.; Benjamin, S.C.; Deane, C.M. The prospects of quantum computing in computational molecular biology. Wiley Interdiscip. Rev. Comput. Mol. Sci. 2021, 11, 1481. [Google Scholar] [CrossRef]
  28. Green, A.S.; Lumsdaine, P.L.; Ross, N.J.; Selinger, P.; Valiron, B. Quipper: A scalable quantum programming language. In Proceedings of the 34th ACM SIGPLAN Conference on Programming Language Design and Implementation, Seattle, WA, USA, 16–19 June 2013; pp. 333–342. [Google Scholar]
  29. Svore, K.; Geller, A.; Troyer, M.; Azariah, J.; Granade, C.; Heim, B.; Kliuchnikov, V.; Mykhailova, M.; Paz, A.; Roetteler, M. Q# enabling scalable quantum computing and development with a high-level dsl. In Proceedings of the Real World Domain Specific Languages Workshop, Vienna, Austria, 24 February 2018; pp. 1–10. [Google Scholar]
  30. Wecker, D.; Svore, K.M. Liqui |⟩: A software design architecture and domain-specific language for quantum computing. arXiv 2014, arXiv:1402.4467. [Google Scholar]
  31. Paykin, J.; Rand, R.; Zdancewic, S. Qwire: A core language for quantum circuits. ACM SIGPLAN Not. 2017, 52, 846–858. [Google Scholar] [CrossRef]
  32. Khammassi, N.; Ashraf, I.; Someren Jv Nane, R.; Krol, A.; Rol, M.A.; Lao, L.; Bertels, K.; Almudever, C.G. Openql: A portable quantum programming framework for quantum accelerators. arXiv 2020, arXiv:2005.13283. [Google Scholar] [CrossRef]
  33. Steiger, D.S.; Häner, T.; Troyer, M. Projectq: An open source software framework for quantum computing. Quantum 2018, 2, 49. [Google Scholar] [CrossRef]
  34. IBM: Qiskit: An Open-Source Quantum Computing Framework. Available online: https://qiskit.org/ (accessed on 8 June 2024).
  35. Bergholm, V.; Izaac, J.; Schuld, M.; Gogolin, C.; Ahmed, S.; Ajith, V.; Alam, M.S.; Alonso-Linaje, G.; AkashNarayanan, B.; Asadi, A.; et al. Pennylane: Automatic differentiation of hybrid quantum-classical computations. arXiv 2018, arXiv:1811.04968. [Google Scholar]
  36. Google: Cirq: A Python Library for Quantum Circuits. Available online: https://github.com/quantumlib/Cirq/blob/main/cirq-google/cirq_google/api/v2/program.proto (accessed on 8 June 2024).
  37. Ömer, B. A Procedural Formalism for Quantum Computing; Technical University of Vienna: Vienna, Austria, 1998. [Google Scholar]
  38. Liu, S.; Wang, X.; Zhou, L.; Guan, J.; Li, Y.; He, Y.; Duan, R.; Ying, M. Q|si⟩: A quantum programming environment. In Symposium on Real-Time and Hybrid Systems; Springer: Berlin/Heidelberg, Germany, 2018; pp. 133–164. [Google Scholar]
  39. Krämer, S.; Plankensteiner, D.; Ostermann, L.; Ritsch, H. Quantumoptics. jl: A julia framework for simulating open quantum systems. Comput. Phys. Commun. 2018, 227, 109–116. [Google Scholar]
  40. Bian, H.; Huang, J.; Dong, R.; Guo, Y.; Wang, X. Hpqc: A new efficient quantum computing simulator. In Proceedings of the International Conference on Algorithms and Architectures for Parallel Processing, New York, NY, USA, 2–4 October 2020; Springer: Berlin/Heidelberg, Germany, 2020; pp. 111–125. [Google Scholar]
  41. Ardelean, S.M.; Udrescu, M. Qc|pp⟩: A behavioral quantum computing simulation library. In Proceedings of the 2018 IEEE 12th International Symposium on Applied Computational Intelligence and Informatics (SACI), Timisoara, Romania, 17–19 May 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 000437–000442. [Google Scholar]
  42. Gheorghiu, V. Quantum++: A modern c++ quantum computing library. PLoS ONE 2018, 13, 0208073. [Google Scholar] [CrossRef]
  43. Javadi-Abhari, A.; Patil, S.; Kudrow, D.; Heckey, J.; Lvov, A.; Chong, F.T.; Martonosi, M. Scaffcc: A framework for compilation and analysis of quantum computing programs. In Proceedings of the 11th ACM Conference on Computing Frontiers, Cagliari, Italy, 20–22 May 2014; pp. 1–10. [Google Scholar]
  44. Sivarajah, S.; Dilkes, S.; Cowtan, A.; Simmons, W.; Edgington, A.; Duncan, R. T|ket⟩: A retargetable compiler for nisq devices. Quantum Sci. Technol. 2020, 6, 014003. [Google Scholar] [CrossRef]
  45. Seth, P.; Krivenko, I.; Ferrero, M.; Parcollet, O. Triqs/cthyb: A continuous-time quantum monte carlo hybridisation expansion solver for quantum impurity problems. Comput. Phys. Commun. 2016, 200, 274–284. [Google Scholar] [CrossRef]
  46. Kawamura, M.; Yoshimi, K.; Misawa, T.; Yamaji, Y.; Todo, S.; Kawashima, N. Quantum lattice model solver hϕ. Comput. Phys. Commun. 2017, 217, 180–192. [Google Scholar] [CrossRef]
  47. Ma, L.; Tang, X.; Slattery, O.; Battou, A. A testbed for quantum communication and quantum networks. In Quantum Information Science, Sensing, and Computation XI; NIST: Baltimore, MD, USA, 2019; Volume 10984, p. 1098407. [Google Scholar]
  48. Clark, S.M.; Lobser, D.; Revelle, M.; Yale, C.G.; Bossert, D.; Burch, A.D.; Chow, M.N.; Hogle, C.W.; Ivory, M.; Pehr, J.; et al. Engineering the quantum scientific computing open user testbed (qscout): Design details and user guide. arXiv 2021, arXiv:2104.00759. [Google Scholar]
  49. Anuar, N.; Takahashi, Y.; Sekine, T. Adiabatic logic versus cmos for low power applications. Proc. ITC-CSCC 2009, 2009, 302–305. [Google Scholar]
  50. Denker, J.S. A review of adiabatic computing. In Proceedings of the 1994 IEEE Symposium on Low Power Electronics, Cambridge, MA, USA, 10–12 October 1994; IEEE: Piscataway, NJ, USA, 1994; pp. 94–97. [Google Scholar]
  51. Johnson, M.W.; Amin, M.H.; Gildert, S.; Lanting, T.; Hamze, F.; Dickson, N.; Harris, R.; Berkley, A.J.; Johansson, J.; Bunyk, P.; et al. Quantum annealing with manufactured spins. Nature 2011, 473, 194–198. [Google Scholar] [CrossRef] [PubMed]
  52. Koruga, D.L. Biocomputing. In Proceedings of the Twenty-Fourth Annual Hawaii International Conference on System Sciences, Kauai, HI, USA, 8–11 January 1991; IEEE: Piscataway, NJ, USA, 1991; Volume 1, pp. 269–275. [Google Scholar]
  53. Lopez, C.F.; Muhlich, J.L.; Bachman, J.A.; Sorger, P.K. Programming biological models in python using pysb. Mol. Syst. Biol. 2013, 9, 646. [Google Scholar] [CrossRef] [PubMed]
  54. Kunzmann, P.; Hamacher, K. Biotite: A unifying open-source computational biology framework in python. BMC Bioinform. 2018, 19, 346. [Google Scholar] [CrossRef] [PubMed]
  55. Lubbock, A.L.; Lopez, C.F. Programmatic modeling for biological systems. Curr. Opin. Syst. Biol. 2021, 27, 100343. [Google Scholar] [CrossRef]
  56. Hemmerich, J.; Tenhaef, N.; Wiechert, W.; Noack, S. Pyfoomb: Python framework for object-oriented modeling of bioprocesses. Eng. Life Sci. 2021, 21, 242–257. [Google Scholar] [CrossRef]
  57. Manolakos, E.S.; Kouskoumvekakis, E. Stochsocs: High performance biocomputing simulations for large scale systems biology. In Proceedings of the 2017 International Conference on High Performance Computing & Simulation (HPCS), Genoa, Italy, 17–21 July 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 921–928. [Google Scholar]
  58. Herajy, M.; Liu, F.; Rohr, C.; Heiner, M. Snoopy’s hybrid simulator: A tool to construct and simulate hybrid biological models. BMC Syst. Biol. 2017, 11, 71. [Google Scholar] [CrossRef]
  59. Cer, R.Z.; Herrera-Galeano, J.E.; Anderson, J.J.; Bishop-Lilly, K.A.; Mokashi, V.P. Mirna temporal analyzer (mirnata): A bioinformatics tool for identifying differentially expressed micrornas in temporal studies using normal quantile transformation. Gigascience 2014, 3, 2047–2217. [Google Scholar] [CrossRef]
  60. Medley, J.K.; Teo, J.; Woo, S.S.; Hellerstein, J.; Sarpeshkar, R.; Sauro, H.M. A compiler for biological networks on silicon chips. PLoS Comput. Biol. 2020, 16, 1008063. [Google Scholar] [CrossRef]
  61. Michail, D.; Kinable, J.; Naveh, B.; Sichi, J.V. Jgrapht—A java library for graph data structures and algorithms. ACM Trans. Math. Softw. (TOMS) 2020, 46, 1–29. [Google Scholar] [CrossRef]
  62. Somogyi, E.T.; Bouteiller, J.-M.; Glazier, J.A.; König, M.; Medley, J.K.; Swat, M.H.; Sauro, H.M. Libroadrunner: A high performance sbml simulation and analysis library. Bioinformatics 2015, 31, 3315–3321. [Google Scholar] [CrossRef] [PubMed]
  63. Yadav, R.; Dixit, C.; Trivedi, S.K. Nanotechnology and nano computing. Int. J. Res. Appl. Sci. Eng. Technol. 2017, 5, 531–535. [Google Scholar] [CrossRef]
  64. Jadhav, S.S.; Jadhav, S.V. Application of nanotechnology in modern computers. In Proceedings of the 2018 International Conference on Circuits and Systems in Digital Enterprise Technology (ICCSDET), Kottayam, India, 21–22 December 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 1–6. [Google Scholar]
  65. Roco, M.C.; Bainbridge, W.S. Overview converging technologies for improving human performance. In Converging Technologies for Improving Human Performance; Springer: Berlin/Heidelberg, Germany, 2003; pp. 1–27. [Google Scholar]
  66. Schuller, I.K.; Stevens, R.; Pino, R.; Pechan, M. Neuromorphic Computing–From Materials Research to Systems Architecture Roundtable; Technical report; USDOE Office of Science (SC): Washington, DC, USA, 2015.
  67. Walter, M.; Wille, R.; Torres, F.S.; Große, D.; Drechsler, R. Fiction: An open-source framework for the design of field-coupled nanocomputing circuits. arXiv 2019, arXiv:1905.02477. [Google Scholar]
  68. Soeken, M.; Riener, H.; Haaswijk, W.; Testa, E.; Schmitt, B.; Meuli, G.; Mozafari, F.; De Micheli, G. The epfl logic synthesis libraries. arXiv 2018, arXiv:1805.05121. [Google Scholar]
  69. Garlando, U.; Walter, M.; Wille, R.; Riente, F.; Torres, F.S.; Drechsler, R. ToPoliNano and fiction: Design tools for field-coupled nanocomputing. In Proceedings of the 2020 23rd Euromicro Conference on Digital System Design (DSD), Kranj, Slovenia, 26–28 August 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 408–415. [Google Scholar]
  70. Singh, N.; Ching, D. Auto-based bdec computational modeling framework for fault tolerant nano-computing. Appl. Math. Sci. 2015, 9, 761–771. [Google Scholar] [CrossRef]
  71. Abdullah-Al-Shafi, M.; Rahman, Z. Analysis and modeling of sequential circuits in QCA nano computing: RAM and SISO register study. Solid State Electron. Lett. 2019, 1, 73–83. [Google Scholar] [CrossRef]
  72. Walus, K.; Dysart, T.J.; Jullien, G.A.; Budiman, R.A. QCADesigner: A rapid design and simulation tool for quantum-dot cellular automata. IEEE Trans. Nanotechnol. 2004, 3, 26–31. [Google Scholar] [CrossRef]
  73. Cowburn, R.; Welland, M. Room temperature magnetic quantum cellular automata. Science 2000, 287, 1466–1468. [Google Scholar] [CrossRef]
  74. Niemier, M.; Bernstein, G.H.; Csaba, G.; Dingler, A.; Hu, X.; Kurtz, S.; Liu, S.; Nahas, J.; Porod, W.; Siddiq, M.; et al. Nanomagnet logic: Progress toward system-level integration. J. Phys. Condens. Matter 2011, 23, 493202. [Google Scholar] [CrossRef]
  75. Soares, T.R.; Rahmeier, J.G.N.; De Lima, V.C.; Lascasas, L.; Melo, L.G.C.; Neto, O.P.V. NMLSim: A nanomagnetic logic (NML) circuit designer and simulation tool. J. Comput. Electron. 2018, 17, 1370–1381. [Google Scholar] [CrossRef]
  76. Ribeiro, M.A.; Chaves, J.F.; Neto, O.P.V. Field-Coupled Nanocomputing Energy Analysis Tool, Sforum, Microelectronics Students Forum. 2016. Available online: https://sbmicro.org.br/sforum-eventos/sforum2016/19.pdf (accessed on 12 May 2025).
  77. Bos, P.M.; Gottardo, S.; Scott-Fordsmand, J.J.; Van Tongeren, M.; Semenzin, E.; Fernandes, T.F.; Hristozov, D.; Hund-Rinke, K.; Hunt, N.; Irfan, M.-A.; et al. The marina risk assessment strategy: A flexible strategy for efficient information collection and risk assessment of nanomaterials. Int. J. Environ. Res. Public Health 2015, 12, 15007–15021. [Google Scholar] [CrossRef] [PubMed]
  78. Chiu, H.N.; Ng, S.S.; Retallick, J.; Walus, K. Poissolver: A tool for modelling silicon dangling bond clocking networks. In Proceedings of the 2020 IEEE 20th International Conference on Nanotechnology (IEEE-NANO), Montreal, QC, Canada, 29–31 July 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 134–139. [Google Scholar]
  79. Bersuker, G.; Mason, M.; Jones, K.L. Neuromorphic computing: The potential for high-performance processing in space. Game Change 2018, 1–12. Available online: https://csps.aerospace.org/sites/default/files/2021-08/Bersuker_NeuromorphicComputing_12132018.pdf (accessed on 15 June 2025).
  80. Furber, S.; Lester, D.R.; Plana, L.A.; Garside, J.; Painkras, E.; Temple, S.; Brown, A.; Kendall, R.; Azhar, F.; Bainbridge, J.; et al. The SpiNNaker project. Proc. IEEE 2013, 102, 652–665. [Google Scholar] [CrossRef]
  81. Schuman, C.D.; Plank, J.S.; Rose, G.S.; Chakma, G.; Wyer, A.; Bruer, G.; Laanait, N. A programming framework for neuromorphic systems with emerging technologies. In Proceedings of the 4th ACM International Conference on Nanoscale Computing and Communication, Washington, DC, USA, 27–29 September 2017; pp. 1–7. [Google Scholar]
  82. Bichler, O.; Roclin, D.; Gamrat, C.; Querlioz, D. Design exploration methodology for memristor-based spiking neuromorphic architectures with the xnet event-driven simulator. In Proceedings of the 2013 IEEE/ACM International Symposium on Nanoscale Architectures (NANOARCH), New York, NY, USA, 15–17 July 2013; IEEE: Piscataway, NJ, USA, 2013; pp. 7–12. [Google Scholar]
  83. Bhattacharya, T.; Parmar, V.; Suri, M. Mastisk: Simulation framework for design exploration of neuromorphic hardware. In Proceedings of the 2018 International Joint Conference on Neural Networks (IJCNN), Rio de Janeiro, Brazil, 8–13 July 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 1–9. [Google Scholar]
  84. Plagge, M.; Carothers, C.D.; Gonsiorowski, E.; McGlohon, N. NeMo: A massively parallel discrete-event simulation model for neuromorphic architectures. ACM Trans. Model. Comput. Simul. (TOMACS) 2018, 28, 1–25. [Google Scholar] [CrossRef]
  85. Lee, M.K.F.; Cui, Y.; Somu, T.; Luo, T.; Zhou, J.; Tang, W.T.; Wong, W.-F.; Goh, R.S.M. A system-level simulator for rram-based neuromorphic computing chips. ACM Trans. Archit. Code Optim. (TACO) 2019, 15, 1–24. [Google Scholar] [CrossRef]
  86. Zhao, Z.; Wycoff, N.; Getty, N.; Stevens, R.; Xia, F. Neko: A library for exploring neuromorphic learning rules. arXiv 2021, arXiv:2105.00324. [Google Scholar]
  87. Mohamed, K.S. Neuromorphic Computing and Beyond; Springer: Berlin/Heidelberg, Germany, 2020. [Google Scholar]
  88. Sebastian, A.; Le Gallo, M.; Khaddam-Aljameh, R.; Eleftheriou, E. Memory devices and applications for in-memory computing. Nat. Nanotechnol. 2020, 15, 529–544. [Google Scholar] [CrossRef]
  89. Jayanthi, D.; Sumathi, G. Weather data analysis using Spark—An in-memory computing framework. In Proceedings of the 2017 Innovations in Power and Advanced Computing Technologies (i-PACT), Vellore, India, 21–22 April 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 1–5. [Google Scholar]
  90. Hu, F.; Yang, C.; Schnase, J.L.; Duffy, D.Q.; Xu, M.; Bowen, M.K.; Lee, T.; Song, W. ClimateSpark: An in-memory distributed computing framework for big climate data analytics. Comput. Geosci. 2018, 115, 154–166. [Google Scholar] [CrossRef]
  91. Xu, S.; Chen, X.; Wang, Y.; Han, Y.; Qian, X.; Li, X. PIMSim: A flexible and detailed processing-in-memory simulator. IEEE Comput. Archit. Lett. 2018, 18, 6–9. [Google Scholar] [CrossRef]
  92. BanaGozar, A.; Vadivel, K.; Stuijk, S.; Corporaal, H.; Wong, S.; Lebdeh, M.A.; Yu, J.; Hamdioui, S. CIM-SIM: Computation in memory simulator. In Proceedings of the 22nd International Workshop on Software and Compilers for Embedded Systems, Sankt Goar, Germany, 27–28 May 2019; pp. 1–4. [Google Scholar]
  93. Zhou, P.; Ruan, Z.; Fang, Z.; Shand, M.; Roazen, D.; Cong, J. Doppio: I/O-aware performance analysis, modeling and optimization for in-memory computing framework. In Proceedings of the 2018 IEEE International Symposium on Performance Analysis of Systems and Software (ISPASS), Belfast, UK, 2–4 April 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 22–32. [Google Scholar]
  94. Banaei, A.; Sharifi, M. ETAS: Predictive scheduling of functions on worker nodes of the Apache OpenWhisk platform. J. Supercomput. 2022, 78, 5358–5393. [Google Scholar] [CrossRef]
  95. Hassan, H.B.; Barakat, S.A.; Sarhan, Q.I. Survey on serverless computing. J. Cloud Comput. 2021, 10, 1–29. [Google Scholar] [CrossRef]
  96. Joyner, S.; MacCoss, M.; Delimitrou, C.; Weatherspoon, H. Ripple: A practical declarative programming framework for serverless compute. arXiv 2020, arXiv:2001.00222. [Google Scholar]
  97. Fission Open-Source Software. 2019. Available online: https://github.com/fission/fission (accessed on 17 June 2025).
  98. Kubeless Open-Source Software. 2018. Available online: https://github.com/vmware-archive/kubeless/ (accessed on 17 June 2025).
  99. Moczurad, P.; Malawski, M. Visual-textual framework for serverless computation: A luna language approach. In Proceedings of the 2018 IEEE/ACM International Conference on Utility and Cloud Computing Companion (UCC Companion), Zurich, Switzerland, 17–20 December 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 169–174. [Google Scholar]
  100. Zhang, W.; Fang, V.; Panda, A.; Shenker, S. Kappa: A programming framework for serverless computing. In Proceedings of the 11th ACM Symposium on Cloud Computing, Virtual Event, USA, 19–20 October 2020; pp. 328–343. [Google Scholar]
  101. Mohanty, S.K.; Premsankar, G.; Di Francesco, M. An evaluation of open source serverless computing frameworks. In Proceedings of the IEEE International Conference on Cloud Computing Technology and Science, Naples, Italy, 4–6 December 2018; pp. 115–120. [Google Scholar]
  102. OpenWhisk. 2020. Available online: https://github.com/apache/incubator-openwhisk (accessed on 15 June 2025).
  103. Baldini, I.; Castro, P.; Cheng, P.; Fink, S.; Ishakian, V.; Mitchell, N.; Muthusamy, V.; Rabbah, R.; Suter, P. Cloud-native, event-based programming for mobile applications. In Proceedings of the International Conference on Mobile Software Engineering and Systems, Austin, TX, USA, 14–22 May 2016; pp. 287–288. [Google Scholar]
  104. Mahmoudi, N.; Khazaei, H. SimFaaS: A performance simulator for serverless computing platforms. arXiv 2021, arXiv:2102.08904. [Google Scholar]
  105. Jounaid, S. OpenDC Serverless: Design, Implementation and Evaluation of a FaaS Platform Simulator. Ph.D. Thesis, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands, 2020. [Google Scholar]
  106. Khattar, N.; Sidhu, J.; Singh, J. Toward energy-efficient cloud computing: A survey of dynamic power management and heuristics-based optimization techniques. J. Supercomput. 2019, 75, 4750–4810. [Google Scholar] [CrossRef]
  107. Córcoles, A.D.; Kandala, A.; Javadi-Abhari, A.; McClure, D.T.; Cross, A.W.; Temme, K.; Nation, P.D.; Steffen, M.; Gambetta, J.M. Challenges and opportunities of near-term quantum computing systems. arXiv 2019, arXiv:1910.02894. [Google Scholar] [CrossRef]
  108. Ojha, A.K. Nano-electronics and nano-computing: Status, prospects, and challenges. In Region 5 Conference: Annual Technical and Leadership Workshop; IEEE: Piscataway, NJ, USA, 2004; pp. 85–91. [Google Scholar]
  109. Baldini, I.; Castro, P.; Chang, K.; Cheng, P.; Fink, S.; Ishakian, V.; Mitchell, N.; Muthusamy, V.; Rabbah, R.; Slominski, A.; et al. Serverless computing: Current trends and open problems. In Research Advances in Cloud Computing; Springer: Berlin/Heidelberg, Germany, 2017; pp. 1–20. [Google Scholar]
  110. Buyya, R.; Srirama, S.N.; Casale, G.; Calheiros, R.; Simmhan, Y.; Varghese, B.; Gelenbe, E.; Javadi, B.; Vaquero, L.M.; Netto, M.A.; et al. A manifesto for future generation cloud computing: Research directions for the next decade. ACM Comput. Surv. (CSUR) 2018, 51, 1–38. [Google Scholar] [CrossRef]
  111. Castro, P.; Ishakian, V.; Muthusamy, V.; Slominski, A. The server is dead, long live the server: Rise of serverless computing, overview of current state and future trends in research and industry. arXiv 2019, arXiv:1906.02888. [Google Scholar]
  112. Wooley, J.C.; Lin, H.S. Committee on Frontiers at the Interface of Computing and Biology; National Academies Press: Washington, DC, USA, 2005. [Google Scholar]
  113. Verma, N.; Jia, H.; Valavi, H.; Tang, Y.; Ozatay, M.; Chen, L.-Y.; Zhang, B.; Deaville, P. In-memory computing: Advances and prospects. IEEE Solid-State Circuits Mag. 2019, 11, 43–55. [Google Scholar] [CrossRef]
  114. Schuman, C.D.; Potok, T.E.; Patton, R.M.; Birdwell, J.D.; Dean, M.E.; Rose, G.S.; Plank, J.S. A survey of neuromorphic computing and neural networks in hardware. arXiv 2017, arXiv:1705.06963. [Google Scholar]
  115. Adhianto, L.; Anderson, J.; Mellor, J. Refining HPCToolkit for application performance analysis at exascale. Int. J. High Perform. Comput. Appl. 2024, 38, 612–632. [Google Scholar] [CrossRef]
  116. Ellingson, S.; Pallez, G. Result-scalability: Following the evolution of selected social impact of HPC. Int. J. High Perform. Comput. Appl. 2025, 10943420251338168. [Google Scholar] [CrossRef]
  117. Chou, J.; Chung, W.-C. Cloud Computing and High Performance Computing (HPC) Advances for Next Generation Internet. Future Internet 2024, 16, 465. [Google Scholar] [CrossRef]
  118. Yang, Y.; Zhu, X.; Ma, Z.; Hu, H.; Chen, T.; Li, W.; Xu, J.; Xu, L.; Chen, K. Artificial HfO2/TiOx Synapses with Controllable Memory Window and High Uniformity for Brain-Inspired Computing. Nanomaterials 2023, 13, 605. [Google Scholar] [CrossRef]
  119. Yang, J.; Abraham, A. Analyzing the Features, Usability, and Performance of Deploying a Containerized Mobile Web Application on Serverless Cloud Platforms. Future Internet 2024, 16, 475. [Google Scholar] [CrossRef]
  120. Matsuo, R.; Elgaradiny, A.; Corradi, F. Unsupervised Classification of Spike Patterns with the Loihi Neuromorphic Processor. Electronics 2024, 13, 3203. [Google Scholar] [CrossRef]
  121. Kang, P. Programming for High-Performance Computing on Edge Accelerators. Mathematics 2023, 11, 1055. [Google Scholar] [CrossRef]
  122. Chen, S.; Dai, Y.; Liu, L.; Yu, X. Optimizing Data Parallelism for FM-Based Short-Read Alignment on the Heterogeneous Non-Uniform Memory Access Architectures. Future Internet 2024, 16, 217. [Google Scholar] [CrossRef]
  123. Silva, C.A.; Vilaça, R.; Pereira, A.; Bessa, R.J. A review on the decarbonization of high-performance computing centers. Renew. Sustain. Energy Rev. 2024, 189, 114019. [Google Scholar] [CrossRef]
  124. Liu, H.; Zhai, J. Carbon Emission Modeling for High-Performance Computing-Based AI in New Power Systems with Large-Scale Renewable Energy Integration. Processes 2025, 13, 595. [Google Scholar] [CrossRef]
  125. Zhang, N.; Yan, J.; Hu, C.; Sun, Q.; Yang, L.; Gao, D.W. Price-Matching-Based Regional Energy Market With Hierarchical Reinforcement Learning Algorithm. IEEE Trans. Ind. Inform. 2024, 20, 11103–11114. [Google Scholar] [CrossRef]
  126. Marković, D.; Grollier, J. Quantum Neuromorphic Computing. arXiv 2020, arXiv:2006.15111. Available online: https://arxiv.org/abs/2006.15111 (accessed on 15 June 2025). [CrossRef]
  127. Wang, C.; Shi, G.; Qiao, F.; Lin, R.; Wu, S.; Hu, Z. Research progress in architecture and application of RRAM with computing-in-memory. Nanoscale Adv. 2023, 5, 1559–1573. [Google Scholar] [CrossRef]
  128. Bravo, R.A.; Patti, T.L.; Najafi, K.; Gao, X.; Yelin, S.F. Expressive quantum perceptrons for quantum neuromorphic computing. Quantum Sci. Technol. 2025, 10, 015063. [Google Scholar] [CrossRef]
  129. Moreno, A.; Rodríguez, J.J.; Beltrán, D.; Sikora, A.; Jorba, J.; César, E. Designing a benchmark for the performance evaluation of agent-based simulation applications on HPC. J. Supercomput. 2019, 75, 1524–1550. [Google Scholar] [CrossRef]
  130. Mohammed, A.; Eleliemy, A.; Ciorba, F.M.; Kasielke, F.; Banicescu, I. An approach for realistically simulating the performance of scientific applications on high performance computing systems. Future Gener. Comput. Syst. 2020, 111, 617–633. [Google Scholar] [CrossRef]
  131. Netti, A.; Shin, W.; Mott, M.; Wilde, T.; Bates, N. A Conceptual Framework for HPC Operational Data Analytics. In Proceedings of the IEEE International Conference on Cluster Computing (CLUSTER), Portland, OR, USA, 7–10 September 2021; pp. 596–603. [Google Scholar]
  132. Jonas, E.; Schleier-Smith, J.; Sreekanti, V.; Tsai, C.-C.; Khandelwal, A.; Pu, Q.; Shankar, V.; Carreira, J.; Krauth, K.; Yadwadkar, N.; et al. Cloud programming simplified: A berkeley view on serverless computing. arXiv 2019, arXiv:1902.03383. [Google Scholar]
  133. Thottempudi, P.; Acharya, B.; Moreira, F. High-Performance Real-Time Human Activity Recognition Using Machine Learning. Mathematics 2024, 12, 3622. [Google Scholar] [CrossRef]
  134. Dakić, V.; Kovač, M.; Slovinac, J. Evolving High-Performance Computing Data Centers with Kubernetes, Performance Analysis, and Dynamic Workload Placement Based on Machine Learning Scheduling. Electronics 2024, 13, 2651. [Google Scholar] [CrossRef]
  135. Morgan, N.; Yenusah, C.; Diaz, A.; Dunning, D.; Moore, J.; Heilman, E.; Lieberman, E.; Walton, S.; Brown, S.; Holladay, D.; et al. Enabling Parallel Performance and Portability of Solid Mechanics Simulations Across CPU and GPU Architectures. Information 2024, 15, 716. [Google Scholar] [CrossRef]
  136. Sheikh, A.M.; Islam, M.R.; Habaebi, M.H.; Zabidi, S.A.; Bin Najeeb, A.R.; Kabbani, A. A Survey on Edge Computing (EC) Security Challenges: Classification, Threats, and Mitigation Strategies. Future Internet 2025, 17, 175. [Google Scholar] [CrossRef]
  137. Huerta, E.A.; Khan, A.; Davis, E.; Bushell, C.; Gropp, W.D.; Katz, D.S.; Kindratenko, V.; Koric, S.; Kramer, W.T.; McGinty, B.; et al. Convergence of artificial intelligence and high performance computing on NSF-supported cyberinfrastructure. J. Big Data 2020, 7, 88. [Google Scholar] [CrossRef]
  138. Liao, X.K.; Lu, K.; Yang, C.Q.; Li, J.W.; Yuan, Y.; Lai, M.C.; Huang, L.B.; Lu, P.J.; Fang, J.B.; Ren, J.; et al. Moving from exascale to zettascale computing: Challenges and techniques. Front. Inf. Technol. Electron. Eng. 2018, 19, 1236–1244. [Google Scholar] [CrossRef]
  139. Zhang, B.; Chen, S.; Gao, S.; Gao, Z.; Wang, D.; Zhang, X. Combination Balance Correction of Grinding Disk Based on Improved Quantum Genetic Algorithm. IEEE Trans. Instrum. Meas. 2022, 72, 1000112. [Google Scholar] [CrossRef]
  140. Deepak; Upadhyay, M.K.; Alam, M. Edge Computing: Architecture, Application, Opportunities, and Challenges. In Proceedings of the 3rd International Conference on Technological Advancements in Computational Sciences (ICTACS), Tashkent, Uzbekistan, 1–3 November 2023. [Google Scholar]
  141. Wu, M.; Mi, Z.; Xia, Y. A survey on serverless computing and its implications for jointcloud computing. In Proceedings of the 2020 IEEE International Conference on Joint Cloud Computing, Oxford, UK, 3–6 August 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 94–101. [Google Scholar]
  142. Shafiei, H.; Khonsari, A.; Mousavi, P. Serverless computing: A survey of opportunities, challenges and applications. arXiv 2019, arXiv:1911.01296. [Google Scholar] [CrossRef]
Figure 1. Number of investigated documents from each publisher in the proposed survey.
Figure 2. PRISMA flow diagram.
Figure 3. Filtering and screening process.
Figure 4. Classification of HPC-related emerging computing technologies.
Figure 5. Taxonomy of quantum computing.
Figure 6. A sample adiabatic CMOS circuit.
Figure 7. Classification of adiabatic computing.
Figure 8. Taxonomy of biological computing.
Figure 9. Nanocomputing taxonomy.
Figure 10. Classification of neuromorphic computing.
Figure 11. Classification of in-memory computing.
Figure 12. Serverless computing classification.
Table 1. Comparison of the present article with existing studies.
| Computing Technology | Year | Reference |
|---|---|---|
| Quantum Computing | 2021 | [10] |
| Quantum Computing, Serverless | 2021 | [11] |
| Serverless Edge Computing | 2021 | [12] |
| Biological Computing | 2021 | [13] |
| Neuromorphic Computing | 2021 | [14] |
| Quantum Computing | 2024 | [15] |
| High-Performance Computing | 2019 | [16,17] |
| Near-Memory Computing | 2019 | [18] |
| Quantum Computing | 2018 | [19] |
| High-Performance Computing | 2024 | [20] |
| Quantum Computing | 2024 | [21] |
| Neuromorphic Computing | 2025 | [22] |
| Adiabatic Quantum Computing | 2024 | [23] |
| Neuromorphic Computing | 2024 | [24] |
| Current Paper | – | – |
Table 2. Inclusion/exclusion criteria of existing studies.
| Inclusion Criterion | Exclusion Criterion |
|---|---|
| Performing a holistic review | Not performing a holistic review |
| HPC-related | Not HPC-related |
| Contains practical and empirical analysis | Does not contain practical and empirical analysis |
| Research period is recent (2018–2025) | Research period is too old |
Table 3. Key features of emerging HPC-related computing technologies.
| Computing Technology | Global Research Focus | Scientific Challenges | Interdisciplinarity Level | Potential Impact | Maturity Level | Fundamental Innovation Level |
|---|---|---|---|---|---|---|
| Quantum Computing | Very high | Very high | Very high (physics, mathematics, and electrical engineering) | Very high | Low | Very high |
| Nanocomputing | High | High | High (nanotechnology, materials, and electronics) | High | Medium | High |
| In-Memory Architectures | Growing | High | High (electrical engineering, memory, and computing) | High | Low | High |
| Neuromorphic Computing | High | Very high | Very high (neuroscience, engineering, and machine learning) | High | Low | Very high |
| Serverless Computing | Very high | Medium | Medium (mainly software engineering) | High | High | Medium |
| Adiabatic Computing | Medium | Very high | Very high (physics, heat, and computing) | Very high | Very low | Very high |
| Other Bio-Inspired Solutions | Medium | High | Very high (biology, algorithms, and circuits) | High | Low | Very high |
Table 4. The difference between quantum computing and classical computing.
| Feature | Security | Parallel Processing | States | Unit of Information |
|---|---|---|---|---|
| Classical Computing | Breakable | Limited | 0 or 1 | bit |
| Quantum Computing | New, faster, and more robust algorithms | Extensive (based on superposition) | 0 and 1 at the same time | qubit |
Table 5. Comparing traditional architecture with in-memory architecture.
| In-Memory Architecture | Traditional Architecture |
|---|---|
| Processing unit and memory combined | Long distance between the CPU and memory |
| Data processed in place | High energy cost for data transfer |
| Use of fast memories such as RAM or non-volatile memory (NVRAM) | Low I/O speed |
| Reduced data round trips between the CPU and DRAM | High latency in data access |
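The contrast in Table 5 can be sketched with a deliberately simplified Python experiment (our own illustration at the software level, not a benchmark of in-memory hardware): re-reading and re-parsing a storage-resident dataset for every query, versus querying a RAM-resident copy that was loaded and parsed once.

```python
import os
import tempfile
import time

# Build a small storage-resident "table" (one integer per line).
N = 200_000
fd, path = tempfile.mkstemp(text=True)
with os.fdopen(fd, "w") as f:
    f.writelines(f"{i}\n" for i in range(N))

# Traditional style: every query goes back to storage and re-parses the data.
t0 = time.perf_counter()
for _ in range(3):
    with open(path) as f:
        total_storage = sum(int(line) for line in f)
storage_time = time.perf_counter() - t0

# In-memory style: load and parse once, then query the RAM-resident copy.
with open(path) as f:
    table = [int(line) for line in f]
t0 = time.perf_counter()
for _ in range(3):
    total_memory = sum(table)
memory_time = time.perf_counter() - t0

os.remove(path)
assert total_storage == total_memory  # same answer, far less data movement
print(f"storage-bound: {storage_time:.3f}s  in-memory: {memory_time:.3f}s")
```

Even in this toy setting the repeated round trips to storage dominate the cost; real in-memory architectures push the same idea further by eliminating the CPU–memory round trip itself.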
Table 6. Statistical analysis of emerging HPC-related computing technologies.
Computing TechnologyMaterials ScienceChemistryPhysics and AstronomyEngineeringComputer ScienceNumber of BooksNumber of Book ChaptersNumber of ConferencesNumber of Articles
Biological Computing001946
Quantum Computing1362621104953
Nanocomputing106865
In-Memory Computing21633572
Neuromorphic Computing3328961717
Serverless Computing90409119
Adiabatic Computing202845
Table 7. Comparing emerging HPC-related computing technologies.
Columns: Computing Technology; Main Providers; Challenges; Benefits; Applications; Future Research.
Quantum Computing
Main providers:
  • Google: Sycamore quantum computer; uses superconducting quantum hardware.
  • IBM: Quantum platform; uses superconducting quantum hardware.
  • D-Wave: Quantum annealing processors.
  • IonQ: Uses ion traps.
  • Rigetti: Cloud quantum platform; uses superconducting quantum hardware.
  • Microsoft: Developing topological qubits.
Challenges:
  • High error rates in quantum operations [25,31].
  • Requirement of very low temperatures (close to absolute zero) [35].
  • Rapid decay of quantum states (decoherence) [30,33].
  • Complexity in scalability [28].
Benefits:
  • Massively parallel processing using superposition [28].
  • Ability to solve problems intractable for classical computers [32,34].
  • Applications in cryptography, quantum simulation, and optimization [30].
Applications:
  • Quantum cryptography [30].
  • Molecular simulation in chemistry and pharmacy [34].
  • Optimization of complex systems (e.g., traffic and supply chains) [37].
  • Artificial intelligence and quantum machine learning [35].
  • Modeling of physical and biological systems [33,34,35].
Future research:
  • Development of reliable and commercial quantum computers [34].
  • Creation of quantum networks (the quantum Internet) [35,36].
  • Replacement of some applications of classical computers with quantum computers [32].
  • Hybrid cooperation between classical and quantum computing (hybrid quantum–classical systems) [36].
Nanocomputing
Main technologies:
  • Carbon nanotubes (CNTs): Used as transistors, wires, or electron channels.
  • Nanocrystals and quantum dots: Used to store data or perform calculations.
  • Nanotransistors: Transistors a few nanometers in size, made from alternative materials to silicon.
  • DNA computing: Uses biological structures such as DNA to perform computational operations.
  • Switching molecules: Molecules that can be switched between different states and used in calculations.
Challenges:
  • Precision fabrication and assembly: Fabricating nanoscale circuits with high precision is very difficult [64].
  • Stability and durability: Thermal fluctuations and noise are significant at the nanoscale [64].
  • Repeatability: Nanofabrication processes may not give uniform results [65].
  • Standardization: There is still no global standard for nanosystems [64].
  • Biological and environmental issues: The widespread use of nanomaterials requires careful risk assessments [66].
Benefits:
  • Very small size [64].
  • Low power consumption [64].
  • Ability to work in conditions that are impossible for traditional computers [65].
  • Potential for very high speeds [64].
  • Faster processing with less energy [65].
Applications:
  • Development of wearable and implantable computers [65].
  • Artificial intelligence at the nanoscale [64].
  • Smart biosensors [65].
  • Bio- and bionic computers [66].
Future research:
  • Ongoing research to replace CMOS with nanotechnology [65].
  • Development of DNA or nanotube processors for massively parallel processing [65].
  • Convergence of nanocomputing with quantum and biological computing [64].
In-Memory Architecture
Main technologies:
  • SRAM-based IMC: Fast, but with high power consumption.
  • DRAM-based IMC: Cheap and suitable for main memory.
  • RRAM (resistive RAM): Non-volatile and high-density.
  • PCM (phase-change memory): Accurate and non-volatile.
  • MRAM: Stable and low-power.
  • FeFET: Fast, non-volatile memory.
Challenges:
  • More complex hardware design [88,91].
  • Mass production and cost challenges [88,89].
  • Need to redesign algorithms for effective operation [93].
Benefits:
  • Reduced data access latency [88].
  • Increased energy efficiency [88].
  • Suitability for memory-intensive applications [88].
Applications:
  • Artificial intelligence and machine learning: Fast processing of matrices and vectors.
  • Big data [88].
  • Fast in-memory analysis without I/O [91].
  • Scientific simulations: Dynamic modeling with large amounts of data [88,92].
  • Edge computing and IoT [90].
  • Fast processing on devices with limited resources [88].
Future research:
  • Combination with quantum and neuromorphic computing [88].
  • Design of AI chips with memory-centric architectures, such as IBM's NorthPole chip [90,91].
  • Development of operating systems and programming languages for active memory management [92].
Biologically Inspired Solutions
Main approaches:
  • Artificial neural networks: Simulating brain neurons for learning and decision-making.
  • Evolutionary computing: Genetic algorithms and natural selection for optimization.
  • DNA computing: Using DNA to perform parallel computation.
  • Artificial immune systems: Modeling the body's immune system for security and detection.
  • Brain-inspired computing: Algorithms and hardware such as spiking neurons.
  • Swarm intelligence: Solving complex problems by modeling the collective behavior of organisms.
Challenges:
  • Accurate modeling of complex natural behaviors [54].
  • Uncertainty and the need to tune many parameters [53,54,55].
  • Slower speed of some algorithms compared to classical methods [60].
  • Physical implementation of biocomputing (e.g., DNA) is still in its infancy [59].
  • Ethical issues in some branches (such as brain simulation or genetic engineering) [62].
Benefits:
  • High adaptability to dynamic and changing environments [62].
  • Ability to solve complex and multi-objective problems [60].
  • Natural parallel processing [59].
  • Fault tolerance [59].
  • Learning and gradual evolution [54,59].
  • Inspiration from nature: proven algorithms [60].
  • Ability to develop and adapt to large scales [56].
  • No need for complete knowledge of an exact mathematical model [58].
  • Robustness against noise and incomplete data [60].
  • Possibility of combination with other models [61].
Applications:
  • Engineering optimization and industrial planning [59].
  • Artificial intelligence and machine learning [61].
  • Autonomous and multi-agent robotics [60].
  • Intrusion detection and cybersecurity [54,58].
  • Biological and genetic data analysis [54].
  • Microprocessor development and molecular computing [55].
Future research:
  • Development of brain-inspired architectures (neuromorphic computing) [56,57,58].
  • Integration of living systems with machines (bio-hybrid systems) [57].
  • Wider application in medicine, disease diagnosis, and drug design [58].
  • Increasing the power of evolutionary algorithms in solving complex global problems [59].
  • Application of DNA computing in biological and pharmaceutical simulations [56].
Adiabatic Technologies
Main providers:
  • MIT: Research on adiabatic design in low-power systems.
  • IBM Research: Exploring adiabatic transistors for the future of computing.
  • Nanoelectronics Research Initiative: Developing adiabatic logic prototypes.
  • Universities such as Stanford and Berkeley, through academic papers on adiabatic circuits.
Challenges:
  • More complex design than conventional CMOS [50].
  • Slower processing speed in some cases [52].
  • Need for special voltage sources (voltages with special waveforms rather than simple square signals) [52].
  • Scalability and widespread industrial implementation problems [52].
  • Timing and synchronization complexity.
  • Energy recovery efficiency drops with scale.
  • Lack of standard design methodologies and tools.
  • Interfacing with conventional logic.
  • Thermodynamic and physical limits.
  • Limited application benchmarks.
Benefits:
  • Drastic reduction in energy consumption [52].
  • Greater thermal stability [52].
  • Less noise in the system [50].
  • Simplification of some algorithms [52].
  • Better compatibility with specific quantum hardware [52].
  • Scalability in optimization problems [51].
  • Compatibility with hybrid systems.
  • Increased lifetime of components and chips [51].
Applications:
  • Ultra-low-power processors (e.g., in medical implants or the Internet of Things) [52].
  • Ultra-low-power embedded systems [51].
  • Low-loss quantum processing [52].
  • Research in reversible computing [52].
Future research:
  • Adiabatic technologies are considered part of post-CMOS computing, along with other emerging technologies such as quantum computing, neuromorphic computing, nanocomputing, and DNA computing [52].
Neuromorphic Systems
  • Traditional CMOS Circuits: Implementing neurons with transistors.
  • Memristors: Information-processing memories suitable for learning synapses.
  • Neuromorphic Photonics: Using light to build fast neurons with low energy consumption.
  • Magnetic And Quantum Materials: For building smart synapses.
  • Complexity in hardware and algorithm design [80].
  • Incompatibility with many traditional programming languages [86].
  • Need for a complete change in computing paradigm [80,83].
  • Still in the research or semi-commercial stages [80].
  • Parallel and simultaneous processing [80]
  • Very low power consumption (in the microwatt range) [83]
  • Capable of learning in real time and adapting to the environment [90]
  • Suitable for environments without access to high energy (such as space or wearable devices) [80]
  • Ultra-low-power audio and video processing at the edge [81]
  • Adaptive robotics and environmental interaction [82]
  • Real-time analysis of complex patterns [82]
  • Human brain simulation for neuroscience [80]
  • Security applications and threat analysis [80,81,82,83,84]
  • Combining neuromorphic with quantum computing to achieve greater power and adaptability [85].
  • Development of neuromorphic operating systems and programming languages [80].
  • Applications in cognitive computing, personalized medicine, and artificial general intelligence (AGI) [86].
  • Growth of memristors for the commercialization of neuromorphic computing in smart devices [80].
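The event-driven, low-power behavior described above stems from the spiking neuron models that neuromorphic chips implement in hardware. A minimal software sketch of one such model, the leaky integrate-and-fire (LIF) neuron, is given below; all parameter values (leak factor, threshold, input weight) are illustrative and not drawn from any particular device.

```python
# Minimal leaky integrate-and-fire (LIF) neuron: the membrane potential
# leaks toward rest each step, integrates weighted input, and emits a
# spike (then resets) when it crosses the threshold.

def lif_simulate(input_current, v_rest=0.0, v_thresh=1.0, leak=0.9, weight=0.5):
    """Return the time steps at which the neuron spikes."""
    v = v_rest
    spikes = []
    for t, i in enumerate(input_current):
        v = leak * v + weight * i    # decay toward rest, then integrate input
        if v >= v_thresh:            # threshold crossing -> emit a spike
            spikes.append(t)
            v = v_rest               # reset membrane potential after spiking
    return spikes

# A constant input drives periodic spiking: the neuron fires every 3 steps.
spikes = lif_simulate([1.0] * 20)
print(spikes)  # [2, 5, 8, 11, 14, 17]
```

Because the neuron only produces output at spike events, hardware realizations can stay idle between spikes, which is the source of the microwatt-range power figures cited above.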
Serverless Computing
  • AWS Lambda: pioneer in FaaS, supporting multiple programming languages.
  • Azure Functions: deep integration with the Microsoft ecosystem.
  • Google Cloud Functions: simple functions suitable for rapid development.
  • Cold start: the initial execution of a function may be delayed [95].
  • Limitations on the time and resources of the function execution [95].
  • Management of dependencies and software packages [95].
  • Harder to debug than traditional programs [95].
  • Complexity in the architecture of large systems (such as communication between functions) [95].
  • Lower costs (pay only for actual execution time) [95]
  • Rapid, automatic scalability
  • Faster time to market
  • Improved flexibility
  • No server management
  • Improved security
  • Higher developer productivity
  • Real-time image, video, and file processing [99]
  • Building lightweight and scalable APIs [98]
  • Event processing in the Internet of Things (IoT) [97,99]
  • Cleansing or processing data as it enters a database [95]
  • Automation and management of scheduled tasks (Cron jobs) [96]
  • Support for messaging bots and natural language processing [95,96]
  • Growing adoption of microservice architectures [95].
  • Better integration with DevOps and CI/CD tools [97].
  • Development of hybrid serverless services with persistent memory support [96].
  • Improved cold start management and real-time execution [100].
  • Key role in edge computing and IoT [97].
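Several of the application areas listed above (image processing, IoT events, data cleansing) share the same FaaS shape: a short stateless function triggered by an event. The sketch below follows the AWS Lambda Python handler convention (an event dictionary in, a response dictionary out); the bucket and object names, and the simplified S3-style event payload, are illustrative placeholders rather than output of a real deployment.

```python
import json

# Minimal Lambda-style handler: triggered when an object lands in storage,
# e.g. to kick off image resizing or data cleansing. The event mirrors the
# shape of an S3 object-created notification, simplified for illustration.

def handler(event, context=None):
    """Process a single storage event and return an HTTP-style response."""
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = record["s3"]["object"]["key"]
    # ... actual processing would go here (resize image, validate data, ...) ...
    return {
        "statusCode": 200,
        "body": json.dumps({"processed": f"s3://{bucket}/{key}"}),
    }

# Local invocation with a hand-built event, as one might do in a unit test;
# in production the platform constructs the event and invokes the handler.
event = {"Records": [{"s3": {"bucket": {"name": "uploads"},
                             "object": {"key": "photo.jpg"}}}]}
resp = handler(event)
```

The stateless, per-event granularity shown here is also what makes the cold-start and debugging limitations above significant: each invocation may start from a fresh environment.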
Table 8. Future research directions in emerging HPC-related computing technologies.
Future Research Direction | References | Technology Gap
Integration of emerging HPC-related technologies with machine learning and artificial intelligence | Ohja et al. [108], Baldini et al. [109], Buyya et al. [110], Castro et al. [111] | Maturity in the use of ML and AI in emerging HPC-related technologies
Scalability analysis in emerging HPC-related technologies | Wooley et al. [112], Verma et al. [113], Scuman et al. [114], Adhianto et al. [115], Ellingson et al. [116], Chou et al. [117], Yang et al. [118], Yang et al. [119], Matsuo et al. [120], Kang et al. [121], Chen et al. [122] | Scalability issues in emerging HPC-related technologies
Green HPC-related methods and mechanisms | Silva et al. [123], Liu et al. [124], Zhang et al. [125] | Reducing the carbon footprint of emerging HPC-related technologies
Development of hybrid systems that integrate multiple emerging technologies (e.g., quantum + neuromorphic or in-memory + nanoscale computing) to simultaneously leverage their respective advantages | Marković et al. [126], Wang et al. [127], Bravo et al. [128] | Design and development of hybrid HPC systems
Creation of standardized simulation models and performance evaluation frameworks for emerging HPC-related technologies | Moreno et al. [129], Mohammed et al. [130], Netti et al. [131] | Need to standardize emerging HPC-related technologies and propose efficient performance evaluation frameworks
Exploration of complementary trends (edge computing, AI-HPC convergence, exascale and beyond computing, cloud + HPC hybrid models, edge–HPC integration, GPU cloud computing, etc.) | Jonas et al. [132], Thottempudi et al. [133], Dakić et al. [134], Morgan et al. [135], Chou et al. [122], Sheikh et al. [136], Huerta et al. [137], Liao et al. [138], Zhang et al. [139], Deepak et al. [140], We et al. [141], Shafiei et al. [142] | More focus on complementary trends in HPC technologies
Arianyan, E.; Gholipour, N.; Maleki, D.; Ghorbani, N.; Sepahvand, A.; Goudarzi, P. A Systematic Review and Classification of HPC-Related Emerging Computing Technologies. Electronics 2025, 14, 2476. https://doi.org/10.3390/electronics14122476