Review

Advancements in Artificial Intelligence Circuits and Systems (AICAS)

by Tymoteusz Miller 1,2,*, Irmina Durlik 2,3, Ewelina Kostecka 2,4, Paulina Mitan-Zalewska 4, Sylwia Sokołowska 2, Danuta Cembrowska-Lech 2,5 and Adrianna Łobodzińska 2,5

1 Institute of Marine and Environmental Sciences, University of Szczecin, Waska 13, 71-415 Szczecin, Poland
2 Polish Society of Bioinformatics and Data Science BIODATA, Popieluszki 4c, 71-214 Szczecin, Poland
3 Faculty of Navigation, Maritime University of Szczecin, Waly Chrobrego 1-2, 70-500 Szczecin, Poland
4 Faculty of Mechatronics and Electrical Engineering, Maritime University of Szczecin, Waly Chrobrego 1-2, 70-500 Szczecin, Poland
5 Institute of Biology, University of Szczecin, Felczaka 3c, 71-412 Szczecin, Poland
* Author to whom correspondence should be addressed.
Electronics 2024, 13(1), 102; https://doi.org/10.3390/electronics13010102
Submission received: 13 November 2023 / Revised: 18 December 2023 / Accepted: 22 December 2023 / Published: 26 December 2023
(This article belongs to the Section Artificial Intelligence Circuits and Systems (AICAS))

Abstract: In the rapidly evolving landscape of electronics, Artificial Intelligence Circuits and Systems (AICAS) stand out as a groundbreaking frontier. This review provides an exhaustive examination of the advancements in AICAS, tracing its development from inception to its modern-day applications. Beginning with the foundational principles that underpin AICAS, we delve into the state-of-the-art architectures and design paradigms that are propelling the field forward. This review also sheds light on the multifaceted applications of AICAS, from optimizing energy efficiency in electronic devices to empowering next-generation cognitive computing systems. Key challenges, such as scalability and robustness, are discussed in depth, along with potential solutions and emerging trends that promise to shape the future of AICAS. By offering a comprehensive overview of the current state and potential trajectory of AICAS, this review serves as a valuable resource for researchers, engineers, and industry professionals looking to harness the power of AI in electronics.

1. Introduction

The incorporation of Artificial Intelligence (AI) in electronic circuits and systems opens up a realm of possibilities, pushing the boundaries of what can be achieved in modern computing and technology. This amalgamation has birthed a dynamic and evolving field known as Artificial Intelligence Circuits and Systems (AICAS). The revolutionary advancements in AICAS have become a cornerstone in addressing complex challenges faced by various sectors, bringing about a transformative change in how we perceive and interact with electronic systems. This section aims to provide a concise yet comprehensive introduction to AICAS, shedding light on its historical background, significance, and the scope of this review [1,2,3].

1.1. Background

The fusion of AI with electronic circuits can be traced back to the era when the idea of intelligent machines began to take shape. However, it is the exponential growth in data, coupled with advancements in machine learning algorithms and hardware capabilities over the past decade, that has truly propelled AICAS into the limelight. The inception of specialized hardware like Graphics Processing Units (GPUs) and Tensor Processing Units (TPUs) optimized for AI computations marked a significant milestone in the journey of AICAS. This synergy between hardware and software laid a solid foundation for the exploration and expansion of AICAS, making it a focal point of innovation in the realm of electronics and computing [4,5,6,7].
AICAS refers to a specialized domain that intersects the fields of artificial intelligence (AI) and electronic circuit design. This area focuses on the development and optimization of hardware systems specifically designed to facilitate AI operations. These systems include, but are not limited to, specialized processors like Tensor Processing Units (TPUs), Neural Processing Units (NPUs), and advanced GPU architectures. The primary goal of AICAS is to enhance the efficiency, performance, and capabilities of AI applications through tailored hardware solutions [4,5,6,7,8].
The role of AICAS in AI implementation is multifaceted. It encompasses the design and development of hardware that can efficiently process AI algorithms, the integration of AI capabilities into existing electronic systems, and the exploration of new architectures that can revolutionize how AI computations are performed. AICAS is pivotal in addressing the increasing demands for speed, efficiency, and adaptability in AI applications, particularly in areas such as deep learning, neural networks, and machine learning.

1.2. Significance of AICAS

AICAS embodies a groundbreaking frontier in electronics, acting as a catalyst for innovation and progress. The significance of AICAS is manifold:
  • Performance Enhancement: AICAS significantly augments the computational prowess of electronic systems, enabling real-time processing and analysis of massive datasets [9,10].
  • Energy Efficiency: By optimizing circuit designs for AI algorithms, AICAS plays a pivotal role in enhancing energy efficiency, a critical concern in modern electronics [8,9].
  • Enabling Next-Gen Applications: From autonomous vehicles to smart healthcare systems, AICAS is at the heart of enabling a plethora of next-generation applications [10,11].
  • Catalyzing Research and Development: The continuous evolution of AICAS drives a surge in research and development activities, pushing the envelope in what can be achieved in AI and electronics [12,13].

1.3. Scope of Review

This review endeavors to provide an exhaustive examination of the advancements in AICAS from its inception to its modern-day applications. The narrative will commence with a detailed discussion on the foundational principles that underpin AICAS, moving on to delve into the state-of-the-art architectures and design paradigms propelling the field forward. A comprehensive analysis of the multifaceted applications, challenges, and emerging trends in AICAS will follow, aiming to provide a holistic understanding of the domain. Through this review, we aspire to offer a valuable resource for researchers, engineers, and industry professionals looking to harness the power of AI in electronics, thereby contributing to the existing body of knowledge and fostering further innovation in the field.

2. Materials and Methods

The trajectory of Artificial Intelligence Circuits and Systems (AICAS) is a testament to the relentless endeavor of the scientific community to harness the potential of Artificial Intelligence (AI) in augmenting electronic systems. The evolution of AICAS is a blend of ingenious innovations in hardware, coupled with advancements in AI algorithms. This section traverses the historical pathway of AICAS, exploring its nascent stages and highlighting the significant milestones that have shaped its contemporary landscape [14,15] (Table 1).

2.1. Early Developments

The embryonic phase of AICAS was marked by rudimentary attempts to integrate elementary AI algorithms with electronic circuits. The initial endeavors were predominantly focused on creating basic logic circuits capable of simple decision-making processes. During this era, hardware limitations were a significant bottleneck, restricting the complexity of the AI algorithms that could be integrated [16,17].
Table 1. Historical Milestones in AICAS.

Year | Milestone | Impact/Significance
1950s | Invention of the Integrated Circuit | Foundation for modern computing and AI hardware [18]
1965 | Moore’s Law Prediction | Predicted the exponential growth of computing power [19]
1980s | Rise of Personal Computers | Expanded the use of computing, setting the stage for advanced AI [20]
1997 | Deep Blue Defeats Kasparov | Demonstrated AI’s potential in problem-solving and complex tasks [21]
2006 | Introduction of Multi-core Processors | Enhanced processing capabilities, crucial for AI applications [22]
2012 | Breakthrough in Deep Learning—AlexNet | Revolutionized AI with deep neural networks, impacting various AI fields [23]
2019 | Development of Quantum Computing | Potential to dramatically increase processing power for AI [24]
2023 | Advancements in Neuromorphic Computing | Mimicking human brain processes, leading to more efficient AI systems [18,24]
The initial spark of integrating AI into circuits saw its light with the inception of simple perceptron-based circuits, paving the way for more complex neural network-based systems. The early iterations of AICAS were primarily constrained to laboratories, with limited real-world applications due to the nascent stage of AI algorithms and hardware capabilities [3].
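To make the decision rule implemented by those early perceptron circuits concrete, the following minimal Python sketch (an illustrative model, not a description of any specific historical hardware) reproduces the thresholded weighted sum such circuits computed:

```python
import numpy as np

def perceptron(x, w, b):
    """Classic perceptron decision rule: output 1 if the weighted
    sum of the inputs exceeds the threshold encoded in the bias."""
    return 1 if np.dot(w, x) + b > 0 else 0

# Example: a two-input perceptron wired to behave as an AND gate.
w, b = np.array([1.0, 1.0]), -1.5
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, "->", perceptron(np.array(x), w, b))
```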

2.2. Milestones in AICAS Evolution

The journey of AICAS from a conceptual framework to a robust and dynamic field is marked by several pivotal milestones:
  • Advent of Specialized Hardware: The development of specialized hardware like Graphics Processing Units (GPUs) and, later, Tensor Processing Units (TPUs) marked a significant milestone. These hardware advancements provided the necessary computational power to handle complex AI algorithms, thus broadening the horizon of AICAS [2,25].
  • Neuromorphic Computing: Inspired by the human brain’s architecture and functioning, the advent of neuromorphic computing marked a significant stride in AICAS evolution. Neuromorphic chips like IBM’s TrueNorth and Intel’s Loihi have propelled the field toward creating efficient and powerful AI-driven circuit systems [26,27].
  • In-Memory Computing: The introduction of in-memory computing addressed the bottleneck of data movement between the processor and memory, significantly enhancing the efficiency and performance of AICAS [28,29].
  • Quantum Computing Circuits: The exploration of quantum computing circuits in the realm of AICAS has opened up new vistas, promising exponential growth in computational capabilities. Although in its infancy, quantum AICAS is a burgeoning field with the potential to redefine the paradigms of computing [30,31].
  • Open-source Software and Hardware Frameworks: The proliferation of open-source frameworks has democratized access to AICAS, fostering a collaborative environment for innovation and development [3,16].
These milestones, among others, have played an essential role in shaping the modern-day landscape of AICAS, continually pushing the boundaries and setting a precedent for future innovations in this exhilarating field.

3. Foundational Principles of AICAS

Artificial Intelligence Circuits and Systems (AICAS) epitomize a symbiotic amalgamation of AI algorithms and electronic circuitry, orchestrating a paradigm where the computational acumen of AI dovetails with the physical realm of electronics. The foundational principles of AICAS serve as the bedrock upon which the sophisticated architectures and design paradigms are conceived and nurtured. This section endeavors to elucidate the core underpinnings of AICAS, dissecting its basic architectures, core technologies and algorithms, and the pivotal principle of hardware–software co-design [18,32,33].

3.1. Basic Architectures

The architectural fabric of AICAS is woven with a myriad of design schematics tailored to accommodate the exigencies of AI computations [18] (Table 2). Below are some of the seminal architectures that have sculpted the landscape of AICAS:
  • Von Neumann Architecture: Traditional von Neumann architectures have been the starting point, albeit with inherent limitations such as the von Neumann bottleneck, which hinders the seamless execution of AI algorithms [34,35].
  • Neuromorphic Architecture: Drawing inspiration from the neural networks of the human brain, neuromorphic architectures endeavor to emulate synaptic and neuronal functionalities, fostering low-power and efficient computation [36].
  • In-memory Computing Architecture: By integrating computation within memory units, this architecture alleviates the data movement bottleneck, significantly bolstering computational efficiency and speed [37,38].
  • Quantum Computing Architecture: Although nascent, quantum architectures herald a realm of exponential computational capabilities, offering a glimpse into the future trajectory of AICAS [30,39].
Table 2. Comparison of AI Circuit Technologies.

Technology | Power Consumption | Speed | Scalability
CMOS [40] | Moderate | High | High
FinFET [41] | Low | Very High | Very High
Memristors [42] | Very Low | Moderate | Moderate

3.2. Core Technologies and Algorithms

The essence of AICAS is distilled from a confluence of cutting-edge technologies and algorithms that furnish the necessary computational and analytical prowess [43]:
  • Machine Learning (ML) and Deep Learning (DL): ML and DL algorithms are the linchpins that drive the intelligence in AICAS, enabling data-driven learning and decision-making [44].
  • Optimization Algorithms: Optimization algorithms are cardinal in tuning the performance of AICAS, ensuring optimal utilization of resources and energy efficiency [45,46].
  • Data Analytics and Processing Technologies: The capability to process and analyze copious amounts of data in real time is facilitated by advanced data analytics and processing technologies [47].

3.3. Hardware–Software Co-Design

The principle of hardware–software co-design is a cornerstone in the evolution of AICAS [48]. This principle underscores a collaborative design approach where both hardware and software designs are intertwined and orchestrated in tandem to achieve optimal performance, energy efficiency, and functionality:
  • Resource Allocation: Efficient allocation and utilization of hardware resources are meticulously planned to ensure the seamless execution of software algorithms [49,50].
  • Performance Optimization: The co-design approach facilitates a harmonized optimization of both hardware and software components, ensuring that the system performance is tuned to meet the desired benchmarks [51,52].
  • Scalability and Flexibility: Hardware–software co-design fosters a scalable and flexible system architecture, enabling AICAS to adeptly adapt to varying computational demands and application domains [52,53].
By delving into these foundational principles, one unravels the intricate design and operational paradigm that undergirds AICAS, providing a prism through which the advancements and potential of AICAS can be fully appreciated and explored.

4. State-of-the-Art AICAS Architectures

In the ever-evolving domain of Artificial Intelligence Circuits and Systems (AICAS), the influx of state-of-the-art architectures continuously molds and enriches the landscape. These avant-garde architectures epitomize the inexorable quest for escalated computational efficacy, diminished energy expenditure, and an expanded spectrum of applications. This section delineates several pioneering architectures that are spearheading the evolution of AICAS, traversing the intricacies of Neuromorphic Computing, Quantum Computing Circuits, In-Memory Computing, and the advancements burgeoning in the realm of processing units, including GPUs, TPUs, and NPUs [37,48,49,50].

4.1. Neuromorphic Computing

Neuromorphic Computing unveils a paradigm that meticulously emulates the architectural and operational principles inherent in the human brain within the fabric of electronic circuits [54] (Figure 1). This paradigm encompasses several facets.
At the heart of Neuromorphic Computing lies Neuromorphic Chips such as Intel’s Loihi and IBM’s TrueNorth, which are engineered to mimic the intricacies of synaptic and neuronal behavior. These chips are heralded for promoting low-power, efficient, and real-time computing, embodying a significant stride toward bridging the chasm between conventional computing architectures and the computational efficiency akin to biological systems [55,56,57,58].
Segueing into the domain of Spiking Neural Networks (SNNs), these networks are architected to emulate the spike-based information processing characteristic of biological neural networks. The hallmark of SNNs resides in their capacity to offer avenues for low-power and event-driven computation, which is instrumental in edging closer to the energy efficiency exhibited by the human brain [59,60,61].
Pivoting to the facet of On-chip Learning, Neuromorphic architectures are lauded for facilitating this crucial capability, thereby significantly truncating the dependency on off-chip data transfer. This feature is instrumental in enhancing real-time learning and adaptation, which is essential in a myriad of real-world applications where the latency in data transfer can be a limiting factor [62,63,64].
The confluence of these facets within Neuromorphic Computing engenders a fertile ground for fostering advancements that are poised to significantly contribute to the broader narrative of AICAS evolution. Through the lens of Neuromorphic Computing, one can envisage a future where the convergence of biological and electronic computation paradigms could potentially usher in a new era of computational efficiency and capability.
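To make the spike-based processing of SNNs concrete, the sketch below simulates a single leaky integrate-and-fire (LIF) neuron, the canonical building block of many spiking models; the time constant, threshold, and input values are illustrative choices, not parameters of any particular neuromorphic chip:

```python
import numpy as np

def lif_neuron(input_current, dt=1.0, tau=20.0, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron: the membrane potential leaks
    toward rest while integrating input; a spike is emitted (and the
    potential reset) whenever the threshold is crossed."""
    v, spikes = 0.0, []
    for i_t in input_current:
        v += (dt / tau) * (-v + i_t)   # leaky integration step
        if v >= v_thresh:              # threshold crossing -> spike
            spikes.append(1)
            v = v_reset
        else:
            spikes.append(0)
    return spikes

# A constant supra-threshold input yields a sparse, regular spike
# train; downstream work happens only at spike events, which is the
# root of the energy efficiency discussed above.
print(lif_neuron([1.5] * 50))
```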

4.2. Quantum Computing Circuits

Quantum Computing Circuits (Figure 2) unfurl an era of computational prowess that marries the enigmatic principles of quantum mechanics with the precision of modern electronics. At the vanguard of these circuits lies the innovative exploitation of superposition and entanglement—phenomena that allow for the simultaneous performance of a multitude of computations, leaving the linear trajectories of classical computing in their wake. The fabric of quantum computing is interwoven with quantum bits, or qubits, which defy the binary constraints of classical bits by existing in multiple states concurrently, thus forming the pulsating heart of quantum computing circuits. Through the lens of quantum algorithms—Shor’s algorithm being preeminent in factorization and Grover’s in database search tasks—the potential for computations that outstrip the capabilities of classical algorithms exponentially is not merely theoretical but within the grasp of contemporary research [62,63,64].
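As a minimal illustration of superposition and entanglement, the following state-vector simulation (plain NumPy, no quantum hardware or framework assumed) applies a Hadamard gate followed by a CNOT to two qubits, producing a Bell state:

```python
import numpy as np

# State-vector simulation: Hadamard on qubit 0, then CNOT (control 0).
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # creates superposition
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],                 # flips qubit 1 when
                 [0, 1, 0, 0],                 # qubit 0 is |1>
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.array([1, 0, 0, 0], dtype=complex)  # start in |00>
state = np.kron(H, I) @ state                  # superpose qubit 0
state = CNOT @ state                           # entangle the pair

# Result: the Bell state (|00> + |11>)/sqrt(2); measuring one qubit
# fixes the other, the correlation quantum algorithms exploit.
print(np.round(state.real, 3))                 # [0.707 0. 0. 0.707]
```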

4.3. In-Memory Computing

In-Memory Computing emerges as a formidable solution to the perennial bottleneck wrought by the data transfer schism between processing and memory units. This approach is a paradigm shift toward a more integrated system where computational tasks are intrinsic to the memory elements themselves, slashing the volume of data movement and thereby boosting energy efficiency. The utility of computational memory is a transformative stride in AICAS, where memory elements transcend their traditional roles and become active agents in computation. This breakthrough is further complemented by analog computation techniques that leverage the inherent analog properties of memory devices to execute computations. Such innovation paves the path for more efficient, compact computational frameworks, marking a significant evolution in the architecture of AICAS. Through these endeavors, In-Memory Computing is not merely a concept but a tangible reality, driving the future of efficient electronic design and sophisticated computing mechanisms [65,66].
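A simple way to see how analog in-memory computation works is the idealized memristive crossbar sketched below: weights are stored as device conductances, and Ohm's and Kirchhoff's laws perform the matrix-vector multiply in place. The conductance and voltage values are illustrative, and device non-idealities are ignored:

```python
import numpy as np

# Idealized memristive crossbar: weights live in the array as device
# conductances G (siemens). Driving the rows with voltages V yields
# column currents I = G^T V by Ohm's and Kirchhoff's laws; the
# matrix-vector multiply happens inside the memory itself.
G = np.array([[1.0e-6, 2.0e-6],   # conductance of each crosspoint
              [3.0e-6, 0.5e-6],
              [2.0e-6, 1.0e-6]])
V = np.array([0.2, 0.1, 0.3])     # input vector encoded as row voltages

I = G.T @ V                       # currents summed along each column
print(I)                          # the analog MVM result, in amperes
```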

4.4. Advanced Processing Units: GPUs, TPUs, and NPUs

In the dynamic realm of Artificial Intelligence Circuits and Systems (AICAS), the emergence of Advanced Processing Units marks a pivotal evolution, catering to the escalating demand for heightened computational power and efficiency essential for AI applications. This section delves into the intricacies of three cornerstone technologies: Graphics Processing Units (GPUs), Tensor Processing Units (TPUs), and Neural Processing Units (NPUs). Each of these units embodies a unique architectural approach and plays a critical role in the advancement of AI and machine learning [67,68,69,70,71].

4.4.1. Graphics Processing Units (GPUs)

In the domain of modern computational design, Graphics Processing Units (GPUs) epitomize a paradigm shift toward highly parallelized processing architectures. Distinct from their central processing unit (CPU) (Figure 3) counterparts, GPUs (Figure 4) are engineered with an intrinsic focus on concurrent processing capabilities. This architectural paradigm is fundamentally composed of an array of smaller, more efficient cores designed for handling multiple operations simultaneously, thus markedly enhancing computational throughput. The intricate design of GPUs facilitates a substantial elevation in the efficiency of executing a multitude of computationally intensive tasks in parallel. This capability is especially pivotal in scenarios necessitating the rapid processing of a vast number of simple, yet concurrent, operations, a common characteristic in graphical computations [72,73,74,75].
In the realm of Artificial Intelligence (AI) and Machine Learning (ML), GPUs have emerged as a cornerstone in facilitating the advancement of these technologies. The inherent architecture of GPUs, conducive to parallel data processing, aligns seamlessly with the computational demands of training deep neural networks. These networks, characterized by their deep layered structures and substantial neuron interconnectivity, benefit immensely from the parallel processing prowess of GPUs. In scenarios involving intricate algorithms and models, such as convolutional neural networks or recurrent neural networks, GPUs expedite computational processes, thereby reducing training times significantly. Moreover, GPUs have proven instrumental in managing and processing the vast datasets typical in machine learning, enabling more efficient data handling and computation [16,75].
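The benefit of this parallelism can be sketched in a few lines: the same dense-layer computation expressed as a per-sample loop versus a single batched matrix multiply, the latter being the shape of work that maps directly onto a GPU's many cores. NumPy stands in for a device kernel here, and the tensor shapes are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((1024, 256))   # a batch of 1024 input vectors
W = rng.standard_normal((256, 128))    # one dense layer's weights

# Sequential formulation: one sample at a time, CPU-style.
out_loop = np.stack([x @ W for x in X])

# Batched formulation: one matrix multiply over the whole batch,
# exposing all samples to the hardware at once.
out_batched = X @ W

assert np.allclose(out_loop, out_batched)
```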
The utilization of GPUs extends beyond the confines of theoretical applications in AI and ML, permeating into a multitude of practical and real-world applications. In the sphere of image and video processing, GPUs facilitate the rapid and efficient analysis and manipulation of visual data, a cornerstone in areas like digital media, surveillance, and medical imaging. Furthermore, the gaming industry has been profoundly transformed by the graphic-rendering capabilities of GPUs, enabling the creation of increasingly realistic and immersive virtual environments. In the burgeoning field of autonomous vehicles, GPUs play a critical role in processing the myriad of sensor inputs and in executing the complex algorithms required for real-time decision making. Looking toward the horizon, the evolution of GPU technology is poised to continue its trajectory of growth and innovation. The emergence of more advanced GPUs, with enhanced capabilities and efficiency, is anticipated to further propel the frontiers of AI, leading to more sophisticated, intelligent, and autonomous systems. This ongoing evolution underscores the pivotal role that GPUs will continue to play in shaping the future landscape of AI-driven technologies [16,76,77,78].

4.4.2. Tensor Processing Units (TPUs)

Tensor Processing Units (TPUs) (Figure 5) represent a quintessential advancement in processing unit architecture, tailored specifically for expediting operations in artificial intelligence (AI) and machine learning (ML). The architectural foundation of TPUs is meticulously optimized for matrix multiplication, a pivotal operation in neural network computations. This optimization is achieved through a matrix-processing-centric design, which fundamentally differs from the scalar and vector processing approaches of traditional Central Processing Units (CPUs) and Graphics Processing Units (GPUs), respectively. TPUs leverage a unique design philosophy that prioritizes data throughput and parallelism specific to tensor operations, thereby streamlining the computational processes integral to AI and ML workloads [65,66,79,80].
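The following sketch emulates, in software, the wavefront timing of an output-stationary systolic array, one common organization of the matrix units associated with TPUs; it is a functional illustration of the dataflow idea under simplified assumptions, not a model of any specific chip:

```python
import numpy as np

def systolic_matmul(A, B):
    """Software emulation of an output-stationary systolic array:
    A streams in from the left (skewed by row), B from the top
    (skewed by column); each processing element PE(i, j) performs one
    multiply-accumulate per cycle on the operands passing through."""
    n, k = A.shape
    k2, m = B.shape
    assert k == k2
    C = np.zeros((n, m))
    for t in range(n + m + k - 2):       # cycles until the wavefront exits
        for i in range(n):
            for j in range(m):
                s = t - i - j            # k-index reaching PE(i, j) now
                if 0 <= s < k:
                    C[i, j] += A[i, s] * B[s, j]
    return C

A = np.arange(6.0).reshape(2, 3)
B = np.arange(12.0).reshape(3, 4)
assert np.allclose(systolic_matmul(A, B), A @ B)
```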
In the sphere of AI and ML, TPUs have emerged as a transformative force, particularly in the acceleration of neural network computations. These specialized units excel in handling the extensive matrix operations characteristic of deep learning models, especially in large-scale AI endeavors. TPUs offer a significant enhancement in computational speed, enabling faster training and inference times in complex neural networks. This acceleration is not merely a function of raw speed; it also encompasses remarkable efficiency gains. TPUs demonstrate a notable reduction in energy consumption per computation, a critical factor in sustainable and scalable AI development. This efficiency is paramount in scenarios where large-scale computations are routine, making TPUs an indispensable asset in the AI and ML landscape [65,81,82,83].
The practical applications and impact of TPUs are both profound and diverse, particularly in their integration within cloud computing infrastructures and data centers. In these environments, TPUs facilitate the deployment of sophisticated AI models, offering scalable and efficient processing capabilities. This integration plays a critical role in democratizing access to advanced AI computation, allowing for a wider range of entities to leverage deep learning technologies. Moreover, TPUs are instrumental in the processing of large datasets and in providing the computational backbone for complex AI services and applications [82,84,85].
Looking to the future, the continuous advancement in TPU technology promises to further amplify its impact on AI research and industry. Anticipated developments include enhancements in processing power, energy efficiency, and adaptability to a broader range of AI algorithms and models. As TPUs evolve, they are expected to unlock new possibilities in AI, potentially leading to more advanced, efficient, and accessible AI applications across various sectors. This trajectory underscores the growing significance of TPUs as a central component in the rapidly advancing field of AI and ML, heralding a new era of computational capability and innovation [86,87,88].

4.4.3. Neural Processing Units (NPUs)

Neural Processing Units (NPUs) (Figure 6) are at the forefront of specialized processor design, engineered specifically for the efficient execution of neural network algorithms. These units embody a targeted architectural approach, focusing on optimizing the specific computational patterns inherent in neural networks. A key aspect of NPU design is the harmonious balance between high-performance computation and minimal power consumption. This balance is critical, as it enables the deployment of NPUs in a diverse array of devices, ranging from high-powered servers to low-power consumer electronics. NPUs are architected to perform complex neural computations more efficiently than general-purpose CPUs, while simultaneously conserving energy, a feature particularly important in battery-powered and mobile devices [70,89,90,91,92].
In the realms of AI and ML, NPUs play a pivotal role in enabling on-device AI computations. This capability is particularly transformative in the context of smartphones and Internet of Things (IoT) devices, where computing resources and power availability are often limited. NPUs allow these devices to perform sophisticated AI tasks, such as image and speech recognition, directly on the device, thereby reducing the need for constant cloud connectivity and data transfer. This on-device processing capability is also a cornerstone of edge computing, where data are processed locally, enabling real-time data analysis and decision making. NPUs are instrumental in this paradigm, offering the necessary computational power to handle complex AI tasks at the edge, close to where data are generated [71,93,94,95].
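Much of this on-device efficiency comes from low-precision integer arithmetic. The sketch below shows symmetric post-training int8 quantization of a weight matrix and an input vector, followed by an integer matrix-vector product of the kind NPUs are built to accelerate; the tensors and scale factors are illustrative:

```python
import numpy as np

def quantize(x, n_bits=8):
    """Symmetric post-training quantization: map float values to int8
    with a single per-tensor scale factor."""
    scale = np.abs(x).max() / (2 ** (n_bits - 1) - 1)
    q = np.round(x / scale).astype(np.int8)
    return q, scale

rng = np.random.default_rng(1)
w, x = rng.standard_normal((64, 32)), rng.standard_normal(32)
qw, sw = quantize(w)
qx, sx = quantize(x)

# Integer matrix-vector product with int32 accumulation (to avoid
# overflowing int8 products), rescaled to float only at the end.
y_q = (qw.astype(np.int32) @ qx.astype(np.int32)) * (sw * sx)
print(np.max(np.abs(y_q - w @ x)))   # small quantization error
```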
The applications of NPUs are vast and varied, extending across both consumer electronics and industrial automation sectors. In consumer electronics, NPUs are embedded in smartphones, smart home devices, and wearables, enhancing user experience through features like facial recognition, augmented reality, and personalized voice assistants. In the realm of industrial automation, NPUs are integral in optimizing operations through predictive maintenance, quality control, and robotic automation. These applications demonstrate the versatility and utility of NPUs in both enhancing everyday consumer experiences and driving efficiencies in industrial processes [89,93,94].
Looking forward, the potential future developments in NPU technology are poised to have a significant impact on AI and machine learning ecosystems. Future NPUs are expected to offer even greater performance capabilities, higher energy efficiency, and more adaptable architectures, capable of handling a wider range of AI algorithms and models. Such advancements are anticipated to catalyze further integration of AI into diverse domains, leading to more intelligent, efficient, and autonomous systems. The evolution of NPU technology will not only enhance existing applications but also pave the way for innovative uses of AI, potentially reshaping various aspects of technology and society. This ongoing development underscores the critical role of NPUs in the broader narrative of AI and ML, marking them as a key player in the advancement of intelligent computing solutions [77,90,91,94].

5. Design Paradigms

Within the innovative sphere of Artificial Intelligence Circuits and Systems (AICAS), design paradigms serve as the guiding blueprints that shape the functionality and performance of these intricate systems. These paradigms reflect a confluence of principles aimed at achieving sustainability, adaptability, and resilience in AICAS [17,33].

5.1. Energy-Efficient Designs

Energy efficiency is the clarion call in the design of modern AICAS, necessitated by the dual imperatives of environmental sustainability and operational cost-effectiveness. The ethos of energy-efficient design is embedded in every facet of AICAS, from the selection of materials and components that exhibit low power dissipation to the deployment of algorithms that maximize computational output while minimizing energy input. Sophisticated power management techniques that judiciously allocate and conserve energy resources are also integral to this paradigm. This meticulous attention to energy consumption not only prolongs the operational lifespan of AICAS but also mitigates the environmental footprint of burgeoning computing demands [95,96,97,98].
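One reason such power management pays off is the roughly quadratic dependence of dynamic CMOS power on supply voltage, P ≈ αCV²f. The toy calculation below (all constants are illustrative, not measured values) shows why dynamic voltage and frequency scaling is so effective:

```python
def dynamic_power(V, f, a=0.2, C=1e-9):
    """Dynamic CMOS power P = a * C * V^2 * f (activity factor a,
    switched capacitance C). All constants here are illustrative."""
    return a * C * V**2 * f

p_full = dynamic_power(V=1.0, f=2e9)   # full speed, nominal voltage
p_dvfs = dynamic_power(V=0.7, f=1e9)   # half speed, reduced voltage

# Halving f alone would halve power, but the lower f also permits a
# lower V, and V enters quadratically: about a 4x reduction here.
print(f"{p_full:.3f} W -> {p_dvfs:.3f} W "
      f"({1 - p_dvfs / p_full:.0%} saved)")
```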

5.2. Scalable and Modular Designs

The increasing complexity of tasks that AICAS are expected to perform necessitates a design paradigm that can gracefully scale with growing computational demands. Scalable and modular designs are at the heart of such a paradigm, ensuring that AICAS can evolve and expand without succumbing to the limitations of initial design constraints. By embracing modularity, individual components of AICAS can be designed to interface seamlessly with an array of others, facilitating easy upgrades and expansion. This flexibility allows for incremental enhancements, fostering longevity and adaptability in the face of ever-changing technological landscapes [43,95,99,100,101,102].

5.3. Robust and Fault-Tolerant Designs

The unforgiving nature of real-world applications where AICAS must operate demands designs that are not only robust but inherently fault-tolerant. Such designs imbue AICAS with the resilience to withstand and operate through hardware failures, environmental extremes, and unexpected operational anomalies. Fault tolerance is intricately woven into the fabric of AICAS through redundant systems, error detection and correction algorithms, and self-healing mechanisms that ensure continuity of operation. This paradigm ensures that AICAS maintain high reliability and continuous service availability, which is critical for mission-critical applications spanning from healthcare to autonomous navigation [103,104,105].
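As a minimal sketch of the redundancy principle described above, the snippet below implements triple modular redundancy (TMR) with a majority voter, which masks the failure of any single replica:

```python
from collections import Counter

def tmr_vote(replica_outputs):
    """Triple modular redundancy: run the same computation on three
    independent replicas and return the majority output, masking the
    failure of any single replica."""
    winner, count = Counter(replica_outputs).most_common(1)[0]
    if count < 2:
        raise RuntimeError("no majority: more than one replica failed")
    return winner

# One replica returns a corrupted result; the voter masks the fault.
print(tmr_vote([42, 42, 17]))   # -> 42
```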
In summation, the design paradigms of AICAS encapsulate a forward-thinking approach that integrates energy efficiency, scalability, modularity, robustness, and fault tolerance into a harmonious whole. These paradigms not only reflect the current state of technological advancement but also lay the groundwork for future innovations that will continue to drive the field of AICAS toward new horizons.

6. Applications of AICAS

The domain of Artificial Intelligence Circuits and Systems (AICAS) has burgeoned into an extensive field with far-reaching implications, seeding innovation across a multitude of sectors [24]. This proliferation of AICAS into diverse applications stands as a testament to their transformative potential and multifaceted utility (Table 3).

6.1. Energy Efficiency Optimization

Energy efficiency optimization represents a prime arena where the influence of AICAS is particularly pronounced. By integrating intelligent algorithms and self-regulating circuits, AICAS have become pivotal in redefining the paradigms of energy consumption and conservation within electronic devices. These advanced systems boast the proficiency to monitor, analyze, and adeptly modulate energy use in an instantaneous manner, catering to the dynamic and often intricate exigencies of power management. This capacity extends across a broad spectrum of platforms, ranging from the intricacies of consumer electronics to the expansive and demanding requirements of industrial infrastructures. The consequential benefits of such optimization are twofold: a marked diminution in operational expenditures and a significant positive impact on environmental sustainability, epitomizing the role of AICAS in fostering a more energy-conscious society [122,123].

6.2. Next-Generation Cognitive Computing Systems

The inception of next-generation cognitive computing systems is a direct offshoot of the revolutionary strides made possible by AICAS. These sophisticated systems are ingeniously crafted to replicate the nuanced thought processes of the human mind within a digital framework. Such replication endows them with the capability to undertake complex problem-solving tasks and make informed decisions that were traditionally the sole purview of human intellect. Empowered by the advanced capabilities of AICAS, cognitive computing systems can now assimilate, process, and dissect vast and intricate datasets with a level of speed and precision that was once unfathomable [122,123,124,125]. This has significantly accelerated progress in diverse fields like natural language processing, where the subtleties of human language are decoded and utilized; image recognition, which now goes beyond mere patterns to interpret context and meaning; and semantic computing, where the interpretation of data becomes as important as the data themselves. Through these monumental advancements, AICAS is not only enhancing current computational methodologies but also paving the way for a future where the boundary between human cognition and machine intelligence becomes increasingly seamless [124,125,126].
The integration of Artificial Intelligence Circuits and Systems (AICAS) into the domains of real-time processing and edge computing has initiated a transformative shift in data handling and computational workflows. In an era where immediacy and data-driven decision making are paramount, AICAS have emerged as the linchpin in the optimization of computational tasks.

6.3. Real-Time Processing and Edge Computing

In the intricate dance of data streams, AICAS serve as the choreographers, ensuring that the tempo of information is maintained at the edge of the network—where immediacy is not a luxury but a necessity. By situating processing power proximate to data origination points, AICAS significantly pare down latency, allowing for real-time analytics that are both swift and localized. This immediate processing is not just about speed; it is about the capacity to interpret, decide, and act in a fraction of the time it once took. Autonomous vehicles exemplify this paradigm, utilizing AICAS to interpret vast arrays of sensor data for instant navigation decisions. Similarly, in the burgeoning realm of IoT devices, AICAS are the silent sentinels, constantly analyzing and responding to environmental stimuli, enabling smart homes and cities to become more than mere concepts [127,128,129,130].

6.4. Autonomous Systems and Robotics

The journey of autonomous systems and robotics has been dramatically propelled by the ingenious capabilities of AICAS. These systems have been endowed with a semblance of cognition—learning from their environments, adapting to new challenges, and taking on tasks that once required the nuanced touch of human hands. AICAS act as the cerebral cortex of these machines, enabling them to interpret complex data, make autonomous decisions, and execute tasks with a level of precision that rivals human dexterity. The applications are as varied as they are profound, ranging from the precision of robotic arms in manufacturing lines to the rugged exploratory missions of rovers on alien worlds. The intelligence infused by AICAS into these machines allows them to operate in environments that are inhospitable or inaccessible to humans, opening up new frontiers in exploration and industry [101,130,131,132].
Together, the advancements in real-time processing, edge computing, autonomous systems, and robotics underscore the pivotal role of AICAS in not just augmenting human capabilities but also in expanding the horizons of what machines can autonomously achieve. As these technologies continue to evolve and intertwine, they promise to unlock new levels of efficiency and discovery, charting a course toward a future where intelligent systems are ubiquitous and integral to our daily lives.

6.5. Healthcare and Bioinformatics

Healthcare and bioinformatics have been profoundly transformed by AICAS. In healthcare, AICAS facilitate the analysis of medical images, management of patient data, and the personalization of patient care through predictive analytics. In the field of bioinformatics, they assist in understanding biological patterns and structures, expediting drug discovery and genomics research. These systems’ ability to handle complex, multifaceted datasets has been pivotal in advancing precision medicine and improving patient outcomes [90,132,133].
Spanning these diverse fields, AICAS demonstrate a remarkable capacity not only to enhance existing applications but also to catalyze the creation of novel solutions to some of the most pressing challenges across industries. As AICAS continue to evolve, their potential applications are set to expand, embedding these systems deeper into the fabric of daily life and work.

7. Challenges and Solutions

In the dynamic and complex landscape of Artificial Intelligence Circuits and Systems (AICAS), practitioners and researchers face a myriad of challenges that must be surmounted to realize the full potential of these technologies [4,17]. These challenges span across scalability, robustness, and resource constraints, each presenting unique hurdles and necessitating innovative solutions.

7.1. Scalability Challenges

Scalability poses a significant challenge in the deployment of AICAS, as the systems must maintain performance and efficiency while handling increasingly large and complex datasets. To address this, solutions are being crafted in the form of advanced algorithms and architectures that allow for seamless expansion. The development of modular design strategies also plays a critical role, enabling systems to grow and adapt through the addition of resources or modules without a wholesale redesign. Furthermore, cloud-based services and distributed computing are being leveraged to provide the necessary infrastructure for scalable AICAS solutions, distributing the workload across multiple nodes to manage the increased demand [134,135].
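A minimal sketch of that distribution pattern, assuming a workload that splits cleanly into independent shards, is shown below; the shard task is a stand-in for an inference or analytics job, and the names are illustrative:

```python
from concurrent.futures import ProcessPoolExecutor

def process_shard(shard):
    """Stand-in for an inference or analytics task on one data shard."""
    return sum(v * v for v in shard)

def run_distributed(data, n_workers=4):
    # Split the dataset into independent shards and fan them out to
    # worker processes; adding workers (or nodes) scales throughput
    # without redesigning the task itself.
    shards = [data[i::n_workers] for i in range(n_workers)]
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        return sum(pool.map(process_shard, shards))

if __name__ == "__main__":
    print(run_distributed(list(range(1_000_000))))
```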

7.2. Robustness and Reliability

Ensuring the robustness and reliability of AICAS in diverse and often unpredictable environments is another significant hurdle. To bolster the reliability of these systems, there is a concerted effort toward the development of fault-tolerant designs that can continue to operate effectively even when components fail. Redundancy is a key principle being employed, wherein critical components are duplicated to provide a backup in case of failure. Additionally, rigorous testing protocols and real-world simulations are integral to ensuring the robustness of AICAS, helping to identify and mitigate potential vulnerabilities before deployment [136,137,138].

7.3. Addressing Resource Constraints

Resource constraints, such as limitations in power, memory, and computational capacity, are perennial concerns in the advancement of AICAS. Innovations in hardware, such as the development of energy-efficient processors and compact, high-capacity memory solutions, are being explored to overcome these constraints. On the software front, optimization techniques that can reduce the computational load, such as pruning and quantization of neural networks, are gaining traction. Moreover, there is an increasing focus on edge computing, which seeks to process data locally to reduce the demand on central resources and decrease latency [125,139,140].
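Of the software techniques named above, magnitude pruning is perhaps the simplest to illustrate (a quantization sketch appears in Section 4.4.3). The snippet below zeroes out the smallest-magnitude weights of a layer at a chosen sparsity level; the 80% figure is an arbitrary illustrative choice:

```python
import numpy as np

def magnitude_prune(w, sparsity=0.8):
    """Magnitude pruning: zero the fraction `sparsity` of weights with
    the smallest absolute values, cutting compute and memory needs at
    a modest accuracy cost."""
    threshold = np.quantile(np.abs(w), sparsity)
    mask = np.abs(w) >= threshold
    return w * mask, mask

rng = np.random.default_rng(2)
w = rng.standard_normal((128, 128))
w_pruned, mask = magnitude_prune(w, sparsity=0.8)
print(f"weights kept: {mask.mean():.0%}")   # ~20%
```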
The interplay of these challenges and the ingenious solutions being developed to address them highlights the dynamic nature of the field of AICAS. As the technology continues to mature, the solutions are expected to become more sophisticated, paving the way for more robust, scalable, and resource-efficient AICAS that can meet the demands of the future.

8. Emerging Trends and Future Directions

The horizon of Artificial Intelligence Circuits and Systems (AICAS) is continually expanding, with emerging trends and future directions being shaped by new materials, technological integration, and the evolving landscape of policy and ethics [141]. In this context, the contributions of global research institutions play a pivotal role (Table 4).
These institutions not only contribute to the technological advancements in AICAS but also influence the development of ethical guidelines and policies, ensuring that the growth of this field is both responsible and sustainable.

8.1. New Materials and Technologies

In the quest for enhanced performance and functionality, the exploration of new materials and technologies is pivotal. Innovations such as two-dimensional materials beyond graphene, like transition metal dichalcogenides, offer exceptional electrical, thermal, and mechanical properties that could redefine the capabilities of AICAS. Nanotechnology is also playing a crucial role, with nanoscale devices enabling a new wave of ultra-compact and efficient AICAS components. Additionally, the advent of spintronics, which exploits the spin property of electrons, presents a promising alternative to traditional charge-based electronics, potentially leading to faster and more energy-efficient systems [142,143].

8.2. Integrating AICAS with Other Emerging Technologies

The integration of AICAS with other burgeoning technologies is setting the stage for multidisciplinary advancements. For instance, the convergence of AICAS with quantum computing could unlock new paradigms in processing power and efficiency. Similarly, synergies between AICAS and biotechnology are fostering novel approaches in bioinformatics and medical diagnostics, where AI-driven systems can detect patterns and anomalies beyond human capability. The fusion of AICAS with blockchain technology could also ensure greater security and transparency in data handling, particularly in IoT devices [144].

8.3. Policy and Ethical Considerations

As AICAS continue to evolve, they increasingly intersect with policy and ethical considerations. The formulation of policies that govern the development and deployment of AICAS is critical to ensuring that these technologies are used responsibly. This includes establishing standards for data privacy, security, and the ethical use of AI. Moreover, there is a growing discourse on the societal impact of AICAS, including issues of workforce displacement, algorithmic bias, and the need for equitable access to technology. Addressing these concerns is essential for fostering public trust and facilitating the sustainable and ethical growth of AICAS [145,146,147,148].
In summary, the trajectory of AICAS is being charted by groundbreaking materials and technologies, interdisciplinary integrations, and a conscientious approach to policy and ethics. These elements together are not only driving innovation within the field but are also ensuring that the advancement of AICAS aligns with broader societal values and needs. As we look to the future, it is clear that AICAS will continue to be at the forefront of technological progress, with their full potential realized through thoughtful and strategic evolution.

9. Conclusions

The comprehensive exploration of Artificial Intelligence Circuits and Systems (AICAS) has unveiled a spectrum of advancements, challenges, and emerging trends that underscore the field’s dynamic and transformative nature.

9.1. Summary of Key Findings

Key findings from the review of AICAS reveal a technological ecosystem that is rapidly evolving, marked by innovative designs and applications. Energy-efficient and scalable architectures, such as neuromorphic computing and in-memory computing, have emerged, offering a new dimension to how electronic systems are conceptualized and implemented. The application of AICAS in areas such as cognitive computing, real-time processing, and autonomous systems illustrates the vast potential of these systems to revolutionize multiple industries. However, challenges related to scalability, robustness, and resource constraints persist, prompting continuous research and development efforts. The field is also witnessing the introduction of new materials and the integration with other cutting-edge technologies, poised to further enhance the capabilities of AICAS.

9.2. Implications and Recommendations for Future Research

The implications of these findings are profound, indicating that AICAS will play a central role in shaping the future of technology and society. Future research should focus on addressing the existing challenges by fostering advancements in hardware and software that prioritize scalability, energy efficiency, and robustness. Emphasis should be placed on the ethical design and deployment of AICAS, ensuring that these systems are inclusive, equitable, and aligned with societal values. Additionally, interdisciplinary collaborations are recommended to harness the synergistic potential of AICAS with other emerging fields such as quantum computing and biotechnology. Policymakers and researchers must work together to navigate the ethical landscape, developing frameworks that promote responsible innovation while mitigating potential risks associated with AI.
In conclusion, while the journey of AICAS is ongoing, the progress made thus far is impressive, laying a solid foundation for future breakthroughs. The continuous refinement and advancement of AICAS will undoubtedly contribute to their enduring impact on technology and their increasing integration into the fabric of everyday life.

Author Contributions

Conceptualization, T.M. and I.D.; methodology, T.M.; investigation, I.D., E.K. and S.S.; resources, T.M., I.D., D.C.-L. and A.Ł.; data curation, T.M., P.M.-Z., D.C.-L. and A.Ł.; writing—original draft preparation, T.M., A.Ł. and D.C.-L.; writing—review and editing, I.D. and E.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Zhao, S.; Blaabjerg, F.; Wang, H. An Overview of Artificial Intelligence Applications for Power Electronics. IEEE Trans. Power Electron. 2021, 36, 4633–4658. [Google Scholar] [CrossRef]
  2. Shastri, B.J.; Tait, A.N.; Ferreira de Lima, T.; Pernice, W.H.P.; Bhaskaran, H.; Wright, C.D.; Prucnal, P.R. Photonics for Artificial Intelligence and Neuromorphic Computing. Nat. Photonics 2021, 15, 102–114. [Google Scholar] [CrossRef]
  3. Chang, R.C.-H.; Lee, G.G.C.; Delbruck, T.; Valle, M. Introduction to the Special Issue on the 1st IEEE International Conference on Artificial Intelligence Circuits and Systems (AICAS 2019). IEEE J. Emerg. Sel. Top. Circuits Syst. 2019, 9, 595–597. [Google Scholar] [CrossRef]
  4. Hong, T.; Wang, P. Artificial Intelligence for Load Forecasting: History, Illusions, and Opportunities. IEEE Power Energy Mag. 2022, 20, 14–23. [Google Scholar] [CrossRef]
  5. Gams, M.; Kolenik, T. Relations between Electronics, Artificial Intelligence and Information Society through Information Society Rules. Electronics 2021, 10, 514. [Google Scholar] [CrossRef]
  6. Khan, F.H.; Pasha, M.A.; Masud, S. Advancements in Microprocessor Architecture for Ubiquitous AI—An Overview on History, Evolution, and Upcoming Challenges in AI Implementation. Micromachines 2021, 12, 665. [Google Scholar] [CrossRef]
  7. Sanni, K.A.; Andreou, A.G. A Historical Perspective on Hardware AI Inference, Charge-Based Computational Circuits and an 8 Bit Charge-Based Multiply-Add Core in 16 Nm FinFET CMOS. IEEE J. Emerg. Sel. Top. Circuits Syst. 2019, 9, 532–543. [Google Scholar] [CrossRef]
  8. Tomazzoli, C.; Scannapieco, S.; Cristani, M. Internet of Things and Artificial Intelligence Enable Energy Efficiency. J. Ambient. Intell. Humaniz. Comput. 2023, 14, 4933–4954. [Google Scholar] [CrossRef]
  9. Himeur, Y.; Ghanem, K.; Alsalemi, A.; Bensaali, F.; Amira, A. Artificial Intelligence Based Anomaly Detection of Energy Consumption in Buildings: A Review, Current Trends and New Perspectives. Appl. Energy 2021, 287, 116601. [Google Scholar] [CrossRef]
  10. Mishra, A.; Ray, A.K. A Novel Layered Architecture and Modular Design Framework for Next-Gen Cyber Physical System. In Proceedings of the 2022 International Conference on Computer Communication and Informatics (ICCCI), Chiba, Japan, 1–3 July 2022; IEEE: New York, NY, USA, 2022; pp. 1–8. [Google Scholar]
  11. Wang, Y.; Kinsner, W.; Kwong, S.; Leung, H.; Lu, J.; Smith, M.H.; Trajkovic, L.; Tunstel, E.; Plataniotis, K.N.; Yen, G.G. Brain-Inspired Systems: A Transdisciplinary Exploration on Cognitive Cybernetics, Humanity, and Systems Science Toward Autonomous Artificial Intelligence. IEEE Syst. Man. Cybern. Mag. 2020, 6, 6–13. [Google Scholar] [CrossRef]
  12. Zador, A.; Escola, S.; Richards, B.; Ölveczky, B.; Bengio, Y.; Boahen, K.; Botvinick, M.; Chklovskii, D.; Churchland, A.; Clopath, C.; et al. Catalyzing Next-Generation Artificial Intelligence through NeuroAI. Nat. Commun. 2023, 14, 1597. [Google Scholar] [CrossRef]
  13. Xu, Y.; Liu, X.; Cao, X.; Huang, C.; Liu, E.; Qian, S.; Liu, X.; Wu, Y.; Dong, F.; Qiu, C.-W.; et al. Artificial Intelligence: A Powerful Paradigm for Scientific Research. Innovation 2021, 2, 100179. [Google Scholar] [CrossRef]
  14. Zhao, W.; Ma, X.; Ju, J.; Zhao, Y.; Wang, X.; Li, S.; Sui, Y.; Sun, Q. Association of Visceral Adiposity Index with Asymptomatic Intracranial Arterial Stenosis: A Population-Based Study in Shandong, China. Lipids Health Dis. 2023, 22, 64. [Google Scholar] [CrossRef]
  15. Fayazi, M.; Colter, Z.; Afshari, E.; Dreslinski, R. Applications of Artificial Intelligence on the Modeling and Optimization for Analog and Mixed-Signal Circuits: A Review. IEEE Trans. Circuits Syst. I Regul. Pap. 2021, 68, 2418–2431. [Google Scholar] [CrossRef]
  16. Talib, M.A.; Majzoub, S.; Nasir, Q.; Jamal, D. A Systematic Literature Review on Hardware Implementation of Artificial Intelligence Algorithms. J. Supercomput. 2021, 77, 1897–1938. [Google Scholar] [CrossRef]
  17. Serrano-Gotarredona, T.; Valle, M.; Conti, F.; Li, H. Introduction to the Special Issue on the 2nd IEEE International Conference on Artificial Intelligence Circuits and Systems (AICAS 2020). IEEE J. Emerg. Sel. Top. Circuits Syst. 2020, 10, 403–405. [Google Scholar] [CrossRef]
  18. Xu, S. AICA Development Challenges. In Autonomous Intelligent Cyber Defense Agent (AICA); Springer: Cham, Switzerland, 2023; pp. 367–394. [Google Scholar]
  19. Costa, D.; Costa, M.; Pinto, S. Train Me If You Can: Decentralized Learning on the Deep Edge. Appl. Sci. 2022, 12, 4653. [Google Scholar] [CrossRef]
  20. Golder, A.; Raychowdhury, A. PCB Identification Based on Machine Learning Utilizing Power Consumption Variability. In Proceedings of the 2023 IEEE 5th International Conference on Artificial Intelligence Circuits and Systems (AICAS), Hangzhou, China, 11–13 June 2023; IEEE: New York, NY, USA, 2023; pp. 1–4. [Google Scholar]
  21. de Goede, D.; Kampert, D.; Varbanescu, A.L. The Cost of Reinforcement Learning for Game Engines. In Proceedings of the 2022 ACM/SPEC on International Conference on Performance Engineering, Beijing, China, 9–13 April 2022; ACM: New York, NY, USA; pp. 145–152. [Google Scholar]
  22. Fariselli, M.; Rusci, M.; Cambonie, J.; Flamand, E. Integer-Only Approximated MFCC for Ultra-Low Power Audio NN Processing on Multi-Core MCUs. In Proceedings of the 2021 IEEE 3rd International Conference on Artificial Intelligence Circuits and Systems (AICAS), Washington, DC, USA, 6–9 June 2021; IEEE: New York, NY, USA, 2021; pp. 1–4. [Google Scholar]
  23. Agyeman, M.; Guerrero, A.F.; Vien, Q.-T. Classification Techniques for Arrhythmia Patterns Using Convolutional Neural Networks and Internet of Things (IoT) Devices. IEEE Access 2022, 10, 87387–87403. [Google Scholar] [CrossRef]
  24. Mladenov, V. AICAS—PAST, PRESENT, AND FUTURE. Electronics 2023, 12, 1483. [Google Scholar] [CrossRef]
  25. Berggren, K.; Xia, Q.; Likharev, K.K.; Strukov, D.B.; Jiang, H.; Mikolajick, T.; Querlioz, D.; Salinga, M.; Erickson, J.R.; Pi, S.; et al. Roadmap on Emerging Hardware and Technology for Machine Learning. Nanotechnology 2021, 32, 012002. [Google Scholar] [CrossRef]
  26. Miranda, E.; Suñé, J. Memristors for Neuromorphic Circuits and Artificial Intelligence Applications. Materials 2020, 13, 938. [Google Scholar] [CrossRef]
  27. Sun, B.; Guo, T.; Zhou, G.; Ranjan, S.; Jiao, Y.; Wei, L.; Zhou, Y.N.; Wu, Y.A. Synaptic Devices Based Neuromorphic Computing Applications in Artificial Intelligence. Mater. Today Phys. 2021, 18, 100393. [Google Scholar] [CrossRef]
  28. Kim, D.; Yu, C.; Xie, S.; Chen, Y.; Kim, J.-Y.; Kim, B.; Kulkarni, J.P.; Kim, T.T.-H. An Overview of Processing-in-Memory Circuits for Artificial Intelligence and Machine Learning. IEEE J. Emerg. Sel. Top. Circuits Syst. 2022, 12, 338–353. [Google Scholar] [CrossRef]
  29. Ielmini, D.; Pedretti, G. Device and Circuit Architectures for In-Memory Computing. Adv. Intell. Syst. 2020, 2, 2000040. [Google Scholar] [CrossRef]
  30. Kusyk, J.; Saeed, S.M.; Uyar, M.U. Survey on Quantum Circuit Compilation for Noisy Intermediate-Scale Quantum Computers: Artificial Intelligence to Heuristics. IEEE Trans. Quantum Eng. 2021, 2, 2501616. [Google Scholar] [CrossRef]
  31. Mangini, S.; Tacchino, F.; Gerace, D.; Bajoni, D.; Macchiavello, C. Quantum Computing Models for Artificial Neural Networks. Europhys. Lett. 2021, 134, 10002. [Google Scholar] [CrossRef]
  32. Norlander, A. Command in AICA-Intensive Operations. In Autonomous Intelligent Cyber Defense Agent (AICA) A Comprehensive Guide; Springer: Cham, Switzerland, 2023; pp. 311–339. [Google Scholar] [CrossRef]
  33. Theron, P. Alternative Architectural Approaches. In Autonomous Intelligent Cyber Defense Agent (AICA) A Comprehensive Guide; Springer: Cham, Switzerland, 2023; pp. 17–46. [Google Scholar] [CrossRef]
  34. Yayla, M.; Thomann, S.; Buschjager, S.; Morik, K.; Chen, J.-J.; Amrouch, H. Reliable Binarized Neural Networks on Unreliable Beyond Von-Neumann Architecture. IEEE Trans. Circuits Syst. I Regul. Pap. 2022, 69, 2516–2528. [Google Scholar] [CrossRef]
  35. Coluccio, A.; Vacca, M.; Turvani, G. Logic-in-Memory Computation: Is It Worth It? A Binary Neural Network Case Study. J. Low Power Electron. Appl. 2020, 10, 7. [Google Scholar] [CrossRef]
  36. Mack, J.; Purdy, R.; Rockowitz, K.; Inouye, M.; Richter, E.; Valancius, S.; Kumbhare, N.; Hassan, M.S.; Fair, K.; Mixter, J.; et al. RANC: Reconfigurable Architecture for Neuromorphic Computing. IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst. 2021, 40, 2265–2278. [Google Scholar] [CrossRef]
  37. Gebregiorgis, A.; Du Nguyen, H.A.; Yu, J.; Bishnoi, R.; Taouil, M.; Catthoor, F.; Hamdioui, S. A Survey on Memory-Centric Computer Architectures. ACM J. Emerg. Technol. Comput. Syst. 2022, 18, 1–50. [Google Scholar] [CrossRef]
  38. Shanbhag, N.R.; Roy, S.K. Benchmarking In-Memory Computing Architectures. IEEE Open J. Solid-State Circuits Soc. 2022, 2, 288–300. [Google Scholar] [CrossRef]
  39. Zhu, D.; Linke, N.M.; Benedetti, M.; Landsman, K.A.; Nguyen, N.H.; Alderete, C.H.; Perdomo-Ortiz, A.; Korda, N.; Garfoot, A.; Brecque, C.; et al. Training of Quantum Circuits on a Hybrid Quantum Computer. Sci. Adv. 2019, 5, eaaw9918. [Google Scholar] [CrossRef] [PubMed]
  40. Marvania, D.B.; Parikh, D.S.; Patel, D.P. Comparative Performance of CMOS Active Inductor. In Proceedings of the International e-Conference on Intelligent Systems and Signal Processing; Springer: Singapore, 2022; pp. 391–401. [Google Scholar] [CrossRef]
  41. Navaneetha, A.; Bikshalu, K. FinFET Based Comparison Analysis of Power and Delay of Adder Topologies. Mater. Today Proc. 2021, 46, 3723–3729. [Google Scholar] [CrossRef]
  42. Mladenov, V. Application of Metal Oxide Memristor Models in Logic Gates. Electronics 2023, 12, 381. [Google Scholar] [CrossRef]
43. Yousefzadeh, A.; van Schaik, G.-J.; Tahghighi, M.; Detterer, P.; Traferro, S.; Hijdra, M.; Stuijt, J.; Corradi, F.; Sifalakis, M.; Konijnenburg, M. SENeCA: Scalable Energy-Efficient Neuromorphic Computer Architecture. In Proceedings of the 2022 IEEE 4th International Conference on Artificial Intelligence Circuits and Systems (AICAS), Incheon, Republic of Korea, 13–15 June 2022; IEEE: New York, NY, USA, 2022; pp. 371–374. [Google Scholar] [CrossRef]
44. Neuman, S.M.; Plancher, B.; Duisterhof, B.P.; Krishnan, S.; Banbury, C.; Mazumder, M.; Prakash, S.; Jabbour, J.; Faust, A.; de Croon, G.C.H.E.; et al. Tiny Robot Learning: Challenges and Directions for Machine Learning in Resource-Constrained Robots. In Proceedings of the 2022 IEEE 4th International Conference on Artificial Intelligence Circuits and Systems (AICAS), Incheon, Republic of Korea, 13–15 June 2022; IEEE: New York, NY, USA, 2022; pp. 296–299. [Google Scholar]
  45. Lin, W.-F.; Tsai, D.-Y.; Tang, L.; Hsieh, C.-T.; Chou, C.-Y.; Chang, P.-H.; Hsu, L. ONNC: A Compilation Framework Connecting ONNX to Proprietary Deep Learning Accelerators. In Proceedings of the 2019 IEEE International Conference on Artificial Intelligence Circuits and Systems (AICAS), Hsinchu, Taiwan, 18–20 March 2019; IEEE: New York, NY, USA, 2019; pp. 214–218. [Google Scholar]
  46. Huang, J.; Kelber, F.; Vogginger, B.; Wu, B.; Kreutz, F.; Gerhards, P.; Scholz, D.; Knobloch, K.; Mayr, C.G. Efficient Algorithms for Accelerating Spiking Neural Networks on MAC Array of SpiNNaker 2. In Proceedings of the 2023 IEEE 5th International Conference on Artificial Intelligence Circuits and Systems (AICAS), Hangzhou, China, 11–13 June 2023; IEEE: New York, NY, USA, 2023; pp. 1–5. [Google Scholar]
47. Theron, P.; Kott, A. When Autonomous Intelligent Goodware Will Fight Autonomous Intelligent Malware: A Possible Future of Cyber Defense. In Proceedings of the MILCOM 2019—2019 IEEE Military Communications Conference (MILCOM), Norfolk, VA, USA, 12–14 November 2019; IEEE: New York, NY, USA, 2019; pp. 1–7. [Google Scholar]
  48. Wang, H.; Cao, S.; Xu, S. A Real-Time Face Recognition System by Efficient Hardware-Software Co-Design on FPGA SoCs. In Proceedings of the 2021 IEEE 3rd International Conference on Artificial Intelligence Circuits and Systems (AICAS), Washington, DC, USA, 6–9 June 2021; IEEE: New York, NY, USA, 2021; pp. 1–2. [Google Scholar]
  49. Jiang, Z.; Yang, K.; Ma, Y.; Fisher, N.; Audsley, N.; Dong, Z. I/O-GUARD: Hardware/Software Co-Design for I/O Virtualization with Guaranteed Real-Time Performance. In Proceedings of the 2021 58th ACM/IEEE Design Automation Conference (DAC), San Francisco, CA, USA, 5–9 December 2021; IEEE: New York, NY, USA, 2021; pp. 1159–1164. [Google Scholar]
  50. Jayakodi, N.K.; Doppa, J.R.; Pande, P.P. A General Hardware and Software Co-Design Framework for Energy-Efficient Edge AI. In Proceedings of the 2021 IEEE/ACM International Conference On Computer Aided Design (ICCAD), Munich, Germany, 1–4 November 2021; IEEE: New York, NY, USA, 2021; pp. 1–7. [Google Scholar]
  51. Dubey, A.; Cammarota, R.; Varna, A.; Kumar, R.; Aysu, A. Hardware-Software Co-Design for Side-Channel Protected Neural Network Inference. In Proceedings of the 2023 IEEE International Symposium on Hardware Oriented Security and Trust (HOST), San Jose, CA, USA, 1–4 May 2023; IEEE: New York, NY, USA, 2023; pp. 155–166. [Google Scholar]
  52. Huang, P.; Wang, C.; Liu, W.; Qiao, F.; Lombardi, F. A Hardware/Software Co-Design Methodology for Adaptive Approximate Computing in Clustering and ANN Learning. IEEE Open J. Comput. Soc. 2021, 2, 38–52. [Google Scholar] [CrossRef]
  53. Wang, J.; Chen, Z.; Chen, Y.; Xu, Y.; Wang, T.; Yu, Y.; Narayanan, V.; George, S.; Yang, H.; Li, X. WeightLock: A Mixed-Grained Weight Encryption Approach Using Local Decrypting Units for Ciphertext Computing in DNN Accelerators. In Proceedings of the 2023 IEEE 5th International Conference on Artificial Intelligence Circuits and Systems (AICAS), Hangzhou, China, 11–13 June 2023; IEEE: New York, NY, USA, 2023; pp. 1–5. [Google Scholar] [CrossRef]
  54. Marković, D.; Mizrahi, A.; Querlioz, D.; Grollier, J. Physics for Neuromorphic Computing. Nat. Rev. Phys. 2020, 2, 499–510. [Google Scholar] [CrossRef]
  55. Roy, K.; Jaiswal, A.; Panda, P. Towards Spike-Based Machine Intelligence with Neuromorphic Computing. Nature 2019, 575, 607–617. [Google Scholar] [CrossRef] [PubMed]
  56. Davies, M.; Wild, A.; Orchard, G.; Sandamirskaya, Y.; Guerra, G.A.F.; Joshi, P.; Plank, P.; Risbud, S.R. Advancing Neuromorphic Computing With Loihi: A Survey of Results and Outlook. Proc. IEEE 2021, 109, 911–934. [Google Scholar] [CrossRef]
  57. Cho, S.W.; Kwon, S.M.; Kim, Y.-H.; Park, S.K. Recent Progress in Transistor-Based Optoelectronic Synapses: From Neuromorphic Computing to Artificial Sensory System. Adv. Intell. Syst. 2021, 3, 2000162. [Google Scholar] [CrossRef]
58. Ha, M.; Sim, J.; Moon, D.; Rhee, M.; Choi, J.; Koh, B.; Lim, E.; Park, K. CMS: A Computational Memory Solution for High-Performance and Power-Efficient Recommendation System. In Proceedings of the 2022 IEEE 4th International Conference on Artificial Intelligence Circuits and Systems (AICAS), Incheon, Republic of Korea, 13–15 June 2022; IEEE: New York, NY, USA, 2022; pp. 491–494. [Google Scholar] [CrossRef]
  59. Srinivasan, G.; Lee, C.; Sengupta, A.; Panda, P.; Sarwar, S.S.; Roy, K. Training Deep Spiking Neural Networks for Energy-Efficient Neuromorphic Computing. In Proceedings of the 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Barcelona, Spain, 4–8 May 2020; IEEE: New York, NY, USA, 2020; pp. 8549–8553. [Google Scholar] [CrossRef]
  60. Rathi, N.; Chakraborty, I.; Kosta, A.; Sengupta, A.; Ankit, A.; Panda, P.; Roy, K. Exploring Neuromorphic Computing Based on Spiking Neural Networks: Algorithms to Hardware. ACM Comput. Surv. 2023, 55, 243. [Google Scholar] [CrossRef]
  61. Li, Y.; Xuan, Z.; Lu, J.; Wang, Z.; Zhang, X.; Wu, Z.; Wang, Y.; Xu, H.; Dou, C.; Kang, Y.; et al. One Transistor One Electrolyte-Gated Transistor Based Spiking Neural Network for Power-Efficient Neuromorphic Computing System. Adv. Funct. Mater. 2021, 31, 2100042. [Google Scholar] [CrossRef]
  62. van Doremaele, E.R.W.; Ji, X.; Rivnay, J.; van de Burgt, Y. A Retrainable Neuromorphic Biosensor for On-Chip Learning and Classification. Nat. Electron. 2023, 6, 765–770. [Google Scholar] [CrossRef]
63. Baumgartner, S.; Renner, A.; Kreiser, R.; Liang, D.; Indiveri, G.; Sandamirskaya, Y. Visual Pattern Recognition with On-Chip Learning: Towards a Fully Neuromorphic Approach. In Proceedings of the 2020 IEEE International Symposium on Circuits and Systems (ISCAS), Seville, Spain, 12–14 October 2020; IEEE: New York, NY, USA, 2020; pp. 1–5. [Google Scholar] [CrossRef]
  64. Yoo, J.; Shoaran, M. Neural Interface Systems with On-Device Computing: Machine Learning and Neuromorphic Architectures. Curr. Opin. Biotechnol. 2021, 72, 95–101. [Google Scholar] [CrossRef]
65. Hsu, K.-C.; Tseng, H.-W. Accelerating Applications Using Edge Tensor Processing Units. In Proceedings of the 2021 International Conference for High Performance Computing, Networking, Storage and Analysis, St. Louis, MO, USA, 14–19 November 2021; ACM: New York, NY, USA, 2021; pp. 1–14. [Google Scholar] [CrossRef]
  66. Kochura, Y.; Gordienko, Y.; Taran, V.; Gordienko, N.; Rokovyi, A.; Alienin, O.; Stirenko, S. Batch Size Influence on Performance of Graphic and Tensor Processing Units During Training and Inference Phases. In Advances in Computer Science for Engineering and Education II; Springer: Cham, Switzerland, 2020; pp. 658–668. [Google Scholar] [CrossRef]
  67. Adjoua, O.; Lagardère, L.; Jolly, L.-H.; Durocher, A.; Very, T.; Dupays, I.; Wang, Z.; Inizan, T.J.; Célerse, F.; Ren, P.; et al. Tinker-HP: Accelerating Molecular Dynamics Simulations of Large Complex Systems with Advanced Point Dipole Polarizable Force Fields Using GPUs and Multi-GPU Systems. J. Chem. Theory Comput. 2021, 17, 2034–2053. [Google Scholar] [CrossRef]
68. Seritan, S.; Bannwarth, C.; Fales, B.S.; Hohenstein, E.G.; Isborn, C.M.; Kokkila-Schumacher, S.I.L.; Li, X.; Liu, F.; Luehr, N.; Snyder, J.W.; et al. A Graphical Processing Unit Electronic Structure Package for Ab Initio Molecular Dynamics. WIREs Comput. Mol. Sci. 2021, 11, e1494. [Google Scholar] [CrossRef]
69. Schölkopf, B. Causality for Machine Learning. In Probabilistic and Causal Inference: The Works of Judea Pearl; Association for Computing Machinery: New York, NY, USA, 2022; pp. 765–804. [Google Scholar] [CrossRef]
  70. Ishida, K.; Byun, I.; Nagaoka, I.; Fukumitsu, K.; Tanaka, M.; Kawakami, S.; Tanimoto, T.; Ono, T.; Kim, J.; Inoue, K. SuperNPU: An Extremely Fast Neural Processing Unit Using Superconducting Logic Devices. In Proceedings of the 2020 53rd Annual IEEE/ACM International Symposium on Microarchitecture (MICRO), Athens, Greece, 17–21 October 2020; IEEE: New York, NY, USA, 2020; pp. 58–72. [Google Scholar]
  71. Lee, K.J. Architecture of Neural Processing Unit for Deep Neural Networks. Adv. Comput. 2021, 122, 217–245. [Google Scholar]
  72. Fang, Q.; Yan, S. Graphics Processing Unit-Accelerated Mesh-Based Monte Carlo Photon Transport Simulations. J. Biomed. Opt. 2019, 24, 1. [Google Scholar] [CrossRef]
  73. Kussmann, J.; Laqua, H.; Ochsenfeld, C. Highly Efficient Resolution-of-Identity Density Functional Theory Calculations on Central and Graphics Processing Units. J. Chem. Theory Comput. 2021, 17, 1512–1521. [Google Scholar] [CrossRef]
  74. Boeken, T.; Feydy, J.; Lecler, A.; Soyer, P.; Feydy, A.; Barat, M.; Duron, L. Artificial Intelligence in Diagnostic and Interventional Radiology: Where Are We Now? Diagn. Interv. Imaging 2023, 104, 1–5. [Google Scholar] [CrossRef]
  75. Raschka, S.; Patterson, J.; Nolet, C. Machine Learning in Python: Main Developments and Technology Trends in Data Science, Machine Learning, and Artificial Intelligence. Information 2020, 11, 193. [Google Scholar] [CrossRef]
  76. Sharma, S.; Krishna, C.R.; Kumar, R. Android Ransomware Detection Using Machine Learning Techniques: A Comparative Analysis on GPU and CPU. In Proceedings of the 2020 21st International Arab Conference on Information Technology (ACIT), Giza, Egypt, 28–30 November 2020; IEEE: New York, NY, USA, 2020; pp. 1–6. [Google Scholar]
  77. Reuther, A.; Michaleas, P.; Jones, M.; Gadepally, V.; Samsi, S.; Kepner, J. Survey and Benchmarking of Machine Learning Accelerators. In Proceedings of the 2019 IEEE High Performance Extreme Computing Conference (HPEC), Waltham, MA, USA, 24–26 September 2019; IEEE: New York, NY, USA, 2019; pp. 1–9. [Google Scholar]
  78. Patel, P.; Thakkar, A. The Upsurge of Deep Learning for Computer Vision Applications. Int. J. Electr. Comput. Eng. 2020, 10, 538. [Google Scholar] [CrossRef]
  79. Zhang, Y.; Yu, J.; Chen, Y.; Yang, W.; Zhang, W.; He, Y. Real-Time Strawberry Detection Using Deep Neural Networks on Embedded System (Rtsd-Net): An Edge AI Application. Comput. Electron. Agric. 2022, 192, 106586. [Google Scholar] [CrossRef]
80. Pandey, P.; Basu, P.; Chakraborty, K.; Roy, S. GreenTPU: Improving Timing Error Resilience of a Near-Threshold Tensor Processing Unit. In Proceedings of the 56th Annual Design Automation Conference 2019, Las Vegas, NV, USA, 2–6 June 2019; ACM: New York, NY, USA, 2019; pp. 1–6. [Google Scholar]
  81. You, Y.; Zhang, Z.; Hsieh, C.-J.; Demmel, J.; Keutzer, K. Fast Deep Neural Network Training on Distributed Systems and Cloud TPUs. IEEE Trans. Parallel Distrib. Syst. 2019, 30, 2449–2462. [Google Scholar] [CrossRef]
  82. Ravikumar, A.; Sriraman, H.; Sai Saketh, P.M.; Lokesh, S.; Karanam, A. Effect of Neural Network Structure in Accelerating Performance and Accuracy of a Convolutional Neural Network with GPU/TPU for Image Analytics. PeerJ Comput. Sci. 2022, 8, e909. [Google Scholar] [CrossRef] [PubMed]
  83. Shahid, A.; Mushtaq, M. A Survey Comparing Specialized Hardware and Evolution in TPUs for Neural Networks. In Proceedings of the 2020 IEEE 23rd International Multitopic Conference (INMIC), Bahawalpur, Pakistan, 5–7 November 2020; IEEE: New York, NY, USA, 2020; pp. 1–6. [Google Scholar]
  84. Ji, Y.; Wang, Q.; Li, X.; Liu, J. A Survey on Tensor Techniques and Applications in Machine Learning. IEEE Access 2019, 7, 162950–162990. [Google Scholar] [CrossRef]
  85. Sharma, N.; Sharma, R.; Jindal, N. Machine Learning and Deep Learning Applications-A Vision. Glob. Transit. Proc. 2021, 2, 24–28. [Google Scholar] [CrossRef]
  86. Jouppi, N.; Kurian, G.; Li, S.; Ma, P.; Nagarajan, R.; Nai, L.; Patil, N.; Subramanian, S.; Swing, A.; Towles, B.; et al. TPU v4: An Optically Reconfigurable Supercomputer for Machine Learning with Hardware Support for Embeddings. In Proceedings of the 50th Annual International Symposium on Computer Architecture, Orlando, FL, USA, 17–21 June 2023; ACM: New York, NY, USA, 2023; pp. 1–14. [Google Scholar]
87. Mrozek, D.; Górny, R.; Wachowicz, A.; Małysiak-Mrozek, B. Edge-Based Detection of Varroosis in Beehives with IoT Devices with Embedded and TPU-Accelerated Machine Learning. Appl. Sci. 2021, 11, 11078. [Google Scholar] [CrossRef]
  88. Alibabaei, K.; Assunção, E.; Gaspar, P.D.; Soares, V.N.G.J.; Caldeira, J.M.L.P. Real-Time Detection of Vine Trunk for Robot Localization Using Deep Learning Models Developed for Edge TPU Devices. Future Internet 2022, 14, 199. [Google Scholar] [CrossRef]
  89. Oh, Y.H.; Kim, S.; Jin, Y.; Son, S.; Bae, J.; Lee, J.; Park, Y.; Kim, D.U.; Ham, T.J.; Lee, J.W. Layerweaver: Maximizing Resource Utilization of Neural Processing Units via Layer-Wise Scheduling. In Proceedings of the 2021 IEEE International Symposium on High-Performance Computer Architecture (HPCA), Seoul, Republic of Korea, 27 February–3 March 2021; IEEE: New York, NY, USA, 2021; pp. 584–597. [Google Scholar]
  90. Choi, Y.; Rhu, M. PREMA: A Predictive Multi-Task Scheduling Algorithm For Preemptible Neural Processing Units. In Proceedings of the 2020 IEEE International Symposium on High Performance Computer Architecture (HPCA), San Diego, CA, USA, 22–26 February 2020; IEEE: New York, NY, USA, 2020; pp. 220–233. [Google Scholar]
  91. Jeon, W.; Lee, J.; Kang, D.; Kal, H.; Ro, W.W. PIMCaffe: Functional Evaluation of a Machine Learning Framework for In-Memory Neural Processing Unit. IEEE Access 2021, 9, 96629–96640. [Google Scholar] [CrossRef]
  92. Tan, T.; Cao, G. Deep Learning on Mobile Devices Through Neural Processing Units and Edge Computing. In Proceedings of the IEEE INFOCOM 2022—IEEE Conference on Computer Communications, New York, NY, USA, 2–5 May 2022; IEEE: New York, NY, USA, 2022; pp. 1209–1218. [Google Scholar]
  93. Lee, S.; Kim, J.; Na, S.; Park, J.; Huh, J. TNPU: Supporting Trusted Execution with Tree-Less Integrity Protection for Neural Processing Unit. In Proceedings of the 2022 IEEE International Symposium on High-Performance Computer Architecture (HPCA), Seoul, Republic of Korea, 2–6 April 2022; IEEE: New York, NY, USA, 2022; pp. 229–243. [Google Scholar]
  94. Park, J.-S.; Park, C.; Kwon, S.; Jeon, T.; Kang, Y.; Lee, H.; Lee, D.; Kim, J.; Kim, H.-S.; Lee, Y.; et al. A Multi-Mode 8k-MAC HW-Utilization-Aware Neural Processing Unit With a Unified Multi-Precision Datapath in 4-Nm Flagship Mobile SoC. IEEE J. Solid-State Circuits 2023, 58, 189–202. [Google Scholar] [CrossRef]
  95. Verhelst, M.; Murmann, B. Machine Learning at the Edge. In NANO-CHIPS 2030: On-Chip AI for an Efficient Data-Driven World; Springer: Cham, Switzerland, 2020; pp. 293–322. [Google Scholar] [CrossRef]
96. Jobst, M.; Partzsch, J.; Liu, C.; Guo, L.; Walter, D.; Rehman, S.-U.; Scholze, S.; Hoppner, S.; Mayr, C. ZEN: A Flexible Energy-Efficient Hardware Classifier Exploiting Temporal Sparsity in ECG Data. In Proceedings of the 2022 IEEE 4th International Conference on Artificial Intelligence Circuits and Systems (AICAS), Incheon, Republic of Korea, 13–15 June 2022; IEEE: New York, NY, USA, 2022; pp. 214–217. [Google Scholar]
  97. Hu, J.; Leow, C.S.; Goh, W.L.; Gao, Y. Energy Efficient Software-Hardware Co-Design of Quantized Recurrent Convolutional Neural Network for Continuous Cardiac Monitoring. In Proceedings of the 2023 IEEE 5th International Conference on Artificial Intelligence Circuits and Systems (AICAS), Hangzhou, China, 11–13 June 2023; IEEE: New York, NY, USA, 2023; pp. 1–5. [Google Scholar]
  98. Wan, Z.; Zhang, Y.; Raychowdhury, A.; Yu, B.; Zhang, Y.; Liu, S. An Energy-Efficient Quad-Camera Visual System for Autonomous Machines on FPGA Platform. In Proceedings of the 2021 IEEE 3rd International Conference on Artificial Intelligence Circuits and Systems (AICAS), Washington, DC, USA, 6–9 June 2021; IEEE: New York, NY, USA, 2021; pp. 1–4. [Google Scholar]
  99. Zhou, S.; Chen, X.; Kim, K.; Liu, S.-C. High-Accuracy and Energy-Efficient Acoustic Inference Using Hardware-Aware Training and a 0.34nW/Ch Full-Wave Rectifier. In Proceedings of the 2023 IEEE 5th International Conference on Artificial Intelligence Circuits and Systems (AICAS), Hangzhou, China, 11–13 June 2023; IEEE: New York, NY, USA, 2023; pp. 1–5. [Google Scholar]
100. Zimmer, B.; Venkatesan, R.; Shao, Y.S.; Clemons, J.; Fojtik, M.; Jiang, N.; Keller, B.; Klinefelter, A.; Pinckney, N.; Raina, P.; et al. A 0.32–128 TOPS, Scalable Multi-Chip-Module-Based Deep Neural Network Inference Accelerator With Ground-Referenced Signaling in 16 nm. IEEE J. Solid-State Circuits 2020, 55, 920–932. [Google Scholar] [CrossRef]
  101. Hao, C.; Chen, D. Software/Hardware Co-Design for Multi-Modal Multi-Task Learning in Autonomous Systems. In Proceedings of the 2021 IEEE 3rd International Conference on Artificial Intelligence Circuits and Systems (AICAS), Washington, DC, USA, 6–9 June 2021; IEEE: New York, NY, USA, 2021; pp. 1–5. [Google Scholar] [CrossRef]
102. Wu, Y.; Ding, B.; Xu, Q.; Chen, S. Fault-Tolerant-Driven Clustering for Large Scale Neuromorphic Computing Systems. In Proceedings of the 2020 2nd IEEE International Conference on Artificial Intelligence Circuits and Systems (AICAS), Genova, Italy, 31 August–2 September 2020; IEEE: New York, NY, USA, 2020; pp. 238–242. [Google Scholar] [CrossRef]
  103. Li, X.; Yan, G.; Liu, C. Fault-Tolerant Deep Learning Processors. In Built-in Fault-Tolerant Computing Paradigm for Resilient Large-Scale Chip Design; Springer Nature: Singapore, 2023; pp. 243–302. [Google Scholar] [CrossRef]
  104. Gao, Z.; Zhang, H.; Wei, X.; Xiao, J.; Zeng, S.; Ge, G.; Wang, Y.; Reviriego, P. Ensemble of Pruned Networks for Reliable Classifiers. In Proceedings of the 2021 IEEE 3rd International Conference on Artificial Intelligence Circuits and Systems (AICAS), Washington, DC, USA, 6–9 June 2021; IEEE: New York, NY, USA, 2021; pp. 1–4. [Google Scholar] [CrossRef]
  105. Liu, C.; Chu, C.; Xu, D.; Wang, Y.; Wang, Q.; Li, H.; Li, X.; Cheng, K.-T. HyCA: A Hybrid Computing Architecture for Fault-Tolerant Deep Learning. IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst. 2022, 41, 3400–3413. [Google Scholar] [CrossRef]
  106. Panayides, A.S.; Amini, A.; Filipovic, N.D.; Sharma, A.; Tsaftaris, S.A.; Young, A.; Foran, D.; Do, N.; Golemati, S.; Kurc, T.; et al. AI in Medical Imaging Informatics: Current Challenges and Future Directions. IEEE J. Biomed. Health Inform. 2020, 24, 1837–1857. [Google Scholar] [CrossRef] [PubMed]
  107. Ma, Y.; Wang, Z.; Yang, H.; Yang, L. Artificial Intelligence Applications in the Development of Autonomous Vehicles: A Survey. IEEE/CAA J. Autom. Sin. 2020, 7, 315–329. [Google Scholar] [CrossRef]
  108. Pallathadka, H.; Ramirez-Asis, E.H.; Loli-Poma, T.P.; Kaliyaperumal, K.; Ventayen, R.J.M.; Naved, M. Applications of Artificial Intelligence in Business Management, e-Commerce and Finance. Mater. Today Proc. 2023, 80, 2610–2613. [Google Scholar] [CrossRef]
  109. Gupta, S.; Modgil, S.; Lee, C.-K.; Sivarajah, U. The Future Is Yesterday: Use of AI-Driven Facial Recognition to Enhance Value in the Travel and Tourism Industry. Inf. Syst. Front. 2023, 25, 1179–1195. [Google Scholar] [CrossRef]
  110. Yang, L.W.Y.; Ng, W.Y.; Foo, L.L.; Liu, Y.; Yan, M.; Lei, X.; Zhang, X.; Ting, D.S.W. Deep Learning-Based Natural Language Processing in Ophthalmology: Applications, Challenges and Future Directions. Curr. Opin. Ophthalmol. 2021, 32, 397–405. [Google Scholar] [CrossRef]
  111. Trivedi, K.S. Fundamentals of Natural Language Processing. In Microsoft Azure AI Fundamentals Certification Companion: Guide to Prepare for the AI-900 Exam; Apress: Berkeley, CA, USA, 2023; pp. 119–180. [Google Scholar] [CrossRef]
  112. Mah, P.M.; Skalna, I.; Muzam, J. Natural Language Processing and Artificial Intelligence for Enterprise Management in the Era of Industry 4.0. Appl. Sci. 2022, 12, 9207. [Google Scholar] [CrossRef]
  113. Aldunate, Á.; Maldonado, S.; Vairetti, C.; Armelini, G. Understanding Customer Satisfaction via Deep Learning and Natural Language Processing. Expert. Syst. Appl. 2022, 209, 118309. [Google Scholar] [CrossRef]
  114. Johnson, K.B.; Wei, W.; Weeraratne, D.; Frisse, M.E.; Misulis, K.; Rhee, K.; Zhao, J.; Snowdon, J.L. Precision Medicine, AI, and the Future of Personalized Health Care. Clin. Transl. Sci. 2021, 14, 86–93. [Google Scholar] [CrossRef]
  115. Dalzochio, J.; Kunst, R.; Pignaton, E.; Binotto, A.; Sanyal, S.; Favilla, J.; Barbosa, J. Machine Learning and Reasoning for Predictive Maintenance in Industry 4.0: Current Status and Challenges. Comput. Ind. 2020, 123, 103298. [Google Scholar] [CrossRef]
  116. Haenlein, M.; Kaplan, A.; Tan, C.-W.; Zhang, P. Artificial Intelligence (AI) and Management Analytics. J. Manag. Anal. 2019, 6, 341–343. [Google Scholar] [CrossRef]
  117. Rahmani, A.M.; Rezazadeh, B.; Haghparast, M.; Chang, W.-C.; Ting, S.G. Applications of Artificial Intelligence in the Economy, Including Applications in Stock Trading, Market Analysis, and Risk Management. IEEE Access 2023, 11, 80769–80793. [Google Scholar] [CrossRef]
  118. Rasouli, J.J.; Shao, J.; Neifert, S.; Gibbs, W.N.; Habboub, G.; Steinmetz, M.P.; Benzel, E.; Mroz, T.E. Artificial Intelligence and Robotics in Spine Surgery. Global Spine J. 2021, 11, 556–564. [Google Scholar] [CrossRef]
  119. Tambare, P.; Meshram, C.; Lee, C.-C.; Ramteke, R.J.; Imoize, A.L. Performance Measurement System and Quality Management in Data-Driven Industry 4.0: A Review. Sensors 2021, 22, 224. [Google Scholar] [CrossRef]
  120. Pistrui, B.; Kostyal, D.; Matyusz, Z. Dynamic Acceleration: Service Robots in Retail. Cogent Bus. Manag. 2023, 10, 2289204. [Google Scholar] [CrossRef]
  121. Villar, A.S.; Khan, N. Robotic Process Automation in Banking Industry: A Case Study on Deutsche Bank. J. Bank. Financ. Technol. 2021, 5, 71–86. [Google Scholar] [CrossRef]
  122. Barbuto, V.; Savaglio, C.; Chen, M.; Fortino, G. Disclosing Edge Intelligence: A Systematic Meta-Survey. Big Data Cogn. Comput. 2023, 7, 44. [Google Scholar] [CrossRef]
123. Wen, S.-C.; Huang, P.-T. Design Exploration of an Energy-Efficient Acceleration System for CNNs on Low-Cost Resource-Constraint SoC-FPGAs. In Proceedings of the 2022 IEEE 4th International Conference on Artificial Intelligence Circuits and Systems (AICAS), Incheon, Republic of Korea, 13–15 June 2022; IEEE: New York, NY, USA, 2022; pp. 234–237. [Google Scholar] [CrossRef]
  124. Abbasi, M.; Cardoso, F.; Silva, J.; Martins, P. Scalable and Energy-Efficient Deep Learning for Distributed AIoT Applications Using Modular Cognitive IoT Hardware. In International Conference on Disruptive Technologies, Tech Ethics and Artificial Intelligence; Springer: Cham, Switzerland, 2023; pp. 85–96. [Google Scholar] [CrossRef]
125. Wan, Z.; Lele, A.; Yu, B.; Liu, S.; Wang, Y.; Reddi, V.J.; Hao, C.; Raychowdhury, A. Robotic Computing on FPGAs: Current Progress, Research Challenges, and Opportunities. In Proceedings of the 2022 IEEE 4th International Conference on Artificial Intelligence Circuits and Systems (AICAS), Incheon, Republic of Korea, 13–15 June 2022; IEEE: New York, NY, USA, 2022; pp. 291–295. [Google Scholar] [CrossRef]
126. Gruel, A.; Vitale, A.; Martinet, J.; Magno, M. Neuromorphic Event-Based Spatio-Temporal Attention Using Adaptive Mechanisms. In Proceedings of the 2022 IEEE 4th International Conference on Artificial Intelligence Circuits and Systems (AICAS), Incheon, Republic of Korea, 13–15 June 2022; IEEE: New York, NY, USA, 2022; pp. 379–382. [Google Scholar] [CrossRef]
127. Sengupta, J.; Kubendran, R.; Neftci, E.; Andreou, A. High-Speed, Real-Time, Spike-Based Object Tracking and Path Prediction on Google Edge TPU. In Proceedings of the 2020 2nd IEEE International Conference on Artificial Intelligence Circuits and Systems (AICAS), Genova, Italy, 31 August–2 September 2020; IEEE: New York, NY, USA, 2020; pp. 134–135. [Google Scholar] [CrossRef]
  128. Qin, M.; Liu, T.; Hou, B.; Gao, Y.; Yao, Y.; Sun, H. A Low-Latency RDP-CORDIC Algorithm for Real-Time Signal Processing of Edge Computing Devices in Smart Grid Cyber-Physical Systems. Sensors 2022, 22, 7489. [Google Scholar] [CrossRef]
  129. Zou, Z.; Jin, Y.; Nevalainen, P.; Huan, Y.; Heikkonen, J.; Westerlund, T. Edge and Fog Computing Enabled AI for IoT-An Overview. In Proceedings of the 2019 IEEE International Conference on Artificial Intelligence Circuits and Systems (AICAS), Hsinchu, Taiwan, 18–20 March 2019; IEEE: New York, NY, USA, 2019; pp. 51–56. [Google Scholar] [CrossRef]
  130. Chuang, Y.-T.; Hung, Y.-T. A Real-Time and ACO-Based Offloading Algorithm in Edge Computing. J. Parallel Distrib. Comput. 2023, 179, 104703. [Google Scholar] [CrossRef]
  131. Lee, J.; Kim, C.; Han, D.; Kim, S.; Kim, S.; Yoo, H.-J. Energy-Efficient Deep Reinforcement Learning Accelerator Designs for Mobile Autonomous Systems. In Proceedings of the 2021 IEEE 3rd International Conference on Artificial Intelligence Circuits and Systems (AICAS), Washington, DC, USA, 6–9 June 2021; IEEE: New York, NY, USA, 2021; pp. 1–4. [Google Scholar] [CrossRef]
132. Lee, J.; Jo, W.; Park, S.-W.; Yoo, H.-J. Low-Power Autonomous Adaptation System with Deep Reinforcement Learning. In Proceedings of the 2022 IEEE 4th International Conference on Artificial Intelligence Circuits and Systems (AICAS), Incheon, Republic of Korea, 13–15 June 2022; IEEE: New York, NY, USA, 2022; pp. 300–303. [Google Scholar] [CrossRef]
133. Faraone, A.; Delgado-Gonzalo, R. Convolutional-Recurrent Neural Networks on Low-Power Wearable Platforms for Cardiac Arrhythmia Detection. In Proceedings of the 2020 2nd IEEE International Conference on Artificial Intelligence Circuits and Systems (AICAS), Genova, Italy, 31 August–2 September 2020; IEEE: New York, NY, USA, 2020; pp. 153–157. [Google Scholar] [CrossRef]
  134. Dave, S.; Dave, A.; Radhakrishnan, S.; Das, J.; Dave, S. Biosensors for Healthcare: An Artificial Intelligence Approach. In Biosensors for Emerging and Re-Emerging Infectious Diseases; Elsevier: Amsterdam, The Netherlands, 2022; pp. 365–383. [Google Scholar] [CrossRef]
  135. Li, J.; Liu, J.; Hu, X.; Zhang, Y.; Yu, G.; Qian, S.; Mao, W.; Du, L.; Li, Y.; Du, Y. Grand Challenge on Software and Hardware Co-Optimization for E-Commerce Recommendation System. In Proceedings of the 2023 IEEE 5th International Conference on Artificial Intelligence Circuits and Systems (AICAS), Hangzhou, China, 11–13 June 2023; IEEE: New York, NY, USA, 2023; pp. 1–5. [Google Scholar] [CrossRef]
136. de Moura, R.F.; Carro, L. Scalable and Energy-Efficient NN Acceleration with GPU-ReRAM Architecture. In International Symposium on Applied Reconfigurable Computing; Springer: Cham, Switzerland, 2023; pp. 230–244. [Google Scholar] [CrossRef]
137. Zanghieri, M.; Benatti, S.; Conti, F.; Burrello, A.; Benini, L. Temporal Variability Analysis in sEMG Hand Grasp Recognition Using Temporal Convolutional Networks. In Proceedings of the 2020 2nd IEEE International Conference on Artificial Intelligence Circuits and Systems (AICAS), Genova, Italy, 31 August–2 September 2020; IEEE: New York, NY, USA, 2020; pp. 228–232. [Google Scholar] [CrossRef]
  138. Sakai, Y.; Pedroni, B.U.; Joshi, S.; Akinin, A.; Cauwenberghs, G. DropOut and DropConnect for Reliable Neuromorphic Inference under Energy and Bandwidth Constraints in Network Connectivity. In Proceedings of the 2019 IEEE International Conference on Artificial Intelligence Circuits and Systems (AICAS), Hsinchu, Taiwan, 18–20 March 2019; IEEE: New York, NY, USA, 2019; pp. 76–80. [Google Scholar] [CrossRef]
  139. Liang, D.; Kreiser, R.; Nielsen, C.; Qiao, N.; Sandamirskaya, Y.; Indiveri, G. Robust Learning and Recognition of Visual Patterns in Neuromorphic Electronic Agents. In Proceedings of the 2019 IEEE International Conference on Artificial Intelligence Circuits and Systems (AICAS), Hsinchu, Taiwan, 18–20 March 2019; IEEE: New York, NY, USA, 2019; pp. 71–75. [Google Scholar] [CrossRef]
  140. Rüegg, T.; Giordano, M.; Magno, M. KP2Dtiny: Quantized Neural Keypoint Detection and Description on the Edge. In Proceedings of the 2023 IEEE 5th International Conference on Artificial Intelligence Circuits and Systems (AICAS), Hangzhou, China, 11–13 June 2023; IEEE: New York, NY, USA, 2023; pp. 1–5. [Google Scholar] [CrossRef]
  141. Yoon, M.; Choi, J. Architecture-Aware Optimization of Layer Fusion for Latency-Optimal CNN Inference. In Proceedings of the 2023 IEEE 5th International Conference on Artificial Intelligence Circuits and Systems (AICAS), Hangzhou, China, 11–13 June 2023; IEEE: New York, NY, USA, 2023; pp. 1–4. [Google Scholar] [CrossRef]
142. Gill, S.S.; Xu, M.; Ottaviani, C.; Patros, P.; Bahsoon, R.; Shaghaghi, A.; Golec, M.; Stankovski, V.; Wu, H.; Abraham, A.; et al. AI for Next Generation Computing: Emerging Trends and Future Directions. Internet Things 2022, 19, 100514. [Google Scholar] [CrossRef]
  143. Rasch, M.J.; Moreda, D.; Gokmen, T.; Le Gallo, M.; Carta, F.; Goldberg, C.; El Maghraoui, K.; Sebastian, A.; Narayanan, V. A Flexible and Fast PyTorch Toolkit for Simulating Training and Inference on Analog Crossbar Arrays. In Proceedings of the 2021 IEEE 3rd International Conference on Artificial Intelligence Circuits and Systems (AICAS), Washington, DC, USA, 6–9 June 2021; IEEE: New York, NY, USA, 2021; pp. 1–4. [Google Scholar] [CrossRef]
144. Zanotti, T.; Puglisi, F.M.; Pavan, P. Smart Logic-in-Memory Architecture for Ultra-Low Power Large Fan-In Operations. In Proceedings of the 2020 2nd IEEE International Conference on Artificial Intelligence Circuits and Systems (AICAS), Genova, Italy, 31 August–2 September 2020; IEEE: New York, NY, USA, 2020; pp. 31–35. [Google Scholar] [CrossRef]
145. Yang, J.; Li, N.; Chen, Y.-H.; Sawan, M. Towards Intelligent Noninvasive Closed-Loop Neuromodulation Systems. In Proceedings of the 2022 IEEE 4th International Conference on Artificial Intelligence Circuits and Systems (AICAS), Incheon, Republic of Korea, 13–15 June 2022; IEEE: New York, NY, USA, 2022; pp. 194–197. [Google Scholar] [CrossRef]
  146. Kott, A. Autonomous Intelligent Cyber Defense Agent (AICA): A Comprehensive Guide; Springer Nature: Cham, Switzerland, 2023; Volume 87, ISBN 3031292693. [Google Scholar]
  147. Sun, Q.; Wang, Q.; Wang, X.; Ji, X.; Sang, S.; Shao, S.; Zhao, Y.; Xiang, Y.; Xue, Y.; Li, J.; et al. Prevalence and Cardiovascular Risk Factors of Asymptomatic Intracranial Arterial Stenosis: The Kongcun Town Study in Shandong, China. Eur. J. Neurol. 2020, 27, 729–735. [Google Scholar] [CrossRef]
  148. Caballero-Rico, F.C.; Roque-Hernández, R.V.; de la Garza Cano, R.; Arvizu-Sánchez, E. Challenges for the Integrated Management of Priority Areas for Conservation in Tamaulipas, México. Sustainability 2022, 14, 494. [Google Scholar] [CrossRef]
Figure 1. Example of a neuromorphic architecture.
Figure 2. Simplified quantum computing circuit.
Figure 3. Simplified CPU architecture.
Figure 4. Simplified GPU architecture.
Figure 5. Simplified TPU architecture.
Figure 6. Simplified NPU architecture.
Table 3. AI Applications and Their Industry Impact.

| AI Application | Healthcare Impact | Automotive Impact | Retail Impact | Finance Impact |
|---|---|---|---|---|
| Image Recognition | Diagnostic Imaging, Patient Data Analysis [106] | Autonomous Driving, Vehicle Inspection [107] | Customer Behavior Analysis, Inventory Management [108] | Fraud Detection, Customer Identification [109] |
| Natural Language Processing (NLP) | Patient Interaction, Clinical Documentation [110] | Voice Commands, In-Car Assistance [111] | Chatbots, Customer Service [112] | Sentiment Analysis, Automated Customer Support [113] |
| Predictive Analytics | Disease Prediction, Treatment Personalization [114] | Predictive Maintenance, Design Optimization [115] | Sales Forecasting, Stock Optimization [116] | Risk Assessment, Algorithmic Trading [117] |
| Robotics | Surgical Assistance, Patient Care Robotics [118] | Manufacturing Automation, Quality Control [119] | Warehouse Automation, In-Store Robotics [120] | Process Automation, Compliance Monitoring [121] |
Table 4. Global Research Institutions and Their Contributions to AICAS.

| Institution | Location | Contributions |
|---|---|---|
| Massachusetts Institute of Technology (MIT) | United States | Pioneering work in neural networks and cognitive science |
| Stanford University | United States | Research in machine learning algorithms and robotics |
| University of California, Berkeley | United States | Advancements in computer vision and deep learning |
| Tsinghua University | China | Innovations in AI chip design and quantum computing |
| ETH Zurich | Switzerland | Breakthroughs in machine learning and AI ethics |
| University of Oxford | United Kingdom | Development of AI in healthcare and ethical AI |
| National University of Singapore (NUS) | Singapore | Leading research in AI and computer engineering |
| Technical University of Munich (TUM) | Germany | Research in artificial intelligence and robotics |
| University of Tokyo | Japan | Advancements in robotics and computer vision |
| Indian Institute of Technology (IIT) | India | Focus on machine learning and AI applications in healthcare |
| University of Toronto | Canada | Notable research in deep learning and neural networks |
| Korea Advanced Institute of Science and Technology (KAIST) | South Korea | Research in robotics and machine intelligence |
| Imperial College London | United Kingdom | Significant contributions in AI and machine learning |
| École Polytechnique Fédérale de Lausanne (EPFL) | Switzerland | Work in machine learning and AI ethics |
| Australian National University (ANU) | Australia | AI research, especially in machine learning and AI ethics |
| University of São Paulo | Brazil | Focus on computational intelligence and data science |
| Sorbonne University | France | Research in artificial intelligence and computational science |