Review

Neuromorphic Photonic On-Chip Computing

by Sujal Gupta 1 and Jolly Xavier 1,2,3,*
1 Optics and Photonics Centre, Indian Institute of Technology Delhi, New Delhi 110016, India
2 SeNSE, Indian Institute of Technology Delhi, New Delhi 110016, India
3 Department of Physics and Astronomy, University of Exeter, Exeter EX4 4QD, UK
* Author to whom correspondence should be addressed.
Chips 2025, 4(3), 34; https://doi.org/10.3390/chips4030034
Submission received: 21 October 2024 / Revised: 30 May 2025 / Accepted: 19 June 2025 / Published: 7 August 2025
(This article belongs to the Special Issue Silicon Photonic Integrated Circuits: Advancements and Challenges)

Abstract

Drawing inspiration from the energy-efficient information-processing mechanisms of biological brains, photonic integrated circuits (PICs) have enabled the development of ultrafast artificial neural networks. These networks are in turn envisaged to offer solutions to the growing demand for artificial intelligence and machine learning in domains ranging from nonlinear optimization and telecommunication to medical diagnosis. Meanwhile, silicon photonics has emerged as a mainstream technology for integrated chip-based applications. However, challenges remain in scaling it further for broader applications, owing to the required co-integration of electronic circuitry for control and calibration. Leveraging physics in algorithms and nanoscale materials holds promise for achieving low-power miniaturized chips capable of real-time inference and learning. Against this backdrop, we present the State of the Art in neuromorphic photonic computing, focusing primarily on architecture, weighting mechanisms, photonic neurons, and training, while giving an overall view of recent advancements, challenges, and prospects. We also highlight the need for revolutionary hardware innovations to scale up neuromorphic systems while enhancing energy efficiency and performance.

1. Introduction

The determined pursuit of computational systems mirroring the efficiency of the human brain has served as a compelling impetus, with significant industrial implications, ever since the emergence of artificial intelligence (AI) in the 1950s [1,2]. The field underwent an initial surge of optimism followed by decades of setbacks, primarily attributable to the dearth of computing power and the complexity of hardware implementation during that period. Inspired by the human brain's neurosynaptic framework (Box 1), AI endeavors to attain human-level precision in tasks that pose difficulties for conventional computing systems yet come naturally to humans. The brain computes with an efficiency of around an attojoule per MAC (multiply–accumulate) operation [3]. In contrast, conventional computers, generally bound by the von Neumann architecture, struggle with architectural scaling and typically consume about 100 pJ/MAC operation [3].
In the present era of prolific data generation and escalating automation, AI has emerged as a transformative force (Box 2) [4], reshaping industries and redefining human–machine interactions through machine learning (ML) [5] and deep learning (DL) [6,7]. These developments are propelled by the advent of computing power reflected in modern Nvidia graphics processing units (GPUs) [8] and Google's tensor processing units (TPUs) [9], which consume only around 20 pJ/MAC operation, sustaining Moore's Law. Current applications of AI extend across various sectors, encompassing autonomous driving, robotic vision, remote sensing, microscopy, surveillance, deterrence, and the Internet of Things [7,10,11,12]. The significant accomplishments of AI algorithms, particularly neural networks (NNs) [13,14], have influenced numerous facets of our daily lives, ranging from language translation [9] to cancer diagnosis [15].
Box 1. Artificial neuron.
The intricate mechanisms of biological neurons have long served as a wellspring of inspiration for neural engineering frameworks [16] that mimic cognitive processes. The Hodgkin–Huxley model [17] explains the working of biological neurons, which are inherently more complex than their artificial counterparts, inspiring models such as the leaky integrate-and-fire (continuous in time) [18] and the Izhikevich (spiking) [19] neuron. Artificial neurons reduce the complexities of biological neurons to functional units comprising a weighted sum and a nonlinear activation function, as shown in the Figure below. They mimic synapses and dendrites for weighting and summation, the soma for leaky integration, the axon hillock for thresholding, and axon terminals for activation. Direct implementation of neural models in hardware offers unmatched speeds and efficiencies compared to software implementations [3]. This drive toward hardware acceleration for on-chip neuromorphic computing finds isomorphism in electronics [20] and photonics [21,22,23,24]. Artificial neurons mirror the behavior of their natural counterparts: inputs $\Psi_i$ are processed through weighted sums $U_j$ and nonlinear activation functions $A_j$ to produce outputs $\Psi_j$. The activation function can be continuous-time [25,26,27,28] or spiking [18,29,30,31,32], which defines the versatility of neural networks tailored to different computational tasks. Here, $W_{ji}$ is the connection strength between neurons $i$ and $j$; $\tau$ is the leaky-integration time constant; $A_{\mathrm{threshold}}$ and $A_{\mathrm{reset}}$ are the spiking neuron's threshold and reset voltages; $t$ is time; and $t_k$ is the point in time at which a spike occurs, when $A$ crosses $A_{\mathrm{threshold}}$.
Biological neuron (left) and its mathematical equivalent model of artificial neuron (right) [33].
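To make the Box 1 dynamics concrete, the following minimal Python sketch implements one common discretization of a leaky integrate-and-fire neuron using the notation above; the time step, time constant, weights, and threshold are illustrative assumptions rather than parameters of any cited device.

```python
import numpy as np

def lif_neuron(inputs, weights, dt=1e-3, tau=10e-3,
               a_threshold=1.0, a_reset=0.0):
    # inputs:  (timesteps, n_synapses) presynaptic signals Psi_i
    # weights: (n_synapses,) connection strengths W_ji
    a = a_reset
    trace, spike_times = [], []
    for step, psi in enumerate(inputs):
        u = np.dot(weights, psi)            # weighted sum U_j
        a += (dt / tau) * (-a + u)          # leaky integration toward U_j
        if a >= a_threshold:                # thresholding at the axon hillock
            spike_times.append(step * dt)   # record spike time t_k
            a = a_reset                     # reset to A_reset after firing
        trace.append(a)
    return np.array(trace), spike_times

rng = np.random.default_rng(0)
trace, spikes = lif_neuron(rng.random((500, 3)), np.array([0.8, 0.6, 0.9]))
```

Raising the threshold or shortening $\tau$ makes the neuron fire more sparsely, mirroring the thresholding and leaky-integration roles described above.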

1.1. Neuromorphic Electronic Computing: On-Chip AI

The technology requirements for AI have evolved [34,35]. NNs, the backbone of neuromorphic computing for AI algorithms, consist of artificial neurons arranged in layers and connected through synapses (Box 1). These networks adapt and learn from data, and their inherent parallelism and distributed-computing character confront significant hurdles when implemented in digital electronics [20]. This mismatch with the sequential von Neumann architecture, with its separate central processing unit and dynamic random-access memory, has driven a shift toward on-chip electronics that co-locate computing and storage units to enhance computational speed and efficiency [36,37,38]. State-of-the-Art neuromorphic electronic architectures, which rely on transmission lines and active buffering techniques, face challenges in meeting the demands of massive parallel signal fan-out and fan-in [39]. Such architectures resort to digital time-multiplexing, enabling the construction of more extensive NNs at the expense of bandwidth, to sidestep the limitations of electronic wiring [20].
Box 2. Neuromorphic Photonics: Through a time-tagged journey.
Timeline of artificial intelligence and optical implementation related to neuromorphic photonics—selected publications [2,5,13,14,16,22,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60].
Industry and research initiatives emphasize the viability of on-chip electronics. Significant achievements in designing application-specific integrated circuits (ASICs) have been observed in noteworthy projects, including IBM's TrueNorth [61], Neurogrid [38], SpiNNaker [36], Tianjic [11], Microsoft's Azure Maia AI Accelerator [62], and Intel's Loihi [37]. Co-locating computing and storage units within the same chip addresses the inefficiencies of traditional architectures [3]. However, challenges persist: interconnect density [63] and bandwidth limitations [20], scalability concerns [39], and power consumption issues [64] underscore the need for continued innovation. The future envisions more sophisticated applications, ranging from nonlinear programming [65] to real-time learning [58] and intelligent signal processing [66], evidenced by recent efforts including the Human Brain Project [67], large language models (LLMs) such as ChatGPT [68], and Tesla's Dojo chip [12]. Applications prioritizing bandwidth and low latency necessitate a shift toward direct, non-digital photonic broadcast interconnects [69].

1.2. Photonics: A Bright Solution Looking for a Dark Problem!

Traditional electronics has often overshadowed the historical development of photonics, owing to the lack of an advantageous photonic parallel to electronic digital logic [70]. Nevertheless, photonics has achieved remarkable milestones in data transmission [71] and communication [72] and is experiencing a renaissance due to its potential to revolutionize information processing [66]. Photonics, harnessing the power of light for communication and computation, presents a promising alternative for overcoming the limitations posed by electronic interconnects [63]. Unlike electrons, photons provide diverse dimensions to control, such as wavelength (wavelength-division multiplexing (WDM) allows different wavelengths of light to carry independent information simultaneously), polarization (polarization multiplexing uses orthogonal polarizations to transmit distinct data streams in the same optical channel), and spatial mode (mode multiplexing leverages different transverse modes in multimode waveguides as additional parallel transmission channels). These properties allow for faster data processing, reduced power consumption, and inherent parallelism at every level of integrated photonic circuits, positioning photonics as a transformative solution for contemporary AI [33]. Non-digital computing models, particularly those enabled by photonics, promise solutions to challenges such as low latency [3], high bandwidth [39], and low energy [64], giving rise to the field of neuromorphic photonics at the intersection of photonics and the neural-engineering framework [53].

1.3. Neuromorphic Photonics in the Background of AI Technology Revolution

Neuromorphic photonics aims to match hardware to the algorithm, supporting many-to-many communication and overcoming the trade-offs inherent in electronic approaches, which are power-hungry [64] and trade bandwidth for interconnectivity [20]. With their interconnectivity [69] and matrix-multiplication advantages [73], photonic channels are promising candidates, as they do not suffer from the limiting features of their electronic counterparts, though the latter have their own merits as well. The maturity of enabling technologies in this domain provides a pathway to achieving cascadability and nonlinearity in neuromorphic systems. In particular, demonstrated scalable silicon photonic devices, including waveguides, Mach–Zehnder interferometers (MZIs) [74], micro-ring resonators (MRRs) [75], microcombs [76], photonic crystals [77,78], and phase-change materials (PCMs) [22], have emerged as promising building blocks. Leveraging existing Complementary Metal–Oxide–Semiconductor (CMOS) compatibility [79], silicon photonics can be seamlessly integrated with available lightspeed active optoelectronic and CMOS circuits without requiring additional complex processes [80]. This co-design with mature electronics supports large-scale photonic fabrication and integration platforms [79,81], along with calibration and control [82]. In specific applications, such as autonomous driving, neuromorphic photonics facilitates real-time decision-making with unparalleled efficiency. In healthcare, the ability of photonics to handle vast amounts of medical data accelerates diagnostic processes [15]. Intelligent systems for smart cities leverage photonics' speed and bandwidth advantages for seamless data processing [66].
As neuromorphic photonics continues to push boundaries, research and technology will play pivotal roles. Challenges arise in storing and retrieving neuron weights within on-chip memory [83,84]. Ongoing efforts in optical memories, particularly "in-memory" computing, have been explored [83,85,86,87]; however, their limitations include difficulties in high-frequency read and write operations [22]. The concept of electronic–photonic co-design emerges as a highly promising approach for implementing neuromorphic photonic systems [33]. This integration should leverage the characteristics of memory types (volatile and non-volatile) in digital or analog domains, tailored to ASIC and computational requirements. The envisaged future demands of AI, marked by increased complexity and diverse applications [88], align seamlessly with the strengths of photonics [81]. Research efforts into architectures and algorithms for neuromorphic photonic processors (NPPs) [33,89] signal new directions that hold tremendous promise for the trajectory of AI, with the potential of aJ-per-MAC energy efficiency and P-MAC/s/mm² processing density [3]. However, nurturing this synergy will require collaborative efforts to overcome integration, scalability, and compatibility challenges and to explore novel applications that propel the field toward new frontiers in intelligent computing on chips. Here, we explore the transformative domain of neuromorphic photonics, with a focus on the neuromorphic photonic processing node (NPPN), from signal-weighting mechanisms and photonic neuron (PN) intricacies to the architecture and training of neuromorphic photonic networks (NPNs). Additionally, recent advancements in experimentally demonstrated on-chip neuromorphic photonic approaches are discussed, along with potential applications and the hurdles they encounter in scaling up for widespread deployment. There is a strong emphasis on the necessity for groundbreaking on-chip hardware innovations compatible with fabrication technology to enhance energy efficiency, performance, and scalability within neuromorphic systems. This extensive exploration aims to provide valuable insights into the present status, challenges, and future possibilities of photonic and neuromorphic computing on-chip.

2. Neuromorphic Photonics Processing Node

The convergence of neuromorphic computing and photonics could revolutionize the computing paradigm [53]: the complexities of brain-inspired processing merge with the capabilities of light-based technologies [45] by harnessing the optical system's inherent direct multiplication [73], multiplexing techniques [90], energy efficiency [64], and speed [3]. At the heart of this endeavor lies the design of an NPPN (Figure 1), inspired by the fundamental components and scheme of the brain's neural architecture given in Box 1 [33]. Designing an NPPN that encapsulates these functionalities necessitates a multidisciplinary approach, integrating insights from neuroscience, materials science, photonics engineering, fabrication technology, system integration, platform co-integration, and packaging [34,91].
Different hardware implementations of neuromorphic photonics have been demonstrated, each designed for a specific class of applications. These implementations typically involve NPPNs, which consist of nonlinear PNs interconnected through configurable photonic synapses (hereafter referred to simply as synapses) for linear weighting. The models differ in PN signal representation (continuous or spiking) and weight configuration (tunable [5] or fixed [92]) to accomplish a particular task per the learning algorithm. In general, continuous-variable nonlinear neurons can be trained using backpropagation [5], while spike-timing-dependent update rules are well-suited to spiking neurons [93,94].

2.1. Neuromorphic Photonic Processing-Node Architecture

NPPN architecture is designed to emulate the functionality of traditional neurons while operating in the optical domain (Box 1). At its core, the NPPN comprises two main components, linear weighting and nonlinear activation, supplemented by interconnects, memory, and photonic components. An ideal NPPN should consume minimal energy; possess high endurance; enable easy addressing in large, interconnected networks; provide signal gain and on-chip memory; and offer tunability, active dynamism, reconfigurability, and multifunctionality. Additionally, it should support large fan-in and fan-out, extensive interconnectivity, self-assembly capability, formation of 3D interconnects, and easy manufacturability in large quantities, all at a low cost [91].
In the NPPN architecture (Figure 1), input signals are received as optical pulses ($\Psi_j$) from a multiplexer after a bias signal ($B_i$) is added to the actual input signal ($X_i$), representing incoming information. Optical synapses adjust the strength of connections through externally controlled feedback tuning to perform matrix multiplication, simulating synaptic weights, and may include on-chip synaptic memory (volatile or non-volatile) for storage. The weighted signals ($W_{kj}$) are then linearly summed to control the PN dynamics. The PN executes the nonlinear activation, generating optical output signals ($\Psi_o$) based on modulation and interaction within the components. Moreover, the NPPN architecture resembles an artificial neural network (ANN) [58] and, in the absence of the dashed interconnects, reservoir computing (RC) [49] when serving as a reservoir, owing to the inherent parallelism supported in photonics through multiplexing. This versatility underscores the adaptability and potential of the NPPN architecture as a robust platform for advanced neuromorphic computing tasks. Demonstrated NPNs have utilized a variety of PNs and synapses. Of particular interest, however, is the creation of synapses and PNs using identical technologies, to facilitate seamless integration within expansive systems and enable the implementation of NN algorithms that leverage the innate physics of optical components.
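As a purely illustrative summary of this signal flow, the sketch below expresses the NPPN of Figure 1 as a numerical forward pass; the tanh activation and all numerical values are hypothetical stand-ins for the physical synapse and PN responses.

```python
import numpy as np

def nppn_forward(x, w, bias, activation=np.tanh):
    # bias B_i added to the input X_i before multiplexing
    psi_in = x + bias
    # synaptic weighting W_kj and linear summation
    summed = w @ psi_in
    # nonlinear activation in the photonic neuron yields Psi_o
    return activation(summed)

psi_o = nppn_forward(np.array([0.2, 0.7]),
                     np.array([[0.5, -0.3], [0.1, 0.9]]),
                     bias=0.05)
```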

2.2. Weights (Synapses): Linear Operation

Photonic synapses play a pivotal role in neuromorphic photonic computing, enabling the manipulation and processing of network signals. These synapses govern the strength of connections between neurons by assigning scalar multipliers, known as synaptic weights, to the signals transmitted across them. The weighted signals from upstream neurons are aggregated and modulated by the synaptic weights before being transmitted to downstream neurons, facilitating information and computation integration within the network.
Various integrated photonic devices have been developed to implement weighted interconnection (Figure 2). These implementations can be broadly categorized into two groups, based on wavelength multiplexing and on interference between optical modes, although further schemes are extensively explored within the scientific community, particularly in RC. Mode-based approaches leverage interference between different optical paths carrying coherent input light to implement unitary matrix transforms for weighted interconnection (Figure 2a) [51]. These techniques utilize beam splitters, phase shifters, and MZIs to modulate the power and phase of light signals, enabling control over synaptic weights and interconnection matrices [74,95]. Additionally, advancements in cryogenic architectures and index-tuning mechanisms have expanded the scope of mode-based approaches (Figure 2b) [96], offering enhanced performance and flexibility in weight configuration. In the wavelength-based approach, on the other hand, signals are weighted in parallel using WDM techniques [90] and tunable filters [82,97,98], such as MRRs [99]. Although they differ in weighting mechanism, these approaches, exemplified by architectures like broadcast-and-weight (B&W, Figure 2c) [82,100] and the photonic cross-connect using indium phosphide (InP) semiconductor optical amplifiers (SOAs, Figure 2d) [101], offer efficient multiwavelength synapses for NPPNs, with different architectures employing variations in weighted addition and attenuation mechanisms.
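To illustrate the mode-based principle, the sketch below composes the standard 2 × 2 MZI transfer matrix from two 50:50 beam splitters and two phase shifters (internal phase θ, external phase φ); meshes of such blocks realize larger unitary weight matrices in Reck/Clements-style decompositions. The parameterization is one common textbook convention, not the layout of any specific referenced device.

```python
import numpy as np

def mzi(theta, phi):
    # 50:50 beam splitter transfer matrix
    bs = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)
    internal = np.diag([np.exp(1j * theta), 1.0])  # phase inside the interferometer
    external = np.diag([np.exp(1j * phi), 1.0])    # phase at one output port
    return external @ bs @ internal @ bs

u = mzi(np.pi / 3, np.pi / 5)
print(np.allclose(u.conj().T @ u, np.eye(2)))  # True: lossless, unitary weighting
```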
RC uses the weighted transient states of a nonlinear element with delayed feedback or time-division multiplexing (Figure 2e) [102]. Non-volatile synapse implementations represent another significant advancement in neuromorphic photonic approaches (Figure 2f) [85], offering solutions that eliminate the need for electrical tuning inputs. These approaches depend on optically induced alterations in materials like chalcogenides to regulate light propagation within waveguides. By leveraging the unique properties of non-volatile optical materials, these implementations address challenges related to electrical input/output and heat dissipation, paving the way for more efficient and robust NPNs. A variety of weighting mechanisms have been explored [71,97,103,104,105,106,107] for developing weighted interconnects in neuromorphic photonics, realizing high-performance and energy-efficient computing systems [73,90]. By leveraging the capabilities of integrated photonic circuits and novel materials, researchers are unlocking new possibilities for designing and implementing advanced NPNs with unprecedented functionality and adaptability.

2.3. Photonic Neuron: Nonlinear Activation

The PN is a critical component for emulating the functionality of biological neurons in advanced information-processing tasks [75], and it still faces the challenge of achieving nonlinear activation while ensuring compatibility with network architectures, including fan-in [39] and cascadability [39]. Researchers have explored a multifaceted set of devices for implementing PNs (Figure 3 and Figure 4), which fall into two categories according to the physical representation of signals: all-optical and optical/electrical/optical (O/E/O). In the all-optical PN design, the neuron signal is represented solely by material properties or changes in optical susceptibilities. While all-optical neurons offer inherent speed advantages over O/E/O implementations, they face significant challenges in achieving sufficient output strength to drive subsequent neurons.
Solutions to this challenge involve integrating carrier-regeneration mechanisms [108], wherein each neuron produces a renewed carrier wave modulated by its output signal, thereby enhancing its strength for downstream transmission. This approach has been demonstrated through various techniques, such as semiconductor carrier populations using cross-gain modulation (Figure 3a) [28] and structural phase transitions (Figure 3b) [22], enabling the realization of all-optical PNs with enhanced functionality and cascadability. Despite introducing a new challenge of differentiating controller signals from controlled signals, carrier regeneration enables the amplification of output signals to drive downstream PNs efficiently. Another avenue of exploration in PN design is the O/E/O signal pathway, in which optical signals are transduced into electrical currents and back into optical signals within the primary signal pathway (Figure 3c–g). Nonlinear PN dynamics using a superconducting electronic signal pathway (Figure 3c) [24,27] and a photodetector–modulator PN for MZI meshes (Figure 3d) [23] have been proposed and implemented. Moreover, this pathway enables the implementation of nonlinearities either in the electronic domain or through optical–electrical conversion stages utilizing modulators (Figure 3e,f) [21,25] or lasers (Figure 3g) [26]. By leveraging electronic components for nonlinear processing, O/E/O neurons can achieve high-bandwidth nonlinear transfer functions unconstrained by the characteristics of the input signals, facilitating the generation of output signals more potent than the input, which is essential for neural computation.
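A hedged numerical sketch of this O/E/O idea follows: a photodetector converts the weighted optical sum into a drive voltage, and the sin² transmission of a Mach–Zehnder modulator carves a saturating nonlinearity onto a fresh pump carrier, in the spirit of Figure 3e,f. The responsivity, gain, and bias values are hypothetical.

```python
import numpy as np

def oeo_activation(p_in, responsivity=1.0, gain=4.0,
                   v_pi=1.0, v_bias=0.5, p_pump=1.0):
    # O -> E: photocurrent from the summed optical input sets the drive voltage
    v = gain * responsivity * p_in + v_bias
    # E -> O: interferometric modulator transmission applies the nonlinearity
    return p_pump * np.sin(np.pi * v / (2 * v_pi)) ** 2

print(oeo_activation(np.linspace(0, 1, 5)))
```

Because the nonlinearity rides on a fresh pump carrier, the output can exceed the input power, which is what makes such neurons cascadable.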
Furthermore, spiking laser neurons (Figure 4), classified as all-optical or O/E/O, have showcased robust nonlinearity, carrier regeneration, and neural dynamics within a single device. This is achieved by harnessing gain, cavity, and saturable-absorption processes. Spiking neurons have been demonstrated utilizing various technologies, including mode competition (Figure 4a,b) [30,31], saturable semiconductor media (Figure 4c–e) [18,32,59,109], and resonant tunneling diodes (Figure 4f) [29].
When comparing all-optical and O/E/O implementations of PNs, the apparent advantage of all-optical approaches lies in their intrinsic speed, often ascribed to the comparatively sluggish carrier drift and current flow phases in O/E/O designs. However, recent developments have challenged this notion, revealing that analog O/E/O devices can exhibit comparable or even superior bandwidth and energy performance [64] compared to their all-optical counterparts. It is worth noting that in PNs applying nonlinearities in the digital domain, the maximum system bandwidth is often dictated by the efficiency of the digital subsystem, underscoring the importance of efficient digital processing. Moreover, in PN design, a crucial consideration is the ability to configure and customize nonlinear transfer functions to align with specific NPN tasks. Analog PNs offer flexibility in configuring transfer functions through electrical biasing [21].
Conversely, digital counterparts offer the flexibility to implement arbitrary transfer functions. Notably, recent research has demonstrated that programming techniques and neural training can adapt to the inherent transfer functions of analog photonic devices, showcasing their potential for efficient and adaptable neural computation [110,111]. Therefore, integrating PNs with silicon photonics platforms holds promise for scalable and cost-effective implementation of large-scale on-chip neuromorphic photonic integrated circuits (nPICs) for information processing. Silicon photonics provides a stable ecosystem for research and development, facilitating advancements in technology road mapping, standardized fabrication processes, and broad accessibility for academic research [34]. This integration enables harnessing the scalability and feasibility of photonic integrated circuits for realizing advanced NPNs.
Figure 4. Spiking photonic neurons. All-optical: (a) A semiconductor-ring laser featuring an electrically pumped group III–V MRR coupled to a waveguide (top) [112]. Bistability arises from excitable behavior (bottom) [113] when the symmetry of the two counter-propagating (clockwise or anticlockwise) modes per frequency is disrupted [30]. (b) On the left, an InP-based two-dimensional photonic crystal features an L3 cavity (three holes removed, incorporating quantum wells) that leverages fast third-order nonlinearity to achieve excitability. On the right, hysteresis cycles demonstrate bistability with varying detuning values relative to the cavity resonance, displayed in arbitrary units [31]. (c) On the left is an optically pumped group III–V micropillar laser with a saturable absorber (SA). The amplitude response to a single-pulse perturbation versus perturbation energy is depicted for bias pump power relative to the self-pulsing threshold on the right. This illustrates the differentiation between the excitable and self-pulsing thresholds [109]. (d) Pre-weighted optically encoded input is injected into the vertical-cavity surface-emitting laser (VCSEL)–neuron and integrated over time (top)—recorded spike response of multiple pulses with varying amplitude close to threshold operation points (bottom) [59]. Optical–electronic–optical: (e) A two-section gain–SA setup on the left, functioning as an integrate-and-fire mechanism. At the bottom is a micrograph of an electrically injected excitable distributed feedback laser used to selectively disturb the gain, driven by a balanced photodetector pair. On the right, the measured excitable power of the input pulses is displayed at the top, while the laser output is at the bottom [18]. (f) On the left is a resonant-tunneling diode layer stack, photodetector, and laser diode (RTD-LD), constituting an excitable optoelectronic device. On the right, excitability is attained by biasing a double-barrier quantum well within the RTD in the negative differential resistance region of its direct current–voltage curve [29]. Figures adapted with permission from Ref. [112], Elsevier (a, bottom); and Ref. [114], IEEE (e). Figures reproduced with permission from Ref. [113], APS (a, top); Ref. [31], APS (b); Ref. [109], APS (c); Ref. [59], author(s) (d); and Ref. [29], OPG (f).

3. Neuromorphic Photonic Networks

Data-centric, AI-driven applications point to an urgent need for high-efficiency, ultralow-power-consumption solutions [115]. On-chip neuromorphic photonics has garnered attention as a complementary approach [33,116]. However, synergetic research is needed to identify optimized network topologies and algorithms for NPNs [117]. ML and DL breakthroughs have propelled advancements across the hierarchy of networks, driven by various training algorithms that enable networks to adapt to diverse tasks [6,48]. Within this landscape, computational models inspired by the human brain's structure and function, such as NNs [5,14], have gained prominence, leading to the exploration of various architectures and algorithms to enhance their capabilities. Moreover, integrating photonics into NN design has opened new avenues for achieving high-speed, wide-bandwidth, energy-efficient computing supporting massive parallelism [117]. Figure 5 illustrates the evolution from essential ANNs to cutting-edge spiking neural networks (SNNs) and photonic architectures, highlighting the algorithms and architectures that drive their functionality.
At the heart of ML is the concept of ANNs, computational models comprising interconnected nodes, or neurons, organized into layers. ANNs can learn complex patterns and relationships from data through a process known as training. Backpropagation, a fundamental algorithm used by ANNs, adjusts synaptic weights based on the gradient of the error with respect to each weight during training [5,52]. This iterative optimization minimizes the difference between calculated and actual outputs, allowing the network to learn from labeled data. SNNs emulate the event-driven processing of the human brain, utilizing spikes to encode information and achieve efficient computation [16,22,93]. Neurons in SNNs integrate input spikes over time, generating output spikes when the membrane potential exceeds a threshold. This threshold-based firing mechanism, combined with spike-timing-dependent plasticity (STDP) [94], enables SNNs to learn from temporal patterns in data and exhibit robust adaptive behavior (see the STDP sketch below). DL represents a paradigm shift in ML, leveraging NNs with multiple hidden layers (GANs, autoencoders, LLMs, etc.) to extract intricate features from raw data. Training through stochastic gradient descent (SGD) and its variants optimizes the complex parameters of deep neural networks (DNNs) by efficiently navigating the high-dimensional parameter space [6,118]. Convolutional neural networks (CNNs) are prime examples of DL architectures widely employed in image-recognition and computer-vision tasks [7,55]. CNNs utilize convolutional layers to detect hierarchical features in input data, then pooling layers for dimensionality reduction and fully connected layers for classification. Recurrent NNs (RNNs), including long short-term memory networks, excel on sequential data by maintaining internal states or memory across time steps, making them suitable for natural-language-processing [9] and time-series-prediction tasks [118]. RC offers a unique approach to neural network training, adapting only the readout layer while keeping the internal parameters fixed, which simplifies the training process and enhances scalability, particularly in hardware-constrained environments [92,102]. RC architectures, characterized by interconnected nonlinear nodes forming a fixed RNN, have shown promise in emulating complex behaviors and achieving efficient information processing.
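For concreteness, the STDP sketch referenced above implements the classic pair-based rule with exponentially decaying windows; the amplitudes and time constants are generic textbook values, not parameters of any photonic demonstration.

```python
import numpy as np

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012,
                tau_plus=20e-3, tau_minus=20e-3, w_min=0.0, w_max=1.0):
    dt = t_post - t_pre
    if dt > 0:   # pre fires before post: causal pairing, potentiate
        w += a_plus * np.exp(-dt / tau_plus)
    else:        # post fires before pre: anti-causal pairing, depress
        w -= a_minus * np.exp(dt / tau_minus)
    return float(np.clip(w, w_min, w_max))

w = stdp_update(0.5, t_pre=0.010, t_post=0.015)  # +5 ms pairing strengthens w
```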
Supervised [51], unsupervised [22], and reinforcement [119] learning represent fundamental techniques in ML, each offering a unique approach to training. These methodologies provide versatile frameworks for optimizing NNs to accomplish specific tasks within various network topologies (Figure 5). In supervised learning, labeled data guide the NN to establish correlations between input data and corresponding output labels, encompassing classification [57], regression, and sequence-prediction tasks. Unsupervised learning, conversely, entails training the NN on unlabeled data to unveil latent patterns or structures within the dataset; it is beneficial for tasks such as clustering, dimensionality reduction, and anomaly detection, enabling the extraction of meaningful representations from the data without predefined labels [22]. Reinforcement learning operates within a framework where an agent learns to navigate an environment by taking actions and receiving feedback through rewards or penalties [119]. This adaptive approach is instrumental in training NNs for tasks requiring sequential decision-making, such as game-playing, robotics, and autonomous systems. As we navigate the landscape of network topologies, from ANNs and SNNs to NPNs, we witness the convergence of AI-driven technologies and cutting-edge research [116]. By embracing the complexity of NNs and harnessing the power of photonics, researchers are poised to unlock new frontiers in intelligent computing.

3.1. Neuromorphic Photonic Network: A Combined Structure

The NPN represents a pioneering advancement in DL architectures specifically tailored for photonic applications, harnessing NPPNs as the fundamental building blocks. This innovative NN architecture integrates principles from CNNs [7,55] and RNNs [52,120], leveraging the unique properties of light for efficient computation. The network consists of two primary components: convolutional layers with activation functions, followed by pooling layers and a recurrent feedback loop. The architecture is structured to exploit spatial and temporal correlations in input data, making it well-suited to, though not limited to, tasks such as pattern recognition, classification, and sequential data processing. The scheme of the recurrent convolutional neuromorphic photonic network (ReConN, Figure 5) is framed here in the context of neuromorphic photonic computing (a minimal numerical sketch follows the list):
  • Instead of traditional convolutional layers, the ReConN employs NPPNs to perform convolutional operations directly on input photonic signals, enabling them to extract spatial features from the input data and apply nonlinear activation functions simultaneously.
  • NPPNs in RC mode, followed by unit weighting within the synapse for pooling operations (Figure 1), eliminate the need for a separate variety of pooling layers. NPPNs dynamically aggregate information across the spatial dimensions of the input data, facilitating downsampling and feature selection while preserving the advantages of photonic processing.
  • A defining feature of the ReConN is its incorporation of a recurrent feedback loop, enabled by the inherent memory (synaptic or delay) properties of NPPNs. The output from the post-NPPNs is fed back into the pre-NPPNs in the network, allowing for iterative refinement of representations over multiple time steps and capturing temporal dependencies in the input data.
  • Following the recurrent processing stage, the output is flattened for further processing (e.g., classification). This final layer utilizes standard classification techniques to map the learned features to specific output classes, enabling the network to make accurate predictions based on the input data.
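The minimal numerical sketch referenced before the list strings these four stages together; every function and parameter here is a hypothetical software stand-in for the photonic NPPNs, synaptic pooling, and feedback paths described above, not a model of a demonstrated device.

```python
import numpy as np

def nppn(x, kernel, activation=np.tanh):
    # NPPN as convolution plus nonlinear activation on a 1D photonic signal
    return activation(np.convolve(x, kernel, mode="same"))

def reconn_forward(x, kernel, w_feedback, w_readout, steps=3):
    state = np.zeros_like(x)
    for _ in range(steps):
        # post-NPPN output fed back to the pre-NPPN input (recurrent loop)
        state = nppn(x + w_feedback * state, kernel)
    # pooling via unit-weight aggregation over pairs of samples
    pooled = state.reshape(-1, 2).mean(axis=1)
    return w_readout @ pooled  # flattened classification readout

rng = np.random.default_rng(1)
logits = reconn_forward(rng.random(16),
                        kernel=np.array([0.25, 0.5, 0.25]),
                        w_feedback=0.3,
                        w_readout=rng.random((4, 8)))
```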

3.2. Neuromorphic Photonic Approaches

State-of-the-Art neuromorphic photonic approaches leverage the principles of neuroscience and photonics to develop energy-efficient, high-performance computing systems for AI. Various NPNs have been proposed and experimentally demonstrated (Figure 6), showing potential across different architectures. These approaches encompass different topologies (NPNs), weighting schemes (synapses), and photonic signal representations (PNs). Each approach offers unique achievements and challenges, paving the way for innovative solutions in neuromorphic computing (Table 1, "Remarks" column). Solutions to the challenges of these approaches promise to revolutionize computing paradigms and enable advanced AI applications in various domains.
RC leverages the complex dynamics of optical systems to perform computation. By utilizing interconnected delays between fixed nonlinear nodes with multiple feedback loops, RC architectures demonstrate remarkable capabilities in emulating complex behaviors and processing information efficiently (Figure 6a) [49]. Recent advancements in RC have focused on integrating passive photonic elements, such as waveguides and resonators, to achieve scalable and high-performance computing [121]. Experimental demonstrations have showcased the ability of photonic RC to perform application-specific tasks such as spoken-digit recognition, time-series prediction, and signal optimization, making it a promising candidate for various real-world applications. However, it does not generalize readily to complex problems [102,122].
Superconducting optoelectronic circuits (SOCs) combine the advantages of superconducting electronics and photonics to achieve ultrafast, energy-efficient computing optimized for scalability (Figure 6b) [50]. These circuits exploit superconducting nanowires, single-photon detectors, and capacitive micro-electromechanical system (MEMS)-based modulators to perform unprecedentedly efficient NN operations on a chip, paving the way for scalable and high-performance neuromorphic computing platforms, albeit with the energy cost of the cryogenic cooling necessary for operation and a maximum bandwidth limited to 1 GHz [24,96,123].
The MZI-based coherent nanophotonic circuits (CNCs) leverage coherent light manipulation to perform complex computational tasks (Figure 6c) [51]. Integrated beam splitters and phase shifters enable these circuits to control optical signals and facilitate high-speed processing, but they are bulky and require high driving voltages [23,74,95]. Experimental demonstrations have shown the potential of CNCs for implementing neuromorphic algorithms such as in situ backpropagation [58], self-calibration [110], and asymptotically fault-tolerant programmable photonic circuits [111], opening new avenues for ultrafast and energy-efficient computing [124,125,126].
B&W utilizes the concept of WDM to perform matrix–vector multiplication efficiently (Figure 6d) [52]. These networks achieve parallel processing of NN operations by modulating optical signals at specific wavelengths and tuning MRRs [82,98]. Experimental implementations have demonstrated scalability and energy efficiency for solving differential equations, making B&W networks promising candidates for computing nPICs, though their bandwidth is limited to 1 GHz [21,100].
Multiwavelength photonic neurosynaptic networks (MNs) integrate PCM (germanium–antimony–tellurium, GST)-based PNs (Figure 6e) [22] and synapses [85] on a silicon nitride-on-silicon dioxide platform. The PCM provides inherent synaptic memory that requires no energy to maintain its states in the case of offline learning. Online learning can be challenging, however, as individual PCM devices endure up to 10⁴ switching cycles and require nanojoules of energy per cycle, with sub-nanosecond operation speed. These networks leverage the optical mode in a controlled manner to perform complex cognitive tasks. Experimental demonstrations have shown the feasibility of implementing MNs for tasks such as pattern recognition and associative memory with supervised and unsupervised learning, highlighting their potential for next-generation nPICs [55,118].
On-chip diffractive neural networks (DOs) harness the principles of diffractive optics to perform computational operations passively. By integrating diffractive elements in silicon slots filled with silicon dioxide (SSSD), these networks achieve parallel processing of optical signals with minimal power consumption (Figure 6f) [57]. Experimental implementations have demonstrated the ability of DOs to perform image classification [57,126]. DOs are, however, highly susceptible to noise and fabrication errors, requiring algorithmic compensation.
The vertical-cavity surface-emitting laser optoelectronic neural network (VNN) architecture utilizes spatio-temporal multiplexing and coherent detection to achieve significant breakthroughs in energy efficiency and compute density (Figure 6g). The VNN has compact device footprints and near-instantaneous inline nonlinearity through homodyne photoelectric multiplication. Remaining challenges involve enhancing photonic integration to reduce phase instability and scaling VCSEL arrays to even larger neural network models [60].

3.3. Algorithms and Methods for Training Neuromorphic Photonic Networks

NPNs represent a burgeoning frontier ripe with promise. However, training algorithms face unique challenges due to the nonlinear nature of optical components and the need for efficient optimization techniques that strike a delicate balance between processing and memory access [116,117]. Unlike conventional AI algorithms deployed in software, customized training algorithms tailored for photonic hardware implementation could herald a transformative shift in this domain (Table 2). One of the key challenges in training NPNs lies in achieving precision (Table 2, "Networks and Training" column). Traditional backpropagation algorithms, which are highly effective for training deep AI networks, require precise adjustments of weights based on slight variations in error gradients. However, most nanodevices used in NPNs are inherently noisy, making such fine adjustments challenging. Novel training algorithms that can adapt to the imperfections and variability of physical devices can exploit the inherent properties of photonics. Several strategies have been proposed to address the precision challenge in photonic training, including a photonic generative network that harnesses the noise of photonic hardware [56], a heuristic photonic recurrent algorithm for the Ising problem [127], and a photonic analog of backpropagation leveraging adjoint variable methods, which can significantly reduce complexity by simplifying the mathematical model associated with training [58].
Additionally, statistical optimization tools, including genetic algorithms [132], Bayesian optimization [133], nonlinearity inversion [120], and equilibrium propagation [134], have been investigated for optimizing weights in NPNs. These gradient-free algorithms show promise for training NPNs efficiently, particularly for classification tasks on different datasets [135] (a sketch of one such gradient-free update follows below). Another essential consideration in NPN training is the need to adjust weights independently using external signals (electrical or optical). Furthermore, modifying existing algorithms to work effectively despite device imperfections is crucial, as demonstrated in self-calibrated [110] and asymptotically fault-tolerant [111] programmable NPNs. Unsupervised learning algorithms are particularly intriguing for NPNs, being highly adaptable to device imperfections. STDP is a popular unsupervised learning rule inspired by neuroscience [119]. In STDP, synaptic weights are modified based on the temporal relationship between pre-synaptic and post-synaptic activities, making it suitable for implementation in NPNs (Table 2, "Networks and Training" column) [22,50]. In addition to precision and unsupervised learning, optimizing nPICs presents its own challenges. Mitigating the complexity of probing local optical intensities across the circuit is essential for efficient training. Recent advancements propose on-chip reconfigurable [136] and control-free hardware-aware [135] training for NPNs, while Boolean learning via coordinate descent offers a practical and efficient alternative to error backpropagation [119], enabling high-performance NPNs with programmable connections. These novel training schemes (Table 2), proposed or experimentally validated for NPNs, underscore a dynamic landscape characterized by notable achievements, persistent challenges, and promising prospects, also highlighted in State-of-the-Art NPN demonstrations [118,128,129,130]. By addressing these challenges and exploring innovative training algorithms, NPNs hold the potential to revolutionize neuromorphic computing paradigms, offering energy-efficient and high-performance solutions for a wide range of applications (Figure 7).
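As one concrete example of the gradient-free, hardware-in-the-loop flavor of training discussed above, the sketch below applies a simultaneous-perturbation (SPSA-style) update, in which the loss function stands in for a measurement of the physical network's task error. It illustrates the principle only and is not an algorithm taken from the cited works.

```python
import numpy as np

def spsa_step(loss_fn, weights, lr=0.05, delta=0.02, rng=None):
    rng = rng or np.random.default_rng()
    # perturb all weights at once, so cost does not grow with weight count
    direction = rng.choice([-1.0, 1.0], size=weights.shape)
    loss_plus = loss_fn(weights + delta * direction)   # hardware evaluation 1
    loss_minus = loss_fn(weights - delta * direction)  # hardware evaluation 2
    grad_est = (loss_plus - loss_minus) / (2 * delta) * direction
    return weights - lr * grad_est

# Toy stand-in for a noisy on-chip loss measurement
target = np.array([0.2, -0.4, 0.7])
def noisy_loss(w):
    return float(np.sum((w - target) ** 2) + 1e-3 * np.random.randn())

w = np.zeros(3)
for _ in range(200):
    w = spsa_step(noisy_loss, w)
```

Because each update needs only two loss measurements regardless of the number of weights, this style of update suits on-chip training where probing local optical intensities is difficult.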

4. Discussion

The demand for AI and its multifaceted applications is accelerating rapidly, exemplified by recent advancements such as DALL-E 3 and Sora [88], which require massive computational power for training models with billions of parameters. Such tasks have relied on large-scale clusters of accelerators, such as GPUs or TPUs [115]. Neuromorphic photonics holds the potential to support these power-hungry processors, particularly for ASIC demands. Photonics leverages its inherent bandwidth, parallelism [39], picosecond latencies [64], and energy efficiency (measured in pJ/FLOP) [3]. Mainstream silicon photonic platforms offer a fundamental device library encompassing modulators, waveguides, and detectors, crucial for constructing signal pathways within diverse neuromorphic architectures [79,80,81]. Innovations in conventional manufacturing processes, including the integration of PCMs [22,55,130,137] or vanadium dioxide [86], superconducting electronics [24,27,50,123], and magneto-optic (MO) memory cells comprising heterogeneously integrated cerium-substituted yttrium iron garnet (Ce:YIG) on silicon MRRs [87], are being investigated to broaden the range of achievable architectures.

4.1. Exploring the Current State of the Art: Challenges and Solutions

4.1.1. Synergistic Co-Integration of Photonics with Electronics

Silicon photonics has its advantages but still faces unsolved challenges. Coherent on-chip architectures utilizing components such as MZIs are susceptible to thermal fluctuations and fabrication variations, leading to deviations in phase, interference, and diffraction. This results in inaccuracies in NPNs, emphasizing the need for strategies to mitigate these effects during training and tuning. Non-coherent architectures face challenges such as heterodyne crosstalk in MRRs and the requirement for numerous modulation resources, limiting scalability and energy efficiency. Efforts to address these challenges include exploring parallel arrangements of MRRs, leveraging microdisks, and facilitating increased integration density on-chip. Variability and reliability issues also present significant hurdles in silicon photonic devices, affecting their performance and stability. Therefore, electronic controllers for feedback and calibration algorithms, direct-current analog signals for biasing, and analog-to-digital and digital-to-analog converters with trans-impedance amplification offer a solution for managing photonic devices and ensuring stable NPN operation [138]. However, high latencies and frequency mismatches between electronic controllers and optical networks pose real-time, high-speed operation challenges.
Thus, integrating on-chip active electronics becomes crucial, and a significant challenge is the need for more complex on-chip electronic circuitry, which demands a higher density of electrical ports than optical ports, with the number of electrical ports scaling quadratically with the optical ports. Despite the challenges, the co-integration of CMOS electronics and photonics holds promise for overcoming many limitations by leveraging the advantages of both technologies while mitigating their drawbacks. Technologies including flip-chip bonding, wire bonding, and monolithic fabrication enable the integration of CMOS electronics optimized for digital control with silicon photonic chips [79,81], offering advantages such as increased interconnection density and reduced parasitic impedance.

4.1.2. On-Chip Light Sources on Semiconductor Platform

Power efficiency is critical in neuromorphic photonics, with O/E/O conversions consuming considerable power, whereas all-optical neurons offer better power efficiency. Additionally, off-chip lasers contribute significantly to power consumption, necessitating research into power-efficient on-chip light sources. Efforts to tackle this challenge involve integrating light sources directly into the silicon waveguide layer [139], employing methods such as rare-earth-element doping, strain engineering of germanium, and all-silicon emissive defects [24]. This is advantageous because it eliminates the need to send the optical signal off-chip for computation, which is particularly beneficial for neuromorphic photonic systems employing NPNs. The selection of lasers for neuromorphic photonics depends on the type of neuron involved. Multi-die techniques are applicable to systems utilizing modulator-class neurons, where the light source can be positioned off-chip. However, precise integration of optical gain onto waveguides is essential for laser-class neurons.

4.2. Advancements and Future Directions in Scientific Inquiry

4.2.1. Fabrication Challenges

An essential aspect of advancing neuromorphic photonics involves enhancing the robustness of systems to environmental fluctuations and overcoming fabrication challenges. Analog circuits, typical in neuromorphic processors, often require trimming to address manufacturing variabilities and environmental sensitivities. In integrated photonics, resonant devices like MRRs pose challenges due to their sensitivity to variations [99]. One approach to mitigate these challenges is resonance trimming, which involves inducing changes in the refractive index of waveguides. Active trimming methods, such as heating waveguides to compensate for environmental variability, offer fast response times but require constant power input. Alternatively, passive trimming methods utilize permanent or non-volatile techniques to adjust the refractive index of devices. These methods, including electron beam-induced compaction [140], ion implantation [141], and strain of the oxide cladding [142], offer solutions for correcting fabrication variations or preprogramming circuits to default states. Moreover, the integration of field-programmable PCMs allows for in-place reconfiguration [84], further enhancing robustness to fabrication discrepancies. In addition to addressing variability, advancements in neuromorphic photonics necessitate the development of analog-aware compilers to map application tasks to photonic hardware effectively [143]. Unlike traditional compilers for electronic systems, photonic compilers must account for idiosyncrasies inherent in representing signals in WDM light waves, including nonlinear distortion and limited dynamic range. Collaborative efforts within the academic community are underway to develop such compilers tailored for neuromorphic and programmable photonics [51,66,110,144], enabling efficient task mapping and optimization for photonic systems. By addressing these challenges and embracing innovative solutions, the field of neuromorphic photonics can enhance its resilience to environmental fluctuations and fabrication complexities, thereby facilitating the realization of robust and reliable neuromorphic computing systems.

4.2.2. Integration of Photonic Components

Efficient integration of diverse material systems in photonic circuits is pivotal for advancing neuromorphic photonic computing [145]. The primary challenge is seamlessly combining materials like silicon, III-V semiconductors (e.g., GaAs and InP), and lithium niobate-on-insulator (LNOI) to create a unified, high-performance platform [101,146,147]. Silicon photonics is the cornerstone of passive components, leveraging CMOS compatibility to enable mass production and scalability [81]. However, silicon’s indirect bandgap limits its ability to generate or amplify light, necessitating the integration of III-V materials for efficient light sources, like lasers and photodetectors. With their direct bandgap properties, these materials excel in light emission and detection but face cost, CMOS compatibility, and integration-density challenges [148].
Three integration approaches address these challenges: hybrid, heterogeneous, and monolithic integration. Hybrid integration involves physically aligning separate III-V and silicon chips through photonic wire bonding or flip-chip packaging. While effective for pre-tested components, its alignment precision and limited density hinder scalability. Heterogeneous integration advances this by wafer bonding III-V materials onto silicon, enabling high-density integration and efficient vertical coupling. For instance, distributed feedback lasers integrated heterogeneously on silicon have achieved ultra-narrow linewidths and high power efficiency. Monolithic integration, the ultimate goal, involves directly growing III-V materials on silicon substrates. Though promising in eliminating coupling losses, it faces significant challenges from lattice mismatches and thermal-expansion differences [81,148].
LNOI introduces another dimension to photonic integration, offering exceptional electro-optic modulation capabilities. Its high refractive-index contrast facilitates tightly confined optical modes, while its low-voltage-length product enables energy-efficient operations. Advanced fabrication techniques, such as smart-cut and wafer bonding, now allow the integration of submicron-thick lithium niobate films onto silicon substrates. This combination enhances the functionality of photonic circuits, making them suitable for high-speed, low-power applications in neuromorphic photonics [147].
The development of integrated photonic systems is not limited to material integration; it extends to the functional integration of components like photonic digital-to-analog converters (DACs), frequency comb sources, and multifunctional photonic memory cells. Photonic DACs, utilizing techniques like optical intensity weighting with MRRs, provide high precision, reduced distortion, and immunity to electromagnetic noise [149]. Similarly, frequency combs based on soliton microcombs offer chip-scale solutions for generating broadband, stable, and precisely aligned optical signals, essential for neuromorphic architectures [76]. Multifunctional photonic memory cells represent yet another innovation, where PCMs are integrated with silicon PN junctions or non-reciprocal magneto-optic materials wafer bonded on silicon-on-insulator (SOI) technology. These cells support nonvolatile optical weighting and high-speed tuning, enabling dynamic in situ training and adaptability in neuromorphic systems. Such devices demonstrate how heterogeneous integration combines materials and leverages their unique properties to enhance system-level performance.
Achieving full photonic integration requires addressing key challenges, including minimizing thermal dissipation, optimizing optical coupling, and reducing fabrication variability [138]. Advances in hybrid and heterogeneous integration and innovations in packaging and alignment are essential for creating a cohesive platform. Furthermore, the co-integration of electronics and photonics through co-packaged architectures offers a pathway to achieve energy-efficient, high-performance systems capable of supporting nPIC’s demands [79,81].

4.2.3. Synaptic Memory

The integration of synaptic memory stands out as a critical area of exploration. Conventional approaches have typically depended on a blend of specialized photonic devices controlled by generalized electronic circuits. However, the absence of standard building blocks like high-level compilers, logic gates, and memory within current photonic platforms necessitates innovative strategies to incorporate memory into neuromorphic processors effectively. For specific ML and neuromorphic applications, such as DL inference, synaptic weights, once trained, may not require frequent updates. In these scenarios, non-volatile analog memory, such as in-memory computing utilizing PCMs or the non-reciprocal phase shift in magneto-optic materials, presents a promising solution [83,87,137]. By interfacing with digital electronic drivers, these memory systems enable real-time NN operation, precomputed weight storage, and direct inference task execution on the hardware [55]. Nevertheless, challenges persist in scenarios where temporary storage of neuron outputs is required, as seen in long short-term memory RNNs. In such cases, alternative memory technologies, including digital or short-term analog electronic memory with electro-optic interfaces to analog photonics, may be better suited. While analog memory may exhibit limitations in precision and noise compared to digital memory, recent studies have shown that even low precision can support effective deep RNN operations. As neuromorphic photonics progresses, it is expected to adopt heterogeneous memory technologies, akin to modern computers, combining various memory types within a single system [144,150]. This evolution may involve integrating electronic memory components alongside novel photonic memory technologies, offering the non-volatile and reconfigurable capabilities essential for dynamic neural network operations.
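The toy sketch below illustrates this non-volatile weighting scheme: trained weights are programmed once as discrete device states and then used directly for in-memory multiply–accumulate operations. The assumed PCM cell with 16 evenly spaced transmission levels is a hypothetical device for illustration only.

```python
import numpy as np

LEVELS = np.linspace(0.1, 0.9, 16)  # assumed programmable transmission levels

def program_pcm(target_weights):
    """Store each trained weight as the nearest reachable PCM transmission level."""
    targets = np.asarray(target_weights)
    idx = np.abs(LEVELS[None, :] - targets[:, None]).argmin(axis=1)
    return LEVELS[idx]

def infer(input_powers, stored_levels):
    """In-memory MAC: input light passes the PCM synapses and the
    transmitted powers are summed on a photodetector."""
    return float(np.dot(input_powers, stored_levels))

stored = program_pcm([0.12, 0.55, 0.83])
print(stored)                        # non-volatile levels actually stored
print(infer([1.0, 0.5, 0.2], stored))
```

The quantization error introduced by the finite level count is exactly the low-precision regime that recent studies suggest deep networks can tolerate.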

4.3. Envisioning the Future of Neuromorphic Photonics: A Visionary Perspective

Neuromorphic photonics represents a cutting-edge fusion of advanced technologies and innovative architectural designs, offering transformative solutions for constructing NNs for emerging applications. For large language models (LLMs), it enables the parallel matrix–vector multiplications that are critical for efficient training and inference [73]. For digital twins, neuromorphic photonics supports on-chip, real-time simulations, enhancing predictive modeling and decision-making [57,136]. In data centers, it eliminates the need for electronic-to-optical signal conversions by achieving seamless communication through WDM, significantly reducing energy consumption [3,73,90]. Furthermore, photonic broadcast interconnects can meet the extreme demands of 6G communications, such as ultra-high bandwidth, minimal latency, and efficient signal processing, delivering low latency and high throughput while conserving energy [39,46,63,145].
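Conceptually, such a photonic matrix–vector multiply reduces to per-wavelength weighting followed by photodetector summation, as in the broadcast-and-weight architecture [52,73]. The sketch below models this numerically; the matrix sizes and detection-noise level are illustrative assumptions.

```python
import numpy as np

# Conceptual sketch of a WDM "broadcast-and-weight" matrix-vector multiply:
# each input x_j is encoded on its own wavelength, every neuron's weight
# bank applies one row of W, and a balanced photodetector sums the channels.
rng = np.random.default_rng(7)
W = rng.uniform(-1, 1, (8, 16))     # 8 neurons x 16 wavelength channels
x = rng.uniform(0, 1, 16)           # input powers, one per wavelength

weighted = W * x                    # per-wavelength weighting (in parallel)
y = weighted.sum(axis=1)            # photodetector summation per neuron
y_noisy = y + rng.normal(0, 0.01, y.shape)  # analog detection noise

print(np.allclose(y, W @ x), np.abs(y - y_noisy).max())
```

Because the weighting and summation happen in the optical and detector domains, the whole product W·x completes in a single propagation delay rather than in sequential clocked MAC operations.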
These unique attributes position neuromorphic photonics as a game-changing paradigm for tackling the challenges of modern computational and communication technologies. A prominent concept, the NPP [33], embodies this integration by leveraging State-of-the-Art photonic packaging and emerging integrated photonics. The NPP seamlessly merges optical and electronic components within a system-in-package framework, enabling versatile signal processing and control functionalities. Here, we have presented two fundamental architectures, the NPPN and the ReConN, as contributions to the evolving field of neuromorphic photonics, offering blueprints for realizing large-scale NNs. The convergence of photonics and neuromorphic computing signifies a transformative era of technological advancement and scientific exploration. Embracing these perspectives and pioneering research endeavors will unlock the vast potential of neuromorphic photonics, reshaping the landscape of on-chip neuromorphic computing for application-specific AI.

5. Conclusions

Neuromorphic photonics based on photonic integrated circuits represents a transformative approach to hardware design, whether optoelectronic (the lower-hanging fruit at present) or all-optical (still awaiting further technological advancement), aiming to create systems that mirror the structure and functionality of NNs. This isomorphic relationship between NPNs and their biological counterparts promises remarkable capabilities and has sparked significant technological and societal interest. Over recent years, research in NPNs has witnessed rapid growth, leading to the exploration of diverse architectural concepts, PN models, training techniques, and network topologies. This diversity highlights the dynamic nature of the field, with ongoing efforts aimed at identifying optimal applications where photonics can outperform traditional electronic computing methods. Real-time applications requiring rapid decision-making are particularly promising areas for deploying neuromorphic photonics. A key focus will be on scaling the integration of PNs within single networks. Despite challenges such as the co-packaging of control electronics and light sources, advancements in scalable photonics platforms offer promising avenues for overcoming these obstacles. With modern integrated platforms and innovative ideas and devices for on-chip functionality, neuromorphic photonics is poised to push the boundaries of ML and information processing, unlocking new frontiers in AI and computational capabilities.

Author Contributions

Conceptualization, S.G. and J.X.; investigation, S.G. and J.X.; writing—original draft preparation, S.G.; writing—review and editing, S.G. and J.X. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data sharing is not applicable.

Acknowledgments

SG gratefully acknowledges the PhD research supporting award of the INSPIRE Fellowship (DST/INSPIRE/03/2021/001134) from the Department of Science and Technology, India.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Batra, G.; Jacobson, Z.; Madhav, S.; Queirolo, A.; Santhanam, N. Artificial-Intelligence Hardware: New Opportunities for Semiconductor Companies; McKinsey&Company: New York, NY, USA, 2019. [Google Scholar]
  2. Rosenblatt, F. The Perceptron, a Perceiving and Recognizing Automaton Project Para; Cornell Aeronautical Laboratory: Buffalo, NY, USA, 1957. [Google Scholar]
  3. Nahmias, M.A.; de Lima, T.F.; Tait, A.N.; Peng, H.-T.; Shastri, B.J.; Prucnal, P.R. Photonic Multiply-Accumulate Operations for Neural Networks. IEEE J. Sel. Top. Quantum Electron. 2020, 26, 1–18. [Google Scholar] [CrossRef]
  4. Wetzstein, G.; Ozcan, A.; Gigan, S.; Fan, S.; Englund, D.; Soljačić, M.; Denz, C.; Miller, D.A.B.; Psaltis, D. Inference in Artificial Intelligence with Deep Optics and Photonics. Nature 2020, 588, 39–47. [Google Scholar] [CrossRef]
  5. Rumelhart, D.E.; Hinton, G.E.; Williams, R.J. Learning Representations by Back-Propagating Errors. Nature 1986, 323, 533–536. [Google Scholar] [CrossRef]
  6. LeCun, Y.; Bengio, Y.; Hinton, G. Deep Learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef] [PubMed]
  7. Li, Z.; Liu, F.; Yang, W.; Peng, S.; Zhou, J. A Survey of Convolutional Neural Networks: Analysis, Applications, and Prospects. IEEE Trans. Neural Netw. Learn Syst. 2022, 33, 6999–7019. [Google Scholar] [CrossRef]
  8. Choquette, J. NVIDIA Hopper H100 GPU: Scaling Performance. IEEE Micro 2023, 43, 9–17. [Google Scholar] [CrossRef]
  9. Wu, Y.; Schuster, M.; Chen, Z.; Le, Q.V.; Norouzi, M.; Macherey, W.; et al. Google’s Neural Machine Translation System: Bridging the Gap between Human and Machine Translation. arXiv 2016, arXiv:1609.08144. [Google Scholar] [CrossRef]
  10. Huang, C.; Sorger, V.J.; Miscuglio, M.; Al-Qadasi, M.; Mukherjee, A.; Lampe, L.; Nichols, M.; Tait, A.N.; Ferreira de Lima, T.; Marquez, B.A.; et al. Prospects and Applications of Photonic Neural Networks. Adv. Phys. X 2022, 7, 1981155. [Google Scholar] [CrossRef]
  11. Pei, J.; Deng, L.; Song, S.; Zhao, M.; Zhang, Y.; Wu, S.; Wang, G.; Zou, Z.; Wu, Z.; He, W.; et al. Towards Artificial General Intelligence with Hybrid Tianjic Chip Architecture. Nature 2019, 572, 106–111. [Google Scholar] [CrossRef]
  12. Tesla Dojo Technology. A Guide to Tesla’s Configurable Floating Point Formats & Arithmetic. Available online: https://digitalassets.tesla.com/tesla-contents/image/upload/tesla-dojo-technology.pdf (accessed on 1 March 2024).
  13. LeCun, Y.; Boser, B.; Denker, J.; Henderson, D.; Howard, R.; Hubbard, W.; Jackel, L. Handwritten Digit Recognition with a Back-Propagation Network. Adv. Neural Inf. Process. Syst. 1989, 2, 396–404. [Google Scholar]
  14. Hinton, G.E.; Salakhutdinov, R.R. Reducing the Dimensionality of Data with Neural Networks. Science 2006, 313, 504–507. [Google Scholar] [CrossRef]
  15. Capper, D.; Jones, D.T.W.; Sill, M.; Hovestadt, V.; Schrimpf, D.; Sturm, D.; Koelsche, C.; Sahm, F.; Chavez, L.; Reuss, D.E.; et al. DNA Methylation-Based Classification of Central Nervous System Tumours. Nature 2018, 555, 469–474. [Google Scholar] [CrossRef]
  16. Stewart, T.C.; Eliasmith, C. Large-Scale Synthesis of Functional Spiking Neural Circuits. Proc. IEEE 2014, 102, 881–898. [Google Scholar] [CrossRef]
  17. Hodgkin, A.L.; Huxley, A.F. A Quantitative Description of Membrane Current and Its Application to Conduction and Excitation in Nerve. J. Physiol. 1952, 117, 500–544. [Google Scholar] [CrossRef] [PubMed]
  18. Nahmias, M.A.; Shastri, B.J.; Tait, A.N.; Prucnal, P.R. A Leaky Integrate-and-Fire Laser Neuron for Ultrafast Cognitive Computing. IEEE J. Sel. Top. Quantum Electron. 2013, 19, 1–12. [Google Scholar] [CrossRef]
  19. Izhikevich, E.M. Simple Model of Spiking Neurons. IEEE Trans. Neural. Netw. 2003, 14, 1569–1572. [Google Scholar] [CrossRef]
  20. Merolla, P.A.; Arthur, J.V.; Alvarez-Icaza, R.; Cassidy, A.S.; Sawada, J.; Akopyan, F.; Jackson, B.L.; Imam, N.; Guo, C.; Nakamura, Y.; et al. A Million Spiking-Neuron Integrated Circuit with a Scalable Communication Network and Interface. Science 2014, 345, 668–673. [Google Scholar] [CrossRef]
  21. Tait, A.N.; Ferreira De Lima, T.; Nahmias, M.A.; Miller, H.B.; Peng, H.T.; Shastri, B.J.; Prucnal, P.R. Silicon Photonic Modulator Neuron. Phys. Rev. Appl. 2019, 11, 064043. [Google Scholar] [CrossRef]
  22. Feldmann, J.; Youngblood, N.; Wright, C.D.; Bhaskaran, H.; Pernice, W.H.P. All-Optical Spiking Neurosynaptic Networks with Self-Learning Capabilities. Nature 2019, 569, 208–214. [Google Scholar] [CrossRef]
  23. Williamson, I.A.D.; Hughes, T.W.; Minkov, M.; Bartlett, B.; Pai, S.; Fan, S. Reprogrammable Electro-Optic Nonlinear Activation Functions for Optical Neural Networks. IEEE J. Sel. Top. Quantum Electron. 2020, 26, 1–12. [Google Scholar] [CrossRef]
  24. Buckley, S.; Chiles, J.; McCaughan, A.N.; Moody, G.; Silverman, K.L.; Stevens, M.J.; Mirin, R.P.; Nam, S.W.; Shainline, J.M. All-Silicon Light-Emitting Diodes Waveguide-Integrated with Superconducting Single-Photon Detectors. Appl. Phys. Lett. 2017, 111. [Google Scholar] [CrossRef]
  25. Amin, R.; George, J.K.; Sun, S.; Ferreira de Lima, T.; Tait, A.N.; Khurgin, J.B.; Miscuglio, M.; Shastri, B.J.; Prucnal, P.R.; El-Ghazawi, T.; et al. ITO-Based Electro-Absorption Modulator for Photonic Neural Activation Function. APL Mater. 2019, 7. [Google Scholar] [CrossRef]
  26. Nahmias, M.A.; Tait, A.N.; Tolias, L.; Chang, M.P.; Ferreira de Lima, T.; Shastri, B.J.; Prucnal, P.R. An Integrated Analog O/E/O Link for Multi-Channel Laser Neurons. Appl. Phys. Lett. 2016, 108. [Google Scholar] [CrossRef]
  27. McCaughan, A.N.; Verma, V.B.; Buckley, S.M.; Allmaras, J.P.; Kozorezov, A.G.; Tait, A.N.; Nam, S.W.; Shainline, J.M. A Superconducting Thermal Switch with Ultrahigh Impedance for Interfacing Superconductors to Semiconductors. Nat. Electron. 2019, 2, 451–456. [Google Scholar] [CrossRef]
  28. Rosenbluth, D.; Kravtsov, K.; Fok, M.P.; Prucnal, P.R. A High Performance Pulse Processing Device. Opt. Express 2009, 17, 22767. [Google Scholar] [CrossRef]
  29. Romeira, B.; Javaloyes, J.; Ironside, C.N.; Figueiredo, J.M.L.; Balle, S.; Piro, O. Excitability and Optical Pulse Generation in Semiconductor Lasers Driven by Resonant Tunneling Diode Photo-Detectors. Opt. Express 2013, 21, 20931. [Google Scholar] [CrossRef]
  30. Coomans, W.; Gelens, L.; Beri, S.; Danckaert, J.; Van der Sande, G. Solitary and Coupled Semiconductor Ring Lasers as Optical Spiking Neurons. Phys. Rev. E 2011, 84, 036209. [Google Scholar] [CrossRef]
  31. Brunstein, M.; Yacomotti, A.M.; Sagnes, I.; Raineri, F.; Bigot, L.; Levenson, A. Excitability and Self-Pulsing in a Photonic Crystal Nanocavity. Phys. Rev. A 2012, 85, 031803. [Google Scholar] [CrossRef]
  32. Gupta, S.; Gahlot, S.; Roy, S. Design of Optoelectronic Computing Circuits with VCSEL-SA Based Neuromorphic Photonic Spiking. Optik 2021, 243, 167344. [Google Scholar] [CrossRef]
  33. Shastri, B.J.; Tait, A.N.; Ferreira de Lima, T.; Pernice, W.H.P.; Bhaskaran, H.; Wright, C.D.; Prucnal, P.R. Photonics for Artificial Intelligence and Neuromorphic Computing. Nat. Photonics 2021, 15, 102–114. [Google Scholar] [CrossRef]
  34. Shekhar, S.; Bogaerts, W.; Chrostowski, L.; Bowers, J.E.; Hochberg, M.; Soref, R.; Shastri, B.J. Roadmapping the next Generation of Silicon Photonics. Nat. Commun. 2024, 15, 751. [Google Scholar] [CrossRef] [PubMed]
  35. Berggren, K.; Xia, Q.; Likharev, K.K.; Strukov, D.B.; Jiang, H.; Mikolajick, T.; Querlioz, D.; Salinga, M.; Erickson, J.R.; Pi, S.; et al. Roadmap on Emerging Hardware and Technology for Machine Learning. Nanotechnology 2021, 32, 012002. [Google Scholar] [CrossRef]
  36. Furber, S.B.; Galluppi, F.; Temple, S.; Plana, L.A. The SpiNNaker Project. Proc. IEEE 2014, 102, 652–665. [Google Scholar] [CrossRef]
  37. Davies, M.; Srinivasa, N.; Lin, T.-H.; Chinya, G.; Cao, Y.; Choday, S.H.; Dimou, G.; Joshi, P.; Imam, N.; Jain, S.; et al. Loihi: A Neuromorphic Manycore Processor with On-Chip Learning. IEEE Micro 2018, 38, 82–99. [Google Scholar] [CrossRef]
  38. Benjamin, B.V.; Gao, P.; McQuinn, E.; Choudhary, S.; Chandrasekaran, A.R.; Bussat, J.-M.; Alvarez-Icaza, R.; Arthur, J.V.; Merolla, P.A.; Boahen, K. Neurogrid: A Mixed-Analog-Digital Multichip System for Large-Scale Neural Simulations. Proc. IEEE 2014, 102, 699–716. [Google Scholar] [CrossRef]
  39. Goodman, J.W. Fan-in and Fan-out with Optical Interconnections. Opt. Acta Int. J. Opt. 1985, 32, 1489–1496. [Google Scholar] [CrossRef]
  40. Hebb, D.O. The Organization of Behavior: A Neuropsychological Theory; Psychology Press: London, UK, 2005. [Google Scholar]
  41. Widrow, B.; Hoff, M.E. Adaptive Switching Circuits. In Proceedings of the IRE WESCON Convention Record; Institute of Radio Engineers: New York, NY, USA, 1960; Volume 4, pp. 96–104. [Google Scholar]
  42. Kohonen, T. Self-Organized Formation of Topologically Correct Feature Maps. Biol. Cybern. 1982, 43, 59–69. [Google Scholar] [CrossRef]
  43. Lugt, A.V. Signal Detection by Complex Spatial Filtering. IEEE Trans. Inf. Theory 1964, 10, 139–145. [Google Scholar] [CrossRef]
  44. Hopfield, J.J. Neural Networks and Physical Systems with Emergent Collective Computational Abilities. Proc. Natl. Acad. Sci. USA 1982, 79, 2554–2558. [Google Scholar] [CrossRef]
  45. Farhat, N.H.; Psaltis, D.; Prata, A.; Paek, E. Optical Implementation of the Hopfield Model. Appl. Opt. 1985, 24, 1469. [Google Scholar] [CrossRef] [PubMed]
  46. Goodman, J.W.; Leonberger, F.J.; Kung, S.-Y.; Athale, R.A. Optical Interconnections for VLSI Systems. Proc. IEEE 1984, 72, 850–866. [Google Scholar] [CrossRef]
  47. Psaltis, D.; Brady, D.; Gu, X.-G.; Lin, S. Holography in Artificial Neural Networks. Nature 1990, 343, 325–330. [Google Scholar] [CrossRef]
  48. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet Classification with Deep Convolutional Neural Networks. Adv. Neural. Inf. Process Syst. 2012, 25, 84–90. [Google Scholar] [CrossRef]
  49. Vandoorne, K.; Mechet, P.; Van Vaerenbergh, T.; Fiers, M.; Morthier, G.; Verstraeten, D.; Schrauwen, B.; Dambre, J.; Bienstman, P. Experimental Demonstration of Reservoir Computing on a Silicon Photonics Chip. Nat. Commun. 2014, 5, 4541. [Google Scholar] [CrossRef] [PubMed]
  50. Shainline, J.M.; Buckley, S.M.; Mirin, R.P.; Nam, S.W. Superconducting Optoelectronic Circuits for Neuromorphic Computing. Phys. Rev. Appl. 2017, 7, 034013. [Google Scholar] [CrossRef]
  51. Shen, Y.; Harris, N.C.; Skirlo, S.; Prabhu, M.; Baehr-Jones, T.; Hochberg, M.; Sun, X.; Zhao, S.; Larochelle, H.; Englund, D.; et al. Deep Learning with Coherent Nanophotonic Circuits. Nat. Photonics 2017, 11, 441–446. [Google Scholar] [CrossRef]
  52. Tait, A.N.; De Lima, T.F.; Zhou, E.; Wu, A.X.; Nahmias, M.A.; Shastri, B.J.; Prucnal, P.R. Neuromorphic Photonic Networks Using Silicon Photonic Weight Banks. Sci. Rep. 2017, 7, 7430. [Google Scholar] [CrossRef]
  53. Nahmias, M.A.; Shastri, B.J.; Tait, A.N.; Ferreira de Lima, T.; Prucnal, P.R. Neuromorphic Photonics. Opt. Photonics News. 2018, 29, 34. [Google Scholar] [CrossRef]
  54. Lin, X.; Rivenson, Y.; Yardimci, N.T.; Veli, M.; Luo, Y.; Jarrahi, M.; Ozcan, A. All-Optical Machine Learning Using Diffractive Deep Neural Networks. Science 2018, 361, 1004–1008. [Google Scholar] [CrossRef]
  55. Feldmann, J.; Youngblood, N.; Karpov, M.; Gehring, H.; Li, X.; Stappers, M.; Le Gallo, M.; Fu, X.; Lukashchuk, A.; Raja, A.S.; et al. Parallel Convolutional Processing Using an Integrated Photonic Tensor Core. Nature 2021, 589, 52–58. [Google Scholar] [CrossRef]
  56. Wu, C.; Yang, X.; Yu, H.; Peng, R.; Takeuchi, I.; Chen, Y.; Li, M. Harnessing Optoelectronic Noises in a Photonic Generative Network. Sci. Adv. 2022, 8, abm2956. [Google Scholar] [CrossRef] [PubMed]
  57. Fu, T.; Zang, Y.; Huang, Y.; Du, Z.; Huang, H.; Hu, C.; Chen, M.; Yang, S.; Chen, H. Photonic Machine Learning with On-Chip Diffractive Optics. Nat. Commun. 2023, 14, 70. [Google Scholar] [CrossRef]
  58. Hughes, T.W.; Minkov, M.; Shi, Y.; Fan, S. Training of Photonic Neural Networks through in Situ Backpropagation and Gradient Measurement. Optica 2018, 5, 864. [Google Scholar] [CrossRef]
  59. Robertson, J.; Hejda, M.; Bueno, J.; Hurtado, A. Ultrafast Optical Integration and Pattern Classification for Neuromorphic Photonics Based on Spiking VCSEL Neurons. Sci. Rep. 2020, 10, 6098. [Google Scholar] [CrossRef]
  60. Chen, Z.; Sludds, A.; Davis, R.; Christen, I.; Bernstein, L.; Ateshian, L.; Heuser, T.; Heermeier, N.; Lott, J.A.; Reitzenstein, S.; et al. Deep Learning with Coherent VCSEL Neural Networks. Nat. Photonics 2023, 17, 723–730. [Google Scholar] [CrossRef]
  61. Sawada, J.; Akopyan, F.; Cassidy, A.S.; Taba, B.; Debole, M.V.; Datta, P.; Alvarez-Icaza, R.; Amir, A.; Arthur, J.V.; Andreopoulos, A.; et al. TrueNorth Ecosystem for Brain-Inspired Computing: Scalable Systems, Software, and Applications. In Proceedings of the SC16: International Conference for High Performance Computing, Networking, Storage and Analysis, Salt Lake City, UT, USA, 13–18 November 2016; pp. 130–141. [Google Scholar]
  62. Siegel, J. With a Systems Approach to Chips, Microsoft Aims to Tailor Everything ‘from Silicon to Service’ to Meet AI Demand. Microsoft News, 15 November 2023. Available online: https://news.microsoft.com/source/features/ai/in-house-chips-silicon-to-service-to-meet-ai-demand/ (accessed on 1 June 2025).
  63. Miller, D.A.B. Rationale and Challenges for Optical Interconnects to Electronic Chips. Proc. IEEE 2000, 88, 728–749. [Google Scholar] [CrossRef]
  64. Nozaki, K.; Matsuo, S.; Fujii, T.; Takeda, K.; Shinya, A.; Kuramochi, E.; Notomi, M. Femtofarad Optoelectronic Integration Demonstrating Energy-Saving Signal Conversion and Nonlinear Functions. Nat. Photonics 2019, 13, 454–459. [Google Scholar] [CrossRef]
  65. Han, J.; Jentzen, A.; Weinan, E. Solving High-Dimensional Partial Differential Equations Using Deep Learning. Proc. Natl. Acad. Sci. USA 2018, 115, 8505–8510. [Google Scholar] [CrossRef]
  66. Tait, A.N.; Ma, P.Y.; Ferreira de Lima, T.; Blow, E.C.; Chang, M.P.; Nahmias, M.A.; Shastri, B.J.; Prucnal, P.R. Demonstration of Multivariate Photonics: Blind Dimensionality Reduction with Integrated Photonics. J. Light. Technol. 2019, 37, 5996–6006. [Google Scholar] [CrossRef]
  67. Markram, H. The Human Brain Project. Sci. Am. 2012, 306, 50–55. [Google Scholar] [CrossRef]
  68. Achiam, J.; Adler, S.; Agarwal, S.; Ahmad, L.; Akkaya, I.; Aleman, F.L.; Almeida, D.; Altenschmidt, J.; Altman, S.; Anadkat, S.; et al. GPT-4 Technical Report. arXiv 2023. [Google Scholar] [CrossRef]
  69. Goodman, J.W.; Dias, A.R.; Woody, L.M. Fully Parallel, High-Speed Incoherent Optical Method for Performing Discrete Fourier Transforms. Opt. Lett. 1978, 2, 1. [Google Scholar] [CrossRef]
  70. Keyes, R.W. Optical Logic-in the Light of Computer Technology. Opt. Acta: Int. J. Opt. 1985, 32, 525–535. [Google Scholar] [CrossRef]
  71. Patel, D.; Ghosh, S.; Chagnon, M.; Samani, A.; Veerasubramanian, V.; Osman, M.; Plant, D.V. Design, Analysis, and Transmission System Performance of a 41 GHz Silicon Photonic Modulator. Opt. Express 2015, 23, 14263. [Google Scholar] [CrossRef]
  72. Argyris, A. Photonic Neuromorphic Technologies in Optical Communications. Nanophotonics 2022, 11, 897–916. [Google Scholar] [CrossRef] [PubMed]
  73. Zhou, H.; Dong, J.; Cheng, J.; Dong, W.; Huang, C.; Shen, Y.; Zhang, Q.; Gu, M.; Qian, C.; Chen, H.; et al. Photonic Matrix Multiplication Lights up Photonic Accelerator and Beyond. Light Sci. Appl. 2022, 11, 30. [Google Scholar] [CrossRef] [PubMed]
  74. Carolan, J.; Harrold, C.; Sparrow, C.; Martín-López, E.; Russell, N.J.; Silverstone, J.W.; Shadbolt, P.J.; Matsuda, N.; Oguma, M.; Itoh, M.; et al. Universal Linear Optics. Science 2015, 349, 711–716. [Google Scholar] [CrossRef]
  75. Moridsadat, M.; Tamura, M.; Chrostowski, L.; Shekhar, S.; Shastri, B.J. Design Methodology for Silicon Organic Hybrid Modulators: From Physics to System-Level Modeling. arXiv 2024. [Google Scholar] [CrossRef]
  76. Gaeta, A.L.; Lipson, M.; Kippenberg, T.J. Photonic-Chip-Based Frequency Combs. Nat. Photonics 2019, 13, 158–169. [Google Scholar] [CrossRef]
  77. Xavier, J.; Probst, J.; Becker, C. Deterministic Composite Nanophotonic Lattices in Large Area for Broadband Applications. Sci. Rep. 2016, 6, 38744. [Google Scholar] [CrossRef]
  78. Xavier, J.; Probst, J.; Back, F.; Wyss, P.; Eisenhauer, D.; Löchel, B.; Rudigier-Voigt, E.; Becker, C. Quasicrystalline-Structured Light Harvesting Nanophotonic Silicon Films on Nanoimprinted Glass for Ultra-Thin Photovoltaics. Opt. Mater. Express 2014, 4, 2290. [Google Scholar] [CrossRef]
  79. Stojanović, V.; Ram, R.J.; Popović, M.; Lin, S.; Moazeni, S.; Wade, M.; Sun, C.; Alloatti, L.; Atabaki, A.; Pavanello, F.; et al. Monolithic Silicon-Photonic Platforms in State-of-the-Art CMOS SOI Processes [Invited]. Opt. Express 2018, 26, 13106. [Google Scholar] [CrossRef]
  80. Sun, C.; Wade, M.T.; Lee, Y.; Orcutt, J.S.; Alloatti, L.; Georgas, M.S.; Waterman, A.S.; Shainline, J.M.; Avizienis, R.R.; Lin, S.; et al. Single-Chip Microprocessor That Communicates Directly Using Light. Nature 2015, 528, 534–538. [Google Scholar] [CrossRef] [PubMed]
  81. Bogaerts, W.; Chrostowski, L. Silicon Photonics Circuit Design: Methods, Tools and Challenges. Laser Photon. Rev. 2018, 12, 201700237. [Google Scholar] [CrossRef]
  82. Tait, A.N.; De Lima, T.F.; Nahmias, M.A.; Shastri, B.J.; Prucnal, P.R. Continuous Calibration of Microring Weights for Analog Optical Networks. IEEE Photonics Technol. Lett. 2016, 28, 887–890. [Google Scholar] [CrossRef]
  83. Ríos, C.; Youngblood, N.; Cheng, Z.; Le Gallo, M.; Pernice, W.H.P.; Wright, C.D.; Sebastian, A.; Bhaskaran, H. In-Memory Computing on a Photonic Platform. Sci. Adv. 2019, 5, aau5759. [Google Scholar] [CrossRef]
  84. Rios, C.; Stegmaier, M.; Hosseini, P.; Wang, D.; Scherer, T.; Wright, C.D.; Bhaskaran, H.; Pernice, W.H.P. Integrated All-Photonic Non-Volatile Multi-Level Memory. Nat. Photonics 2015, 9, 725–732. [Google Scholar] [CrossRef]
  85. Cheng, Z.; Ríos, C.; Pernice, W.H.P.; Wright, C.D.; Bhaskaran, H. On-Chip Photonic Synapse. Sci. Adv. 2017, 3, 1700160. [Google Scholar] [CrossRef]
  86. Seoane, J.J.; Parra, J.; Navarro-Arenas, J.; Recaman, M.; Schouteden, K.; Locquet, J.P.; Sanchis, P. Ultra-High Endurance Silicon Photonic Memory Using Vanadium Dioxide. NPJ Nanophotonics 2024, 1, 37. [Google Scholar] [CrossRef]
  87. Pintus, P.; Dumont, M.; Shah, V.; Murai, T.; Shoji, Y.; Huang, D.; Moody, G.; Bowers, J.E.; Youngblood, N. Integrated Non-Reciprocal Magneto-Optics with Ultra-High Endurance for Photonic in-Memory Computing. Nat. Photonics 2024, 19, 54–62. [Google Scholar] [CrossRef]
  88. Betker, J.; Goh, G.; Jing, L.; Brooks, T.; Wang, J.; Li, L.; Ouyang, L.; Zhuang, J.; Lee, J.; Guo, Y.; et al. Improving Image Generation with Better Captions. Available online: https://cdn.openai.com/papers/dall-e-3.pdf (accessed on 25 February 2024).
  89. Xu, B.; Huang, Y.; Fang, Y.; Wang, Z.; Yu, S.; Xu, R. Recent Progress of Neuromorphic Computing Based on Silicon Photonics: Electronic–Photonic Co-Design, Device, and Architecture. Photonics 2022, 9, 698. [Google Scholar] [CrossRef]
  90. Bai, Y.; Xu, X.; Tan, M.; Sun, Y.; Li, Y.; Wu, J.; Morandotti, R.; Mitchell, A.; Xu, K.; Moss, D.J. Photonic Multiplexing Techniques for Neuromorphic Computing. Nanophotonics 2023, 12, 795–817. [Google Scholar] [CrossRef]
  91. Marković, D.; Mizrahi, A.; Querlioz, D.; Grollier, J. Physics for Neuromorphic Computing. Nat. Rev. Phys. 2020, 2, 499–510. [Google Scholar] [CrossRef]
  92. Jaeger, H.; Haas, H. Harnessing Nonlinearity: Predicting Chaotic Systems and Saving Energy in Wireless Communication. Science 2004, 304, 78–80. [Google Scholar] [CrossRef] [PubMed]
  93. Maass, W. Networks of Spiking Neurons: The Third Generation of Neural Network Models. Neural Netw. 1997, 10, 1659–1671. [Google Scholar] [CrossRef]
  94. Song, S.; Miller, K.D.; Abbott, L.F. Competitive Hebbian Learning through Spike-Timing-Dependent Synaptic Plasticity. Nat. Neurosci 2000, 3, 919–926. [Google Scholar] [CrossRef]
  95. Reck, M.; Zeilinger, A.; Bernstein, H.J.; Bertani, P. Experimental Realization of Any Discrete Unitary Operator. Phys. Rev. Lett. 1994, 73, 58–61. [Google Scholar] [CrossRef]
  96. Chiles, J.; Buckley, S.M.; Nam, S.W.; Mirin, R.P.; Shainline, J.M. Design, Fabrication, and Metrology of 10 × 100 Multi-Planar Integrated Photonic Routing Manifolds for Neural Networks. APL Photonics 2018, 3, 5039641. [Google Scholar] [CrossRef]
  97. Jayatilleka, H.; Murray, K.; Guillén-Torres, M.Á.; Caverley, M.; Hu, R.; Jaeger, N.A.F.; Chrostowski, L.; Shekhar, S. Wavelength Tuning and Stabilization of Microring-Based Filters Using Silicon in-Resonator Photoconductive Heaters. Opt. Express 2015, 23, 25084. [Google Scholar] [CrossRef]
  98. Tait, A.N.; Jayatilleka, H.; De Lima, T.F.; Ma, P.Y.; Nahmias, M.A.; Shastri, B.J.; Shekhar, S.; Chrostowski, L.; Prucnal, P.R. Feedback Control for Microring Weight Banks. Opt. Express 2018, 26, 26422. [Google Scholar] [CrossRef]
  99. Bogaerts, W.; De Heyn, P.; Van Vaerenbergh, T.; De Vos, K.; Kumar Selvaraja, S.; Claes, T.; Dumon, P.; Bienstman, P.; Van Thourhout, D.; Baets, R. Silicon Microring Resonators. Laser Photon. Rev. 2012, 6, 47–73. [Google Scholar] [CrossRef]
  100. Tait, A.N.; Wu, A.X.; De Lima, T.F.; Zhou, E.; Shastri, B.J.; Nahmias, M.A.; Prucnal, P.R. Microring Weight Banks. IEEE J. Sel. Top. Quantum Electron. 2016, 22, 312–325. [Google Scholar] [CrossRef]
  101. Shi, B.; Calabretta, N.; Stabile, R. Deep Neural Network Through an InP SOA-Based Photonic Integrated Cross-Connect. IEEE J. Sel. Top. Quantum Electron. 2020, 26, 1–11. [Google Scholar] [CrossRef]
  102. Brunner, D.; Soriano, M.C.; Mirasso, C.R.; Fischer, I. Parallel Photonic Information Processing at Gigabyte per Second Data Rates Using Transient States. Nat. Commun. 2013, 4, 2368. [Google Scholar] [CrossRef]
  103. Komljenovic, T.; Davenport, M.; Hulme, J.; Liu, A.Y.; Santis, C.T.; Spott, A.; Srinivasan, S.; Stanton, E.J.; Zhang, C.; Bowers, J.E. Heterogeneous Silicon Photonic Integrated Circuits. J. Light. Technol. 2016, 34, 20–35. [Google Scholar] [CrossRef]
  104. Wang, Y.; Lv, Z.; Chen, J.; Wang, Z.; Zhou, Y.; Zhou, L.; Chen, X.; Han, S. Photonic Synapses Based on Inorganic Perovskite Quantum Dots for Neuromorphic Computing. Adv. Mater. 2018, 30, 201802883. [Google Scholar] [CrossRef]
  105. Sorianello, V.; Midrio, M.; Contestabile, G.; Asselberghs, I.; Van Campenhout, J.; Huyghebaert, C.; Goykhman, I.; Ott, A.K.; Ferrari, A.C.; Romagnoli, M. Graphene–Silicon Phase Modulators with Gigahertz Bandwidth. Nat. Photonics 2018, 12, 40–44. [Google Scholar] [CrossRef]
  106. He, M.; Xu, M.; Ren, Y.; Jian, J.; Ruan, Z.; Xu, Y.; Gao, S.; Sun, S.; Wen, X.; Zhou, L.; et al. High-Performance Hybrid Silicon and Lithium Niobate Mach–Zehnder Modulators for 100 Gbit S−1 and beyond. Nat. Photonics 2019, 13, 359–364. [Google Scholar] [CrossRef]
  107. Harris, N.C.; Ma, Y.; Mower, J.; Baehr-Jones, T.; Englund, D.; Hochberg, M.; Galland, C. Efficient, Compact and Low Loss Thermo-Optic Phase Shifter in Silicon. Opt. Express 2014, 22, 10487. [Google Scholar] [CrossRef]
  108. Hill, M.T.; Frietman, E.E.E.; de Waardt, H.; Khoe, G.-d.; Dorren, H.J.S. All Fiber-Optic Neural Network Using Coupled SOA Based Ring Lasers. IEEE Trans Neural. Netw. 2002, 13, 1504–1513. [Google Scholar] [CrossRef]
  109. Selmi, F.; Braive, R.; Beaudoin, G.; Sagnes, I.; Kuszelewicz, R.; Barbay, S. Relative Refractory Period in an Excitable Semiconductor Laser. Phys. Rev. Lett. 2014, 112, 183902. [Google Scholar] [CrossRef] [PubMed]
  110. Xu, X.; Ren, G.; Feleppa, T.; Liu, X.; Boes, A.; Mitchell, A.; Lowery, A.J. Self-Calibrating Programmable Photonic Integrated Circuits. Nat. Photonics 2022, 16, 595–602. [Google Scholar] [CrossRef]
  111. Hamerly, R.; Bandyopadhyay, S.; Englund, D. Asymptotically Fault-Tolerant Programmable Photonics. Nat. Commun 2022, 13, 6831. [Google Scholar] [CrossRef]
  112. Beri, S.; Mashal, L.; Gelens, L.; Van der Sande, G.; Mezosi, G.; Sorel, M.; Danckaert, J.; Verschaffelt, G. Excitability in Optical Systems Close to Z2-Symmetry. Phys. Lett. A 2010, 374, 739–743. [Google Scholar] [CrossRef]
  113. Gelens, L.; Mashal, L.; Beri, S.; Coomans, W.; Van der Sande, G.; Danckaert, J.; Verschaffelt, G. Excitability in Semiconductor Microring Lasers: Experimental and Theoretical Pulse Characterization. Phys. Rev. A 2010, 82, 063841. [Google Scholar] [CrossRef]
  114. Peng, H.T.; Nahmias, M.A.; De Lima, T.F.; Tait, A.N.; Shastri, B.J.; Prucnal, P.R. Neuromorphic Photonic Integrated Circuits. IEEE J. Sel. Top. Quantum Electron. 2018, 24, 2840448. [Google Scholar] [CrossRef]
  115. Wang, Y.E.; Wei, G.Y.; Brooks, D. Benchmarking TPU, GPU, and CPU Platforms for Deep Learning. arXiv 2019. [Google Scholar] [CrossRef]
  116. Davies, M. Benchmarks for Progress in Neuromorphic Computing. Nat. Mach. Intell. 2019, 1, 386–388. [Google Scholar] [CrossRef]
  117. Xiang, S.; Han, Y.; Song, Z.; Guo, X.; Zhang, Y.; Ren, Z.; Wang, S.; Ma, Y.; Zou, W.; Ma, B.; et al. A Review: Photonics Devices, Architectures, and Algorithms for Optical Neural Computing. J. Semicond. 2021, 42, 023105. [Google Scholar] [CrossRef]
  118. Dong, B.; Aggarwal, S.; Zhou, W.; Ali, U.E.; Farmakidis, N.; Lee, J.S.; He, Y.; Li, X.; Kwong, D.-L.; Wright, C.D.; et al. Higher-Dimensional Processing Using a Photonic Tensor Core with Continuous-Time Data. Nat. Photonics 2023, 17, 1080–1088. [Google Scholar] [CrossRef]
  119. Bueno, J.; Maktoobi, S.; Froehly, L.; Fischer, I.; Jacquot, M.; Larger, L.; Brunner, D. Reinforcement Learning in a Large-Scale Photonic Recurrent Neural Network. Optica 2018, 5, 756. [Google Scholar] [CrossRef]
  120. Freiberger, M.; Katumba, A.; Bienstman, P.; Dambre, J. Training Passive Photonic Reservoirs with Integrated Optical Readout. IEEE Trans. Neural Netw. Learn. Syst. 2019, 30, 1943–1953. [Google Scholar] [CrossRef] [PubMed]
  121. Boshgazi, S.; Jabbari, A.; Mehrany, K.; Memarian, M. Virtual Reservoir Computer Using an Optical Resonator. Opt. Mater. Express 2022, 12, 1140. [Google Scholar] [CrossRef]
  122. Duport, F.; Schneider, B.; Smerieri, A.; Haelterman, M.; Massar, S. All-Optical Reservoir Computing. Opt. Express 2012, 20, 22783. [Google Scholar] [CrossRef]
  123. Shainline, J.M.; Buckley, S.M.; McCaughan, A.N.; Chiles, J.T.; Jafari Salim, A.; Castellanos-Beltran, M.; Donnelly, C.A.; Schneider, M.L.; Mirin, R.P.; Nam, S.W. Superconducting Optoelectronic Loop Neurons. J. Appl. Phys. 2019, 126, 5096403. [Google Scholar] [CrossRef]
  124. Xu, S.; Wang, J.; Wang, R.; Chen, J.; Zou, W. High-Accuracy Optical Convolution Unit Architecture for Convolutional Neural Networks by Cascaded Acousto-Optical Modulator Arrays. Opt. Express 2019, 27, 19778. [Google Scholar] [CrossRef]
  125. Zhang, H.; Gu, M.; Jiang, X.D.; Thompson, J.; Cai, H.; Paesani, S.; Santagati, R.; Laing, A.; Zhang, Y.; Yung, M.H.; et al. An Optical Neural Chip for Implementing Complex-Valued Neural Network. Nat. Commun. 2021, 12, 457. [Google Scholar] [CrossRef]
  126. Zhu, H.H.; Zou, J.; Zhang, H.; Shi, Y.Z.; Luo, S.B.; Wang, N.; Cai, H.; Wan, L.X.; Wang, B.; Jiang, X.D.; et al. Space-Efficient Optical Computing with an Integrated Chip Diffractive Neural Network. Nat. Commun. 2022, 13, 1044. [Google Scholar] [CrossRef]
  127. Roques-Carmes, C.; Shen, Y.; Zanoci, C.; Prabhu, M.; Atieh, F.; Jing, L.; Dubček, T.; Mao, C.; Johnson, M.R.; Čeperić, V.; et al. Heuristic Recurrent Algorithms for Photonic Ising Machines. Nat. Commun. 2020, 11, 249. [Google Scholar] [CrossRef]
  128. Xu, S.; Wang, J.; Yi, S.; Zou, W. High-Order Tensor Flow Processing Using Integrated Photonic Circuits. Nat. Commun. 2022, 13, 7970. [Google Scholar] [CrossRef]
  129. Xu, X.; Tan, M.; Corcoran, B.; Wu, J.; Boes, A.; Nguyen, T.G.; Chu, S.T.; Little, B.E.; Hicks, D.G.; Morandotti, R.; et al. 11 TOPS Photonic Convolutional Accelerator for Optical Neural Networks. Nature 2021, 589, 44–51. [Google Scholar] [CrossRef] [PubMed]
  130. Wu, C.; Yu, H.; Lee, S.; Peng, R.; Takeuchi, I.; Li, M. Programmable Phase-Change Metasurfaces on Waveguides for Multimode Photonic Convolutional Neural Network. Nat. Commun. 2021, 12, 96. [Google Scholar] [CrossRef]
  131. Robertson, J.; Kirkland, P.; Alanis, J.A.; Hejda, M.; Bueno, J.; Di Caterina, G.; Hurtado, A. Ultrafast Neuromorphic Photonic Image Processing with a VCSEL Neuron. Sci. Rep. 2022, 12, 4874. [Google Scholar] [CrossRef]
  132. Zhang, T.; Wang, J.; Dan, Y.; Lanqiu, Y.; Dai, J.; Han, X.; Sun, X.; Xu, K. Efficient Training and Design of Photonic Neural Network through Neuroevolution. Opt. Express 2019, 27, 37150. [Google Scholar] [CrossRef]
  133. Antonik, P.; Marsal, N.; Brunner, D.; Rontani, D. Bayesian Optimisation of Large-Scale Photonic Reservoir Computers. Cogn. Comput. 2023, 15, 1452–1460. [Google Scholar] [CrossRef]
  134. Scellier, B.; Bengio, Y. Equilibrium Propagation: Bridging the Gap between Energy-Based Models and Backpropagation. Front Comput. Neurosci. 2017, 11, 00024. [Google Scholar] [CrossRef]
  135. Xu, T.; Zhang, W.; Zhang, J.; Luo, Z.; Xiao, Q.; Wang, B.; Luo, M.; Xu, X.; Shastri, B.J.; Prucnal, P.R.; et al. Control-Free and Efficient Silicon Photonic Neural Networks via Hardware-Aware Training and Pruning. arXiv 2024, arXiv:2401.08180. [Google Scholar]
  136. Zhao, X.; Lv, H.; Chen, C.; Tang, S.; Liu, X.; Qi, Q. On-Chip Reconfigurable Optical Neural Networks. Preprints 2021, 1–21. [Google Scholar] [CrossRef]
  137. Sebastian, A.; Le Gallo, M.; Burr, G.W.; Kim, S.; Bright Sky, M.; Eleftheriou, E. Tutorial: Brain-Inspired Computing Using Phase-Change Memory Devices. J. Appl. Phys. 2018, 124, 5042413. [Google Scholar] [CrossRef]
  138. Sun, J.; Kumar, R.; Sakib, M.; Driscoll, J.B.; Jayatilleka, H.; Rong, H. A 128 Gb/s PAM4 Silicon Microring Modulator with Integrated Thermo-Optic Resonance Tuning. J. Light. Technol. 2019, 37, 110–115. [Google Scholar] [CrossRef]
  139. Zhou, Z.; Yin, B.; Michel, J. On-Chip Light Sources for Silicon Photonics. Light Sci. Appl. 2015, 4, e358. [Google Scholar] [CrossRef]
  140. Prorok, S.; Petrov, A.Y.; Eich, M.; Luo, J.; Jen, A.K.-Y. Trimming of High-Q-Factor Silicon Ring Resonators by Electron Beam Bleaching. Opt. Lett. 2012, 37, 3114. [Google Scholar] [CrossRef] [PubMed]
  141. Xu, R.; Taheriniya, S.; Varri, A.; Ulanov, M.; Konyshev, I.; Krämer, L.; McRae, L.; Ebert, F.L.; Bankwitz, J.R.; Ma, X.; et al. Mode Conversion Trimming in Asymmetric Directional Couplers Enabled by Silicon Ion Implantation. Nano Lett. 2024, 24, 10813–10819. [Google Scholar] [CrossRef]
  142. Schrauwen, J.; Van Thourhout, D.; Baets, R. Trimming of Silicon Ring Resonator by Electron Beam Induced Compaction and Strain. Opt. Express 2008, 16, 3738. [Google Scholar] [CrossRef] [PubMed]
  143. Pérez, D.; Gasulla, I.; Das Mahapatra, P.; Capmany, J. Principles, Fundamentals, and Applications of Programmable Integrated Photonics. Adv. Opt. Photonics 2020, 12, 709. [Google Scholar] [CrossRef]
  144. Varri, A.; Taheriniya, S.; Brückerhoff-Plückelmann, F.; Bente, I.; Farmakidis, N.; Bernhardt, D.; Rösner, H.; Kruth, M.; Nadzeyka, A.; Richter, T.; et al. Scalable Non-Volatile Tuning of Photonic Computational Memories by Automated Silicon Ion Implantation. Adv. Mater. 2024, 36, 202310596. [Google Scholar] [CrossRef]
  145. Li, R.; Gong, Y.; Huang, H.; Zhou, Y.; Mao, S.; Wei, Z.; Zhang, Z. Photonics for Neuromorphic Computing: Fundamentals, Devices, and Opportunities. Adv. Mater. 2024, 37, 2312825. [Google Scholar] [CrossRef]
  146. Chen, S.; Li, W.; Wu, J.; Jiang, Q.; Tang, M.; Shutts, S.; Elliott, S.N.; Sobiesierski, A.; Seeds, A.J.; Ross, I.; et al. Electrically Pumped Continuous-Wave III–V Quantum Dot Lasers on Silicon. Nat. Photonics 2016, 10, 307–311. [Google Scholar] [CrossRef]
  147. Wang, C.; Zhang, M.; Chen, X.; Bertrand, M.; Shams-Ansari, A.; Chandrasekhar, S.; Winzer, P.; Lončar, M. Integrated Lithium Niobate Electro-Optic Modulators Operating at CMOS-Compatible Voltages. Nature 2018, 562, 101–104. [Google Scholar] [CrossRef]
  148. Zhang, Y.; Guo, X.; Ji, X.; Shen, J.; He, A.; Su, Y. What Can Be Integrated on the Silicon Photonics Platform and How? APL Photonics 2024, 9, 0220463. [Google Scholar] [CrossRef]
  149. Patel, D.; Samani, A.; Veerasubramanian, V.; Ghosh, S.; Plant, D.V. Silicon Photonic Segmented Modulator-Based Electro-Optic DAC for 100 Gb/s PAM-4 Generation. IEEE Photonics Technol. Lett. 2015, 27, 2433–2436. [Google Scholar] [CrossRef]
  150. Lam, S.; Khaled, A.; Bilodeau, S.; Marquez, B.A.; Prucnal, P.R.; Chrostowski, L.; Shastri, B.J.; Shekhar, S. Dynamic Electro-Optic Analog Memory for Neuromorphic Photonic Computing. arXiv 2024. [Google Scholar] [CrossRef]
Figure 1. The NPPN architecture. It incorporates ANN connections and RC dynamics. Red, black, and blue colors indicate optical, electrical, and optical/electrical processing or interconnect, respectively. Interconnect lines denote connections in the ANN configuration, while dashed lines indicate the absence of direct connections in the RC configuration.
Figure 2. Photonic synapses with diverse control schemes: electrical control (a–d) and all-optical control (e,f). (a) An optical interference unit comprising an MZI, waveguides, and directional couplers with phase shifters is employed to perform unitary transforms, specifically optical matrix multiplication. This involves a weight matrix, M = UΣV†, derived through singular-value decomposition. The unitary matrices, U and V†, are realized using MZIs, while the diagonal matrix, Σ, is implemented with a Mach–Zehnder modulator. Refer to the micrograph image (top) [51] for visualization. (b) Photonic routing and weighting scheme using multilayer silicon nitride waveguides for all-to-all connectivity, surrounded by silicon dioxide and an interplanar coupler [96]. (c) Thermo-optic microring resonator (MRR) weight bank tunable filters utilize WDM signals for add-and-drop functionalities, which are then summed by a balanced photodetector to enable the incorporation of positive or negative weights [82,100]. (d) A co-integrated chip performing weighted addition for WDM input vectors, providing WDM outputs using InP SOA-based photonic cross-connects, with a schematic (left) and microscope image (right) [101]. (e) Schematic implementation (left) and representation (right) of the reservoir employing nonlinear transient states (N) generated by a single nonlinear element (NL), which is subjected to delayed masked weighted feedback receiving input information (u(t)) to generate the readout (y_k(t)). Each transient state utilized for computation is distributed along the delay line with a spacing of ϴ [102]. (f) PCM-based photonic synapse integrated into silicon nitride waveguides that modulates the optical mode as per the optical pulses sent down the waveguide, utilizing the material’s phase-switching property [85]. Figures adapted with permission from Ref. [51], SNL (a); and Ref. [82], IEEE (c). Figures reproduced with permission from Ref. [96], APL (b); Ref. [101], author(s) (d); Ref. [102], SNL (e); and Ref. [85], Science (f).
Figure 3. Integrated photonic neuron with weighting and activation. All-optical: (a) Inputs to SOA-based spiking integrate-and-fire neurons are weighted passively using attenuators and delay lines, temporally integrated with an SOA, and subjected to thresholding using a highly germanium-doped fiber [28]. (b) In a PCM-based spiking neuron, input spikes are weighted using PCM cells and aggregated using a WDM multiplexer. When the integrated power of the post-synaptic spikes exceeds a threshold, the PCM cell on the ring resonator switches to produce an output pulse [22]. Optical–electro–optical: (c) A superconducting optoelectronic spiking neuron employs a superconducting-nanowire single-photon detector (SNSPD) to drive a superconducting switch (amplifier) [27], which is then followed by a silicon light-emitting diode [24]. (d) Interference neurons based on MZIs achieve optical-to-optical activation by converting a fraction of the optical input to implement positive (excitatory) and negative (inhibitory) weights using a photodetector. The remaining original optical signal is intensity-modulated due to the intrinsic nonlinearity of the photodetector [23]. (e) Wavelength-division-multiplexed inputs are weighted using tunable MRRs (Figure 2c). The optical power is aggregated and sensed by a balanced photodiode, which then drives an electro-absorption modulator (EAM) incorporating an indium tin oxide layer monolithically integrated into silicon photonic waveguides. This EAM nonlinearly modulates the laser power [25]. (f) A balanced photodiode (Figure 2c) sums multiple wavelengths, implements positive (excitatory) and negative (inhibitory) weights, and drives a WDM silicon photonic modulator neuron exploiting its electro-optic nonlinearity [21]. (g) The device uses WDM to achieve multichannel fan-in and a photodetector to sum signals together to drive a laser perceptron [26]. In (e–g), device diagrams are shown on the left next to micrographs of each device. Figures adapted with permission from Ref. [24], AIP (c); and Ref. [25], APL (e). Figures reproduced with permission from Ref. [28], OPG (a); Ref. [22], SNL (b); Ref. [23], IEEE (d); Ref. [21], APS (f); and Ref. [26], AIP (g).
Figure 5. AI topologies that are also applicable to NPNs are represented via a rectangular Venn diagram. The NPN architecture integrates recurrent and feedforward connections, facilitating advanced learning processes. Dashed lines indicate the lack of direct feedback connections in the feedforward case, while the complete network for the recurrent case elucidates the network’s operational dynamics.
Figure 6. On-chip neuromorphic photonic approaches. (a) Passive photonic reservoir computing is a time-delayed recurrent neural network that uses fixed high-dimensional reservoirs for computational tasks, with input, output, and flow depicted by black arrows, blue arrows, and red dots, respectively [49]. (b) Superconducting optoelectronic circuits are feedforward multilayer perceptrons that use semiconducting few-photon light-emitting diodes and superconducting-nanowire single-photon detectors with N0 neurons [50]. (c) The MZI-based coherent nanophotonic circuit is an internally and externally trained feedforward network using phase shifters, depicting the optical interference unit that implements matrix multiplication and attenuation with red and blue meshes, respectively [51]. (d) The wavelength-division-multiplexed broadcast-and-weight network is a recurrent, continuous-time model programmed by a compiler, composed of a microring weight bank, a balanced photodiode for summing, and a microring modulator for nonlinear activation [52]. (e) The multiwavelength PCM-based photonic neurosynaptic network presents a feedforward, spiking model with both external and local training, composed of layers including a collector made up of micro-rings that use a wavelength division multiplexer to combine the optical signals from the previous layer (bottom) and a distributor that broadcasts the signal equally to the germanium–antimony–tellurium synapses (top) [22]. (f) A feedforward pretrained on-chip diffractive optics neural network with continuous output, based on a phase-tunable complex-valued transmission coefficient realized by a silicon slot filled with silicon dioxide [57]. (g) The coherent VCSEL optoelectronic neural network architecture uses a 3D hybrid layout comprising 2D VCSEL arrays bonded to a CMOS driver, a phase mask for beam fanout, and photodetector arrays for homodyne multiplication and inline nonlinearity, operated via spatial–temporal multiplexing for parallelized matrix–vector computations. Fabricated arrays of 5 × 5 wire-bonded VCSELs on a GaAs substrate are shown [60]. Figures reproduced with permission from Ref. [49], SNL (a); Ref. [50], APS (b); Ref. [51], SNL (c); Ref. [21], APS (d); Ref. [22], SNL (e); Ref. [57], SNL (f); and Ref. [60], SNL (g).
Figure 7. Prospective applications of neuromorphic photonic networks [10].
Table 1. On-chip approaches for neuromorphic photonic networks.

| NPN Type [Ref.] | Synapse | Synaptic Memory | Photonic Neuron | Physics | Topology | Remarks |
|---|---|---|---|---|---|---|
| RC [49] | Node of reservoir with multiple feedback loops | 280 ps interconnection delay | Intrinsic nonlinearity of photodetector | Superposition principle | Reservoir | No power consumption in the reservoir and high bit-rate scalability (>100 Gbit/s). Cannot be generalized for complex computing applications. |
| SOC [50] | Interplanar or lateral waveguide coupler with electromechanically tunable coupling | MEMS capacitor | Phase-change nanowires switching from superconducting to normal metal above a threshold induced by photon absorption, arranged in a parallel or series detector | Superconductivity and MEMS capacitance | ANN and SNN | Highly scalable, zero static power dissipation, extraordinary device efficiencies. Requires cryogenic temperature (2 K). Bandwidth limited to 1 GHz. |
| CNC [51] | OIU consisting of beamsplitters and phase shifters for unitary transformation and attenuators for the diagonal matrix | NA | Nonlinear mathematical saturable-absorber function | TO effect | Two-layer DNN | Can implement any arbitrary ANN. May allow online training. Bulky and requires high driving voltage. |
| B&W [52] | Reconfigurable TO-MRR filters | NA | Mach–Zehnder modulator | TO effect | CTRNN | Capable of implementing a generalized reconfigurable RNN. Bandwidth limited to 1 GHz. |
| MN [22] | Optical waveguides with integrated PCM on top, controlling the propagating optical mode | GST phase | Optical ReLU designed via MRR with PCM on top | WDM and PCM dynamics | ANN | No waveguide crossings, no accumulation of errors or signal contamination. The endurance of GST is ~10^4 switching cycles with sub-nanosecond operation speed. |
| DO [57] | Pretrained phase values on distinct hidden layers via SSSD | NA | Diffractive unit composed of three identical SSSDs | Huygens–Fresnel principle and TO effect | Three-layer DNN | Scalable, simple structure design, and all-optical passive operation. Requires external algorithmic compensation. |
| VNN [60] | Homodyne detection with phase encoding | NA | Homodyne VCSEL nonlinearity | Lasing principle | Three-layer ANN | Scalable; can be integrated into 2D/3D arrays with inline nonlinearity. Limited by phase stability and integration scale. |

Optical interference unit (OIU); thermo-optic (TO); continuous-time RNN (CTRNN); not applicable (NA). Selected publications are based on novelty or experimental demonstration.
Table 2. Applied training and comparison of on-chip neuromorphic photonic networks.

| NPN Type | Device Basic Unit [Ref.] | Topology | Training | Data (Train/Test) % | Application | Remark or Accuracy Exp. (Sim.) | NBUs/mm² | Operational Power (pJ/FLOP) | Throughput (TOPS) |
|---|---|---|---|---|---|---|---|---|---|
| RC | Spiral nodes [49] | Reservoir | Fivefold cross-validation, ridge regression, and winner-takes-all approach | 10,000 bits for Boolean task and 5-bit headers | Arbitrary Boolean logic and 5-bit header recognition | >99 (-) | 62,500 | 0 | 0.4 |
| SOC | SNSPD [50] | ANN and SNN | Backpropagation and STDP | - | - | Designed for scalability | 7 to 4000 | 0.00014 | 19.6 |
| CNC | Tunable MZI [51] | Two-layer DNN | SGD | 360 data points (50:50) | Vowel recognition | 76.7 (91.7) | <10 | 0.076 | 6.4 |
| B&W | TO-MRR [52] | CTRNN | Bifurcation analysis | 500 data points from 0.05 to 0.85 | Lorenz attractor | B&W is isomorphic to CTRNN | 1600 | 288.0 | 1.2 |
| MN | X-PCM [55] | CNN | Backpropagation | MNIST handwritten digits | Digit recognition | 95.3 (96.1) | <5 | 0.0059 | 28.8 |
| DO ## | SWU [57] | Three-layer DNN | Pretrained backpropagation (adaptive moment estimation) | Iris (80:20) and MNIST handwritten digits (85:15) | Classification | 90 (90) and 86 (96.3) | 2000 | 0.00001 | 13,800.0 |
| VNN | VCSEL arrays [60] | Optical NN | Backpropagation | MNIST handwritten digits | Image classification | 93.1 (95.1) | 6 | 0.007 | 50 |

For more comprehensive information, readers may also refer to other reported works [56,58,59,110,111,118,127,128,129,130,131].
Number of basic units (NBUs); trillions of operations per second (TOPS) = 2 × No. of layers in the network × No. of rows × No. of columns × detection rate; floating-point operations (FLOPs); X represents a ring or MAC unit; sub-wavelength unit (SWU); ## reported 30% fabrication error; selected publications are based on novelty or experimental demonstration.
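As a worked example of this TOPS estimate, the short sketch below evaluates the formula for an illustrative network; the layer count, matrix size, and detection rate are assumed values, not figures from any row above.

```python
# Worked example of the TOPS formula quoted in the table footnote
# (all parameters are illustrative assumptions):
layers, rows, cols = 2, 64, 64
detection_rate_hz = 10e9            # assumed 10 GHz photodetection rate
ops_per_s = 2 * layers * rows * cols * detection_rate_hz
print(ops_per_s / 1e12, "TOPS")     # -> 163.84 TOPS
```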
