Communication

Neuro-Inspired Signal Processing in Ferromagnetic Nanofibers

1 Center for Science and Education—Institute of Physics, Silesian University of Technology, 44-100 Gliwice, Poland
2 Faculty of Automatic Control, Electronics and Computer Science, Silesian University of Technology, 44-100 Gliwice, Poland
3 Faculty of Electronics and Informatics, Koszalin University of Technology, 75-453 Koszalin, Poland
4 Faculty of Engineering and Mathematics, Institute for Technical Energy Systems (ITES), Bielefeld University of Applied Sciences, 33619 Bielefeld, Germany
* Author to whom correspondence should be addressed.
Academic Editors: Silvia Battistoni and Paschalis Gkoupidenis
Biomimetics 2021, 6(2), 32; https://doi.org/10.3390/biomimetics6020032
Received: 28 March 2021 / Revised: 20 May 2021 / Accepted: 25 May 2021 / Published: 26 May 2021
(This article belongs to the Special Issue Biomimetic Devices for Neuro-Inspired Applications)

Abstract

Computers nowadays have different components for data storage and data processing, making data transfer between these units a bottleneck for computing speed. Therefore, so-called cognitive (or neuromorphic) computing approaches try combining both these tasks, as is done in the human brain, to make computing faster and less energy-consuming. One possible method to prepare new hardware solutions for neuromorphic computing is given by nanofiber networks as they can be prepared by diverse methods, from lithography to electrospinning. Here, we show results of micromagnetic simulations of three coupled semicircle fibers in which domain walls are excited by rotating magnetic fields (inputs), leading to different output signals that can be used for stochastic data processing, mimicking biological synaptic activity and thus being suitable as artificial synapses in artificial neural networks.
Keywords: neuromorphic computing; nanofibers; bending radius; data processing; spikes; neuron excitation

1. Introduction

Neuro-inspired signal processing is presently among the most intensively studied topics of the technical sciences, with a wide range of completely different realizations. It can be implemented from an electronic perspective, using, for example, field-programmable gate array (FPGA) circuits [1,2,3,4], or by employing recent achievements in spintronics [5,6,7], phase change materials [8,9] or other components.
Different computational schemes have been developed, such as artificial neural networks (ANNs), support vector machines or reservoir computing as a special type of recurrent ANN [10,11,12]. Simulating the human brain with recurrent neural networks necessitates calibration of weights and connections of the nodes in the network; this time-consuming task can be avoided by using a reservoir in which weights between input values and reservoir neurons are either set arbitrarily or optimized [13,14], while the weights between reservoir neurons and output layer are linearly trained, thus speeding up the training process [15,16]. For all these approaches, it is necessary to produce artificial neurons and synapses that are able to transfer and modulate data. Often, such artificial neurons and synapses are produced by electronic devices with a variable resistance which represents the synaptic weight [17,18,19].
Several approaches to neuro-inspired signal processing are based on magnetic materials. There is a rich tradition of such research, firstly introduced by works of Allwood, Cowburn et al. [20,21,22] and more recently by Grollier et al. [23,24], to name just a few. Generally, signal processing necessitates a deterministic or stochastic correlation between input and output values. This means that not only input and output need to be defined, but also data transport and processing between them. In several approaches, nanofibers or nanofiber networks are used for these tasks, in which data can be transported and manipulated in the form of domain walls [25,26,27]. The static and dynamic magnetic properties of such nanofibers depend strongly on their geometry, material and orientation with respect to the external magnetic field [28,29,30,31].
From the perspective of neuroscience and neuronal spikes, the excitation of a neuron once a given threshold level is exceeded is a fundamental effect [32,33,34]. In the language of bioelectronics, this can be interpreted in the following way: if the excitations from several inputs overlap constructively, a defined energy barrier is overcome, and the output action potential differs from zero. Hence, since biological signals have different amplitudes, a small network of ferromagnetic fibers, using magnetization dynamics to provide a wide enough range of vector values, might be a good choice for mimicking neuronal and synaptic activity.
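This thresholding principle can be sketched in a few lines; the function and argument names below are illustrative placeholders, not taken from the paper:

```python
def neuron_output(excitations, threshold):
    """Fire (1) only if the constructively overlapping input
    excitations together cross the threshold, else stay at 0.
    All names here are illustrative placeholders."""
    return 1 if sum(excitations) >= threshold else 0

# Two sub-threshold inputs only trigger an output when they overlap:
neuron_output([0.4], 0.7)        # a single input stays below threshold
neuron_output([0.4, 0.4], 0.7)   # constructive overlap crosses it
```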
Here, we investigate three-semicircle systems with two inputs and three outputs by dynamic micromagnetic simulations. Our results show that, depending on the chosen threshold values, different data processing operations can be defined in this simple system. It must be mentioned that the nanofiber network presented here forms a single artificial neuronal cell body, receiving signals and performing data processing, which can be used as a part of a forward-propagating neural network [35,36,37]. It does not form a full artificial neural network with axons and synapses, which has to be implemented after establishing the functionality of these devices. Generally, the functionalities of the elements found in the human brain are not 100% reflected by such a neuromorphic approach.

2. Materials and Methods

Micromagnetic simulations were carried out using the micromagnetic solver Magpar, which dynamically solves the Landau–Lifshitz–Gilbert equation of motion [38]. The simulation parameters were chosen as follows: saturation magnetization J_S = 1.005 T, damping constant α = 0.02, exchange constant A = 1.3 × 10⁻¹¹ J/m, and anisotropy constant equal to zero, since permalloy (Py) was chosen as the material. The total length of each single fiber is 1570 nm, with a bending radius of 500 nm and a cross-section of 10 nm × 60 nm, meshed with an average cell size of 3.3 nm. The externally applied field is 1 T at a frequency of 500 MHz, applied at local input areas of 60 nm × 60 nm × 10 nm. The simulated geometry is depicted in Figure 1. This combination of material and dimensions was investigated before and found suitable to enable domain wall nucleation and propagation by a local rotating magnetic field of the frequency chosen here [39]. Such nanostructures can be produced by e-beam lithography or similar methods from different materials, typically on Si wafers or other non-magnetic flat surfaces, but also in free-standing form [40,41,42].
Micromagnetic simulations are performed with four input combinations, defined by in-plane rotating fields: LL, LR, RL and RR, where L = counterclockwise and R = clockwise. These inputs feed information into the system. The resulting output signals are derived at positions A, B and C, meaning that we obtain magnetization vectors whose components can take values from −1 to +1. Then, for a given component (separately for M_x, M_y and M_z), the weighted sum is calculated. This step is not yet implemented physically; possible realizations are connections with different lengths, diameters or, in general, different probabilities for data transport in a defined time.
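As a sketch of this read-out, the following uses placeholder random magnetization traces (the real values come from the Magpar simulation); the array names are ours, while the weights correspond to one combination used in the paper:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# The four input combinations (L = counterclockwise, R = clockwise):
combos = ["".join(p) for p in itertools.product("LR", repeat=2)]  # LL, LR, RL, RR

# Placeholder data: one (Mx, My, Mz) vector per time step at outputs A, B, C;
# each component lies within [-1, +1].
n_steps = 1000
outputs = {pos: rng.uniform(-1.0, 1.0, size=(n_steps, 3)) for pos in "ABC"}

# Weighted sum per vector component; the weights fulfill w_A + w_B + w_C = 1.
weights = {"A": 0.45, "B": 0.1, "C": 0.45}
m_tot = sum(weights[pos] * outputs[pos] for pos in "ABC")  # shape (n_steps, 3)
```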
It should be mentioned that while this is a proof-of-concept, building up the system in reality is technologically possible. The dimensions chosen here are relatively large and well suited for lithographic production. Applying localized varying magnetic fields at regions of some ten to a few hundred nanometers diameter is usually performed by microwave technique, as shown in diverse papers [43,44,45,46].

3. Results and Discussion

As an example, the calculation

M_x^(tot) = w_A · M_x^(A) + w_B · M_x^(B) + w_C · M_x^(C),    (1)

imitates similar combinations found in neural networks, with w_i defining the weights of the respective outputs. The total signal M_x^(tot) is then normalized by its maximum value occurring during the simulation, usually related to 150 ns or 200 ns. In this way, the values of the normalized component M_x^(tot,norm) fall in the range [−1; +1]. Next, for an assumed threshold value M_th, a digitization operation is performed, i.e., the transformation from M_x^(tot,norm) into M_x^(tot,dig), namely
M_x^(tot,dig) = 1 if M_x^(tot,norm) ≥ M_th; 0 if M_x^(tot,norm) < M_th.    (2)
Since M_x^(tot,norm) ∈ [−1; +1], we tested M_th ∈ [−1; +1] with a resolution of 0.1. The steps of data preparation are shown in Figure 2 for w_A = w_C = 0.45, w_B = 0.1, M_th = 0.7 and the RL combination of rotating fields.
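The normalization and digitization steps above can be condensed into a short helper; the sinusoidal stand-in signal is hypothetical (the real M_x^(tot) comes from the simulation), and normalization by the maximum absolute value is an assumption made here to guarantee the range [−1; +1]:

```python
import numpy as np

def digitize(m_tot, m_th):
    """Normalize a weighted-sum signal by its maximum magnitude,
    then binarize it against the threshold m_th (Equation (2)):
    1 where the normalized value is >= m_th, 0 otherwise."""
    m_norm = m_tot / np.max(np.abs(m_tot))
    return (m_norm >= m_th).astype(int)

# Hypothetical stand-in for M_x^(tot) over 200 ns:
t = np.linspace(0.0, 200.0, 2001)              # time in ns
signal = 0.8 * np.sin(2.0 * np.pi * 0.5 * t)   # 500 MHz corresponds to 0.5/ns

# Thresholds as in the paper: -1 to +1 in steps of 0.1.
for m_th in np.arange(-1.0, 1.05, 0.1):
    digital = digitize(signal, m_th)
```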
Firstly, Figure 2a–c shows the single outputs. It is visible here that outputs A and C show very fast oscillations between maximum and minimum x components, i.e., fast-moving and oscillating domain walls (cf. domain walls near output A in Figure 1). Output B behaves differently, as also visible in Figure 1. Here, the interaction between the left and the right semi-circle results in a more stable situation, with some “spikes” visible when domain walls move through this output. It should be mentioned that these spikes are not directly suitable to be used as logic results, as in spin-torque oscillators used to prepare neural circuits [47]. Instead, weighted sums of the three outputs are applied here to allow for differentiation between the input combinations LL, LR, RL and RR.
In all three outputs, it is visible that the signal starts only at approx. 20–25 ns. This time gap between the onset of the rotating input fields and the onset of receiving an output value is correlated with the velocity of the domain walls moving through the nanostrips (cf. Figure 1). After this time gap, the initial starting configuration is overwritten with the introduced signals.
Due to the spatial symmetry of our fiber-based neuron, for all simulated cases we assume w_A = w_C ≠ w_B, while w_A + w_B + w_C = 1. As an example of the calculation results, Table 1 presents the case RL with w_A = w_C = 0.45, w_B = 0.1; w_A = w_C = 0.40, w_B = 0.2; or w_A = w_C = 0.35, w_B = 0.3, for several representative threshold values M_th.
Here, the influence of the threshold value is obvious: smaller values of M_th are more easily exceeded by M_x^(tot,norm), so that smaller threshold values lead to more positions being 1 than 0, and vice versa. In this way, it is possible to define the output with a desired probability.
This, on the other hand, is the basis for the common process of adding up signals. For this, it is necessary that not only “learning” is realized, but also “forgetting”; i.e., if a certain stimulus value (here named in this way to avoid confusion with the threshold values defined before) is not reached after a certain time, the sum of the signals is set back to its original value (here zero) and summing up starts again [48]. This process can be realized, e.g., by a mono-domain magnetic tunnel junction in which the input stimuli frequency must be high enough to allow for crossing the energy barrier that separates two stable states [49]. Quite similarly, here it is possible to define a certain time (i.e., number of simulation steps) after which a stimulus value must be crossed; else the state reached is “forgotten” and the process starts again from zero.
Figure 3 depicts the corresponding summing mechanism for two examples using the weights w_A = w_C = 0.35, w_B = 0.3 and thresholds of 0.4 or 0.2, respectively. The first 25 ns, in which the input signals do not yet fully influence the output signals, are neglected. While Figure 3a,b show the original signals derived for these thresholds, Figure 3c,d show the corresponding “learning and forgetting” simulation. Here, each “1” in Figure 3a,b adds a defined value in Figure 3c,d (“learning”), while for each “0” in Figure 3a,b, a value > 0 in Figure 3c,d is reduced by a defined value (“forgetting”). The leaking rate, defining the “forgetting”, is set to values from 0.1 to 0.3; a value of 0.1, e.g., means that one “learning” step is “forgotten” after 10 “forgetting” steps, etc.
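A minimal sketch of this leaky “learning and forgetting” accumulator follows; the parameter names learn_step, leak_rate and stimulus are our placeholders, with values matching those discussed in the text:

```python
import numpy as np

def accumulate(digital, learn_step=1.0, leak_rate=0.2, stimulus=2.5):
    """Leaky summation of a binary signal: each 1 adds learn_step
    ("learning"); each 0 subtracts leak_rate * learn_step, but never
    below zero ("forgetting"). With leak_rate = 0.2, one learning step
    is forgotten again after 5 forgetting steps. Returns the trace and
    whether the stimulus value was ever crossed."""
    level, trace = 0.0, []
    for bit in digital:
        if bit:
            level += learn_step
        else:
            level = max(0.0, level - leak_rate * learn_step)
        trace.append(level)
    trace = np.asarray(trace)
    return trace, bool((trace >= stimulus).any())
```

For a dense spike train such as [1, 1, 1] the stimulus value 2.5 is crossed, while a single spike followed by silence leaks back toward zero.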
Comparing these exemplary threshold values (cf. Table 1), it becomes clear that they should be correlated with different stimulus values, here chosen as 0.5 (Figure 3c) or 2.5 (Figure 3d), respectively, as marked by the horizontal lines. Comparing “learning” (i.e., ranges of increasing values) and “forgetting” (i.e., ranges of decreasing values), the former shows a certain stochastic behavior, as mentioned before, while “forgetting” is realized here by a simple linear function, as explained in the caption of Figure 3. This could be modified in a next step to mimic the human brain more adequately; however, this was not within the scope of this project.
While we have so far concentrated on the input case RL to describe the system and its basic functionality, a discussion of the influence of the input on the weighted output is still necessary. Table 2, Table 3 and Table 4 depict the values of the digital signals, averaged over the time in which a signal can be measured at the outputs (i.e., after 25 ns, cf. Figure 2), derived for the different input cases LL, LR, RL and RR for the aforementioned combinations of weights and threshold values. It must be mentioned that these weights are just a few from a broad range of possible combinations. The column RL corresponds to the values depicted in Figure 2; Table 2, Table 3 and Table 4 correspond to the weight combinations in columns 2–4 of Table 1.
In particular, a threshold value of 0 seems suitable to differentiate between different input combinations. It must be mentioned that the differences in the averaged values found for this threshold value (and others) strongly depend on the chosen weights. While for the combination w_A = w_C = 0.45, w_B = 0.1, i.e., the smallest value of w_B chosen here, LL and RR give quite similar results for M_th = 0, LR and RL differ clearly from each other and from the symmetric cases (Table 2). This is different for the cases in which larger values of w_B were chosen (Table 3 and Table 4), where LL and RR give quite different results, while LL and RL (Table 3) or LL and LR (Table 4) give nearly identical averaged values.
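The averaged values reported in Tables 2–4 amount to the fraction of time steps, after the 25 ns onset, in which the digital signal equals 1. A sketch with hypothetical data (the trace below is a stand-in, not simulation output):

```python
import numpy as np

def averaged_digital(t, digital, t_onset=25.0):
    """Mean of the binary signal over the window in which an output
    signal is actually present (t >= t_onset, here 25 ns); this equals
    the probability that the weighted, normalized output exceeds the
    chosen threshold."""
    mask = t >= t_onset
    return float(digital[mask].mean())

# Hypothetical digital trace: silent before 25 ns, then mostly 1.
t = np.linspace(0.0, 200.0, 801)
digital = ((t >= 25.0) & (np.sin(t) > -0.5)).astype(int)
p = averaged_digital(t, digital)
```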
As this short overview shows, the system suggested here cannot be used to build up a classical binary logic, as it is available in a transistor, etc. Instead, a more complicated logic is enabled with a broad range of possible correlations between inputs and outputs, defined by the combinations of weights of the single outputs. Similar to artificial neural networks, setting these weights will define the results of the performed logic operations, i.e., the correlation between inputs and output. In a full neural network, the averaged values can also be used to define new weights in the next layer.
As these examples show, domain wall motion in small nanowire networks can serve to simulate neuronal behavior, including “learning” and “forgetting”.

4. Conclusions

In this study, artificial neurons were modeled as three coupled semicircle fibers in which domain walls are excited by rotating magnetic fields. These inputs, defined by pairs of rotational orientations (clockwise/counterclockwise), result in different outputs, which were investigated in terms of “learning” and “forgetting”. Depending on the number of added signals per time, defining “learning”, and on the leaking rate, defining “forgetting”, these artificial neurons are found to reach a defined stimulus value with a certain probability. Such simple systems, which can be prepared by lithographic processes, can thus be used as parts of neuromorphic hardware.

Author Contributions

Conceptualization, T.B. and A.E.; methodology, T.B.; software, J.G. and P.S.; formal analysis, T.B. and A.E.; investigation, T.B.; writing—original draft preparation, T.B. and A.E.; writing—review and editing, all authors; visualization, T.B. and A.E. All authors have read and agreed to the published version of the manuscript.

Funding

Research efforts were partially supported (T. B.) by the Silesian University of Technology Rector’s Grant no. 14/030/RGJ21/00110.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

References

  1. Jiménez-Fernández, A.; Jimenez-Moreno, G.; Linares-Barranco, A.; Dominguez-Morales, M.J.; Paz-Vicente, R.; Civit-Balcells, A. A neuro-inspired spike-based PID motor controller for multi-motor robots with low cost FPGAs. Sensors 2012, 12, 3831–3856. [Google Scholar] [CrossRef] [PubMed][Green Version]
  2. Cerezuela Escudero, E.; Pérez Pena, F.; Paz Vicente, R.; Jimenez-Fernandez, A.; Jimenez Moreno, G.; Morgado-Estevez, A. Real-time neuro-inspired sound source localization and tracking architecture applied to a robotic platform. Neurocomputing 2018, 283, 129–139. [Google Scholar] [CrossRef]
  3. Dominguez-Morales, M.; Domínguez-Morales, J.P.; Jiménez-Fernández, Á.; Linares-Barranco, A.; Jiménez-Moreno, G. Stereo Matching in Address-Event-Representation (AER) bio-inspired binocular systems in a Field-Programmable Gate Array (FPGA). Electronics 2019, 8, 410. [Google Scholar] [CrossRef][Green Version]
  4. Prashanth, B.; Ahmed, M.R. FPGA Implementation of bio-inspired computing architecture based on simple neuron model. In Proceedings of the 2020 International Conference on Artificial Intelligence and Signal Processing (AISP), Amaravati, India, 10–12 January 2020; pp. 1–6. [Google Scholar]
  5. Locatelli, N.; Grollier, J.; Querlioz, D.; Vincent, A.F.; Mizrahi, A.; Friedman, J.S.; Vodenicarevic, D.; Kim, J.-V.; Klein, J.-O.; Zhao, W. Spintronic devices as key elements for energy-efficient neuroinspired architectures. Des. Automat. Test. Eur. Conf. Exhib. 2015, 2015, 994–999. [Google Scholar]
  6. Sengupta, A.; Roy, K. Neuromorphic computing enabled by physics of electron spins: Prospects and perspectives. Appl. Phys. Express 2018, 11, 030101. [Google Scholar] [CrossRef]
  7. Resch, S.; Khatamifard, S.K.; Chowdhury, Z.I.; Zabihi, M.; Zhao, Z.Y.; Wang, J.-P.; Sapatnekar, S.S.; Karpuzcu, U.R. PIMBALL: Binary neural networks in spintronic memory. ACM Transac. Architect. Code Optim. 2019, 16, 41. [Google Scholar] [CrossRef][Green Version]
  8. Zhang, W.; Mazzarello, R.; Wuttig, M.; Ma, E. Designing crystallization in phase-change materials for universal memory and neuro-inspired computing. Nat. Rev. Mater. 2019, 4, 150–168. [Google Scholar] [CrossRef]
  9. Wang, Q.; Niu, G.; Ren, W.; Wang, R.; Chen, X.; Li, X.; Ye, Z.; Xie, Y.; Song, S.; Song, Z. Phase change random access memory for neuro-inspired computing. Adv. Electron. Mater. 2021, 2001241. [Google Scholar] [CrossRef]
  10. Prashanth, B.U.V.; Ahmed, M.R. Design and performance analysis of artificial neural network based artificial synapse for bio-inspired computing. In Advances in Intelligent Systems and Computing; Springer: Berlin/Heidelberg, Germany, 2020; Volume 1108, pp. 1294–1302. [Google Scholar]
  11. Richhariya, B.; Tanveer, M. EEG signal classification using universum support vector machine. Expert Syst. Appl. 2018, 106, 169–182. [Google Scholar] [CrossRef]
  12. Soriano, M.C.; Brunner, D.; Escalona-Morãn, M.; Mirasso, C.; Fischer, I. Minimal approach to neuro-inspired information processing. Front. Comput. Neurosci. 2015, 9, 68. [Google Scholar] [CrossRef][Green Version]
  13. Huang, G.; Huang, G.-B.; Song, S.; You, K. Trends in extreme learning machines: A review. Neural Netw. 2015, 61, 32–48. [Google Scholar] [CrossRef] [PubMed]
  14. Ertuğrul, Ö.F. A novel randomized machine learning approach: Reservoir computing extreme learning machine. Appl. Soft Comput. 2020, 94, 106433. [Google Scholar] [CrossRef]
  15. Lukoševičius, M.; Jaeger, H.; Schrauwen, B. Reservoir computing trends. KI Künstliche Intell. 2012, 26, 365–371. [Google Scholar] [CrossRef]
  16. Tanaka, G.; Yamane, T.; Héroux, J.B.; Nakane, R.; Kanazawa, N.; Takeda, S.; Numata, H.; Nakano, D.; Hirose, A. Recent advances in physical reservoir computing: A review. Neural Netw. 2019, 115, 100–123. [Google Scholar] [CrossRef]
  17. Wong, H.-S.P.; Salahuddin, S. Memory leads the way to better computing. Nat. Nanotechnol. 2015, 10, 191–194. [Google Scholar] [CrossRef] [PubMed][Green Version]
  18. Wu, C.; Kim, T.W.; Choi, H.Y.; Strukov, D.B.; Yang, J.J. Flexible three-dimensional artificial synapse networks with correlated learning and trainable memory capability. Nat. Commun. 2017, 8, 1–9. [Google Scholar] [CrossRef][Green Version]
  19. Feng, X.; Li, Y.; Wang, L.; Chen, S.; Yu, Z.G.; Tan, W.C.; Macadam, N.; Hu, G.; Huang, L.; Chen, L.; et al. A fully printed flexible mos 2 memristive artificial synapse with femtojoule switching energy. Adv. Electron. Mater. 2019, 5, 1900740. [Google Scholar] [CrossRef][Green Version]
  20. Allwood, D.A.; Vernier, N.; Xiong, G.; Cooke, M.D.; Atkinson, D.; Faulkner, C.C.; Cowburn, R. Shifted hysteresis loops from magnetic nanowires. Appl. Phys. Lett. 2002, 81, 4005–4007. [Google Scholar] [CrossRef]
  21. Cowburn, R.P.; Allwood, D.A.; Xiong, G.; Cooke, M.D. Domain wall injection and propagation in planar Permalloy nanowires. J. Appl. Phys. 2002, 91, 6949. [Google Scholar] [CrossRef]
  22. Allwood, D.A.; Xiong, G.; Cowburn, R. Domain wall cloning in magnetic nanowires. J. Appl. Phys. 2007, 101, 24308. [Google Scholar] [CrossRef]
  23. Grollier, J.; Querlioz, D.; Stiles, M.D. Spintronic nanodevices for bioinspired computing. Proc. IEEE 2016, 104, 2024–2039. [Google Scholar] [CrossRef][Green Version]
  24. Lequeux, S.; Sampaio, J.; Cros, V.; Yakushiji, K.; Fukushima, A.; Matsumoto, R.; Kubota, H.; Yuasa, S.; Grollier, J. A magnetic synapse: Multilevel spin-torque memristor with perpendicular anisotropy. Sci. Rep. 2016, 6, 31510. [Google Scholar] [CrossRef] [PubMed][Green Version]
  25. Ryu, K.-S.; Thomas, L.; Yang, S.-H.; Parkin, S.S.P. Current induced tilting of domain walls in high velocity motion along perpendicularly magnetized micron-sized Co/Ni/Co racetracks. Appl. Phys. Express 2012, 5, 093006. [Google Scholar] [CrossRef]
  26. Yang, S.-H.; Ryu, K.-S.; Parkin, S.S.P. Domain-wall velocities of up to 750 m s−1 driven by exchange-coupling torque in synthetic antiferromagnets. Nat. Nanotechnol. 2015, 10, 221–226. [Google Scholar] [CrossRef] [PubMed]
  27. Alejos, O.; Raposo, V.; Sanchez-Tejerina, L.; Martinez, E. Efficient and controlled domain wall nucleation for magnetic shift registers. Sci. Rep. 2017, 7, 11909. [Google Scholar] [CrossRef] [PubMed][Green Version]
  28. Garg, C.; Yang, S.-H.; Phung, T.; Pushp, A.; Parkin, S.S.P. Dramatic influence of curvature of nanowire on chiral domain wall velocity. Sci. Adv. 2017, 3, e1602804. [Google Scholar] [CrossRef] [PubMed][Green Version]
  29. Blachowicz, T.; Ehrmann, A. Magnetization reversal in bent nanofibers of different cross sections. J. Appl. Phys. 2018, 124, 152112. [Google Scholar] [CrossRef][Green Version]
  30. Kern, P.; Döpke, C.; Blachowicz, T.; Steblinski, P.; Ehrmann, A. Magnetization reversal in ferromagnetic Fibonacci nano-spirals. J. Magn. Magn. Mater. 2019, 484, 37–41. [Google Scholar] [CrossRef]
  31. Blachowicz, T.; Döpke, C.; Ehrmann, A. Micromagnetic simulations of chaotic ferromagnetic nanofiber networks. Nanomaterials 2020, 10, 738. [Google Scholar] [CrossRef][Green Version]
  32. Pérez-Peña, F.; Morgado-Estevez, A.; Linares-Barranco, A.; Jiménez-Fernández, A.; Gomez-Rodriguez, F.; Jimenez-Moreno, G.; López-Coronado, J. Neuro-Inspired Spike-based motion: From dynamic vision sensor to robot motor open-loop control through Spike-VITE. Sensors 2013, 13, 15805–15832. [Google Scholar] [CrossRef][Green Version]
  33. Susi, G.; Toro, L.A.; Canuet, L.; López, M.E.; Maestu, F.; Mirasso, C.R.; Pereda, E. A neuro-inspired system for online learning and recognition of parallel spike trains, based on spike latency, and heterosynaptic STDP. Front. Neurosci. 2018, 12, 780. [Google Scholar] [CrossRef] [PubMed]
  34. Zhang, W.; Gao, B.; Tang, J.; Yao, P.; Yu, S.; Chang, M.-F.; Yoo, H.-J.; Qian, H.; Wu, H. Neuro-inspired computing chips. Nat. Electron. 2020, 3, 371–382. [Google Scholar] [CrossRef]
  35. Van de Burgt, Y.; Lubberman, E.; Fuller, E.J.; Keene, S.T.; Faria, G.C.; Agarwal, S.; Marinela, M.J.; Talin, A.A.; Salleo, A. A non-volatile organic electrochemical device as a low-voltage artificial synapse for neuromorphic computing. Nat. Mater. 2017, 16, 414–418. [Google Scholar] [CrossRef] [PubMed]
  36. Lv, Z.; Zhou, Y.; Han, S.-T.; Roy, V. From biomaterial-based data storage to bio-inspired artificial synapse. Mater. Today 2018, 21, 537–552. [Google Scholar] [CrossRef]
  37. Tian, B.; Liu, L.; Yan, M.; Wang, J.L.; Zhao, Q.B.; Zhong, N.; Xiang, P.H.; Sun, L.; Peng, H.; Shen, H.; et al. A robust artificial synapse based on organic ferroelectric polymer. Adv. Electron. Mater. 2019, 5, 1800600. [Google Scholar] [CrossRef][Green Version]
  38. Scholz, W.; Fidler, J.; Schrefl, T.; Suess, D.; Dittrich, R.; Forster, H.; Tsiantos, V. Scalable parallel micromagnetic solvers for magnetic nanostructures. Comput. Mater. Sci. 2003, 28, 366–383. [Google Scholar] [CrossRef]
  39. Blachowicz, T.; Ehrmann, A. Spintronics—Theory, Modelling, Devices; De Gruyter: Berlin/Heidelberg, Germany, 2019. [Google Scholar]
  40. Enrico, A.; Dubois, V.; Niklaus, F.; Stemme, G. Scalable manufacturing of single nanowire devices using crack-defined shadow mask lithography. ACS Appl. Mater. Interfaces 2019, 11, 8217–8226. [Google Scholar] [CrossRef][Green Version]
  41. Mun, J.H.; Cha, S.K.; Kim, Y.C.; Yun, T.; Choi, Y.J.; Jin, H.M.; Lee, J.E.; Jeon, H.U.; Kim, S.Y.; Kim, S.O. Controlled segmentation of metal nanowire array by block copolymer lithography and reversible ion loading. Small 2017, 13, 1603939. [Google Scholar] [CrossRef] [PubMed]
  42. Askey, J.; Hunt, M.O.; Langbein, W.; Ladak, S. Use of two-photon lithography with a negative resist and processing to realise cylindrical magnetic nanowires. Nanomaterials 2020, 10, 429. [Google Scholar] [CrossRef] [PubMed][Green Version]
  43. Davies, C.S.; Kruglyak, V.V. Generation of propagating spin waves from edges of magnetic nanostructures pumped by uniform microwave magnetic field. IEEE Trans. Magn. 2016, 52, 1–4. [Google Scholar] [CrossRef]
  44. Gruszecki, P.; Kasprzak, M.; Serebryannikov, A.E.; Krawczyk, M.; Śmigaj, W. Microwave excitation of spin wave beams in thin ferromagnetic films. Sci. Rep. 2016, 6, 22367. [Google Scholar] [CrossRef] [PubMed][Green Version]
  45. Mushenok, F.B.; Dost, R.; Davies, C.S.; Allwood, D.A.; Inkson, B.J.; Hrkac, G.; Kruglyak, V.V. Broadband conversion of microwaves into propagating spin waves in patterned magnetic structures. Appl. Phys. Lett. 2017, 111, 042404. [Google Scholar] [CrossRef]
  46. Haldar, A.; Adeyeye, A.O. Microwave assisted gating of spin wave propagation. Appl. Phys. Lett. 2020, 116, 162403. [Google Scholar] [CrossRef][Green Version]
  47. Hoppensteadt, F. Spin torque oscillator neuroanalog of von Neumann’s microwave computer. Biosystems 2015, 136, 99–104. [Google Scholar] [CrossRef] [PubMed]
  48. Blachowicz, T.; Ehrmann, A. Magnetic elements for neuromorphic computing. Molecules 2020, 25, 2550. [Google Scholar] [CrossRef] [PubMed]
  49. Sengupta, A.; Roy, K. Encoding neural and synaptic functionalities in electron spin: A pathway to efficient neuromorphic computing. Appl. Phys. Rev. 2017, 4, 041105. [Google Scholar] [CrossRef]
Figure 1. Simulated geometry, consisting of three magnetic half-circles with outputs (A–C) and inputs (D,E). The orientation of the magnetization is depicted by the color code given in the inset: red = up, blue = down, green = horizontal.
Figure 2. Data preparation steps: (a) time-resolved output of position A, (b) position B and (c) position C; (d) the weighted sum over these outputs, as calculated in Equation (1); (e) the normalized weighted sum; and (f) the digitized signal, equal to 1 where the normalized sum exceeds the defined threshold.
Figure 3. Magnetization components calculated for w_A = w_C = 0.35, w_B = 0.3 and threshold values of (a) M_th = 0.4 and (b) M_th = 0.2; summed-up signals for different leaking rates, calculated for the same weights and threshold values of (c) M_th = 0.4 and (d) M_th = 0.2. Leaking rates are defined as “forgetting” rates; i.e., after 5 steps with a leaking rate of 0.2, a single “learning” step is “forgotten” again.
Table 1. Digital signals, derived for the case RL and different combinations of weights and threshold values, as explained in the text. The x-axes differ to make the signals better visible.
[The table cells are time-resolved plots of the digital signals; rows correspond to M_th = 0.8, 0.4, 0, −0.4 and −0.8, and columns to the weight combinations w_A = w_C = 0.45, w_B = 0.1; w_A = w_C = 0.40, w_B = 0.2; and w_A = w_C = 0.35, w_B = 0.3. The plots are not reproducible in text form.]
Table 2. Averaged values of the digital signals for the weights w_A = w_C = 0.45, w_B = 0.1.

M_th | LL     | LR     | RL     | RR
+0.8 | 0.0032 | 0.0022 | 0.0015 | 0.0008
+0.4 | 0.0487 | 0.0591 | 0.0681 | 0.0809
 0.0 | 0.3010 | 0.4839 | 0.5593 | 0.3169
−0.4 | 0.9867 | 0.9330 | 0.9452 | 0.9839
−0.8 | 0.9997 | 0.9985 | 0.9981 | 1.0000
Table 3. Averaged values of the digital signals for the weights w_A = w_C = 0.40, w_B = 0.2.

M_th | LL     | LR     | RL     | RR
+0.8 | 0.0010 | 0.0022 | 0.0021 | 0.0019
+0.4 | 0.1431 | 0.0607 | 0.1426 | 0.1976
 0.0 | 0.5754 | 0.4820 | 0.5754 | 0.8356
−0.4 | 0.9750 | 0.9286 | 0.9495 | 0.9982
−0.8 | 0.9993 | 0.9980 | 0.9987 | 1.0000
Table 4. Averaged values of the digital signals for the weights w_A = w_C = 0.35, w_B = 0.3.

M_th | LL     | LR     | RL     | RR
+0.8 | 0.0062 | 0.0018 | 0.0013 | 0.0032
+0.4 | 0.1630 | 0.0526 | 0.1226 | 0.3047
 0.0 | 0.4832 | 0.4811 | 0.5940 | 0.9532
−0.4 | 0.9999 | 0.9365 | 0.9704 | 1.0000
−0.8 | 1.0000 | 0.9989 | 0.9999 | 1.0000
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.