# Temporal Modeling of Neural Net Input/Output Behaviors: The Case of XOR


## Abstract


## 1. Introduction

## 2. System Specification and I/O Behaviors

## 3. Systems Implementation of a Memoryless Function

## 4. DEVS Deterministic Representation of Gelenbe Neuron

- $X=\{{P}^{+},{P}^{-}\}$ is the set of positive and negative input pulses,
- $Y=\left\{P\right\}$ is the set of plain pulse outputs,
- $S=\{0,1,2\}$ is the set of non-negative integer states,
- ${\delta}_{ext}(s,e,{P}^{+})=s+1$ is the external transition increasing the state by 1 when receiving a positive pulse,
- ${\delta}_{ext}(s,e,{P}^{+},{P}^{+})=s+2$ is the external transition increasing the state by 2 when simultaneously receiving two positive pulses,
- ${\delta}_{ext}(s,e,{P}^{-})=max(s-1,0)$ is the external transition decreasing the state by 1 (except at zero) when receiving a negative pulse,
- ${\delta}_{int}(s>0)=max(s-1,0)$ is the internal transition function, decreasing a non-zero state by one,
- $\lambda (s>0)=P$ is the output function, sending a pulse for non-zero states,
- $\lambda \left(0\right)=\varphi $ is the output function, sending the non-event for the passive zero state,
- $ta(s>0)=tfire$ is the time advance, $tfire$, for non-zero states, and
- $ta\left(0\right)=+\infty $ is the infinite time advance for the passive zero state.

- $X=\{{P}^{+},{P}^{-}\}$ is the set of positive and negative input pulses,
- $Y=\left\{P\right\}$ is the set of plain pulse outputs,
- $S=\{0,1,2\}$ is the set of non-negative integer states,
- ${\delta}_{ext}(s,e,{P}^{+})=s+1$ is the external transition increasing the state by 1 when receiving a positive pulse,
- ${\delta}_{ext}(s,e,{P}^{+},{P}^{+})=s+2$ is the external transition increasing the state by 2 when simultaneously receiving two positive pulses,
- ${\delta}_{ext}(s,e,{P}^{-})=max(s-1,0)$ is the external transition decreasing the state by 1 (except at zero) when receiving a negative pulse,
- ${\delta}_{int}(s>0)=max(s-1,0)$ is the internal transition function, decreasing a non-zero state by one,
- $\lambda (s\ge Thresh)=P$ is the output function, sending a pulse for states at or above the threshold,
- $\lambda (s<Thresh)=\varphi $ is the output function, sending the non-event for states below the threshold,
- $ta(s\ge Thresh)=tfire$ is the time advance, $tfire$, for states at or above the threshold, and
- $ta(s<Thresh)=tdecay$ is the time advance, $tdecay$, for states below the threshold.
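The thresholded neuron above can be sketched as a small event-driven Python class (a minimal illustration, not the authors' implementation; the names `GelenbeNeuron`, `delta_ext`, `delta_int`, `out`, and `ta` are our own, and the zero state is kept passive as in the first listing, which this sketch recovers with `thresh=1`):

```python
import math

class GelenbeNeuron:
    """Sketch of the thresholded DEVS Gelenbe neuron.

    The state s counts accumulated potential; Thresh, tfire, and
    tdecay play the roles given in the bullet list above.
    """

    def __init__(self, thresh=1, tfire=1.0, tdecay=2.0):
        self.s = 0
        self.thresh = thresh
        self.tfire = tfire
        self.tdecay = tdecay

    def delta_ext(self, e, pulses):
        # External transition: +1 per positive pulse (simultaneous P+
        # pulses accumulate), -1 per negative pulse, floored at zero.
        for p in pulses:
            if p == "P+":
                self.s += 1
            elif p == "P-":
                self.s = max(self.s - 1, 0)

    def delta_int(self):
        # Internal transition: a non-zero state decreases by one.
        self.s = max(self.s - 1, 0)

    def out(self):
        # Output function lambda: a pulse at or above threshold,
        # the non-event (None) below it.
        return "P" if self.s >= self.thresh else None

    def ta(self):
        # Time advance: passive at zero, tfire at or above the
        # threshold, tdecay for positive sub-threshold states.
        if self.s == 0:
            return math.inf
        return self.tfire if self.s >= self.thresh else self.tdecay
```

With `thresh=2`, for example, a single positive pulse leaves the neuron below threshold (non-event output, `tdecay` advance), while a simultaneous pair pushes it to firing.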

## 5. Realization of the XOR Function

- When there are no input pulses, there are no output pulses,
- When a single input pulse arrives and is not followed within tfireAnd by a second pulse, an output pulse is produced tfireOr after the input pulse's arrival time.
- When a pair of input pulses arrives within tfireAnd of each other, no output pulse is produced.
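The three behavioral rules can be condensed into a single function over arrival times (a hedged sketch of the I/O behavior only, not of the coupled model; `xor_output_time` and its argument names are ours, and only the earliest output pulse is reported):

```python
def xor_output_time(t_a, t_b, tfire_and, tfire_or):
    """Return the time of the (first) XOR output pulse, or None.

    t_a, t_b: arrival times of pulses on the two input lines
    (None means no pulse arrived on that line).
    """
    arrivals = [t for t in (t_a, t_b) if t is not None]
    if not arrivals:
        return None                      # no inputs -> no output
    if len(arrivals) == 2 and abs(t_a - t_b) <= tfire_and:
        return None                      # coincident pair -> inhibited
    # A lone (uninhibited) pulse yields an output tfire_or
    # after its arrival time.
    return min(arrivals) + tfire_or
```

This makes the temporal character of the computation explicit: the truth table is indexed not by static bits but by pulse spacings relative to tfireAnd.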

## 6. Characterization of SNN I/O Behaviors and Computations

## 7. Probabilistic System Implementation of XOR

## 8. Discussion

- Time dispersion of pulses—the input arguments are encoded in pulses over a time base, where inter-arrival times make a difference in the output.
- Coincidence of pulses—in particular, whether pulses represent arguments from the same submitted input or subsequent submission depends on their spacing in time.
- End-to-end computation time—the total processing time in a multi-component concurrent system depends on relative phasing as well as component timings, and may be poorly estimated by summing individual execution cycles.
- Time for return to ground state—the time that must elapse before a system that has performed a computation is ready to receive new inputs may be longer than its computation time, as it requires all components to return to their ground states.
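The end-to-end timing point can be made concrete with a two-stage pipeline whose second stage may be mid-cycle when work arrives (a generic illustration under our own assumptions, not a model from the paper; `completion_time` is an illustrative name):

```python
def completion_time(t_arrive, s1, s2, stage2_free_at):
    """End-to-end completion time for a job through two stages.

    s1, s2: service (cycle) times of the two stages.
    stage2_free_at: when stage 2 finishes its current cycle;
    this phase term is what naive cycle-summing ignores.
    """
    t1 = t_arrive + s1                 # job leaves stage 1
    start2 = max(t1, stage2_free_at)   # wait if stage 2 is busy
    return start2 + s2
```

When the stages are in phase the total equals s1 + s2; an unfavorable phase offset stretches it beyond that sum, which is the point of the third bullet above.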

- As static recognizers of memoryless patterns, DNNs may become ultra-capable (analogous to AlphaGo progress [24]), but as representatives of human cognition, they may vastly overemphasize that one dimension and correspondingly underestimate the intelligent computational capabilities of humans and animals in other respects.
- As models of real neural processing, DNNs do not operate within the system temporal framework discussed here, and therefore may prove impractical in real-time applications which impose time and energy consumption constraints such as those just discussed [25].

## Author Contributions

## Conflicts of Interest

## Appendix A. Discrete Event System Specification (DEVS) Basic Model

## Appendix B. Simulation Relation

## Appendix C. Behavior of the Markov Model

- There is only one way to transition from s0 to sFire, and that is by going from s0 to s1 in the original model, which happens with posInputRate. Therefore, $P01=posInputRate$.
- Similarly, there is only one way to transition from sFire to s0, and this happens with $negInputRate+FireRate$. Therefore, $P10=negInputRate+FireRate$.
- The probability of remaining in sFire is $P11=1-P10$ (the outgoing probabilities from each state must sum to 1).
- Similarly, $P00=1-P01$.
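From these four probabilities, the long-run occupancy of s0 and sFire follows from the standard two-state balance equation $\pi_0 P01 = \pi_1 P10$ (a sketch treating the rates as per-step transition probabilities; `stationary` is an illustrative name):

```python
def stationary(pos_input_rate, neg_input_rate, fire_rate):
    """Stationary distribution (pi0, pi1) of the two-state chain.

    Transition probabilities are built exactly as in the bullets:
    P01 = posInputRate, P10 = negInputRate + FireRate,
    P00 = 1 - P01, P11 = 1 - P10.
    """
    p01 = pos_input_rate               # s0 -> sFire
    p10 = neg_input_rate + fire_rate   # sFire -> s0
    # Balance: pi0 * p01 = pi1 * p10, with pi0 + pi1 = 1.
    total = p01 + p10
    return p10 / total, p01 / total
```

For instance, with posInputRate equal to negInputRate + FireRate, the chain spends half its time in each state.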

## Appendix D. Possible Applications of DEVS Modeling to Random Neural Networks

## References

- Carandini, M. From circuits to behavior: A bridge too far? Nat. Neurosci. **2012**, 15, 507–509.
- Smith, L.S. Deep neural networks: The only show in town? In Proceedings of the Workshop on Can Deep Neural Networks (DNNs) Provide the Basis for Artificial General Intelligence (AGI) at AGI 2016, New York, NY, USA, 16–19 July 2016.
- Goertzel, B. Are There Deep Reasons Underlying the Pathologies of Today's Deep Learning Algorithms? In Artificial General Intelligence; Springer International Publishing: Cham, Switzerland, 2015; pp. 70–79.
- Paugam-Moisy, H.; Bohte, S. Computing with Spiking Neuron Networks. In Handbook of Natural Computing; Kok, J., Heskes, T., Eds.; Springer: Berlin/Heidelberg, Germany, 2009.
- Ghosh-Dastidar, S.; Hojjat, A. Spiking neural networks. Int. J. Neural Syst. **2009**, 19, 295–308.
- Minsky, M.; Papert, S. Perceptrons; MIT Press: Cambridge, MA, USA, 1969.
- Rumelhart, D.E.; Hinton, G.E.; Williams, R.J. Parallel Distributed Processing: Explorations in the Microstructure of Cognition. Vol. 1: Foundations; Rumelhart, D.E., McClelland, J.L., Eds.; MIT Press: Cambridge, MA, USA, 1986; pp. 318–362.
- Bland, R. Learning XOR: Exploring the Space of a Classic Problem; Computing Science Technical Report; Department of Computing Science and Mathematics, University of Stirling: Stirling, Scotland, June 1998.
- Toma, S.; Capocchi, L.; Federici, D. A New DEVS-Based Generic Artificial Neural Network Modeling Approach. In Proceedings of the EMSS 2011, Rome, Italy, 12 September 2011.
- Pessa, E. Neural Network Models: Usefulness and Limitations. In Nature-Inspired Computing: Concepts, Methodologies, Tools, and Applications; IGI Global: Hershey, PA, USA, 2017; pp. 368–395.
- Maass, W. Lower bounds for the computational power of spiking neural networks. Neural Comput. **1996**, 8, 1–40.
- Schmitt, M. On computing Boolean functions by a spiking neuron. Ann. Math. Artif. Intell. **1998**, 24, 181–191.
- Brette, R.; Rudolph, M.; Carnevale, T.; Hines, M.; Beeman, D.; Bower, J.M.; Diesmann, M.; Morrison, A.; Goodman, P.H.; Harris, F.C., Jr.; et al. Simulation of networks of spiking neurons: A review of tools and strategies. J. Comput. Neurosci. **2007**, 23, 349–398.
- Zeigler, B.P. Cellular Space Models: New Formalism for Simulation and Science. In The Philosophy of Logical Mechanism: Essays in Honor of Arthur W. Burks; Salmon, M.H., Ed.; Springer: Dordrecht, The Netherlands, 1990; pp. 41–64.
- Gelenbe, E. Random Neural Networks with Negative and Positive Signals and Product Form Solution. Neural Comput. **1989**, 1, 502–510.
- Zeigler, B.P.; Kim, T.G.; Praehofer, H. Theory of Modeling and Simulation: Integrating Discrete Event and Continuous Complex Dynamic Systems, 2nd ed.; Academic Press: Boston, MA, USA, 2000.
- Zeigler, B.P.; Nutaro, J.; Seo, C. Combining DEVS and Model-Checking: Concepts and Tools for Integrating Simulation and Analysis. Int. J. Process Model. Simul. **2016**, in press.
- Maass, W. Fast sigmoidal networks via spiking neurons. Neural Comput. **1997**, 9, 279–304.
- Maass, W. Networks of Spiking Neurons: The Third Generation of Neural Network Models. Neural Netw. **1996**, 10, 1659–1671.
- Zeigler, B.P. Discrete Event Abstraction: An Emerging Paradigm for Modeling Complex Adaptive Systems. In Perspectives on Adaptation in Natural and Artificial Systems; Booker, L., Forrest, S., Mitchell, M., Riolo, R., Eds.; Oxford University Press: New York, NY, USA, 2005; pp. 119–141.
- Mayerhofer, R.; Affenzeller, M.; Fried, A.; Praehofer, H. DEVS Simulation of Spiking Neural Networks. In Proceedings of the European Meeting on Cybernetics and Systems, Vienna, Austria, 30 March–1 April 2002.
- Booij, O. Temporal Pattern Classification using Spiking Neural Networks. Master's Thesis, Universiteit van Amsterdam, Amsterdam, The Netherlands, August 2004.
- Maass, W.; Natschlager, T.; Markram, H. Real-time computing without stable states: A new framework for neural computation based on perturbations. Neural Comput. **2002**, 14, 2531–2560.
- Koch, C. How the Computer Beat the Go Master. Scientific American, 2016. Available online: https://www.scientificamerican.com/article/how-the-computer-beat-the-go-master/ (accessed on 14 January 2017).
- Hu, X.; Zeigler, B.P. Linking Information and Energy—Activity-based Energy-Aware Information Processing. Simul. Trans. Soc. Model. Simul. Int. **2013**, 89, 435–450.
- Muzy, A.; Zeigler, B.P.; Grammont, F. Iterative Specification of Input-Output Dynamic Systems and Implications for Spiky Neuronal Networks. IEEE Syst. J. **2016**. Available online: http://www.i3s.unice.fr/muzy/Publications/neuron.pdf (accessed on 14 January 2017).
- Yoon, Y.C. LIF and Simplified SRM Neurons Encode Signals Into Spikes via a Form of Asynchronous Pulse Sigma-Delta Modulation. IEEE Trans. Neural Netw. Learn. Syst. **2016**, PP, 1–14.
- Gelenbe, E. G-networks: A unifying model for neural and queueing networks. Ann. Oper. Res. **1994**, 48, 433–461.
- Gelenbe, E.; Fourneau, J.M. Random Neural Networks with Multiple Classes of Signals. Neural Comput. **1999**, 11, 953–963.
- Gelenbe, E. The first decade of G-networks. Eur. J. Oper. Res. **2000**, 126, 231–232.
- Gelenbe, E. G-networks: Multiple classes of positive customers, signals, and product form results. In IFIP International Symposium on Computer Performance Modeling, Measurement and Evaluation; Springer: Berlin/Heidelberg, Germany, 2002.
- Gelenbe, E.; Timotheou, S. Random Neural Networks with Synchronized Interactions. Neural Comput. **2008**, 20, 2308–2324.
- Gelenbe, E.; Timotheou, S. Synchronized Interactions in Spiked Neuronal Networks. Comput. J. **2008**, 51, 723–730.

**Figure 2.** Variants of behavior and corresponding input/output (I/O) pairs, with (**a**) saving input values when they arrived; or (**b**) resetting to the initial state once the output is computed. White circles indicate states, black circles initial states, and arrows transitions.

**Figure 3.** Deterministic system realization of a memoryless function: (**a**) Input/Output Black Box; (**b**) Input and Output Trajectories.

**Figure 4.** Two-state deterministic Discrete Event System Specification (DEVS) model of the Gelenbe neuron, with (**a**) DEVS state graph; (**b**) Closely Spaced Inputs; and (**c**) Widely Spaced Inputs. The time elapsed since the last transition is indicated as $e\in {\mathbb{R}}_{0}^{+,\infty}$.

**Figure 5.** Coupled Model for XOR Implementation, with (**a**) XOR Network Description; (**b**) Single Input Pulse; and (**c**) Double Input Pulse. Note that t1 in (**c**) is the same time as in (**b**), representing the time an inhibited pulse would have arrived.

**Figure 6.** Implementation of the XOR using Spiking Neural Net (SNN) equivalent components described in DEVS: (**a**) L-Arrival Component; (**b**) XOR Coupled Model.

© 2017 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license ( http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Zeigler, B.P.; Muzy, A.
Temporal Modeling of Neural Net Input/Output Behaviors: The Case of XOR. *Systems* **2017**, *5*, 7.
https://doi.org/10.3390/systems5010007
