An Interface Platform for Robotic Neuromorphic Systems

Abstract: Neuromorphic computing promises to become a future standard for low-power AI applications. However, integrating new neuromorphic hardware with traditional microcontrollers remains an open challenge. In this paper, we present an interface board and a communication protocol that allow different devices to communicate, with a microcontroller unit (Arduino Due) acting as intermediary. Our compact printed circuit board (PCB) links the devices into a single system and powers the entire system from batteries. Concretely, we have connected a Dynamic Vision Sensor (DVS128), a SpiNNaker board and a servo motor, creating a platform for a neuromorphic robotic system controlled by a Spiking Neural Network, which is demonstrated on the task of intercepting incoming objects. The data rate of the implemented interface board is 24.64 ksymbols/s and the latency for generating commands is about 11 ms. The complete system runs on batteries alone, making it well suited for robotic applications.


Introduction
Spiking Neural Networks (SNN) running on neuromorphic hardware are an excellent platform to develop new robotic systems and, at the same time, to gain better insight into the operation of biological neural systems [1]. The strength of SNNs lies in the neuron structure, where the biological membrane potential is computed by solving a differential equation, placing this model closer to the biological neuron. Compared to the second generation of Artificial Neural Networks (ANN), Spiking Neural Networks receive discrete signals (spikes), greatly reducing computational complexity. A presynaptic neuron fires when its membrane potential exceeds a threshold value, emitting a spike to the postsynaptic neurons. After that, a refractory phase follows, during which the membrane potential is restored to its resting value before new input stimuli increase the potential again [2]. The information in SNNs is encoded by the signal itself and its timing [3].
Due to these features, neuromorphic computing exhibits low latencies and low energy consumption. In order to take advantage of the SNN principle and make it available, the Advanced Processor Technologies Research Group (APT) at the University of Manchester develops SpiNNaker (Spiking Neural Network Architecture), a manycore computer architecture that simulates some operational aspects of the human brain [4]. This computing platform has been deployed in several research fields, such as robotics. One example is the "line follower robot" from APT [5], where a DVS signal is sub-sampled to match SpiNNaker's processing speed; the system employs a PC to convert protocols between the DVS, SpiNNaker and the wheel motors. Another example, PushBot [6], is a mobile robot that employs two microcontroller units (MCUs), one connected to SpiNNaker and the other to the external devices; the two MCUs communicate over Wi-Fi to exchange DVS data and motor-control commands. An autonomous mobile platform [7], based on a 48-chip SpiNNaker board, uses an MCU for communication between SpiNNaker and other components and a Complex Programmable Logic Device (CPLD) interface board. It is able to achieve two independent tasks: trajectory stabilization using real-time computed optic flow and stimulus tracking with Nengo [8].
Different Interface Boards (IB) have been implemented for SpiNNaker; examples are the FPGA-based solution from the SpiNNaker team [9] and the MCU interface [10] jointly developed by the University of Manchester and the Technical University of Munich. The former provides only unidirectional data transfer, i.e., the interface can only send data from input sensors to SpiNNaker, which makes it difficult to connect other external devices. In the latter, the data flow is bidirectional, but it requires two data-format conversions and two chips for symbol transmission, resulting in greater power consumption.
The system introduced here is the first complete prototype; in comparison to our previous work [11], it has an integrated power supply with power regulators, a built-in sensor for the reward signal and new communication software for the microcontroller. This is the first self-contained and stable version of our robotic system. The implemented Interface Board links together and powers the whole system, consisting of a Dynamic Vision Sensor (DVS128), a SpiNNaker board (SpiNN-3), an MCU (Arduino Due) and a digital motor (Futaba S9257), using batteries. In addition, the board allows multiple inputs from different types of sensors, entirely managed by the MCU. The input signals are encoded in a way that allows communication with SpiNNaker, which works as "the brain": it analyses the signal and sends back the result to the MCU, which is then able to drive an output device such as a servo. Figure 1 shows the configuration we used to test our IB: the neuromorphic devices (DVS and SpiNNaker), a digital motor, a touch sensor, and the Interface Board with an MCU, which together represent the backbone of the system. A PC is tentatively included; it is used during the setup phase to adjust the field of view of the camera and to analyze the produced data, and can be removed during the operational phase of the robot.

Interface Board Design and Specification
The proposed Interface Board is designed to meet compactness, portability and low-power-consumption specifications, allowing the use of different types of input sensors (such as vision, sound, chemical or temperature sensors) and different output devices (such as motors, alarms, lights or actuators). With a size of only 120 mm (width) × 120 mm (length) × 55 mm (height), our platform consists of a PCB that allows the direct connection of an MCU (via direct pin connection) and a SpiNNaker board (fixed on the board and connected with a dedicated cable). The Arduino Due was selected as MCU for its high number of GPIO ports and adequate processing capacity. It runs the communication protocols, so that input sensors, SpiNNaker and the actuators can communicate with each other. The board provides the connection between the different robotic components and the relevant functions (e.g., inbound links, level shifting, voltage regulation). A compact interface board provides a 'spinal cord' which connects the peripheral 'organs' (i.e., vision and sound sensors or actuators) to the 'brain' (SpiNNaker), allows them to talk to each other and pre-processes the signals. Figure 2 shows the data flow of the hardware components introduced previously in Figure 1. It is possible to distinguish three main phases: input signal processing, communication with SpiNNaker and output command execution.

SpiNNaker
SpiNNaker is a multi-core, real-time computer [12] that simulates brain networks. It uses ARM's high-performance embedded processors [13,14], benefiting from their very low power consumption and computational efficiency. Several SpiNNaker platforms have been developed; we use the smallest one, the SpiNN-3 board, but the communication protocol works for all other models. The SpiNN-3 board consists of four chips, each having 18 ARM968 processing cores and local memory [15]. The board is powered at 5 V, 1 A via a 2.1 mm DC port. For peripheral connections, the SpiNN-3 board has two ports, J1 and J2, which are 34-way sub-miniature header sockets [16]. The pin assignments are shown in Figure 3. The input part of the link comprises inputs Lin(6)-(0) and output LinACK; the output part comprises input LoutACK and outputs Lout(0)-(6).

Arduino Due
The Arduino Due board was selected because of its high number of GPIO ports and the presence of a USB host connection that allows plugging in a USB device (the DVS camera sensor). The Arduino Due is based on a 32-bit ARM chip with a clock speed of 84 MHz. The recommended power supply range of the Arduino Due is 7-12 V; on our board, the Arduino is powered with 9 V. To communicate with the J2 link, 16 GPIOs are necessary, which are covered by the 54 GPIO pins of the Arduino MCU. Thus, many unused GPIO pins remain available to connect other external devices for further development. The Arduino Due also provides 5 V, 3.3 V and two DAC outputs to supply power to external devices.
For communication with a PC, DVS or eDVS, the board has a UART and two micro-USB ports with a maximum baud rate of 115,200.

Hardware Links
The layout of our PCB for the Interface Board is shown in Figure 4. As stated in the SpiNNaker and Arduino Due sections, the two boards work at different input voltages. In order to use a single input supply for the IB, two voltage regulators are used: the MC7809, which provides a stable 9 V supply to the Arduino, and the MC7805, which supplies the SpiNNaker at 5 V. The GPIOs of the Arduino operate at 3.3 V, while the SpiNN-3 board port operates at 1.8 V; thus, level shifters are needed to convert between 3.3 V and 1.8 V in both directions. TXS0108E chips, each providing an 8-way bidirectional level shift, are used on this PCB for that purpose; two of them are therefore sufficient for the 16 GPIO lines. The maximum data rate of this chip is 110 Mbps, which will not limit the system data rate. Table 1 shows the links between the SpiNNaker J connector and the Arduino Due ports, passing through the level shifters. In Figure 4 (right side) it is possible to see the Arduino Due connection pins, where the MCU is connected directly to the IB. Table 2 shows the GPIO connections between the Arduino Due and the main components.

Communication Protocol
In the previous sections, we described the system from the hardware point of view, focusing on connecting the components (Figure 2). To permit the correct exchange of data between SpiNNaker and the MCU, two different communication protocols are used: the 2-of-7 coding protocol and the 2-phase handshake protocol. For data encoding and decoding, a self-timed 2-of-7 coding protocol is used to transmit data in packet form so that the data can be recognized by SpiNNaker. The 2-phase handshake protocol ensures that each packet is successfully transmitted and received by the receiver.

2-of-7 Coding
In Spiking Neural Networks information is represented as a time-dependent sequence of spikes, such as a sequence of bits transmitted on a channel. Different coding protocols have been used in SNNs, each with some limitations. Conventional rate coding counts spikes in fixed time windows to represent each information unit (i.e., to encode one alphabet letter or one number), resulting in a relatively slow technique. A faster solution is rank order coding, with which shorter time windows can represent more bits of information. For a real-time scenario, the delay must be as short as possible, prioritizing the transmission time and sacrificing the amount of information that can be represented [17]. N-of-M coding is a protocol that mirrors these properties, signalling on N of M parallel wires to represent each information unit. It is a Non-Return-to-Zero (NRZ) protocol, meaning that the voltage level is not reset to zero after each bit [18]. SpiNNaker uses a 2-of-7 coding protocol to communicate with external devices, where the sender only changes the logical level of two wires during each symbol transfer; the states of the other five wires are not changed [19]. In data transfer, a total of 17 symbols can be conveyed, as shown in Table 3. These symbols represent the 16 hexadecimal digits (0 to F) and an EOP (end-of-packet) sign.
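The toggle-based transmission described above can be sketched in a few lines of C++. The codebook below is a placeholder (simply the first 17 two-bit patterns); the actual wire-pair assignment is fixed by Table 3 and the SpiNNaker documentation [19]. The mechanism, however, is the same: a symbol is sent by flipping exactly two of the seven wires, and the receiver recovers it from the XOR of two successive line states.

```cpp
#include <array>
#include <cstdint>

// Sketch of 2-of-7 NRZ signalling with an illustrative codebook: every
// symbol toggles exactly two of the seven data wires.  Codewords here are
// the first 17 two-bit patterns, NOT the normative SpiNNaker assignment.
struct TwoOfSevenCodec {
    std::array<uint8_t, 17> code{};          // bit i set => wire i toggles
    TwoOfSevenCodec() {
        int n = 0;
        for (int i = 0; i < 7; ++i)
            for (int j = i + 1; j < 7; ++j)
                if (n < 17) code[n++] = uint8_t((1u << i) | (1u << j));
    }
    // Sender: NRZ, so a symbol is "sent" by flipping its two wires and
    // leaving the other five untouched.
    void send(uint8_t& lines, int sym) const { lines ^= code[sym]; }
    // Receiver: a symbol is complete once exactly two wires differ from
    // the previously sampled state; -1 means the lines are still settling.
    int receive(uint8_t prev, uint8_t cur) const {
        uint8_t diff = uint8_t(prev ^ cur);
        if (__builtin_popcount(diff) != 2) return -1;
        for (int s = 0; s < 17; ++s)
            if (code[s] == diff) return s;   // 0-15 data, 16 = EOP
        return -1;
    }
};
```

Because the code is transition-based, even two identical consecutive symbols are distinguishable: each one produces its own two-wire flip relative to the previously observed state.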

2-Phase Handshake Protocol
In the 2-phase handshake protocol, the sender first transmits a symbol to the receiver, then waits for the receiver to provide an acknowledgment signal. The receiver will send an acknowledgment signal to the sender after receiving a symbol from the sender.
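A minimal simulation of this exchange, with hypothetical names, might look as follows. The point is that in a 2-phase protocol both sides signal with transitions (either edge of the acknowledgment wire), so no wire needs to return to zero between symbols.

```cpp
// Minimal 2-phase (transition-signalling) handshake sketch.  Structure and
// names are illustrative, not taken from the actual firmware: each
// completed exchange flips the shared ack wire exactly once, and the
// sender detects the acknowledgment as a *change* of level, not a level.
struct TwoPhaseLink {
    bool ack = false;          // shared acknowledgment wire
    bool lastAckSeen = false;  // ack level the sender last observed
    int  delivered = 0;        // symbols consumed by the receiver

    // Receiver side: consume the symbol, then toggle the ack wire.
    void receiverConsume(int /*sym*/) { ++delivered; ack = !ack; }

    // Sender side: true once the receiver has acknowledged, i.e. the ack
    // wire no longer matches the level the sender last recorded.
    bool ackArrived() {
        if (ack == lastAckSeen) return false;
        lastAckSeen = ack;
        return true;
    }
};
```

In a 4-phase protocol the ack wire would have to return to zero before the next symbol, doubling the number of edges per transfer; the 2-phase variant halves that overhead, which matters at the symbol rates reported in the Evaluation section.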

Encoding/Decoding Packets
After the MCU receives events from the input sensors, it encodes the data into packets and sends them to SpiNNaker in real time. There are two types of SpiNNaker packets of different sizes: 40 bits and 72 bits [16]. An 'EOP' symbol is placed after each packet to signal that the transmission of that packet is complete. Compared to the 40-bit packet, the 72-bit packet carries an extra 32-bit payload. The 72-bit packets are also called nearest-neighbor (NN) packets; the 40-bit packets are also called multi-cast (MC) packets. Both packet types include a header and packet data. The packet data contains the spike information, while the header contains the packet's specification. The packet type is indicated by the last two bits in the header, where '01' indicates NN packets and '00' indicates MC packets. The parity bit uses odd parity, which means that the total number of 1s in the packet, excluding the EOP, is odd. The value of the parity bit is therefore determined after the rest of the packet bits are decided.
The packets are divided into hex digits and sent as symbols with the 2-of-7 coding protocol. After a packet is received, the hex digit of each symbol is converted into binary. Note that SpiNNaker reads the packet starting from its end: the order of the symbols in each part of the packet is reversed, and the bit-level conversion between hex digits and binary is reversed as well. For instance, SpiNNaker reads 'C000' as '000C', where the hex digit 'C' converts to the binary '0011' instead of '1100'.
All packets sent from the MCU to SpiNNaker are MC packets, which have a size of 40 bits. When SpiNNaker communicates with an external device, that device is regarded as a virtual chip inside the system. Thus, a virtual routing key is needed (4 hex digits, sent in the second half of the packet-data part).
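The assembly of an outgoing MC packet can be sketched as follows. The field positions are illustrative assumptions for this sketch (an 8-bit header in the low byte with the parity bit at bit 0, and a 32-bit data word carrying the routing key); the normative layout is given in the SpiNNaker documentation [16]. What the sketch does capture is the order of operations: fix all content bits first, set the odd-parity bit last, then emit the packet as hex symbols in reversed order followed by EOP.

```cpp
#include <cstdint>
#include <vector>

// Assemble a 40-bit MC packet as a symbol stream (hedged sketch: bit
// positions of header, parity and key are assumptions, not the datasheet
// layout).  Symbol values 0-15 are hex digits; 16 stands for EOP.
std::vector<uint8_t> encodeMcPacket(uint32_t data) {
    // 32-bit data part above an 8-bit header; header type bits stay '00'
    // here, which in the paper's convention marks an MC packet.
    uint64_t pkt = uint64_t(data) << 8;
    // Odd parity: the total number of 1s across the 40 bits (EOP excluded)
    // must be odd, so the parity bit (assumed bit 0) is decided last.
    if ((__builtin_popcountll(pkt) & 1) == 0) pkt |= 1;
    // Emit ten hex symbols least-significant nibble first, i.e. reversed
    // with respect to the written hex string, then terminate with EOP.
    std::vector<uint8_t> syms;
    for (int i = 0; i < 10; ++i)
        syms.push_back(uint8_t((pkt >> (4 * i)) & 0xF));
    syms.push_back(16);                    // EOP closes the packet
    return syms;
}
```

A 40-bit packet thus always costs 11 symbols on the link (10 hex digits plus EOP), a figure used again in the data-rate evaluation below.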

Evaluation
The evaluation of the IB (Figure 5) starts with the analysis of the communication data rate between the Arduino and SpiNNaker (both up-link and down-link). The tests are executed by sending predefined data packets with a time interval of 15 ms.

Network Topology
For the purpose of this work, a simple neural network was configured (Figure 2 green box). The number of inputs corresponds to the 8 possible positions of the input spikes received from the DVS sensor. The output is composed of 8 neurons that reflect the input to the output without using a learning rule to predict the final position of the target ball. The integration of the touch signal with a learning rule is under consideration for future work.

Up-Link Data Rate
With the time interval between packets reduced to 1.5 ms and then 0.15 ms, the communication still operates successfully, but SpiNNaker issues a warning that the interval between packets is too small: the speed of the up-link communication exceeds the processing speed of the SNN hosted on SpiNNaker. SpiNNaker receives 112 packets in 50 ms, which corresponds to 24.64 ksymbols/s or 12.32 kbytes/s.
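These figures follow directly from the packet structure described earlier: each 40-bit MC packet is sent as 10 hex symbols plus one EOP symbol, and each symbol carries one hex digit (half a byte). A quick consistency check:

```cpp
// Back-of-the-envelope check of the reported up-link figures.
constexpr double kPacketsPerSecond = 112 / 0.050;            // 112 packets / 50 ms = 2240 packets/s
constexpr double kSymbolsPerSecond = kPacketsPerSecond * 11; // 10 hex symbols + 1 EOP per packet
constexpr double kBytesPerSecond   = kSymbolsPerSecond / 2;  // 4 bits (half a byte) per symbol
```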

Down-Link Data Rate
When the Arduino sends the predefined packets to SpiNNaker at maximum speed, it receives all of them back from SpiNNaker. Thus, the down-link data rate is equal to or larger than the up-link data rate, and the data rate of the whole system is limited by the up-link data rate of 24.64 ksymbols/s.

Accuracy
The accuracy of the 'robotic goalie' in blocking balls depends heavily on the SNN running on SpiNNaker and on the speed of the approaching balls. To compute the accuracy of our prototype, we count the number of saved balls over the total number of launches. The experiment consists of 100 ping-pong balls launched from approximately 1 m with various speeds and directions. When the speed of the approaching ball is relatively low (up to approximately 1 m/s), the 'robotic goalie' intercepts the ball with a 75% success rate. As the ball speed increases, the accuracy of the system gradually decreases: for fast balls, the robotic system cannot maneuver the goalie to the right location before the ball arrives. The reason is that the SNN used in this project has not been trained to anticipate the trajectory of the balls from their movement in the early stages of their approach to the goal.

Latency
The response latency of the prototype is the time from the DVS capturing the ball's movement to the servo motor reaching the desired location. The minimum time step of the DVS is 1 µs; as a result, the DVS needs at least 1 µs to produce an event and another 1 µs to communicate it. The MCU continuously receives these events (via the serial port) and encodes them into packets every 500 µs. The next relevant parameter is the communication speed between the MCU and SpiNNaker: sending a packet to SpiNNaker takes 1/2240 s ≈ 0.45 ms, and receiving a packet on the MCU likewise takes 0.45 ms. This is longer than the time it takes for the DVS to capture an event and for the MCU to receive it. Furthermore, every 1 µs the MCU checks the state of all processes; as a consequence, events are received, packets are sent or received, instructions are generated and commands are executed in parallel. The MCU sends the second converted event to SpiNNaker at the same time it receives the first packet from SpiNNaker. The simulation on SpiNNaker runs with a 1 ms time step, so the minimum time for SpiNNaker to process data is 1 ms. Furthermore, the packets are queued, the commands are created using the most recent N_pac received packets, and the up-link and down-link speeds are virtually identical. As a result, the time spent on operations prior to generating servo control commands (t_command) is the sum of the DVS event times, the communication to the MCU, the encoding time, the time to communicate with SpiNNaker, the SNN processing time and the time to send the response back to the MCU with N_pac packets:

t_command = (0.001 ms + 0.001 ms) · N_events + 0.032 ms + 0.5 ms + 1/2240 s + 1 ms + N_pac · 1/2240 s

For N_events = 100 and N_pac = 20, we theoretically estimate this time to be approximately 11 ms. If the number of events needed to create an output from the SNN running on SpiNNaker is increased to N_events = 1000, the latency increases only to 13 ms.
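The latency budget above can be written as a small function (all times in milliseconds; the per-packet link time of 1/2240 s comes from the measured 2240 packets/s):

```cpp
// Latency model for generating a servo command, term by term as in the
// formula above (all times in ms).
double commandLatencyMs(int nEvents, int nPackets) {
    const double perEvent  = 0.001 + 0.001;    // DVS captures + transmits one event
    const double mcuCycle  = 0.032 + 0.5;      // MCU encoding + 500 us packetization cycle
    const double perPacket = 1000.0 / 2240.0;  // one packet on the link, ~0.45 ms
    const double snnStep   = 1.0;              // 1 ms SpiNNaker simulation timestep
    return perEvent * nEvents          // accumulate N_events events
         + mcuCycle                    // encode them into packets
         + perPacket                   // send one packet up-link
         + snnStep                     // SNN processing
         + nPackets * perPacket;       // receive N_pac packets down-link
}
```

Evaluating the model reproduces the two figures quoted in the text: about 11.1 ms for N_events = 100 and about 12.9 ms for N_events = 1000 (both with N_pac = 20), showing that the down-link packet queue, not the event count, dominates the budget.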
We compare this with the latency of the executive organ, i.e., the servo motor. In our robogoalie demonstrator [20] (Figure 6), we use a Futaba digital motor, which rotates 60° in 75 ms. With this setup, the whole system was reasonably successful (about 85% of the time) in intercepting balls with speeds of up to 1 m/s. The DVS camera was connected to a PC for data monitoring and the data was forwarded to the Arduino Due board. A video demonstration is available at https://youtu.be/0UqKsB0lQr8 (accessed on 21 October 2022).

Discussion
The aim of our work was to implement a compact Interface Board Platform to allow communication between an MCU and a SpiNNaker board. We used an Arduino Due board to manage the communication between the various peripheral hardware and the SpiNNaker board, adapting the signal with two voltage level shifters.
The test environment is the goalkeeper task, in which a robotic arm intercepts an incoming ball moving along the horizontal axis. To "see" the target object, a Dynamic Vision Sensor camera was used, exploiting its fast, non-redundant information transmission and low power consumption. However, there is no learning rule that allows prediction of the final position of the ball (this is under consideration for future work). This limitation of the SNN model restricts the tests to specific environmental conditions (e.g., light, noise); a learning rule could work in different conditions, enabling dynamic adaptation to many of them. The current implementation of the system considers the horizontal axis only, so the interception fails when a bouncing ball is thrown toward the goal. This is not a real limitation, as it is outside the scope of this work, but the installation of a two-axis actuator could be considered to block the target in 2D space.
Although our system has some limitations in the goalkeeper task, it also has clear strong points. Regarding reaction time, our Interface Board takes only 13 ms for 1000 events to actuate the decision (considering that the DVS has a resolution of 128 × 128 [21], i.e., 16,384 pixels, 1000 events is reasonable for these tests), showing the fast communication between the components. Moreover, the whole system works with ordinary batteries; its low power consumption and portability are the features that set it apart from other similar works. Table 4 compares our prototype with other neuromorphic projects tested on the same task or with the potential to solve a similar problem. The main difference, excluding the motors used and the weight and considering only the computing devices, is power consumption. The fast communication of our Interface Board, in combination with neuromorphic hardware, drastically reduces energy consumption.

Conclusions
In this paper, we presented spiNNaLink, an Interface Board that links a SpiNN-3 board with an MCU, allowing the use of multiple input sensors and output actuators. The implemented PCB works with a set of 1.5 V AA rechargeable batteries, making it small and portable (Figures 5 and 6). The Arduino Due performs the communication protocols and the data conversion between devices. The use of this specific MCU is not a limitation and relates only to this particular version of the Interface Board: since the level shifters can work between 1.65 V and 5.50 V, and port-extender chips can increase the number of GPIO ports, future versions of the Interface Board could be used with different MCUs. Furthermore, to overcome the lack of a USB host port, eDVS cameras communicating directly over the serial port can be considered. The Interface Board also provides the power supply for the entire system, including the Arduino and SpiNNaker. Our tests show that the board provides a fast communication link from the input sensor (DVS) to the output channel (motor), resulting in a delay of ∼11 ms. The interception accuracy is sensitive to ball speed and direction, because the simple SNN was developed without a learning rule; the main purpose of the project was the hardware link and the Interface Board. This limitation can be addressed in future research, where the network can be trained to predict the final position of the ball and, additionally, a reward feedback signal can enable self-learning on this task.
The most relevant feature of our prototype is its very low power consumption: compared with similar, or task-related, neuromorphic robots, it reduces consumption drastically while maintaining high computational power even when running on batteries. Moreover, looking ahead to a future scenario in which more components compose a more complex system, it is easy to imagine how extremely low-power hardware can enable the development of bio-inspired, autonomous and energy-independent devices.

Conflicts of Interest:
The authors declare no conflict of interest.

Abbreviations
The following abbreviations are used in this manuscript: