Low-Complexity Lossless Coding of Asynchronous Event Sequences for Low-Power Chip Integration

The event sensor provides high temporal resolution and generates large amounts of raw event data. Efficient low-complexity coding solutions are required for integration into low-power event-processing chips with limited memory. In this paper, a novel lossless compression method is proposed for encoding the event data represented as asynchronous event sequences. The proposed method employs only low-complexity coding techniques, so that it is suitable for hardware implementation into low-power event-processing chips. The first novel contribution is a low-complexity coding scheme which uses a decision tree to reduce the representation range of the residual error. The decision tree is formed by using a triple threshold parameter, which divides the input data range into several coding ranges arranged at concentric distances from an initial prediction, so that the residual error of the true value information is represented by using a reduced number of bits. Another novel contribution is an improved representation, which divides the input sequence into same-timestamp subsequences, wherein each subsequence collects the events having the same timestamp in ascending order of the largest dimension of the event spatial information. The proposed same-timestamp representation replaces the event timestamp information with the same-timestamp subsequence length and encodes it, together with the event spatial and polarity information, into a different bitstream. A final novel contribution is random access to any time window by using additional header information. The experimental evaluation on a highly variable event density dataset demonstrates that the proposed low-complexity lossless coding method provides an average improvement of 5.49%, 11.45%, and 35.57% compared with the state-of-the-art performance-oriented lossless data compression codecs Bzip2, LZMA, and ZLIB, respectively.
To our knowledge, this paper proposes the first low-complexity lossless compression method for encoding asynchronous event sequences that is suitable for hardware implementation into low-power chips.


Introduction
The recent research breakthroughs in the neuromorphic engineering domain have made possible the development of a new type of sensor, called the event camera, which is bioinspired by the human brain, as each pixel operates individually and mimics the behaviour of a separate nerve cell. In contrast to the conventional camera, in which all pixels are designed to capture the intensity of the incoming light at the same time, the event camera sensor reports only the changes of the incoming light intensity above a threshold, at any timestamp, and at any pixel position, by triggering a sequence of asynchronous events (sometimes called spikes); otherwise it remains silent. Because each pixel independently detects and reports only the changes in brightness, the event camera sensor introduces a new paradigm for capturing visual data.
The event camera provides a series of important technological advantages, such as a high temporal resolution, as the asynchronous events can be triggered at a minimum timestamp distance of only 1 µs (10^−6 s), i.e., the event sensor can achieve a frame rate of up to 1 million (M) frames per second (fps). This is made possible thanks to the remarkable novel event camera feature of capturing all dynamic information without unnecessary static information (e.g., background), which is an extremely useful feature for capturing high-speed motion scenes for which the conventional camera usually fails to provide a good performance. Two types of sensors are currently available on the market: (i) the dynamic vision sensor (DVS) [1], which captures only the event modality; and (ii) the dynamic and active-pixel vision sensor (DAVIS) [2], which comprises a DVS sensor and an active pixel sensor (APS), i.e., it captures a sequence of conventional camera frames and their corresponding event data. The event camera sensors are now widely used in the computer vision domain, wherein the RGB and event-based solutions already provide an improved performance compared with state-of-the-art RGB-based solutions for applications such as deblurring [3], feature detection and tracking [4,5], optical flow estimation [6], 3D estimation [7], super-resolution [8], interpolation [9], visual odometry [10], and many others. For more details regarding event-based applications in computer vision, please see the comprehensive literature review presented in [11]. Owing to such high frame rates, the captured asynchronous event sequences reach high bit-rate levels when stored using the raw event representation of 8 bytes (B) per event provided by the event camera. Therefore, for better preprocessing of event data on low-power event-processing chips, novel low-complexity and efficient event coding solutions are required to store the acquired raw event data without any information loss.
In this paper, a novel low-complexity lossless compression method is proposed for the memory-efficient representation of asynchronous event sequences by employing a novel low-complexity coding scheme, so that the proposed codec is suitable for hardware implementation into low-cost event signal processing (ESP) chips.
The event data compression domain remains understudied, even though the sensor's popularity continues to grow thanks to the improved technical specifications offered by the latest class of event sensors. The problem was tackled in only a few articles, which propose to either encode the raw asynchronous event sequences generated by the sensor with or without information loss [12][13][14], or to first preprocess the event data into a sequence of synchronous event frames (EFs) that is finally encoded by employing a video coding standard [15,16]. The EF sequences are formed by using an event-accumulation process that consists of splitting the asynchronous event sequence into spatiotemporal neighbourhoods of time intervals, processing the events triggered in a single time interval, and then generating a single event for each pixel position in the EF. These performance-oriented coding solutions are too complex for hardware implementation in the ESP chip designed with limited memory, and may be integrated only in a system on a chip (SoC) wherein enough computation power and memory are available.
In our prior work [17,18], we proposed employing an event-accumulation process which first splits each asynchronous event sequence into spatiotemporal neighbourhoods by using different time-window values, and then generates the EF sequence by using a sum-accumulation process, whereby the events triggered in a time window are represented by a single event that is set as the sign of the event polarity sum and stored at the corresponding pixel position. In [17], we proposed a performance-oriented, context-based lossless image codec for encoding the sequence of event camera frames, in which the event spatial information and the event polarity are encoded separately by using the event map image (EMI) and the concatenated polarity vector (CPV). One can note that the lossless compression codec proposed in [17] is suitable for hardware implementation in SoC chips. In [18], we proposed a low-complexity lossless coding framework for encoding event camera frames by adapting the run-length encoding scheme and Elias coding [19] for EF coding. One can note that the low-complexity lossless compression codec proposed in [18] is suitable for hardware implementation in ESP chips. The goal of this work is to propose a novel complexity-oriented lossless compression codec for encoding asynchronous event sequences, suitable for hardware implementation in ESP chips.
The novel contributions of this work are summarized as follows.
(1) A novel low-complexity lossless compression method for encoding raw event data represented as asynchronous event sequences, which is suitable for hardware implementation into ESP chips. (2) A novel low-complexity coding scheme for encoding residual errors by dividing the input range into several coding ranges arranged at concentric distances from an initial prediction. (3) A novel event sequence representation that removes the event timestamp information by dividing the input sequence into ordered same-timestamp event subsequences that can be encoded in separate bitstreams. (4) A lossless event data codec that provides random access (RA) to any time window by using additional header information.
The remainder of this paper is organized as follows. Section 2 presents an overview of state-of-the-art methods. Section 3 describes the proposed low-complexity lossless coding framework. Section 4 presents the experimental evaluation of the proposed codecs. Section 5 draws the conclusions of this work.

State-of-the-Art Methods
To achieve an efficient representation of the large amount of event data, a first approach proposed to losslessly (i.e., without any information loss) encode the asynchronous event representation. In [12], a lossless compression method is proposed that removes the redundancy of the spatial and temporal information by using three strategies: an adaptive macrocube partitioning structure, the address-prior mode, and the time-prior mode. The method was extended in [13] by introducing an event sequence octree-based cube partition and a flexible intercube prediction method based on motion estimation and motion compensation. However, the coding performance of these methods (based on the spike coding strategy) remains limited.
In another approach, the asynchronous event representation is compressed by employing traditional lossless data compression methods. In [14], the authors present a coding performance comparison study of different traditional lossless data compression strategies when employed to encode raw event data. The study shows that traditional dictionary-based methods for data compression provide the best performance. The dictionary-based approach consists of searching for matches between the data to be compressed and a set of strings stored as a dictionary, in which the goal is to find the best match between the information maintained in the dictionary and the data to be compressed. One of the most well-known algorithms for lossless data compression is the Lempel-Ziv 77 (LZ77) algorithm [20], created by Lempel and Ziv in 1977. LZ77 iterates sequentially through the input string and stores any new match in a search buffer. The zlib library (ZLIB) [21] implements an LZ77 variant called Deflate, whereby the input data is divided into a sequence of blocks. The Lempel-Ziv-Markov chain algorithm (LZMA) [22] is an advanced dictionary-based codec developed by Igor Pavlov for lossless data compression, which was first used in the 7-Zip open source code. The Bzip2 algorithm is based on the well-known Burrows-Wheeler transform [23] for block sorting, which operates by applying a reversible transformation to a block of input data.
In a more recent approach [24], the authors propose to treat the asynchronous event sequence as a point cloud representation and to employ a lossless compression method based on a point cloud compression strategy. One can note that the coding performance of such a method depends on the performance of the geometry-based point cloud compression (G-PCC) algorithm used in the algorithm design.
Many of the upper-level applications prefer to consume the event data as an "intensity-like" image rather than an asynchronous event sequence, wherein several event-accumulation processes are proposed [25][26][27][28][29][30] to form the EF sequence. Hence, in another approach, several methods are proposed to losslessly encode the generated EF sequence. The study in [14] was extended in [15] by proposing a time aggregation-based lossless video encoding method based on the strategy of accumulating events over a time interval by creating two event frames that count the number of positive and negative polarity events, respectively, which are concatenated and encoded by the high-efficiency video coding (HEVC) standard [31]. Similarly, the coding performance depends on the performance of the video coding standard employed to encode the concatenated frames.
To further improve the event data representation, another approach was proposed to encode the asynchronous event sequences by relaxing the lossless compression constraint and accepting information loss. In [32], the authors propose a macrocuboid partition of the raw event data, and they employ a novel spike coding framework, inspired by video coding, to encode spike segments. In [16], the authors propose a lossy coding method based on a quad-tree segmentation map derived from the adjacent intensity images. One can note that the information loss introduced by such methods might affect the performance of the upper-level applications.

Proposed Low-Complexity Lossless Coding Framework
Let us consider an event camera having a W × H pixel resolution. Any change of the incoming light intensity triggers an asynchronous event, e_i = (x_i, y_i, p_i, t_i), which stores (based on the sensor's representation) the following information in 8 B of memory: • spatial information (x_i, y_i), i.e., the pixel position where the event was triggered; • polarity information p_i ∈ {−1, 1}, where the symbol "−1" signals a decrease and the symbol "1" signals an increase in the light intensity; and • timestamp t_i, the time when the event was triggered.
Hence, an asynchronous event sequence, denoted as S_T = {e_i}_{i=1,2,...,N_e}, collects N_e events triggered over a time period of T µs. The goal of this paper is to encode S_T by employing a novel, low-complexity lossless compression algorithm. Figure 1 depicts the proposed low-complexity lossless coding framework for encoding asynchronous event sequences. A novel sequence representation groups the same-timestamp events into subsequences and reorders them. Each same-timestamp subsequence is encoded in turn by the proposed method, called low-complexity lossless compression of asynchronous event sequences (LLC-ARES). LLC-ARES is built based on a novel coding scheme, called the triple threshold-based range partition (TTP). Figure 1. The proposed low-complexity lossless coding framework. The input asynchronous event sequence, S_T, is first represented by using the proposed event representation as a set of same-timestamp subsequences, S_k, having the same timestamp t_k, and then encoded losslessly by employing the proposed method. The output bitstream of each same-timestamp subsequence can be stored in memory as a compressed file. Moreover, it can also be collected as a package bitstream for all the timestamps found in a time period ∆_RA and then stored in memory, together with bitstream-length information stored as a header, as a compressed file with RA, so that the proposed codec can provide RA to any time window of size ∆_RA. Section 3.1 presents the proposed sequence representation. Section 3.2 presents the proposed low-complexity coding scheme. Section 3.3 presents the proposed method.

Proposed Sequence Representation
An input asynchronous event sequence, S_T, is arranged as a set of same-timestamp subsequences, S_T = {S_k}_{k=0,1,...,T−1}, where each same-timestamp subsequence collects all N_e^k events in S_T triggered at the same timestamp t_k. One can note that, at the decoder side, the timestamp information is recovered based on the subsequence length information, {N_e^k}_{k=0,1,...,T−1}, i.e., t_k = k is set for all N_e^k events. Each S_k is ordered in the ascending order of the largest spatial information dimension, e.g., y_i^k < y_{i+1}^k. However, if y_i^k = y_{i+1}^k, then S_k is further ordered in the ascending order of the remaining dimension, i.e., x_i^k < x_{i+1}^k. Figure 2 depicts the proposed sequence representation and highlights the difference between the sensor's event-by-event (EE) order, depicted on the left side, and the same-timestamp (ST) order, depicted on the right side. Note that the EE order writes to file, in turn, each event e_i, whereas the proposed ST order writes to file the number of events of each same-timestamp subsequence, N_e^k, having the same timestamp t_k, and, if N_e^k > 0, it is followed by the spatial and polarity information of all same-timestamp events. Section 4 shows that the state-of-the-art dictionary-based data compression methods provide an improved performance when the proposed ST order is employed to represent the input data compared with the EE order.
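As a concrete illustration, the grouping and reordering described above can be sketched in C as follows. The `Event` struct and function names are illustrative rather than the authors' implementation, and the comparator follows the example above of ordering by y first and then by x within a timestamp.

```c
#include <stdlib.h>

/* Illustrative event record mirroring the sensor's (x, y, p, t) fields. */
typedef struct { int x, y, p, t; } Event;

/* ST-order comparator: group by timestamp, then sort each subsequence in
   ascending order of y (the example's largest spatial dimension), then x. */
int st_cmp(const void *a, const void *b) {
    const Event *ea = (const Event *)a, *eb = (const Event *)b;
    if (ea->t != eb->t) return ea->t - eb->t;
    if (ea->y != eb->y) return ea->y - eb->y;
    return ea->x - eb->x;
}

/* Compute the subsequence lengths N_k for timestamps k = 0..T-1; the
   decoder can recover every timestamp from these lengths alone. */
void st_group(const Event *ev, int n, int T, int *Nk) {
    for (int k = 0; k < T; k++) Nk[k] = 0;
    for (int i = 0; i < n; i++) Nk[ev[i].t]++;
}
```

Sorting with `qsort(ev, n, sizeof(Event), st_cmp)` and then storing only the lengths `Nk`, each followed by the per-event spatial and polarity fields, reproduces the ST layout in which the timestamp field itself is never written.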

Proposed Triple Threshold-Based Range Partition (TTP)
For hardware implementation of the proposed event data codec into low-power event-processing chips, a novel low-complexity coding scheme is proposed. The binary representation range of the residual error is partitioned into smaller intervals selected by using a short-depth decision tree designed based on a triple threshold, ∆ = (δ 1 , δ 2 , δ 3 ). Hence, the input range is partitioned into several smaller coding ranges arranged at concentric distances from the initial prediction.
Let us consider the case of encoding x ∈ [1, H], i.e., a finite range, by using the prediction x̂ and writing the binary representation of the residual error ε = x − x̂ on exactly n bits. Because n is unknown on the decoder side, the triple threshold ∆ is used to create a decision tree having the role of partitioning the input range [1, H] into five types of coding ranges (see Figure 3a), where either the binary representation of ε or the binary representation of x is written by using a different number of bits.
The decision tree first checks the deterministic case and then the error magnitude: small errors are encoded by the coding ranges closest to the prediction, and larger errors by the ranges farther from it. If the residual error falls outside all three thresholds, a final bit b_1 selects between the two boundary ranges: if b_1 = 0, R4 is used to represent x − 1 on n_1 bits. Otherwise, b_1 = 1 and R5 is used to represent H − x on n_2 bits.
Note that the range [1, x_1] contains x_1 possible values. To fully utilize the entire set of code words (i.e., including 00···0 having an n_1-bit length), x − 1 is represented on n_1 bits. Algorithm 1 presents the pseudocode of the basic implementation of the TTP encoding algorithm. It is employed to represent a general value x, by using the prediction x̂, the support range [1, H], and the triple threshold parameter ∆, as an output bitstream B, which contains the decision tree bits followed by the binary representation of the required additional information for the corresponding coding range. Algorithm 2 presents the pseudocode of the basic implementation of the corresponding TTP decoding algorithm.
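To make the partition concrete, the range-selection step of the decision tree can be sketched as follows. This is a hedged sketch, not the authors' bit layout: it assumes the three concentric boundaries sit at cumulative distances δ1, δ1 + δ2, and δ1 + δ2 + δ3 from the prediction, with everything beyond them handled by the boundary ranges R4/R5.

```c
/* Hedged sketch of TTP range selection around the prediction xp.
   Thresholds d1 < d2 < d3 define concentric coding ranges; the cumulative
   boundaries below are an assumption about Figure 3a, not the paper's
   exact definition. Returns the selected range index 1..5. */
int ttp_range(int x, int xp, int d1, int d2, int d3) {
    int e = x - xp;                      /* residual error            */
    int a = e < 0 ? -e : e;              /* its magnitude             */
    if (a <= d1)           return 1;     /* R1: smallest range        */
    if (a <= d1 + d2)      return 2;     /* R2: medium range          */
    if (a <= d1 + d2 + d3) return 3;     /* R3: largest centred range */
    return x < xp ? 4 : 5;               /* R4/R5: boundary ranges,
                                            coding x-1 or H-x instead */
}
```

Fewer bits are spent the closer the true value lands to the prediction, which is why the scheme pays off when the prediction is accurate.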
Algorithm 1: Encode a general x by using TTP. Data: true value x, prediction x̂, range [1, H], and triple threshold ∆;

Deterministic Cases
In some special cases, part of the information can be directly determined from the current coding context. For example, if x_1 or x_2 is outside the finite range (see Figure 4a), then R4 or R5 does not exist and the context tree is built without checking condition (c4), i.e., in such a case one bit is saved. More exactly, steps 11-14 in Algorithms 1 and 2 are replaced with either step 12 (encode/decode using R4) or step 14 (encode/decode using R5). Moreover, because x_1 and x_2 = H − x_2 + 1 are not powers of two, the most significant bit of x, b_{n_1−1}, is 0, thanks to the constraints 1 ≤ x ≤ x_1 and 1 ≤ x ≤ x_2, respectively. Figure 4b shows that if x ∈ (x_1 − 2^{n_1−1}, 2^{n_1−1}] and b_{n_1−1} were set as 1, then x > x_1 and the constraint would be violated. Hence, b_{n_1−1} is always set to 0 if x ∈ (x_1 − 2^{n_1−1}, 2^{n_1−1}] (or, similarly, when x ∈ (x_2 − 2^{n_2−1}, 2^{n_2−1}]).

Algorithm Variations
The basic implementation of the TTP algorithm was modified for encoding different types of data. Figure 3c,d shows the TTP_y range partitioning and decision tree, respectively. Some data types have a very large or infinite support range. The sequence of the number of events of each timestamp, {N_e^k}_{k=0,1,...,T−1}, is encoded by using the version TTP_e. Note that N_e^k ∈ [0, HW]; however, there is a very low probability of having a large majority of pixels triggered with the same timestamp. Therefore, because N_e^k is usually very small, TTP_e is designed to use the doublet threshold ∆_e = (δ_1, δ_2), as experiments show that a triple threshold does not improve the coding performance. Figure 3e shows the TTP_e range partitioning, where the values 0, 1, ..., δ_2 − 2 are encoded by R2, as the last value, δ_2 − 1 (having the binary representation of n_{δ_2} bits of 1, i.e., 11...1), signals the use of R6 to encode |ε| − ∆_e − 2 by using a simple coding technique, the Elias gamma coding (EGC) [19]. Figure 3f shows the decision tree, where N_e^k = 0 (i.e., S_k = ∅) is encoded by the first bit of the decision tree.
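Since TTP_e falls back to Elias gamma coding for the rare large values, a minimal EGC encoder is sketched below. It emits bits as '0'/'1' characters into a string purely for readability; a real implementation would pack them into a bitstream.

```c
/* Minimal Elias gamma encoder: a positive integer n with k = floor(log2 n)
   is written as k zero bits followed by the (k+1)-bit binary form of n.
   Bits are emitted as characters into buf; returns the bit count. */
int elias_gamma(unsigned n, char *buf) {
    int k = 0;
    while ((n >> (k + 1)) != 0) k++;     /* k = floor(log2 n) */
    int len = 0;
    for (int i = 0; i < k; i++)
        buf[len++] = '0';                /* unary length prefix */
    for (int i = k; i >= 0; i--)
        buf[len++] = ((n >> i) & 1) ? '1' : '0';   /* binary part */
    buf[len] = '\0';
    return len;
}
```

For example, n = 5 (binary 101) is emitted as 00101: two zeros announcing a 3-bit value, followed by the value itself.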
Finally, TTP_L is designed to encode the length of the package bitstream B_ℓ, denoted as L_ℓ (see Section 3.3.3). TTP_L defines seven partition intervals by using two triple thresholds: ∆_S = (δ_1^S, δ_2^S, δ_3^S) is used for encoding small errors using R1S, R2S, and R3S, and ∆_L = (δ_1^L, δ_2^L, δ_3^L) is used for encoding large errors using R1L, R2L, and R3L. Similarly to TTP_e, R6 is signalled in R3L by using the last value δ_3^L − 1, and |ε| − ∆_S − ∆_L − 2 is encoded by employing EGC [19].

Proposed Method
The proposed method, LLC-ARES, employs the proposed representation to generate the set of same-timestamp subsequences, {S k } k=0,1,...,T −1 (see Section 3.1). Subsequence S k is encoded as bitstream B t k by using Algorithm 3, which employs the proposed coding scheme, TTP (see Section 3.2). The compressed file collects these bitstreams as B = [B t 0 B t 1 · · · B t T −1 ].

Prediction
To be able to employ each one of the four algorithm variations, TTP_x, TTP_y, TTP_e, and TTP_L, four types of predictions, N̂_e^k, (x̂_r^k, ŷ_r^k), x̂_i^k, and L̂_ℓ, are computed by using the following set of equations. In (2), the prediction for the spatial information of the first event, e_1^0, in the same-timestamp subsequence S_k is set as the sensor's centre (H/2, W/2), whereas the rest of the values depend on the first event e_1^κ of the previously nonempty same-timestamp subsequence S_κ. In (3), if y_i^k is small, x̂_i^k is set as the median of a small prediction window of size w_1; otherwise, it is set as the median of a larger prediction window of size w_2. In our work, we set the parameters as follows: τ_e = 10, τ_x = 2^3 + 2^4, τ_y = 3, w_1 = 5, w_2 = 15.

Random Access Functionality
LLC-ARES-RA is an LLC-ARES version which provides RA to any time window of size ∆_RA. Hence, S_T is now divided into P = T/∆_RA packages of ∆_RA time-length, denoted S_T = {S_ℓ}_{ℓ=1,2,...,P}. The proposed LLC-ARES is employed to encode each package S_ℓ as the bitstream set {B_{t_k}}_{k=0,1,...,∆_RA−1}, which is collected as the package bitstream, B_ℓ = [B_{t_0} B_{t_1} ··· B_{t_{∆_RA−1}}], having an L_ℓ bit length. The TTP_L version is employed to encode L_ℓ by using the prediction L̂_ℓ, computed using (4), and the two triple thresholds ∆_S and ∆_L, and to generate the header bitstream, B_H, as depicted in Figure 1. Figure 5 presents in detail the workflow of encoding an asynchronous event sequence of 2 µs time-length, containing 23 triggered events, by using the proposed LLC-ARES method. The input sequence received from the event sensor is initially represented by using the EE order. The proposed sequence representation is employed by first grouping and then rearranging the asynchronous event sequence by using the ST order. Because the input sequence contains two timestamps, the ST order consists of the same-timestamp subsequence S_0 of 10 events and the same-timestamp subsequence S_1 of 13 events. LLC-ARES encodes each data structure by using different TTP variations as described in Algorithm 3.
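The benefit of storing the package bit lengths in the header is that a decoder can seek directly to any time window without decoding the preceding packages. A hypothetical helper (the name and layout are illustrative, not taken from the paper) shows the idea: the offset of a package is simply the sum of the lengths of the packages before it.

```c
/* Illustrative random-access helper: given the per-package bit lengths
   decoded from the header, return the bit offset of package l (0-based)
   inside the concatenated package bitstream. */
long ra_offset(const long *pkg_len, int l) {
    long off = 0;
    for (int i = 0; i < l; i++)
        off += pkg_len[i];   /* skip the packages before window l */
    return off;
}
```

With three packages of 100, 200, and 300 bits, seeking to the third package means skipping the first 300 bits, which the helper computes from the header alone.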

Figure 5. The encoding workflow using the proposed LLC-ARES method for an asynchronous event sequence of 2 µs time-length, containing 23 events. The input sequence, represented by using the EE order, is first grouped and rearranged by using the proposed ST order. LLC-ARES encodes each data structure by using different TTP variations as an output bitstream of 316 bits stored by using 40 bytes, i.e., 40 numbers having an 8-bit representation.

Experimental Setup
In our work, the experimental evaluation is carried out on the large-scale outdoor stereo event camera dataset DSEC [33]. It contains 82 asynchronous event sequences captured for network training (training data) by using the Prophesee Gen3.1 event sensor, having a W × H = 640 × 480 pixel resolution, placed on top of a moving car. All results reported in this paper use the DSEC asynchronous event sequences sorted in the ascending order of their event acquisition density. By driving at different speeds and in different outdoor scenarios, the DSEC sequences provide a highly variable density of events (see Figure 6a, in which one can see that the event density varies between 5 and 30 Mevps). Figure 6b depicts the cumulated number of events over the first 10 s of the DSEC sequences having the lowest, medium, and highest acquired event density shown in Figure 6a. To limit the runtime of the state-of-the-art codecs, for each event sequence, only the first T = 10^8 µs (100 s) of captured event data are encoded in this work. The DSEC dataset is made publicly available online [34]. The proposed method, LLC-ARES, is implemented in the C programming language. The LLC-ARES-RA version is tested by using a time window ∆_RA of 10^2 µs, 10^3 µs, and 10^4 µs, where for each event sequence only the first T = 10^7 µs of captured event data are encoded. The raw data size is computed by using the sensor specification of 8 B per event.
The compression results are compared by using the following metrics: (c1) Compression ratio (CR), defined as the ratio between the raw data size and the compressed file size; (c2) Relative compression (RC), defined as the ratio between the compressed file size of a target codec and the compressed file size of LLC-ARES; and (c3) Bit rate (BR), defined as the ratio between the compressed file size in bits and the number of events in the asynchronous event sequence, measured in bits per event (bpev), e.g., raw data has 64 bpev.
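The three compression metrics can be written out directly; the helper names are ours, and the raw size follows the 8 B/event sensor representation stated above.

```c
/* Compression metrics from the text, with sizes in bytes:
   CR = raw / compressed, RC = target codec / LLC-ARES,
   BR = compressed bits per event (raw data is 64 bpev). */
double cr(double raw_bytes, double comp_bytes)    { return raw_bytes / comp_bytes; }
double rc(double target_bytes, double ours_bytes) { return target_bytes / ours_bytes; }
double br(double comp_bytes, double n_events)     { return comp_bytes * 8.0 / n_events; }
```

For example, 1000 events occupy 8000 B raw; compressing them to 1000 B gives CR = 8 and BR = 8 bpev.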
The runtime results are compared by using the following metrics: (t1) Event density (ρ E ), defined as the ratio between the number of events in the asynchronous event sequence and the encoding/acquisition time, measured in millions of events per second (Mevps); (t2) Time ratio (TR), defined as the ratio between the data acquisition time and the codec encoding time; and (t3) Runtime, defined as the ratio between the encoding/decoding time (µs) and the number of events.
One can note that a comparison with [12] was not possible, as the codec is not publicly available and the dataset is made available only for academic research purposes. Figure 7 shows the CR results and Figure 8 shows the BR results over DSEC [34], where the asynchronous event sequences are sorted in ascending order of the sequence acquisition density. One can note that, for the state-of-the-art methods, the proposed ST order provides an improved performance of up to 96% compared with the sensor's EE order. LLC-ARES (designed for low-power chip integration) provides an improved performance compared with all state-of-the-art codecs (designed for SoC integration) over the sequences having a small and medium event density, and a close performance over the sequences having a high event density, as more complex coding techniques are employed by the traditional lossless data compression methods. Table 1 shows the average CR and BR results over DSEC [34]. One can note that, compared with the state-of-the-art performance-oriented lossless data compression codecs, Bzip2, LZMA, and ZLIB, the proposed LLC-ARES codec provides the following:

Compression Results
(i) an average CR improvement of 5.49%, 11.45%, and 35.57%, respectively; (ii) an average BR improvement of 7.37%, 13.40%, and 37.12%, respectively; and (iii) average bit savings of 1.09 bpev, 1.99 bpev, and 5.50 bpev, respectively. Figure 9 shows the event density results and Figure 10 shows the TR results over DSEC, where the asynchronous event sequences are sorted in ascending order of the sequence acquisition density. One can note that, compared with the runtime performance of the state-of-the-art codecs, LLC-ARES provides a performance much closer to real time for all sequences, and an outstanding performance for the sequences having a high event density. More exactly, LLC-ARES provides a much faster coding speed than the state of the art for the case of a high event acquisition density. Even when the asynchronous event sequences have a very low event acquisition density, LLC-ARES provides an encoding speed as close as approximately 90% of the real-time performance (see Figure 10). Moreover, the software implementation was not optimized, as it can be further improved by an expert software developer to provide an improved runtime performance when deployed on an ESP chip. Table 1 shows the average event density and TR results over DSEC. One can note that, compared with the state-of-the-art lossless data compression codecs, Bzip2, LZMA, and ZLIB, the proposed LLC-ARES codec provides the following:

Runtime Results
(i) an average event density improvement of 234×, 412×, and 2086×, respectively; and (ii) an average TR improvement of 216×, 401×, and 1969×, respectively. Figures 11 and 12 show the encoding and decoding runtime over DSEC, respectively. Note that LLC-ARES is a symmetric codec, wherein the encoder and decoder have similar complexity and runtime, whereas the traditional state-of-the-art lossless data compression methods are asymmetric codecs, as the encoder is much more complex than the decoder. Table 2 presents the average results over DSEC by using the EE order and the proposed ST order. Note that the LLC-ARES performance is approximately 10 µs/ev for both encoding and decoding, while the traditional state-of-the-art lossless data compression methods achieve an encoding time between 135% and 515% higher than LLC-ARES and a decoding time between 92% lower and 58% higher than LLC-ARES.
The implementation of LLC-ARES was not optimized, as the implemented method must be redesigned for integration into low-power chips. These experimental results show that a proof-of-concept implementation of the algorithm on a CPU machine provides an improved performance compared with the state-of-the-art methods when tested on the same experimental setup. Please note that only LLC-ARES employs simple coding techniques, so that it is suitable for hardware implementation into low-power ESP chips. Figure 13 shows the RC results over DSEC, where the asynchronous event sequences are sorted in ascending order of the sequence acquisition density. One can note that the RC results are quite similar, as the size of the header bitstream is negligible compared with the time-window sequence bitstream. When providing RA to the smallest tested time window of ∆_RA = 100 µs, compared with LLC-ARES, the coding performance of the proposed LLC-ARES-RA method decreases by less than 0.19% when the encoded header information is stored in memory, and by less than 0.35% when the decoded header information is stored in memory, denoted here as memory usage (MU) results. Figure 13. The relative compression (RC) results for RA over the DSEC dataset [34], wherein the asynchronous event sequences are sorted in ascending order of the sequence acquisition density.

Conclusions
In this paper, we proposed a novel lossless compression method for encoding the event data acquired by the new event sensor and represented as an asynchronous event sequence. The proposed LLC-ARES method is built based on a novel low-complexity coding technique so that it is suitable for hardware implementation into low-power ESP chips. The proposed low-complexity coding scheme, TTP, creates short-depth decision trees to reduce either the binary representation of the residual error computed based on a simple prediction, or the binary representation of the true value. The proposed event representation employs the novel ST order, whereby same-timestamp events are first grouped into same-timestamp subsequences, and then reordered to improve the coding performance. The proposed LLC-ARES-RA method provides RA to any time window by employing a header structure to store the length of the bitstream packages.
To our knowledge, the paper proposes the first low-complexity lossless compression method for encoding asynchronous event sequences that is suitable for hardware implementation into low-power chips.

Conflicts of Interest:
The authors declare no conflict of interest.

Abbreviations
The following abbreviations are used in this manuscript: