Interactive Graph Stream Analytics in Arkouda

Abstract: Data from emerging applications, such as cybersecurity and social networking, can be abstracted as graphs whose edges are updated sequentially in the form of a stream. The challenge of interactive graph stream analytics is responding quickly to user queries over terabytes (and beyond) of graph stream data. In this paper, a succinct and efficient double-index data structure is designed to build the sketch of a graph stream to support general queries. A single-pass stream model, which includes general sketch building, distributed sketch-based analysis algorithms and regression-based approximate solution generation, is developed, and a typical graph algorithm, triangle counting, is implemented to evaluate the proposed method. Experimental results on power-law and normal-distribution graph streams show that our method can generate accurate results (mean relative error less than 4%) with high performance. All our methods and code have been implemented in an open-source framework, Arkouda, and are available from our GitHub repository, Bader-Research. This work provides the large and rapidly growing Python community with a powerful way to handle terabyte and beyond graph stream data using their laptops.


Introduction
More and more emerging applications, such as social networks, cybersecurity and bioinformatics, have data that often come in the form of real-time graph streams [1]. Over its lifetime, the sheer volume of a stream could be petabytes or more; for example, network traffic analysis in the IPv6 network must cope with an address space of 2^128 nodes. These applications motivate the challenging problem of designing succinct data structures and real-time algorithms that return approximate solutions. Different stream models [2][3][4][5][6][7], which allow access to the data in only one pass (or multiple passes in semi-streaming algorithms), have been developed to answer different queries using significantly reduced, sublinear space. The accuracy of such models' approximate solutions is often guaranteed by a pair of user-specified parameters, ε > 0 and 0 < δ < 1: a (1 + ε) approximation ratio is achieved with probability (1 − δ). This literature provides the theoretical foundation for solving the challenging large-scale stream data problem.
However, ignoring the hidden constants in the theoretical space and time complexity analysis can cause unacceptable performance problems in real-world applications. When different applications, such as from social, economic and natural domains, have to handle real-time large data streams as routine work, the practical performance for real-world applications becomes an even more important and urgent requirement.
At the same time, graph streams need exploratory data analysis (EDA) [8][9][10]. Instead of checking a given hypothesis against the data, EDA primarily seeks what the data can tell us beyond formal modeling or hypothesis testing. In this way, EDA tries to maximize the value of data. Popular EDA methods and tools, which often run on laptops or common personal computers, cannot hold terabyte or larger graph datasets, let alone produce highly efficient analysis results. Arkouda [11,12] is an EDA framework under early development that brings together the productivity of Python with world-class high-performance computing. If a graph stream algorithm can be integrated into Arkouda, data scientists can take advantage of both laptop computing and cloud/supercomputing to perform interactive data analysis at scale.
In this work, we propose a single pass stream model and develop a typical algorithm for interactive graph stream analytics that can meet practical performance requirements with high accuracy. Furthermore, the proposed model and algorithm will be integrated into Arkouda to support the Python community to handle graph streams efficiently. The major contributions of this paper are as follows:

1. A succinct Double-Index (DI) data structure is designed to build the sketch of a large graph stream. The DI data structure can support an edge-based sparse graph partition to achieve load balance and O(1) time complexity when searching for the incident vertices of a given edge or the adjacency list of a given vertex.

2. A single-pass regression analysis stream model is developed to support general graph stream analysis. Our model can map a large graph stream into a much smaller, user-specified working space to build a general graph stream sketch. Our method can improve query quality by dividing the sketch into multiple partitions. A general regression method is proposed to generate the approximate solution based on the partial query results from the different partitions.

3. A typical graph algorithm, triangle counting [13], is developed to evaluate the efficiency of the proposed stream regression model. Experimental results using two typical distributions, power-law and normal, show that our method can achieve high accuracy with a very low (less than 4%) mean relative error.

4. All the proposed methods have been integrated into an open-source framework, Arkouda, to evaluate their practical end-to-end performance. This work can help the large and popular data analytics community that exists within Python to conduct interactive graph stream analytics easily and efficiently on terabytes, and beyond, of graph stream data.

Related Work
There are two approaches to handling graph streams: the exact method and the approximate method. The exact method needs very large memory to hold the complete graph stream, which is often not feasible on systems with limited hardware resources. The approximate method uses very limited (sublinear) memory but can only provide approximate solutions. In this section, we introduce the related approximate methods, exact methods and different triangle counting algorithms.

Graph Stream Sketch
A much smaller sketch allows for many queries over the large graph stream to obtain approximate results efficiently. Therefore, how to build a graph stream's sketch is of fundamental importance for graph stream analytics. There are several methods that build the sketch by treating each stream element independently without keeping the relationships among those elements. For example, CountMin [4] allows fundamental queries in data stream summarization such as point, range, and inner product queries. It can also be used to find quantiles and frequent items in a data stream. Bottom K sketch [5] places both priority sampling and weighted sampling, without replacement, within a unified framework. It can generate tighter bounds when the total weight is available. gSketch [6] introduces the sketch partitioning technique to estimate and optimize the responses to basic queries on graph streams. By exploiting both data and query workload samples, gSketch can achieve better query estimation accuracy than that achieved with only the data sample. In fact, we borrow the sketch partitioning idea in our implementation of the Double-Index (DI) sketch. However, these existing sketches focus on ad-hoc problems (they can only solve the proposed specific problems instead of general problems), so they cannot support general data analytics over graph streams.
TCM sketch [7] uses a graphical model to build the sketch of a graph stream. This means that TCM is a general sketch that allows complicated queries. However, TCM's focus is setting up a general sketch theoretically, instead of optimizing the practical performance for real-world datasets. Our Double-Index (DI) sketch is especially designed for real-world sparse graph streams and it can achieve a high practical performance.

Complete Graph Stream Processing Method
Several dynamic graph management and analytics solutions are as follows. Aspen [14] takes advantage of a purely functional tree data structure, C-trees, to implement quick graph updates and queries. LLAMA's streaming graph data structure [15] is motivated by the CSR format. However, unlike Aspen, LLAMA is designed for batch processing in the single-writer multi-reader setting and does not provide graph mutation support. GraphOne [16] can run queries on the most recent version of the graph while processing updates concurrently by using a combination of an adjacency list and an edge list.
Systems such as LLAMA, Aspen and GraphOne focus on designing efficient dynamic graph data structures, and their processing units do not support incremental computation. KickStarter [17] maintains the dependency information in the form of dependency trees, and performs an incremental trimming process that adjusts the values based on monotonic relationships. GraphBolt [18] incrementally processes streaming graphs and minimizes redundant computations upon graph mutations while still guaranteeing synchronous processing semantics. DZiG [19] is a high-performance streaming graph processing system that retains efficiency in the presence of sparse computations while still guaranteeing BSP semantics. However, all such methods have the challenging requirement that the memory must be large enough to hold all the streams. For very large streams, this is often not feasible.

Triangle Counting Algorithm
Triangle counting is a key building block for important graph algorithms such as transitivity and K-truss. There are two kinds of triangle counting algorithms, exact triangle counting for static graphs and approximate triangle counting for graphs built from dynamic streams [1]. Since the proposed DI sketch is a graphical model, all the static triangle counting algorithms can also be used on the small DI sketch.
Graph Challenge https://graphchallenge.mit.edu/challenges (accessed on 20 July 2021) is an important competition for large graph algorithms (including triangle counting). Starting from 2017, many excellent static triangle counting algorithms have been developed. They target three hardware platforms: shared memory, distributed memory and GPU.
The shared memory methods take advantage of fast parallel packages, such as Kokkos Kernels [20] or Cilk [21], to improve their performance. The GPU methods [22][23][24], by contrast, use the massively parallel fine-grained hardware threads of a GPU to improve performance. Distributed memory triangle counting focuses on very large graphs that cannot fit in a single node's memory. Heuristics [25], optimized communication libraries [26] and graph structures [27] are used to improve performance. We leverage the ideas in such methods to develop our Chapel-based multi-locale triangle counting algorithm.
The basic idea of graph stream analysis is to estimate the exact query result of a graph stream based on sampling results. Colorful triangle counting [28] is an example. However, it needs to know the number of triangles and the maximum number of triangles on an edge to set the probability value, which is not feasible in practical applications. Reduction-based [29] triangle counting is a typical method that can produce a theoretical algorithm based on user-specified values (ε, δ). Such a method often cannot be used satisfactorily in practical applications because hidden constant values often impact the performance. Neighborhood sampling [30] is another method for triangle counting with significant space and time complexity improvements. Specifically, Braverman et al. [31] discuss the difficulty of the triangle counting algorithm in a streaming model. Other sampling methods, such as [32], have space usage that depends on the ratio of the number of triangles to the number of triples, or require the edge stream to arrive in a specific order. Jha et al. [33] apply the birthday paradox theory to sampled data to estimate the number of triangles in a graph stream.

Arkouda Framework
Arkouda is an open-source framework that aims to support flexible, productive, and high performance large scale data analytics. The basic building components of Arkouda include three parts: the front-end (Python [34]), the back-end (Chapel [35]) and a middle communication layer (ZeroMQ [36]). Python is the interface for end users to utilize the different functions from Arkouda. All large data analysis and high performance computing (HPC) tasks are executed at the back-end (server) without users needing to know the algorithmic implementation details. ZeroMQ is used for the data and instruction exchanges between Python users and the Chapel back-end services.
In order to enable exploratory data analysis on large scale datasets in Python, Arkouda divides its data into two physical sections. The first section is the metadata, which only include attribute information and occupy very little memory space. The second section is the raw data, which include the actual big datasets to be handled by the back-end. However, from the view of the Python programmers, all data are directly available just as on their local laptop device. This is why Arkouda can break the limit of local memory capacity, while at the same time bringing traditional laptop users powerful computing capabilities that could previously only be provided by supercomputers.
When users are exploring their data, if only the metadata section is needed, then the operations can be completed locally and quickly. These actions are carried out just as in previous Python data processing workflows. If the operations have to be executed on raw data, the Python program will automatically generate an internal message and send the message to Arkouda's message processing pipeline for external and remote help. Arkouda's message processing center (ZeroMQ) is responsible for exchanging messages between its front-end and back-end. When the Chapel back-end receives the operation command from the front-end, it will execute the analysis task quickly on the powerful HPC resources and large memory to handle the corresponding raw data and return the required information back to the front-end. Through this, Arkouda can support Python users to locally handle large scale datasets residing on powerful back-end servers without knowing all the detailed operations at the back-end.

Figure 1 is a schematic diagram visualizing how Arkouda can bridge the popularity of Python and the power of HPC together to enable EDA at scale. In this figure, from the conceptual level view, Arkouda has three loosely coupled components. The different components can be designed and implemented independently as long as they follow the same information exchanging interface. Although Arkouda is implemented using Chapel, ZeroMQ and Python, the basic concepts can be implemented with different languages and packages. From the view of data, the data are classified into two categories, metadata and raw data. Correspondingly, we simply refer to the operations that can be done locally and quickly as light operations. The operations that need help from the remote server or data center are referred to as heavy operations. Arkouda can automatically organize the data and the corresponding operations to provide a better performance and user experience.
In this paper, we discuss the integration of our graph stream sketch, model and algorithm into Arkouda to enable popular and high level Python users to conduct interactive graph stream analytics.

Edge Index and Vertex Index
We propose a Double-Index (DI) data structure to support quick search from a given edge to its incident vertices or from a given vertex to its adjacency list. The two index arrays are called the edge index array and the vertex index array. Furthermore, our DI data structure requires a significantly smaller memory footprint for sparse graphs.
The edge index array consists of two arrays of the same shape: the source vertex array and the destination vertex array. If there are M edges and N vertices in total, we use the integers 0 to M − 1 to identify the edges and the integers 0 to N − 1 to identify the vertices. For example, given edge e = (i, j), we let SRC[e] = i and DST[e] = j, where SRC is the source vertex array, DST is the destination vertex array, and e is the edge ID. Both SRC and DST have size M. When all edges are stored in SRC and DST, we sort them based on their combined vertex ID value (SRC[e], DST[e]) and remap the edge IDs from 0 to M − 1. Based on the sorted edge index array, we can build the vertex index array, which also consists of two arrays of the same shape. For example, in Figure 2, let edge e_1000 have ID 1000, and let e_1000 = (50, 3), e_1001 = (50, 70) and e_1002 = (50, 110) be all the edges starting from vertex 50 (a directed graph). Then we let one vertex index array entry be STR[50] = 1000 and another be NEI[50] = 3. This means that, for the given vertex ID 50, the edges starting with vertex 50 are stored in the edge index array starting at position 1000, and there are a total of 3 such edges. If there are no edges from vertex i, we let STR[i] = −1 and NEI[i] = 0. In this way, we can directly search the neighbors or adjacency list of any given vertex. Our DI data structure can also support graph weights. If each vertex has a weight, we use an array, V_WEI, to express the weight values. If each edge has a weight, we use an array, E_WEI, to store the different weights. Based on the NEI and STR vertex index arrays, we can find the neighbor vertices or the adjacency list in O(1) time complexity. For a given edge e = (i, j), it is easy to find its incident vertices i in SRC[e] and j in DST[e], also in O(1) time complexity.
Regarding storage, if the graph has M edges and N vertices, we need 2M memory to store all the edges. Compared with a dense matrix data structure, which needs N^2 memory to store all the edges, this is much smaller. To improve adjacency list search performance, we use an additional 2N memory to store the NEI and STR arrays. Figure 2 shows M sorted edges represented by the SRC and DST arrays. Any one of the N vertices n_k can find its neighbors using the NEI and STR arrays with O(1) time complexity. For example, if vertex j's starting index in SRC is 1000 and it has three adjacent edges, then those edges can be found starting from index position 1000 in the SRC and DST arrays using the NEI and STR arrays directly. This figure shows how the NEI and STR arrays can help us locate neighbors and adjacency lists quickly.
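The DI build described above can be sketched in a few lines of Python. This is an illustration of the idea, not the Chapel implementation used in Arkouda; the array names (SRC, DST, NEI, STR) follow the text, and the directed graph is assumed to be given as a list of (src, dst) pairs with vertex IDs in 0..N−1.

```python
def build_di(edges, n_vertices):
    # Sort edges by (src, dst); an edge's new ID is its position after sorting.
    edges = sorted(edges)
    SRC = [i for i, _ in edges]
    DST = [j for _, j in edges]
    NEI = [0] * n_vertices          # number of out-neighbors per vertex
    STR = [-1] * n_vertices         # start position in SRC/DST, -1 if none
    for e, v in enumerate(SRC):
        if STR[v] == -1:
            STR[v] = e
        NEI[v] += 1
    return SRC, DST, NEI, STR

def neighbors(v, DST, NEI, STR):
    # O(1) to locate the adjacency list; copying it out is O(degree).
    if STR[v] == -1:
        return []
    return DST[STR[v]:STR[v] + NEI[v]]
```

With the example from the text, `build_di([(50, 3), (50, 70), (50, 110), ...], N)` yields STR[50] pointing at the first edge out of vertex 50 and NEI[50] = 3, so `neighbors(50, ...)` returns [3, 70, 110].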

Time and Space Complexity Analysis
For an undirected graph, an edge (i, j) means that we can also arrive at i from j. We could use the data structures SRC, DST, STR, NEI to search for the neighbors of j in SRC; however, this search cannot be performed in O(1) time complexity. To improve search performance, we introduce another four arrays, called the reversed arrays: SRCr, DSTr, STRr, NEIr. For any edge (i, j) with vertex i in SRC and vertex j in DST, we store the corresponding reverse edge (j, i) in SRCr and DSTr, where SRCr has exactly the same elements as DST, and DSTr has exactly the same elements as SRC. SRCr and DSTr are also sorted, and NEIr and STRr are the array of the number of neighbors and the array of the starting neighbor index, just as for a directed graph. So, for a given vertex i of an undirected graph, the neighbors of vertex i include the elements found through both the forward index arrays (SRC, DST, STR, NEI) and the reversed index arrays (SRCr, DSTr, STRr, NEIr).

Given a directed graph with M edges and N vertices, our data structure needs 2(M + N) integers (64 bits each) of storage, i.e., 16(M + N) bytes. For an undirected graph, we need twice the storage of a directed graph. For weighted vertices and weighted edges, an additional N and M integers of storage are needed, respectively.
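The reversed-array construction can be illustrated as follows. This is a self-contained Python sketch (again, not the Arkouda/Chapel code): for every stored edge (i, j), the reversed arrays store (j, i), re-sorted, with their own NEI/STR index arrays, so the full undirected neighbor set of a vertex is the union of its forward and reversed adjacency lists.

```python
def index_arrays(src, n):
    # Build (NEI, STR) for an already-sorted source array.
    nei, start = [0] * n, [-1] * n
    for e, v in enumerate(src):
        if start[v] == -1:
            start[v] = e
        nei[v] += 1
    return nei, start

def build_undirected(edges, n):
    fwd = sorted(edges)                       # edges (i, j) as given
    rev = sorted((j, i) for i, j in edges)    # reversed copies (j, i)
    SRC, DST = [e[0] for e in fwd], [e[1] for e in fwd]
    SRCr, DSTr = [e[0] for e in rev], [e[1] for e in rev]
    NEI, STR = index_arrays(SRC, n)
    NEIr, STRr = index_arrays(SRCr, n)
    return (SRC, DST, NEI, STR), (SRCr, DSTr, NEIr, STRr)

def undirected_neighbors(v, fwd, rev):
    # Union of the forward and reversed adjacency lists of v.
    out = []
    for (src, dst, nei, start) in (fwd, rev):
        if start[v] != -1:
            out.extend(dst[start[v]:start[v] + nei[v]])
    return out
```

For example, with edges [(0, 1), (1, 2)], vertex 1 finds neighbor 2 through the forward arrays and neighbor 0 through the reversed arrays.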

Edge Oriented Sparse Graph Partition
For real-world power-law [37][38][39] graphs, the edge and vertex distributions are highly skewed. A few vertices have very large degrees, but many vertices have very small degrees. If we partition the graph evenly based on vertices, a load balancing problem easily arises because a processor holding vertices with a large number of edges will often have a very heavy load. So, we instead divide the total number of edges equally among the computing nodes. Figure 3 shows the basic idea of our sparse graph partition method. The edge arrays SRC and DST are distributed by BLOCK onto different processors to make sure most processors have the same load. Assigning an edge's vertex entries in the index arrays NEI and STR to the same processors would increase locality when searching from edge to vertex or from vertex to edge. However, this would require distributing NEI and STR in an irregular way, since the number of elements assigned to different processors may differ greatly. In our current implementation, we simply partition the NEI and STR arrays evenly, just like the edge arrays.
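The edge-oriented BLOCK partition amounts to splitting the M edge IDs into near-equal contiguous ranges, one per locale. A minimal Python sketch of this split (Chapel's Block distribution does the equivalent in the real implementation):

```python
def edge_blocks(m_edges, n_locales):
    # Split edge IDs 0..m_edges-1 into n_locales contiguous, near-equal ranges.
    base, extra = divmod(m_edges, n_locales)
    blocks, lo = [], 0
    for loc in range(n_locales):
        hi = lo + base + (1 if loc < extra else 0)
        blocks.append((lo, hi - 1))   # inclusive edge-ID range for this locale
        lo = hi
    return blocks
```

Because edges sharing a source vertex are contiguous after sorting, each block holds mostly complete adjacency runs, which is what gives the partition its locality.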

Comparison with CSR
The compressed sparse row (CSR) format, also called compressed row storage (CRS) or the Yale format, has been widely used to represent a sparse matrix in a much smaller space. Our double-index data structure has some similarity with CSR. The value array of CSR is just like the edge weight array in DI; the column array of CSR is the same as the DST array in DI; the row array of CSR is very close to STR in DI. CSR is a vertex-oriented data structure that supports quick search from any vertex to its adjacency list; DI provides the same function.
The major difference between DI and CSR is that DI provides an explicit mapping from an edge ID to its incident vertices, while CSR does not. This difference has two effects: (1) DI supports quick search from any edge ID to its incident vertices, whereas CSR cannot, because CSR does not provide the source vertex of a given edge ID; (2) DI supports an edge-oriented graph partition (see Section 4.3) based on the explicit edge index arrays to achieve load balance, whereas CSR does not.
Another difference is that, in DI, we use an array NEI to explicitly represent the number of neighbors of a given vertex, removing the ambiguous meaning of the row index array in CSR. NEI could be removed if we extended the STR array by one element and relied on the multiple meanings of CSR's row index array. In our DI data structure, the meaning of STR[v] is unambiguous: it is the starting position of vertex v in the edge index array, or −1 if v has no edges. In CSR, by contrast, an element of the row index array may be the starting position of vertex v, the starting position of the next non-zero element after v, or the total number of non-zero elements. So, whenever we need to be sure that STR[v] really is the starting position of v, an additional check would be required, because the value itself cannot provide such information. When we use the DI data structure to express a much smaller graph sketch, the total number of vertices N is much smaller than before. So, in DI, we use an additional size-N array NEI (much smaller than the original graph) to ensure the parallel operations on STR have clear semantics and are easy to understand.

Single Pass Regression Analysis Stream Model
Our stream model has the following features: (1) Only one pass is needed to build the graph sketch; (2) A parameter, the shrinking factor, is introduced to allow users to directly control the size of a sketch, so a sketch can be built within a user-specified, much smaller space; (3) The graph stream sketch is divided into three different partitions (we also refer to these three independent small graphs as sub-sketches); this helps us avoid global heterogeneity and skewness while taking advantage of local similarity; (4) Our sketch is expressed as a graph, so exact graph analysis methods for a complete graph can also be used on our sub-sketches; (5) A general multivariate linear regression model is used to generate the approximate solution based on the results from the different sub-sketches. The regression method can be employed to support general graph analysis methods.
The framework of our stream model includes three parts: (1) Mapping a graph stream into a much smaller graph sketch in one pass. The sketch can have multiple independent partitions or sub-sketches; (2) Developing an exact algorithm on different sub-sketches to generate partial solutions; (3) Developing a regression model to generate an approximate solution based on different partial solutions.
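Step (3) can be sketched as follows. This is an illustrative, hedged version of the regression idea, under the assumption that each training stream supplies a feature row of partial counts (one per sub-sketch) together with its known true count; the paper's actual feature set and fitting procedure may differ. The coefficients are fit via the normal equations, solved here with plain Gaussian elimination to keep the sketch dependency-free.

```python
def fit_linear(X, y):
    # Solve (X^T X) beta = X^T y; X is a list of feature rows, y the targets.
    k = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * t for r, t in zip(X, y)) for i in range(k)]
    for col in range(k):                      # Gaussian elimination, partial pivoting
        p = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[p] = A[p], A[col]
        b[col], b[p] = b[p], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * k                          # back substitution
    for r in range(k - 1, -1, -1):
        s = sum(A[r][c] * beta[c] for c in range(r + 1, k))
        beta[r] = (b[r] - s) / A[r][r]
    return beta

def predict(beta, row):
    # Approximate solution from one stream's partial (sub-sketch) results.
    return sum(w * x for w, x in zip(beta, row))
```

Once beta is fit, `predict(beta, [head_count, mid_count, tail_count])` combines the three partial results into one approximate answer.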

Sliding Window Stream
We first describe our method when the stream comes from a sliding window, which means that only the last M edges falling into the given time window will be considered.
We define the shrinking factor SF (a positive integer) to constrain the space allocation of a stream sketch. This means that the number of edges and vertices of the graph sketch will be 1/SF of the original graph. Existing research on sketch building [7] shows that methods based on multiple pairwise-independent hash functions [4] can often generate better estimates because they reduce the probability of hash collisions. In our solution, we sample different parts of the given sliding window instead of the complete stream, and then map the sampled edges into multiple (here, three) independent partitions to completely avoid collisions. We name the three partitions the Head, Mid, and Tail partitions (or sub-sketches), respectively. Each partition is assigned 1/(3SF) of the space of the original graph. Since we divide the sketch into three independent partitions, we select a very simple hash function. For a given sliding window, let the total numbers of edges and vertices in the window be M and N. Then each partition has Partition_M = M/(3SF) edges and Partition_N = N/(3SF) vertices. For any edge e = (i, j) from the original graph, we map the edge ID e to mod(e, Partition_M). At the same time, its two vertices i and j are mapped to mod(i, Partition_N) and mod(j, Partition_N). If e < M/3, we map e = (i, j) to the Head partition. If e ≥ 2M/3, we map it to the Tail partition. Otherwise, we map it to the Mid partition. Figure 4 is an example that shows how we completely map 3,000,000 edges in a given sliding window into a sketch with three partitions. The edges from 1,000,000 to 1,999,999 are mapped to the Head partition p0, which has 1000 edges. The edges from 2,000,000 to 2,999,999 are mapped to the Mid partition p1, and the edges from 3,000,000 to 3,999,999 are mapped to the Tail partition p2. Each partition is expressed as a graph.
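The one-pass mapping can be sketched as a small Python function. It assumes window-relative edge IDs 0..M−1 (edge position in the window picks the Head/Mid/Tail partition, and mod() picks the slot within it); this is an illustration of the scheme, not the Arkouda code.

```python
def map_edge(e, i, j, M, N, SF):
    # M, N: edges and vertices in the window; SF: shrinking factor.
    pm = M // (3 * SF)            # Partition_M: edge slots per partition
    pn = N // (3 * SF)            # Partition_N: vertex slots per partition
    if e < M // 3:
        part = 0                  # Head sub-sketch
    elif e >= 2 * M // 3:
        part = 2                  # Tail sub-sketch
    else:
        part = 1                  # Mid sub-sketch
    # Slot within the chosen partition for the edge and its two vertices.
    return part, e % pm, i % pn, j % pn
```

With M = 3,000,000, N = 300,000 and SF = 1000, each partition holds 1000 edge slots and 100 vertex slots, matching the scale of the Figure 4 example.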
After the three smaller partitions are built, we sort the edges and create the DI data structure to store the partition graphs.
We map different parts of a stream into corresponding sub-sketches, which is very different from existing sketch building methods. Those methods map the same stream into different sketches with pairwise-independent hash functions, so each sketch is an independent summarization of the stream. In our method, however, one sub-sketch stands for only a part of the stream, and we use the different sub-sketches to capture the different local features of the stream. Our regression model (see Section 5.3) is responsible for generating the final approximate solution.
More partitions will help to capture more local features of a stream. However, we aim for a regression model that is as simple as possible. Too many partitions will make our regression model have more variables and become complicated. We select three as the default number of partitions because this can achieve sufficient accuracy based on our experimental results. Of course, for some special cases, we may choose more partitions but use the same proposed framework.

Insert-Only and Insert-Delete Streams
For the sliding window stream, we only need to care about the edges in the given window. However, for insert-only and insert-delete streams, we need to consider the edges that will come in the future. Here, we will describe the sketch updating mechanism in our stream model.
For the insert-delete stream, we introduce a new weight array in the sketch. If the total number of edges that can be held in one partition is Partition_M, then we introduce a weight array E_WEI of size Partition_M to store the latest update information for each edge. The initial value of E_WEI is zero. If a newly inserted edge is mapped to e, we let E_WEI[e] = E_WEI[e] + 1. If a deleted edge is mapped to e, we let E_WEI[e] = E_WEI[e] − 1. So, for any edge in the partition, if E_WEI[e] = 0, the edge has been deleted and we can safely remove it from our sketch. If E_WEI[e] > 1, multiple edges in the stream have been mapped to this edge. The insert-only stream is handled similarly, except that we never decrease the value of E_WEI.
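This bookkeeping can be sketched in a few lines. E_WEI[e] counts how many live stream edges currently map to sketch slot e; a zero count marks a slot with no live edge, which can safely be removed. The `updates` format here (pairs of slot and ±1) is an assumption for illustration.

```python
def apply_updates(E_WEI, updates, insert_only=False):
    # updates: iterable of (e, op) with op = +1 (insert) or -1 (delete).
    for e, op in updates:
        if insert_only and op < 0:
            continue              # insert-only streams never decrement
        E_WEI[e] += op
    # Slots with no live edges mapped to them (safe to remove from the sketch).
    return [e for e, w in enumerate(E_WEI) if w == 0]
```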
To improve sketch update performance, we use a bulk updating mechanism for both kinds of streams. When B new edges have arrived (they can be insert or delete edges), we divide B into three parts and map them into the three different partitions, just as in the sliding window stream. After the new edges are applied, we re-sort the edges and update the DI data structure to express the updated sub-sketches.

Edge-Vertex Iterator Based Triangle Counting Algorithm
We can directly employ existing exact graph algorithms on our sketch because the sketch is itself expressed as a graph. Here, we use a typical graph analysis algorithm, triangle counting, to show how we can develop optimized exact algorithms based on our DI data structure.
To improve the performance of a distributed triangle counting algorithm, two important problems are maintaining load balancing and avoiding remote access or communication as much as possible.
In Chapel, the locale type refers to a unit of the machine resources on which the program is running. Locales have the capability to store variables and to run Chapel tasks. In practice, for a standard distributed memory architecture, a single multicore/SMP node is typically considered a locale. Shared memory architectures are typically considered a single locale. In this work, we will develop a multiple-locale exact triangle counting algorithm for distributed memory clusters.
For power law graphs, a small number of vertices may have a high degree. So, if we divide the data based on number of vertices, it is easy to cause an unbalanced load. Our method divides the data based on the number of edges. At the same time, our DI data structure will keep the edges connected with the same vertex together. So, the edge partition method will result in good load balancing and data access locality.
However, if each locale directly employs the existing edge iterator [13] on its edges, the reverse edges of the undirected graphs are often not in the same locale. This will cause a new load balancing problem. So, we will first generate the vertices based on the edges distributed to different locales. Then, each locale will employ the vertex iterator to calculate the number of triangles. The combined edge-vertex iterator method is the major innovation for our triangle counting method on distributed systems. When we employ the high level parallel language Chapel to design the parallel exact triangle counting algorithm, there are two additional advantages: (1) Our DI data structure can work together with the coforall or forall parallel construct of Chapel to exploit the parallelism; (2) We can take advantage of the high level module Set provided by Chapel to implement the parallel set operation easily and efficiently.
At a high level, our proposed algorithm takes advantage of the multi-locale feature of Chapel to handle very large graphs in distributed memory. The distributed data are processed at their locales or their local memory. Furthermore, each shared memory compute node can also process their own data in parallel. The following steps are needed to implement the multi-locale exact triangle counting algorithm: (1) The DI graph data should be distributed evenly onto the distributed memory to balance the load; (2) Each distributed node only counts the triangle including the vertices assigned to the current node; (3) All the triangles calculated by different nodes should be summed together to obtain the exact number of triangles.
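The multi-locale algorithm itself is written in Chapel; the counting rule it applies can be sketched in plain Python. In this illustration (a shared-memory stand-in, assuming the sketch partition fits in one process), the edge list is split into per-"locale" chunks, each chunk intersects the adjacency sets of its edges' endpoints, and the partial counts are summed. Each triangle is seen once per edge, hence the final division by 3.

```python
def triangle_count(edges, n_chunks=2):
    # Build undirected adjacency sets.
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    chunk = max(1, (len(edges) + n_chunks - 1) // n_chunks)
    partial = []
    for c in range(0, len(edges), chunk):     # one "locale" per chunk
        partial.append(sum(len(adj[u] & adj[v])
                           for u, v in edges[c:c + chunk]))
    return sum(partial) // 3                  # each triangle counted 3 times
```

The set intersections mirror the Chapel version's use of the high-level Set module mentioned above.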
Our multi-locale exact triangle counting algorithm is described in Algorithm 1. For a given graph sketch partition G_sk = ⟨E_sk, V_sk⟩, we use an array subTriSum to keep each locale's result (line 2). In line 3 we use coforall instead of forall so that each loc in Locales executes the following code in parallel and we fully exploit the distributed computing resources. The code between line 3 and line 17 is executed in parallel on each locale. Each locale uses a local variable triCount to store its number of triangles (line 5). Lines 6 and 7 are important for load balancing. Assume edges e_start through e_end are assigned to the current locale; we obtain the corresponding vertex IDs StartVer = SRC[e_start] and EndVer = SRC[e_end] as the vertex interval the current locale will handle. Since different locales may hold different edges with the same starting vertex, the intervals [StartVer..EndVer] of different locales may overlap. At the same time, some starting vertices in the SRCr index array may not appear in SRC, so we must also make sure there is no "hole" in the complete interval [0..|V_sk| − 1]. In line 7, we ensure that the intervals cover all the vertices without overlapping.
Our method includes the following steps: (1) if the current locale's StartVer equals the previous locale's EndVer, one vertex's edges have been partitioned across two locales; we set StartVer = StartVer + 1 to avoid two locales counting the same vertex. (2) If the current locale's StartVer differs from the previous locale's EndVer by more than 1, there is a "hole" between the previous locale's vertices and the current locale's vertices, so we set the current locale's StartVer to the previous locale's EndVer + 1. (3) If the current locale is the last locale, we set its EndVer to the last vertex ID; if it is the first locale, we set StartVer = 0.
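As a rough illustration, the three rules can be sketched in plain Python (the actual logic lives in the Chapel kernel of Algorithm 1; the helper name and the list-of-pairs representation here are hypothetical):

```python
def fix_vertex_intervals(intervals, last_vertex):
    """Adjust per-locale (StartVer, EndVer) pairs so that together they
    cover [0..last_vertex] with no overlap and no hole (a sketch of the
    three rules described in the text)."""
    fixed = []
    for i, (start, end) in enumerate(intervals):
        if i == 0:
            start = 0  # rule (3): the first locale starts at vertex 0
        else:
            prev_end = fixed[-1][1]
            if start == prev_end:
                # rule (1): one vertex's edges were split across two locales
                start += 1
            elif start - prev_end > 1:
                # rule (2): close the "hole" left by vertices absent from SRC
                start = prev_end + 1
        if i == len(intervals) - 1:
            end = last_vertex  # rule (3): the last locale ends at the last vertex
        fixed.append((start, end))
    return fixed
```

For example, raw intervals (0, 3), (3, 7), (9, 11) on a graph whose last vertex is 12 become (0, 3), (4, 7), (8, 12): the shared vertex 3 is kept by one locale only, and the hole at vertex 8 is closed.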
From line 8 to line 14, we count in parallel all the triangles rooted at the vertices assigned to the current locale. In line 9 we generate u_adj, the adjacent vertices of the current vertex u whose vertex IDs are larger than u. From line 10 to line 12, for any vertex v ∈ u_adj, we generate v_adj, the adjacent vertices of v whose vertex IDs are larger than v. The number of vertices in u_adj ∩ v_adj is then the number of triangles containing the edge (u, v). Since we only count triangles whose vertices satisfy u < v < w, no triangle is counted twice, and we avoid unnecessary calculation. In line 15, each locale assigns its total number of triangles to the corresponding position of array subTriSum. Finally, in line 18, summing the counts from all locales gives the total number of triangles.
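The counting step of lines 8 to 14 amounts to the following set-intersection logic, shown here as a Python sketch (the real kernel is written in Chapel with forall and the Set module; the `adj` adjacency mapping and function name are illustrative):

```python
def count_triangles(adj, vertices):
    """Count the triangles whose smallest vertex u lies in `vertices`.

    Only neighbors with larger IDs are kept, so each triangle is counted
    exactly once under the u < v < w ordering."""
    total = 0
    for u in vertices:
        u_adj = {x for x in adj[u] if x > u}      # line 9: larger neighbors of u
        for v in u_adj:                           # lines 10-12
            v_adj = {x for x in adj[v] if x > v}  # larger neighbors of v
            total += len(u_adj & v_adj)           # common neighbors w close (u, v, w)
    return total
```

On the complete graph K4 this returns 4, one count per triangle, and summing the results over disjoint per-locale vertex intervals reproduces the final reduction of line 18.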

Real-World Graph Distribution Based Regression Model
Instead of developing different specific methods for different graph analysis problems, we propose a general regression method to generate the approximate solution based on the results of different sub-sketches.
One essential objective of a stream model is to generate an accurate estimate from the query results on its sketch. The basic idea of our method is to exploit real-world graph degree distribution information to build an accurate regression model, which we then use to generate approximate solutions for different queries.
Specifically, for the triangle counting problem, when we know the exact number of triangles in each sub-sketch (see Table 1), the key problem is how we can estimate the total number of triangles.
To achieve the user-required accuracy for different real-world applications, our method exploits the features of real-world graph degree distributions. Many sparse networks, social networks in particular, have an important property: their degree distribution follows a power law. Normal distributions, in turn, are often used in the natural and social sciences to represent real-valued random variables whose distributions are not known. So, we develop two regression models, one for each of these degree distributions.
We let E_H, E_M, and E_T be the exact numbers of triangles in the Head, Mid and Tail sub-sketches. The exact triangle counting algorithm is given in Section 5.2.
For normal degree distribution graphs, we assume that the total number of triangles has a linear relationship with the number of triangles in each sub-sketch. We take E_H, E_M, and E_T as three randomly sampled results of the original stream graph to build a multivariate linear regression model; TC_normal is the estimated number of triangles and the regression model is given in Equation (1). For power law graphs, the sampled values E_H, E_M, and E_T can differ significantly because of the skewness of the degree distribution. A power law distribution has the property that its log-log plot is a straight line, so we assume that the logarithm of the total number of triangles in a stream graph has a linear relationship with the logarithms of the exact triangle counts of the degree sub-sketches. We then have two ways to build the regression model, one unordered and one ordered. The unordered model is expressed in Equation (2); it cannot capture the relative order information of the sampled results.
where TC_powerlaw,log = log(TC_powerlaw) is the logarithm of the estimated value TC_powerlaw, and E_H,log = log(E_H), E_M,log = log(E_M), and E_T,log = log(E_T) are the logarithms of E_H, E_M, and E_T, respectively. We then refine the regression model for the power law distribution as follows. We obtain E_min, E_median, and E_max by sorting E_H, E_M, and E_T; they are the minimum, median and maximum sampled triangle counts, and they stand for samples drawn from the right part (low probability), middle part (somewhat higher probability) and left part (very high probability) of a power law distribution. We let E_min,log = log(E_min), E_median,log = log(E_median), and E_max,log = log(E_max). Our ordered multivariate linear regression model for power law graphs is given in Equation (3).
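To make the ordered model concrete, the sketch below fits Equation (3) style coefficients by ordinary least squares in log space. The paper does not prescribe a fitting procedure, so the function names, the plain normal-equations solver, and the synthetic data in the usage example are all illustrative assumptions:

```python
import math

def fit_ordered_powerlaw_model(samples, totals):
    """samples: list of (E_H, E_M, E_T) sub-sketch triangle counts;
    totals: the corresponding true triangle counts.
    Fits log(TC) ~ b0 + b1*log(E_min) + b2*log(E_median) + b3*log(E_max)."""
    rows, y = [], []
    for (eh, em, et), t in zip(samples, totals):
        lo, mid, hi = sorted((eh, em, et))  # the "ordered" step
        rows.append([1.0, math.log(lo), math.log(mid), math.log(hi)])
        y.append(math.log(t))
    return _solve_least_squares(rows, y)

def predict_ordered(beta, eh, em, et):
    """Estimate the total triangle count from one triple of sub-sketch counts."""
    lo, mid, hi = sorted((eh, em, et))
    x = [1.0, math.log(lo), math.log(mid), math.log(hi)]
    return math.exp(sum(b * v for b, v in zip(beta, x)))

def _solve_least_squares(X, y):
    """Minimal normal-equations solver: (X^T X) beta = X^T y, with pivoting."""
    n = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(n)] for i in range(n)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            b[r] -= f * b[col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
    beta = [0.0] * n
    for r in range(n - 1, -1, -1):
        beta[r] = (b[r] - sum(A[r][c] * beta[c] for c in range(r + 1, n))) / A[r][r]
    return beta
```

The unordered model of Equation (2) is obtained by simply dropping the `sorted` call, which is exactly the order information the refined model adds.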
The accuracy of our multivariate linear regression models is evaluated in Section 7.3.1, where we can see that the ordered regression model is better than the unordered one. Both the normal and the ordered power law regression models achieve very high accuracy.

Integration with Arkouda
Based on the current Arkouda framework, the communication mechanism between Python and Chapel can be reused directly. To support large graph analysis in Arkouda, the major work is implementing the graph data structure and the corresponding analysis functions in both the Python and the Chapel packages. In this section, we introduce how we define the graph classes that implement our DI data structure and develop a corresponding Python benchmark to drive the triangle counting method.

Graph Classes Definition in Python and Chapel
Our DI data structure can also support graph weights. If each vertex has a weight, we use an array V_WEI to express the weight values; if each edge has a weight, we use an array E_WEI. Based on our DI data structure, four classes are defined in Python: directed graph (GraphD), directed and weighted graph (GraphDW), undirected graph (GraphUD), and undirected and weighted graph (GraphUDW). Four corresponding classes, SegGraphD, SegGraphDW, SegGraphUD, and SegGraphUDW, are defined in Chapel to represent the different kinds of graphs. In our class definitions, the directed graph is the base class; the undirected graph and the directed and weighted graph are its two child classes; finally, the undirected and weighted graph inherits from the undirected graph.
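A minimal Python sketch of this hierarchy might look as follows. The attribute names mirror the DI arrays described in the text, but the constructors and fields are illustrative, not Arkouda's exact API:

```python
class GraphD:
    """Directed graph over the DI index arrays SRC and DST."""
    def __init__(self, n_vertices, n_edges, src, dst):
        self.n_vertices = n_vertices
        self.n_edges = n_edges
        self.src = src  # edge -> source vertex
        self.dst = dst  # edge -> destination vertex

class GraphUD(GraphD):
    """Undirected graph: additionally stores the reverse index arrays."""
    def __init__(self, n_vertices, n_edges, src, dst, src_r, dst_r):
        super().__init__(n_vertices, n_edges, src, dst)
        self.src_r = src_r
        self.dst_r = dst_r

class GraphDW(GraphD):
    """Directed, weighted graph: adds the V_WEI and E_WEI weight arrays."""
    def __init__(self, n_vertices, n_edges, src, dst, v_wei, e_wei):
        super().__init__(n_vertices, n_edges, src, dst)
        self.v_wei = v_wei  # per-vertex weights (V_WEI)
        self.e_wei = e_wei  # per-edge weights (E_WEI)

class GraphUDW(GraphUD):
    """Undirected, weighted graph, inheriting the reverse arrays from GraphUD."""
    def __init__(self, n_vertices, n_edges, src, dst, src_r, dst_r, v_wei, e_wei):
        super().__init__(n_vertices, n_edges, src, dst, src_r, dst_r)
        self.v_wei = v_wei
        self.e_wei = e_wei
```

The Chapel side mirrors the same inheritance relationship in its SegGraph* classes, so both languages expose one consistent view of the DI arrays.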
All the graph classes, including their major attributes, are given in Table 2. The left columns are the Python classes and their descriptions. The right columns are the corresponding Chapel classes and their descriptions. Based on the graph class definition, we can develop different graph algorithms.

Triangle Counting Benchmark
To compare the performance of the exact method and our approximate method, we implement two triangle counting related functions. For the exact method, we implement the "graph_file_read" and "graph_triangle" Python functions to read a complete graph file and call our triangle counting kernel algorithm (Algorithm 1) to calculate the number of triangles.
For our approximate method, we implement the "stream_file_read" function to simulate the stream by reading a graph file's edges one by one. The graph sketch introduced in Section 5.1 is built automatically as the edges are read. Currently, we only implement sketch building for sliding window streaming; automatic building of insert-only and insert-delete streaming sketches will be future work, after we evaluate our data structure and sketch on a larger collection of graph algorithms.
After the sketch of a sliding window stream is built in memory, we call the "stream_tri_cnt" function to first calculate the exact number of triangles in each sub-sketch (it also calls our kernel triangle counting Algorithm 1) and then use the regression model to estimate the total number of triangles in the given stream.
For the power law distribution regression model, the benchmark also implements single variable regression models using the maximum, minimum and median results of the three sub-sketches, respectively. Our testing results show that the proposed regression method in Section 5.3 achieves better accuracy.

Experimental Setup
Testing was conducted on a 32-node cluster with FDR InfiniBand between the nodes; InfiniBand connections between nodes are commonly found in high performance computers. Each node has 512 GB of RAM and two 10-core Intel Xeon E5-2650 v3 @ 2.30 GHz processors, for 20 cores in total. Because Arkouda is designed primarily for data analysis in an HPC setting, an architecture that fits an HPC scenario was chosen for testing.

Datasets Description and Analysis
We use graph files and read the edges line by line to simulate the sliding window of a graph stream. The graphs selected for testing were chosen from various publicly available datasets used for benchmarking graph methods. These include datasets from the 10th DIMACS Implementation Challenge [40], as well as some real-life graph datasets from the Stanford Network Analysis Project (SNAP) http://snap.stanford.edu/data/ (accessed on 20 July 2021). Pertinent information about these datasets for the triangle counting method is displayed in Table 3. Based on the graphs' degree distributions, we divided them into two classes: normal distribution and power law distribution. Prior research has shown that some real-world graphs follow power law distributions, while normal distributions are found ubiquitously in the natural and social sciences as models for real-valued random variables whose distributions are not known. For the normal distribution graphs, the typical parameters, the mean and standard deviation fitted from the given graphs, are also listed in Table 3. Figure 5 compares the fitting results with the original graph data; the fit matches the original data very well. In other words, the Delaunay graphs follow the normal distribution.
For graphs that follow the power law distribution y(x) = a × x^(−k), we also present the fitted parameters a and k in Table 3. Figure 6 provides some examples of these graphs. We can see that, in general, their signatures are close to power laws, although some of the data do not fit a power law distribution very well.
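Parameters like these can be fitted with a straight-line regression on the log-log data; the following self-contained sketch shows the idea (this fitting routine is our illustration, not necessarily the one used to produce Table 3):

```python
import math

def fit_power_law(degrees, counts):
    """Fit y(x) = a * x**(-k) by ordinary least squares on log-log data.

    Taking logs gives log y = log a - k * log x, a straight line whose
    slope is -k and whose intercept is log a."""
    xs = [math.log(d) for d in degrees]
    ys = [math.log(c) for c in counts]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    intercept = my - slope * mx
    return math.exp(intercept), -slope  # (a, k)
```

On data generated exactly from y = 100·x^(−2) the fit recovers a = 100 and k = 2; on a real degree histogram, the residuals of this line quantify how closely the graph follows a power law, which is exactly the deviation visible in Figure 6.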

Accuracy
In order to evaluate the accuracy of our method, we conducted experiments on nine normal distribution and 12 power law distribution benchmark graphs with shrinking factors 4, 8, 16, 32, and 64, for 105 test cases in total. Each test produces three triangle counts, one each for the Head, Mid and Tail sketch partitions. We use the exact numbers of triangles in the three sub-sketches to produce the approximate number of triangles in the given graph stream. The testing results are given in Table 1.
For the normal distribution, based on our regression model expressed as Equation (1), we can fit the parameters of the multivariate linear regression equation. When we compute the absolute values of all the percent errors, the mean error is 4%, the max error is 25% and the R-squared value is 1, which means the final number of triangles is almost completely explained by the sketch results.
We also evaluated how the accuracy of our regression model changes with the shrinking factor. For the normal distribution, we built five regression models with shrinking factors 4, 8, 16, 32, and 64, respectively. The results are shown in Figure 7. Both the mean and max error increase as the shrinking factor is doubled step by step, but the mean increases very slowly: the mean error grows from 1% to 5.97%, while the max error grows from 3.36% to 26.87%. This shows that our regression model for normal distribution graphs is stable and very accurate. When the graph shrinks to half its previous size, there is only a very small effect on the accuracy, as long as the number of triangles is not below some threshold. This is important because it means we can use much smaller sketches and still obtain very accurate results.
For the power law distribution, based on our ordered regression model in Equation (3), which is refined from the unordered model, we can fit the parameters of the log-value multivariate linear regression equation. The mean error is 3.2%, the max error is 7.2%, and the R-squared value is 0.91, which means the model describes the data very well. If we use the unordered regression model, Equation (2), the mean error is 4.5%, the max error is 12.5%, and the R-squared value is 0.81; all these results are worse than those of the ordered regression model in Equation (3). We also performed experiments on the accuracy of the ordered and unordered regression models when building five different regression models with shrinking factors 4, 8, 16, 32, and 64. The experimental results are presented in Table 4, and we can draw two conclusions: (1) all the results of our ordered regression model are better than those of the unordered model, which means that exploiting more degree distribution information in the model yields better accuracy; (2) when we double the shrinking factor, the change in accuracy is much smaller than for the normal regression model. In other words, in the power law model the accuracy is much less sensitive to the shrinking factor, because power law graphs are highly skewed, so a relatively small sketch achieves almost the same accuracy.
The accuracy evaluation results show that the proposed regression models for normal distribution and power law distribution graphs achieve very accurate results; for both, the mean relative error is less than 4%. However, if we apply the normal regression model of Equation (1) to power law distribution data, the mean relative error can be as high as 26% and the max relative error 188%. This means that employing different regression models for different degree distributions can significantly improve accuracy. Existing exact triangle counting algorithms show that, for very large graphs, the communication between computing nodes consumes the majority of the time. Therefore, reducing the total communication, or improving data access locality during triangle counting, is the key to improving the final performance.
Since our triangle counting method maintains load balancing and locality, it achieves high performance on large graph streams. Our experimental results, provided in Table 5, show that our streaming triangle algorithm displays high locality. We count all the local data accesses and all the remote data accesses during the triangle counting procedure and define the local access ratio LAR = NL/(NL + NR), where NL is the total number of local accesses and NR is the total number of remote accesses. For the three sub-sketches, Head, Mid and Tail, we calculate their LAR values separately. From the table we can see that, for all cases, the LAR value is larger than 50% and the average LAR value is 74%. This means that, during our triangle counting procedure, most of the data come from local memory, which is the major reason our method achieves high performance.
To show the speedup of the streaming process, we compare the execution time of triangle counting on the original complete graph with that of our sketch method. The experimental results are given in Table 6. Our streaming method uses much less space and runs much faster, estimating the number of triangles in a large graph stream with only one pass. Since the experiments take a very long time on large graphs, we present just three shrinking factors to show the trend. For very large graphs, we can use a very large shrinking factor to significantly reduce the processing space, and the absolute speedup is also very high, although the speedup relative to the shrinking factor is a little lower. For relatively small graphs, the shrinking factor is also relatively small, so the absolute speedup is small but the relative speedup is high. Our experimental results show that the speedup increases almost linearly with the shrinking factor.
We also use an even larger benchmark graph, com-Friendster, a network dataset with 1,806,067,135 edges and 65,608,366 vertices https://snap.stanford.edu/data/ (accessed on 20 July 2021), and run it with two locales. With a shrinking factor of 32,768, it takes 44,719.4 s to build the sketch with a maximum memory footprint of 235,332 KB, and the triangle counting time is 22,368.2 s. With a shrinking factor of 65,536, it takes 44,593.0 s to build the sketch with a maximum memory footprint of 235,288 KB, and the triangle counting time is 22,296.0 s. If we use the exact method, also with two locales, the program fails because it runs out of memory. This example shows that our method can handle very large graphs within a limited memory footprint.

Conclusions
Interactive graph stream analysis is a challenging problem. In this paper, we present our solution to this problem in the open source framework Arkouda. The advantage of Arkouda lies in two aspects: high productivity and high performance. High productivity means that end users can use a popular data science language such as Python to explore different graph streams. High performance means that end users can go beyond the memory and compute limits of their laptops and personal computers to handle very large graphs interactively.
We design a double index data structure to support quick search from a given vertex to its edges and from a given edge to its vertices. This greatly reduces the total memory footprint for large sparse graph streams. Based on the proposed double index data structure, we develop an efficient multiple-partition stream sketch building method. Our sketch can support general stream query problems because it summarizes a large graph with a small graph, and we define a shrinking factor that lets users control the size of the final sketch accurately. We exploit the ubiquitous normal and power law degree distributions of graph streams to propose two regression models that estimate results for a graph stream from its sketch partitions. Based on the double index data structure, we develop a multi-locale distributed triangle counting algorithm that maintains load balancing and a high local data access ratio to improve graph stream processing performance. All our methods have been integrated into the Arkouda framework, and users can drive the interactive graph stream analytics pipeline easily from a Jupyter notebook.
Experimental results on two kinds of large sparse graph streams, normal distribution and power law distribution, show that the proposed method achieves very accurate approximate results, with a mean error no larger than 4%. The average local access ratio is 74%, and the graph stream processing speedup increases almost linearly with the shrinking factor. This work shows that Arkouda is a promising framework for large scale interactive graph stream analytics. Of course, the reported work is just a first step, supporting one typical graph query, triangle counting. In future work, we will provide more graph stream algorithms based on our sketch methodology and further optimize the performance of our algorithms in Arkouda.