Continuous k Nearest Neighbor Queries over Large-Scale Spatial–Textual Data Streams

Abstract: Continuous k nearest neighbor queries over spatial–textual data streams (abbreviated as CkQST) are the core operations of numerous location-based publish/subscribe systems. Such a system usually hosts millions of subscribed CkQST, which are evaluated simultaneously whenever new objects arrive and old objects expire. To efficiently evaluate CkQST, we extend a quadtree with an ordered, inverted index as the spatial–textual index for subscribed queries to match the incoming objects, and exploit it with three key techniques. (1) A memory-based cost model is proposed to find the optimal quadtree nodes covering the spatial search range of CkQST, which minimizes the cost of searching and updating the index. (2) An adaptive block-based ordered, inverted index is proposed to organize the keywords of CkQST, which adaptively arranges queries in spatial nodes and allows the objects containing common keywords to be processed in a batch with a shared scan, yielding a significant performance gain. (3) A cost-based k-skyband technique is proposed to judiciously determine an optimal search range for CkQST according to the workload of objects, to reduce the re-evaluation cost due to the expiration of objects. The experiments on real-world and synthetic datasets demonstrate that our proposed techniques can efficiently evaluate CkQST.


Introduction
The continuous k nearest neighbor queries over spatial-textual data streams (abbreviated as CkQST) retrieve and continuously monitor at most k nearest neighbor (abbreviated as kNN) objects at the user-specified location that contain all the user-specified keywords. Such queries have been widely used in a variety of location-based applications, such as location-aware targeting of advertisements, analysis of micro-blogs, and mobile navigation services.
In an e-coupon recommendation system or a Weibo publish/subscribe system, users register their interests (e.g., favorite food or clothing brands for the former, and news or persons for the latter) as queries. A stream of generated spatial-textual objects (e.g., e-coupons or Weibo posts) is fed to the relevant users. Continuous queries over spatial-textual data streams studied by existing work [1][2][3][4][5][6][7][8][9][10][11][12] are primarily in terms of Boolean matching or approximate matching, which return an unpredictable number of objects or approximate results. The number of qualified objects containing all keywords specified by a user can be far larger than k, because the objects (e.g., tweets, news) usually contain many more keywords than queries do. This motivates us to study CkQST, which return at most k nearest neighbor objects containing all the query keywords. Example 1. Figure 1 depicts a running example used throughout this paper. At timestamp t0, there are five subscribed 2-NN (i.e., k = 2) queries q1, q2, · · · , q5, each with a small circle representing its geo-location, and five objects o1, o2, · · · , o5, each with a small square representing its geo-location in Figure 1a, while the corresponding keywords and expiration times are shown in Figure 1e. The spatial region is organized by a three-layer quadtree, where the spatial nodes are numbered successively, and the root node is n0. Taking the evaluation of q1 as an example, {o4, o1} is returned. For q1, the spatial search range, hereafter "search range", is defined as the minimal circle centered at the geo-location of q1 and covering {o4, o1}, i.e., C1. At timestamp t1, an object o6 arrives, as shown in Figure 1b, with keywords {w1, w2, w3} and expiration time t2. o6 contains all the keywords of q1 and q4, but only C1 is hit by o6.
The result and search range of q1 are updated to {o6, o4} and the circle C′1, respectively, while the result and search range of q4 are not affected. At timestamp t2, o6 expires. For q1, the number of qualified objects in C′1 is less than 2, so the result has to be re-evaluated. The result and search range of q1 are updated back to {o4, o1} and the circle C1, respectively. Therefore, for CkQST, the spatial search range covering the kNN objects changes dynamically with the arrival and expiration of qualified objects.
Challenges. The solution framework for evaluating generic continuous queries over spatial-textual data streams consists of selecting an appropriate spatial index and a textual index to form a hybrid spatial-textual index, and exploiting it with appropriate spatial and/or textual filtering strategies to process the incoming objects according to the features of queries [1][2][3][4][5][6][7][8][9][10][11][12]. There are three key challenges in constructing such an index for CkQST.
First, regarding the spatial filtering, evaluating CkQST is essentially identifying the queries whose search range is hit by an incoming object. It is thus important to organize the search ranges of CkQST efficiently; the focus is how to map the search range of a CkQST to the spatial nodes. The search range of a CkQST covering the kNN objects changes frequently with the arrival and expiration of qualified objects, which requires the index to have both strong filtering ability and a low update cost. For most spatial indexes, strong filtering ability and a low update cost are contradictory. There are two approaches to mapping the search range of queries to spatial nodes to improve the filtering ability and reduce the update cost of the index. (1) Queries are mapped to the leaf nodes of the spatial index, which minimizes the spatial region of the nodes covering the search range of queries so as to reduce the number of objects to be verified [2,5,[7][8][9][10][13][14][15]. (2) Queries are mapped to the spatial nodes according to the spatial distribution [10][11][12], the keyword distribution [6], or a corresponding cost model [1,3,4]. These approaches are appropriate in scenarios where the search range of the queries rarely changes, but inappropriate for CkQST, where frequent updates of the search ranges result in high costs.
Second, regarding the textual filtering, evaluating CkQST is essentially identifying the queries whose keywords are fully contained in a given object. An inverted index is usually used to organize continuous queries [1,2,6,10]. A large number of queries makes the posting lists very long, and the fast-arriving objects are verified against the corresponding posting lists in multiple rounds within a short time, which becomes the bottleneck of textual filtering. There are three ways to improve the textual filtering capability.
(1) Insert queries into the shortest posting list according to the frequency of the query keywords to reduce the number of queries in posting lists, as in the ranked-key inverted index [1,6]. However, the posting lists may still be long. (2) Increase the depth of the textual partition, as in the ordered keyword trie [3,4,6]. It takes much time to construct such an index, and nodes must be reconstructed when queries are updated, which is not appropriate for scenarios like CkQST where queries are frequently updated.
(3) Organize the queries in posting lists in ascending order of their ranking score [10]. However, CkQST have no corresponding ranking score. None of the above approaches can efficiently support the textual filtering of CkQST.
Third, the kNN re-evaluation is frequently triggered by object expiration. When an object expires, several CkQST may have to be re-evaluated from scratch, which is expensive. Several techniques have been proposed to solve similar problems for approximate top-k queries (e.g., [10,13,16,17]). They all favor maintaining more than k results to reduce the chance of re-evaluation. However, they either maintain all the skyline objects in the entire region [13,16], or maintain a k-skyband containing the skyline objects whose scores are larger than a threshold [10,17], and are not designed for CkQST, which return exact results.
In view of these challenges, we extend a quadtree with an ordered, inverted index to organize CkQST. Three key techniques are proposed to exploit the spatial-textual index and address the above three challenges. The contributions of this paper are as follows.
(1) To support the frequent change of search ranges of CkQST, a memory-based cost model is proposed to map the search ranges of CkQST to the quadtree nodes, which minimizes the verification cost and index update cost.
(2) To reduce the number of queries verified and process objects in batches, an adaptive block-based ordered, inverted index is proposed to organize the query keywords at quadtree nodes, which allows multiple objects containing common texts to be verified concurrently. For this index, an insertion strategy is proposed to adaptively insert queries in view of the skewed distributions of CkQST and objects.
(3) To reduce the re-evaluation cost, a cost-based k-skyband technique is proposed to judiciously determine the search range for CkQST according to the workload of objects, which minimizes the verification cost, the update cost, and the re-evaluation cost.
The experiments on real-world and synthetic datasets demonstrate that the proposed techniques can efficiently evaluate CkQST. Compared with the state-of-the-art techniques, when the number of CkQST reaches 20 M, the average index updating time caused by incoming objects decreases by 61%, and the average incoming object processing time decreases by 36%. Compared with re-evaluation from scratch, the average processing time for expired objects decreases by 99.99%.
The rest of this paper is organized as follows. Section 2 formally defines CkQST and presents a framework for evaluating CkQST. Section 3 presents three key techniques for evaluating CkQST. Section 4 reports the experimental studies. Finally, Section 5 concludes this paper.

The Framework for Evaluating CkQST
In this section, we formally define CkQST in Section 2.1 and present a framework to evaluate CkQST in Section 2.2.

Problem Definition
A spatial-textual object is defined as o = (loc, ψ, te), where o.loc is the geo-location, o.ψ is a set of keywords (terms) from a vocabulary set V, and o.te is a timestamp indicating the expiration time of o. All the spatial-textual objects over the data streams are denoted as O. A CkQST is defined as q = (loc, ψ, k, te), where q.loc, q.ψ, and q.te have meanings similar to those of o, and q.k is the number of returned objects, i.e., at most q.k (abbreviated as k) results are maintained for q. The result list of q, denoted as q(O)k, contains at most k objects, each of which covers all the keywords in q.ψ. q(O)k is organized as a linked list, in which objects are arranged in ascending order of their distances to q.
The distance between an object o and a query q is the Euclidean distance between o.loc and q.loc. Let q.kdist be the distance between q and its k-th nearest neighbor result. The search range of q, denoted as q.SRk, is defined as the circle centered at q.loc with radius q.kdist.
Spatial-textual objects are usually advertisements published by merchants or the latest breaking news, and CkQST are users' search requests. Hereafter, spatial-textual objects and CkQST are abbreviated as objects and queries, respectively, if there is no ambiguity. To simplify the calculation, the terms in the vocabulary set V are mapped to integers between 1 and |V| in alphabetical order, where |V| is the number of terms in V. We assume that the terms in V, as well as the terms contained in queries and objects, are sorted in increasing order. Specifically, for ∀q ∈ Q, we use q.ψ[i] to denote the i-th keyword of q, q.ψ[i : j] to denote the subset ∪i≤l≤j q.ψ[l] of q.ψ, q.ψ[: i] to denote ∪1≤l≤i q.ψ[l], q.ψ[i :] to denote ∪i≤l≤|q.ψ| q.ψ[l], and |q.ψ| to denote the number of keywords in q.ψ. Objects follow similar notations. Table 1 summarizes the notations used throughout this paper.
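As an illustration of this notation, the following Python sketch (ours, not from the paper) mimics the 1-based, inclusive slice q.ψ[i : j] over integer-mapped, sorted keyword lists, together with the keyword-containment test at the heart of CkQST matching; all names are hypothetical.

```python
# Hypothetical helpers mirroring the paper's notation; terms are integers
# assigned in alphabetical order, keyword lists are sorted ascending.

def psi_slice(psi, i, j):
    """Return psi[i : j] in the paper's 1-based, inclusive notation."""
    return psi[i - 1:j]

def contains_all(o_psi, q_psi):
    """True iff o.psi fully covers q.psi (Boolean keyword containment)."""
    return set(q_psi).issubset(o_psi)

q_psi = [2, 5, 7]                            # q.psi, sorted term ids
assert psi_slice(q_psi, 1, 2) == [2, 5]      # q.psi[1 : 2]
assert psi_slice(q_psi, 3, 3) == [7]         # q.psi[3]
assert contains_all([1, 2, 3, 5, 7, 9], q_psi)
assert not contains_all([2, 5], q_psi)
```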
Problem Statement. Given a set of CkQST Q and spatial-textual data streams O, for each CkQST, find the kNN objects containing all the query keywords over O whenever objects arrive or expire.

Table 1. Notations.

q.ψ[i : j] — The subset of q.ψ
|q.ψ|, |o.ψ| — The number of keywords in q.ψ and o.ψ
V, |V| — A vocabulary set and the number of terms in V
N, n — Quadtree nodes
PLwi1wi2 — The posting list of the ordered, inverted index
b, b_r — A block of a posting list
b_r.minw, b_r.maxw — The minimum and maximum q_i.ψ[3] over the queries q_i in b_r
b_r.ψ — The terms contained in [b_r.minw, b_r.maxw]
|b| — The number of queries in b
|B| — The number of blocks in a posting list
C^PL_V, C^PL_U — The verification cost and update cost of PLwi1wi2
C^q_V(q, N), C^q_U(q, N) — The verification cost and update cost within a unit time interval if q is associated with N
p^B_V(b_r) — The probability that the block b_r is verified
p^q_V(q, N) — The probability that q is verified if it is inserted into N
p^w_V(w_j) — The probability that the queries subject to q_i.ψ[3] = w_j are verified in b_r

The Framework for Evaluating CkQST
The framework for evaluating CkQST shown in Figure 2 consists of two indexes and four key techniques. The object index organizes the objects and can be implemented with any existing spatial-textual index; we adopt the inverted linear quadtree (IL-quadtree) [18] as an example. The query index organizes the queries and is essentially a quadtree integrated with an ordered, inverted index, described in Section 3.
The arrival and expiration of objects. When multiple objects arrive in a batch, they are inserted into the object index and processed by the object-batch processing algorithm with the help of the query index, to find all the affected queries and update the corresponding queries' results and search ranges. When objects expire, the result lists of the affected queries are checked. The queries that cannot be refilled from their result lists are re-evaluated from scratch against the object index. To save computational cost, expired objects are removed lazily from the object index, i.e., not until they are accessed again.
The arrival and deletion of queries. When a new query is submitted, it is initially evaluated using the object index with several strategies. A cost-based k-skyband technique is used to find an optimal search range for the query, which reduces the cost of updating the index by sacrificing a little filtering performance. A memory-based cost model is used to obtain the corresponding mapped spatial nodes. An adaptive insertion strategy is used to determine the posting list and the corresponding block into which the query is inserted. These strategies further improve the filtering performance of the index and reduce the cost of updating it. When a query is deleted or its search range shrinks, a flag is set in the corresponding nodes, where a query table is maintained, and the query is not removed from the query index until it is accessed again; this is called delayed deletion and is necessary in an update-friendly system. A query insertion request may cancel the marked items, which avoids repeatedly deleting and re-inserting queries whose search ranges change frequently. If a query is deleted, its result list is also removed.

The Query Index
According to the above discussions, the query index is essentially a quadtree extended with an ordered, inverted index. Three techniques are proposed to enhance the filtering ability and reduce the update cost of the index. Section 3.1 introduces the motivations. Section 3.2 describes the ordered, inverted index, followed by a detailed adaptive query inserting algorithm in Section 3.3. Section 3.4 proposes the memory-based cost model to quantitatively analyze how to find optimal associated nodes for CkQST. The algorithm for processing objects in batches is presented to improve the throughput in Section 3.5. The re-evaluation technique is introduced in Section 3.6.

Motivations
Organizing the search range of CkQST. The first issue in using a quadtree to organize the search ranges of CkQST is how to map a search range to the quadtree nodes. Given ∀q ∈ Q, q.SRk can be mapped to any set of quadtree nodes NS = {n1, n2, · · · , ni}, as long as the union of the spatial regions corresponding to the nodes in NS covers q.SRk; we then say that q is associated with NS. Associating a query with the quadtree nodes is challenging because it affects two computation costs: (1) Verification cost, i.e., the cost of verifying the query against the objects falling in the associated nodes.
(2) Update cost, i.e., the cost of inserting the query into or deleting it from the associated nodes. If the search range is organized by nodes with large regions, the index update cost is small but the verification cost is large; if multiple nodes with small regions are used instead, the situation is reversed. Therefore, a cost model is required to trade off the verification cost against the update cost and find the optimal associated nodes for CkQST.
Organizing the keywords of CkQST. When new objects arrive, the cost of verifying them against the queries in spatial nodes is expensive. Reducing this verification cost is the key to improving the filtering ability of the index. We discuss three aspects of constructing an inverted index. (1) For an inverted index, the queries in posting lists are usually unordered. For the five queries in Figure 1, we attach each of them to the posting list of a single keyword. Figure 3a shows the inverted index in which queries are attached to the posting list corresponding to their first keyword, and Figure 3b shows the ranked-key inverted index in which queries are attached to the posting list corresponding to their least frequent keyword. If an incoming object contains the corresponding term, all the queries in the posting list are verified [1,2], which is inefficient. In this work, we use an ordered, inverted index to solve this problem. Figure 3c shows the ordered, inverted index when queries are attached to the posting list corresponding to their first keyword, i.e., the queries in each posting list are organized in ascending order of their keywords. When o6 with keywords {w1, w2, w3} arrives, the posting list corresponding to w1 is verified. When o6 is verified against q2, o6's keywords are smaller than those of q2, so we can terminate the verification early and speed up the processing of objects.
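The early-termination idea can be sketched as follows. This is our illustration, not the paper's code: it assumes (as happens to hold for the queries of Figure 1) that all queries in the list share their first two keywords and are kept sorted, so that once a query's third keyword exceeds the object's largest term, no later query can match; the term ids are hypothetical.

```python
# Ordered posting list sketch: queries sorted ascending by keyword tuple.
# Assumption: all queries in this list share their first two keywords,
# so sorting effectively orders them by the third keyword.

def verify(posting_list, o_psi):
    """Return ids of queries fully contained in o.psi, with early exit."""
    o_set, o_max = set(o_psi), max(o_psi)
    matched = []
    for qid, q_psi in posting_list:
        third = q_psi[2] if len(q_psi) > 2 else q_psi[-1]
        if third > o_max:
            break                      # no later query can be contained
        if set(q_psi).issubset(o_set):
            matched.append(qid)
    return matched

# Queries of Figure 1 with assumed term ids w1=1, ..., w7=7, pre-sorted.
PL = [("q4", (1, 2)), ("q1", (1, 2, 3)), ("q3", (1, 2, 5)),
      ("q5", (1, 2, 5)), ("q2", (1, 2, 7))]

assert verify(PL, [1, 2, 3]) == ["q4", "q1"]   # o6 = {w1, w2, w3}
```

The scan stops at q3 (third keyword 5 > 3) instead of touching the rest of the list.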

(Figure 3d: the ordered, inverted index constructed by three keywords; e.g., q1 is keyed by w1w2w3 and q4 by w1w2w2. As q4 contains fewer than three keywords, its last keyword is duplicated to construct the ordered index.)
(2) Compared with the ordered, inverted index constructed with a single keyword, the ordered, inverted index constructed with multiple keywords has more advantages: the posting lists are shorter and the verification probability is smaller. As Figure 3d shows, o6 is verified against only the first two posting lists and contains all the keywords of the queries in these posting lists. However, the number of posting lists may grow sharply. If the vocabulary set V contains 1 M terms and queries have at most 5 keywords, a total of 10^(6×5) posting lists are required, which is difficult to implement with a hash table due to the need for large contiguous memory. Like other works [1][2][3][4][5][6][7][8][9][10][11][12], this paper uses the Map class in Microsoft Visual Studio [19] to build the ordered, inverted index. Lemma 1 describes how the verification efficiency varies with the number of keywords used to construct the ordered, inverted index, and Section 4.2 verifies this lemma through experiments. Based on these discussions, we select two keywords to construct the ordered, inverted index.
(3) Usually, there are many queries in a posting list, but only a small number match an incoming object. Therefore, quickly locating the queries to be verified in a posting list is another way to improve the efficiency of evaluating CkQST. The queries in a posting list are partitioned into multiple blocks, so that objects are verified against the queries in a few blocks rather than the whole posting list. The remaining problem is how to partition the queries in a posting list: it is inefficient to have either too many or too few queries in a block. An adaptive insertion strategy is proposed in Section 3.3.

Ordered, Inverted Index
The formal definition of an ordered, inverted index constructed with two keywords follows. Queries are attached to the posting list of their first two keywords and arranged in ascending order of their keywords.

Definition 1 (Ordered Posting List/Ordered, Inverted Index).
Given a set of queries q1, q2, · · · , qi to be inserted into a quadtree node, if q1.ψ[1 : 2] = q2.ψ[1 : 2] = · · · = qi.ψ[1 : 2] = {wi1, wi2}, the posting list determined by the two terms wi1, wi2 at the node is denoted as PLwi1wi2, into which these queries are successively inserted. PLwi1wi2 is called an ordered posting list. Specifically, if these queries contain only one keyword, the corresponding posting list is denoted as PLwi1wi1. All the ordered posting lists constitute the ordered, inverted index.
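A minimal sketch of this two-keyword layout (structure and names are our assumptions): posting lists are keyed by the first two keywords, single-keyword queries are routed to the PLww list of their only keyword, and each list stays sorted by the full keyword tuple.

```python
# Hypothetical two-keyword ordered, inverted index; the paper uses an
# ordered map (C++ std::map) for the same purpose.
import bisect

index = {}                                   # (w1, w2) -> sorted posting list

def insert(qid, q_psi):
    """Attach q to the posting list of its first two keywords."""
    key = (q_psi[0], q_psi[1]) if len(q_psi) > 1 else (q_psi[0], q_psi[0])
    pl = index.setdefault(key, [])
    bisect.insort(pl, (tuple(q_psi), qid))   # keep ascending keyword order

insert("q1", [1, 2, 3])
insert("q2", [1, 2, 7])
insert("q6", [4])                            # single keyword -> PL_{w4 w4}

assert [qid for _, qid in index[(1, 2)]] == ["q1", "q2"]
assert (4, 4) in index
```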
Hereafter the ordered posting list is abbreviated as posting list, if there is no ambiguity. To quickly locate the queries to be verified, posting lists are divided into multiple blocks.

Definition 2 (Block).
Given any ordered posting list PLwi1wi2, br is the r-th block of PLwi1wi2. For any query q in br, q.ψ[3] ∈ [br.minw, br.maxw], where br.minw = min_{qi∈br} qi.ψ[3] and br.maxw = max_{qi∈br} qi.ψ[3]. br.ψ denotes all the keywords satisfying br.ψ = {w | w ∈ [br.minw, br.maxw]}. Specially, if q contains only one or two keywords, it is inserted into the block b0 of the corresponding posting list.

Lemma 1. If the number of keywords used to construct an ordered, inverted index is m (m ≥ 1), there are at most |V|^m posting lists at a node. For any object o containing more than two keywords, the verification cost can be estimated by Equation (1), where |B| is the number of blocks in a posting list, |b| is the number of queries in b, and Q contains the queries whose keywords are contained in o and whose number of keywords is less than m. The proof is shown in Appendix A.
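The block bookkeeping implied by Definition 2 can be sketched as follows (the class shape is our assumption): each block tracks the minimum and maximum third keyword of the queries it holds.

```python
# Hypothetical block structure for Definition 2: b_r.minw / b_r.maxw are
# maintained incrementally as queries are added.

class Block:
    def __init__(self):
        self.queries, self.minw, self.maxw = [], None, None

    def add(self, qid, psi3):
        """Insert a query with third keyword psi3 and update the interval."""
        self.queries.append(qid)
        self.minw = psi3 if self.minw is None else min(self.minw, psi3)
        self.maxw = psi3 if self.maxw is None else max(self.maxw, psi3)

b = Block()
b.add("q1", 3)
b.add("q2", 7)
b.add("q3", 5)
assert (b.minw, b.maxw) == (3, 7)
```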
For ∀wj ∈ b.ψ, the verification probability, denoted as p^w_V(wj), is maintained, i.e., the probability that the queries subject to qi.ψ[3] = wj in br are verified. For ∀br, the verification probability, denoted as p^B_V(br), is maintained, i.e., the probability that the block br is verified, which can be estimated by Equation (2).
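Equation (2) itself is not reproduced in this excerpt, so as a stand-in, the per-keyword probabilities can be pictured as frequency estimates over the recent object stream; the estimator below is our assumption, not the paper's formula.

```python
# Hypothetical frequency estimator for p^w_V(w): the fraction of recent
# objects that contain term w (a stand-in for Equation (2)).

class FreqEstimator:
    def __init__(self):
        self.hits, self.total = {}, 0

    def observe(self, o_terms):
        """Record one arriving object's term set."""
        self.total += 1
        for w in set(o_terms):
            self.hits[w] = self.hits.get(w, 0) + 1

    def p(self, w):
        """Estimated probability that an object contains term w."""
        return self.hits.get(w, 0) / self.total if self.total else 0.0

est = FreqEstimator()
est.observe([1, 2])
est.observe([2, 3])
assert est.p(2) == 1.0
assert est.p(1) == 0.5
```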
The following theorems claim that an incoming object is verified against only a few queries in the posting lists. For any incoming object, the blocks to be verified can be located according to the keyword intervals of the blocks.
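One way such block locating can work (a hedged sketch, not the paper's pseudocode): when the blocks partition the keyword domain into disjoint intervals sorted by b_r.minw, binary search over the object's terms finds exactly the candidate blocks.

```python
# Locate the blocks whose interval [minw, maxw] contains some term of o.
# Assumes blocks are sorted by minw and their intervals are disjoint.
import bisect

def blocks_to_verify(blocks, o_psi):
    """blocks: list of (minw, maxw, queries). Return hit block indices."""
    mins = [b[0] for b in blocks]
    hit = set()
    for w in o_psi:
        i = bisect.bisect_right(mins, w) - 1   # last block with minw <= w
        if i >= 0 and w <= blocks[i][1]:
            hit.add(i)
    return sorted(hit)

blocks = [(3, 5, ["q1", "q3"]), (7, 9, ["q2"]), (12, 15, ["q8"])]
assert blocks_to_verify(blocks, [1, 4, 9, 10]) == [0, 1]
```

Only the queries inside the hit blocks are then verified, instead of the whole posting list.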

Adaptive Query Insertion Algorithm
Given any posting list at a node, we consider two extreme situations: (1) the posting list contains only one block, which contains all the queries; (2) the posting list contains many blocks, each of which contains only one query. The former has poor filtering ability and the latter has a high update cost; neither is what we expect. In the real world, people are concerned with different interests and often pay close attention to breaking news or topical issues, so the keywords of queries and objects vary over time. For each query, we adaptively insert it into the posting lists according to the historical queries and objects, such that the increase of the verification cost and update cost of the posting list is minimal after the query is inserted.
Given a posting list PLwi1wi2, its update cost is denoted as C^PL_U(PLwi1wi2) and its verification cost as C^PL_V(PLwi1wi2), which can be estimated by Equation (3), where C^B_V(br) = p^B_V(br) · (log|B| + |br|) represents the verification cost of the block br in PLwi1wi2.
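Under the stated form C^B_V(br) = p^B_V(br) · (log|B| + |br|), the posting-list verification cost can be tallied as the sum over blocks, in the spirit of Equation (3); the probabilities below are assumed values, and the log base (unspecified in this excerpt) is taken as 2.

```python
# Numeric sketch of the block and posting-list verification costs.
# Assumptions: base-2 log, hypothetical per-block verification probabilities.
import math

def block_cost(p_verify, n_blocks, block_size):
    """C^B_V(b_r) = p^B_V(b_r) * (log|B| + |b_r|)."""
    return p_verify * (math.log2(n_blocks) + block_size)

def posting_list_cost(blocks):
    """blocks: list of (p^B_V(b_r), |b_r|) pairs; sum the per-block costs."""
    B = len(blocks)
    return sum(block_cost(p, B, size) for p, size in blocks)

cost = posting_list_cost([(0.5, 10), (0.1, 40)])
assert abs(cost - (0.5 * (1 + 10) + 0.1 * (1 + 40))) < 1e-9
```

The hot, frequently verified block (p = 0.5) dominates even though it is the smaller one, which is exactly the imbalance the adaptive insertion strategy tries to control.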
Theorem 3. Let q be the query to be inserted into PLwi1wi2, with q.ψ[1 : 3] = {wi1, wi2, wj}, and let PLwi1wi2 have |B| (|B| ≥ 1) blocks. ∆C^PL_V denotes the increase of the verification cost of PLwi1wi2 after q is inserted. We have the following conclusions. Proof.
(1) Case 1: If q is inserted into br, the verification probability p^B_V(br) does not change, since wj ∈ [br.minw, br.maxw], but the number of queries in br increases by 1.
(2) Case 2:
To compare the verification costs and update costs, we introduce a normalization parameter θU (0 < θU ≤ 1) to represent the ratio of the update operation cost to the verification operation cost, i.e., if a query is inserted into a node, 1/θU objects will be verified against it. A query is adaptively inserted into the posting list according to the following theorem.
Theorem 4. Let q be the query to be inserted into PLwi1wi2, with q.ψ[1 : 3] = {wi1, wi2, wj}. If ∃br in PLwi1wi2 such that wj ∈ [br.minw, br.maxw], q is inserted into br. Otherwise, the case among cases 2–4 with the minimum ∆C^PL_V + θU·∆C^PL_U is taken.
Given ∀q ∈ Q, if q contains no more than two keywords, it is directly inserted into the block b0 of the corresponding posting list. Algorithm 1 shows how a query q containing more than two keywords is adaptively inserted into a posting list. If PLq.ψ[1]q.ψ[2] does not exist, a new block b is constructed and q is inserted into b (lines 1–2). Otherwise, a block in PLq.ψ[1]q.ψ[2] is found for q so as to minimize ∆C^PL_V + θU·∆C^PL_U (lines 3–12). First, we find the block, denoted as br, whose br.minw is the smallest value no smaller than q.ψ[3] (line 3). If q.ψ[3] = br.minw, q is inserted into br (line 4). If q.ψ[3] ∈ [br−1.minw, br−1.maxw] (r > 1), q is inserted into block br−1 (lines 5–6). Otherwise, we compute ∆C^PL_V + θU·∆C^PL_U according to cases 2–4 in Theorem 3 and select the minimum case (lines 7–12). It is worth noting that for the first block of the list only cases 3–4 apply, and if q.ψ[3] is larger than br.minw of all the blocks, only cases 2 and 4 apply.
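A much-simplified sketch of this insertion flow follows. It is not Algorithm 1: the cost deltas of cases 2–4 are replaced by a stand-in heuristic, and all names and the θU handling are our assumptions; only the block-locating steps (lines 3–6) follow the description above.

```python
# Simplified adaptive insertion into a blocked posting list.
# blocks: list of [minw, maxw, queries], sorted by minw, disjoint intervals.
import bisect

def insert_query(blocks, qid, wj, theta_u=0.5):
    mins = [b[0] for b in blocks]
    i = bisect.bisect_left(mins, wj)          # first block with minw >= wj
    if i < len(blocks) and blocks[i][0] == wj:
        blocks[i][2].append(qid)              # line 4: wj == b_r.minw
        return
    if i > 0 and blocks[i - 1][0] <= wj <= blocks[i - 1][1]:
        blocks[i - 1][2].append(qid)          # lines 5-6: interval covers wj
        return
    # Stand-in for cases 2-4: extend the smaller neighboring block if it is
    # cheap enough relative to 1/theta_u, otherwise open a new block.
    grow = min(((len(blocks[k][2]), k) for k in (i - 1, i)
                if 0 <= k < len(blocks)), default=None)
    if grow is not None and grow[0] <= 1 / theta_u:
        k = grow[1]
        blocks[k][0] = min(blocks[k][0], wj)
        blocks[k][1] = max(blocks[k][1], wj)
        blocks[k][2].append(qid)
    else:
        blocks.insert(i, [wj, wj, [qid]])

blocks = [[3, 5, ["q1"]], [9, 9, ["q2"]]]
insert_query(blocks, "q7", 4)     # falls inside [3, 5]
insert_query(blocks, "q8", 20)    # past all blocks: extend or open a block
assert blocks[0][2] == ["q1", "q7"]
assert any("q8" in b[2] for b in blocks)
```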
Computation complexity. In the worst case, the computation cost of Algorithm 1 is given by Lemma 1; that is, in posting lists constructed from two keywords, the complexity of inserting a query at a node is O(|o.ψ|^2 log|V|^2 + |o.ψ|^3 (log|B| + (|b|/|B|)·|b.ψ|)). The algorithm can adaptively adjust |B| and |b|.

The Memory-Based Cost Model
A memory-based cost model associates queries with the optimal quadtree nodes. Given the search range of a CkQST, the model traverses the quadtree from the root node, compares the sum of the verification cost and the index update cost when the query is associated with the current node against that of its child nodes, and selects the smaller one. The verification cost is the product of the number of verified objects and the expected verification cost per object, and the update cost is the expected cost of inserting the query into the corresponding block of the posting list.

Definition 3 (Minimum Bounding Node).
Given ∀q ∈ Q and search range q.SR_k, if ∃N such that q.SR_k ⊆ N.R, and for any child node n_i of N, q.SR_k ⊄ n_i.R, then N is the minimum bounding node of q, where N.R and n_i.R are the regions covered by N and n_i, respectively. That is, the minimum bounding node of q is the node that covers its search range while none of its child nodes completely covers it.
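For intuition, the descent implied by Definition 3 can be sketched as follows, assuming axis-aligned regions represented as (x1, y1, x2, y2) tuples and a hypothetical quadtree described only by its region splits.

```python
def contains(region, rect):
    """True if region (x1, y1, x2, y2) fully covers rect."""
    return (region[0] <= rect[0] and region[1] <= rect[1]
            and region[2] >= rect[2] and region[3] >= rect[3])

def children(region):
    """Split a quadtree region into its four child quadrants."""
    x1, y1, x2, y2 = region
    mx, my = (x1 + x2) / 2, (y1 + y2) / 2
    return [(x1, y1, mx, my), (mx, y1, x2, my),
            (x1, my, mx, y2), (mx, my, x2, y2)]

def minimum_bounding_node(root, search_range, max_depth=10):
    """Descend from the root: the minimum bounding node is the deepest node
    that still fully covers the query's search range (Definition 3)."""
    node, depth = root, 0
    while depth < max_depth:
        covering = [c for c in children(node) if contains(c, search_range)]
        if not covering:          # no child fully covers the range: stop here
            return node
        node = covering[0]        # at most one quadrant can cover it fully
        depth += 1
    return node
```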
Verification cost. Given ∀q ∈ Q and its minimum bounding node N, with q.ψ[1:3] = {w_i1, w_i2, w_i3}, if q is associated with N, the verification cost within a unit time interval, denoted as C_V^q(q, N), can be estimated by Equation (4). We assume that both the query and the object contain more than two keywords, and that the average verification cost is one unit of time.
Specifically, if the query or the object contains only one or two keywords, the verification cost is estimated by Equation (5). This case is simple, so we omit the details.
where Num_o^N(N) is the number of objects falling in N within the unit time interval, and p_V^q(q, N) is the probability that q is verified if it is inserted into b_r in N, i.e., the probability that an object contains the terms w_i1 and w_i2 and some w_j ∈ [w_i3, b_r.maxw]; it can be estimated by Equation (6).
where |b_r.ψ| is the number of keywords contained in b_r.ψ. E_V(q, N) is the verification cost if q is inserted into b_r in N, and can be estimated as the expected verification cost of the queries in b_r, i.e., by Equation (7).
where (Num_q)_{≤w_j} is the number of queries in b_r subject to q_i.ψ[3] ≤ w_j. Similarly, if the query q is associated with a set of non-overlapping nodes, denoted as NS, the verification cost is denoted as C_V^q(q, NS) and can be estimated by Equation (8).
For ∀q ∈ Q, we find the optimal associated nodes starting from its minimum bounding node, checking whether the query should be associated with the current node or with its child nodes. The difference between the two verification costs is estimated by Equation (9).
where INS keeps the intermediate result, and n.child contains the child nodes of n that intersect with the search range of the query. It is worth noting that if ∆C_V^q ≤ 0, we terminate the iteration. Update cost. Inserting or deleting queries in nodes incurs an index update cost. We delay deleting queries until they are accessed again, so the deletion cost is ignored. If a query q is associated with its minimum bounding node N and is inserted into a block b_r of posting list PL_{q.ψ[1]q.ψ[2]} in N, the insertion cost consists of two parts: the time to find the corresponding block and the time to find the insertion position. The update cost, denoted as C_U^q(q, N), can be estimated by Equation (10).
If q is associated with a set of non-overlapping nodes, denoted as NS, the update cost is denoted as C_U^q(q, NS) and can be estimated by Equation (11).
Similarly to ∆C_V^q, the difference between the two update costs, when the query is associated with the node versus with its child nodes, is estimated by Equation (12).
Given ∀q ∈ Q and search range q.SR_k, we start from the minimum bounding node and compute ∆C_V^q and ∆C_U^q between associating the query with the node and with its child nodes. If ∆C_V^q ≥ θ_U · ∆C_U^q, the child nodes are optimal; otherwise, the node is optimal. The computation cost consists of two parts: finding the minimum bounding node of q, and finding an optimal association among the descendant nodes of the minimum bounding node. The computation cost of the first part is O(θ_h), and that of the second part is O((4^θ_h − 1)/3), i.e., in the worst case, the node is partitioned down to the leaf nodes, where θ_h is the height of the quadtree.
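The association procedure described above can be sketched as follows; delta_cv, delta_cu, and child_fn are hypothetical callables standing in for Equations (9) and (12) and the quadtree child lookup.

```python
def optimal_association(node, delta_cv, delta_cu, child_fn, theta_u=0.001):
    """Sketch of the cost-based descent: starting from the minimum bounding
    node, associate the query with a node's children only while the saving in
    verification cost outweighs theta_u times the extra update cost.
    delta_cv(n)/delta_cu(n) return the cost differences for splitting n;
    child_fn(n) returns n's children that intersect the search range."""
    result = []
    stack = [node]
    while stack:
        n = stack.pop()
        kids = child_fn(n)
        if kids and delta_cv(n) >= theta_u * delta_cu(n):
            stack.extend(kids)      # children are cheaper overall: descend
        else:
            result.append(n)        # associate the query with this node
    return result
```

With the default θ_U = 0.001, verification savings dominate, so queries tend to be pushed down toward smaller nodes, which matches the tuning behavior reported in the experiments.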

Processing Objects in Batches
For objects verified against the same posting list, an object-processing algorithm, a group matching technique that follows the filtering-and-verification strategy, is proposed to process objects in batches.
A data structure ⟨bid, w, oset⟩ is defined to group the objects verified against the same posting list, where bid (bid > 0) is the block id, w (w ∈ [b_bid.minw, b_bid.maxw]) is a term, and oset is the set of objects that are verified against block b_bid and contain w. For convenience of description, wset_bid is the set of terms satisfying w ∈ [b_bid.minw, b_bid.maxw], and oset_{bid,w} is the set of objects that are verified against the queries in block b_bid and contain w.
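The grouping step can be sketched as below; the object ids, keyword sets, and block dicts are illustrative stand-ins for the paper's ⟨bid, w, oset⟩ structure.

```python
from collections import defaultdict

def group_objects(blocks, objects):
    """Sketch of building the <bid, w, oset> groups: for each block and each
    term of an object falling in that block's keyword range, collect the
    objects so a whole group can later be verified with one shared scan.
    blocks: list of dicts with 'minw'/'maxw'; objects: {oid: keyword set}."""
    groups = defaultdict(set)            # (bid, w) -> set of object ids
    for oid, keywords in objects.items():
        for bid, b in enumerate(blocks):
            for w in keywords:
                if b['minw'] <= w <= b['maxw']:
                    groups[(bid, w)].add(oid)
    return groups
```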
Algorithm 2 describes how to process a set of objects represented by ⟨bid, w, oset⟩, which are verified against the queries in the posting list PL_{w_i1 w_i2}. If b_bid is b_0, the queries in b_0 are verified against the objects in oset (lines 1-5). Otherwise, for any term w_j in wset_bid, according to Theorem 2, if b_bid.minw > w_j, we check the next term (line 7); if b_bid.maxw < w_j, we check the next block (line 8); otherwise, for each query q in b_bid, if q.ψ[3] > w_j, we check the next term (line 10); if q.ψ[|q.ψ|] < w_j, we check the next query (line 11); otherwise, we verify whether the object is a result of q, and if so, ⟨q, o⟩ is added to QOS (lines 12-13). Moreover, the result list and search range of q are updated. Computation complexity. Algorithm 2 processes objects in batches over an ordered posting list. In the worst case, objects are processed individually. As Lemma 1 shows, in posting lists constructed from two keywords, the time complexity of finding the qualified queries for an object at a node is O(|o.ψ|^2 log|V|^2 + |o.ψ|^3 (log|B| + (|b|/|B|)·|b.ψ|)).
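The shared-scan verification of Algorithm 2 can be sketched as follows. This is a simplified sketch: queries are keyword tuples sorted by their third keyword, objects are keyword sets, and the result-list and search-range updates are omitted.

```python
def process_batch(blocks, groups):
    """Sketch of Algorithm 2: verify grouped objects against an ordered
    posting list with a shared scan. blocks: list of dicts with 'minw',
    'maxw', and 'queries' (queries sorted by third keyword); groups maps a
    term w to the objects (keyword sets) containing w. Returns candidate
    (query, object) matches."""
    matches = []
    for b in blocks:
        for w, objects in sorted(groups.items()):
            if w < b['minw']:        # term before block range: next term
                continue
            if w > b['maxw']:        # term after block range: next block
                break
            for q in b['queries']:
                if q[2] > w:         # third keyword beyond w: next term
                    break
                if q[-1] < w:        # q's last keyword before w: next query
                    continue
                for o in objects:    # full verification of the candidates
                    if set(q).issubset(o):
                        matches.append((q, o))
    return matches
```

Because both the terms and the queries are ordered, each term advances the scan monotonically, which is what allows objects sharing keywords to be processed together.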

Cost-Based k-Skyband Technique
To reduce the re-evaluation cost, a cost-based k-skyband technique is proposed to judiciously determine an optimal search range for CkQST such that the overall cost defined in the cost model is minimized. Specifically, for ∀q ∈ Q, three parameters are defined: an extended search range, denoted as q.SR, where q.SR ⊇ q.SR_k; a k-skyband, i.e., an extended result list, denoted as q(O), where q(O) ⊇ q(O)_k; and the number of objects containing all query keywords within q.SR at the initial timestamp, denoted as q.θ_k, where q.θ_k ≥ q.k.

Definition 4 (Loose Matching).
Given ∀o ∈ O and ∀q ∈ Q, o loosely matches q only if q.ψ ⊆ o.ψ and o.loc ∈ q.SR; all the objects that loosely match q are denoted as q(O)_sup. On the other hand, if |q(O)| < k and |q(O)_sup| ≥ k, then ∃o ∈ q(O)_sup with o ∉ q(O), which means o loosely matches q but is dominated by more than k other objects, contradicting |q(O)| < k. The theorem is proved.
According to Theorem 5, the extended result list is a superset of the exact result list, from which the kNN objects can be extracted, and the number of objects in the extended result list is less than k only if the number of objects in q(O)_sup is less than k.
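The k-skyband idea can be illustrated with a toy dominance relation; the (distance, timestamp) pairs and the dominance test below are assumptions for illustration, not the paper's exact definition.

```python
def k_skyband(objects, k):
    """Sketch of the k-skyband behind the extended result list: keep an
    object only while fewer than k other objects dominate it. Here an
    object is a (distance, timestamp) pair, and a dominates b when a is
    both at least as close and at least as fresh (assumed relation)."""
    def dominates(a, b):
        return a[0] <= b[0] and a[1] >= b[1] and a != b
    return [o for o in objects
            if sum(dominates(p, o) for p in objects) < k]
```

An object evicted from the k-skyband can never re-enter the kNN result before it expires, which is why the extended list suffices for answering the query until it shrinks below k.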
Given ∀q ∈ Q, an extended search range q.SR, and the corresponding extended result list q(O), three costs are defined in the cost-based k-skyband technique: the verification cost of q within q.SR, the update cost of q(O), and the re-evaluation cost.
Verification cost. The verification cost of q within q.SR per unit time interval, denoted as C_V^R(q, q.SR), is estimated by Equation (13), i.e., the verification cost if q is inserted into all the leaf nodes that intersect with q.SR. Update cost. The update cost of q(O) is estimated by Equation (14), where freq_U^o is the number of object updates within a unit time interval; p_M^q(q.SR) is the probability that an object loosely matches q within the search range q.SR, estimated by p_M^q(q.SR) = q.θ_k / Num_o^N(n_0); 1/2 is the probability that a qualified object arrives; and |q(O)| can be estimated by |q(O)| = max(k · ln(θ_k / k), θ_k).
Re-evaluation cost. The re-evaluation cost within a unit time interval is denoted as (1/θ_t)·C_Ie(q), where θ_t is the re-evaluation period, i.e., the shortest time required between two consecutive independent evaluations, and 1/θ_t is the frequency of re-evaluation. C_Ie(q) is the cost of one re-evaluation, approximated by the verification cost within q.SR_k, i.e., C_Ie(q) = C_V^R(q, q.SR_k) = Σ_{n.R ∩ q.SR_k ≠ ∅} C_V^q(q, n). The overall cost in the cost-based k-skyband technique, denoted as C_Re(q), is shown in Equation (15). When C_Re(q) is minimal, the search range is optimal.
In the following, we discuss how to obtain θ_t, the re-evaluation period, i.e., the shortest time in which |q(O)| is reduced to k − 1 since the last re-evaluation. For ∀q ∈ Q, the update process of the number of objects in q(O) can be modeled as a simple random walk, a stochastic sequence S_l with S_0 being the original status, defined by S_l = Σ_{i=1}^{l} X_i, where X_i, the object update, is an independent and identically distributed random variable. In q(O), if an object is inserted, X_i = 1; if an object expires or is dominated, X_i = −1; otherwise X_i = 0. It is difficult to estimate X_i directly because objects in q(O) may be evicted by the dominance relationship; for example, after an object is inserted, the number of objects may decrease because objects whose dominance counters reach k are evicted. According to Theorem 5, the number of objects in q(O) is less than k only if the number of objects in q(O)_sup is less than k, and the objects in q(O)_sup do not dominate each other. Therefore we instead estimate the shortest time in which |q(O)_sup| is reduced to k − 1, denoted as θ′_t, and take θ_t = θ′_t. The object update in q(O)_sup at any timestamp can be estimated by Equation (16).
The number of object updates required to reduce the number of objects from q.θ_k to k − 1 in q(O), denoted as Z(q), is estimated by Equation (17), from which θ_t is then estimated. For ∀q ∈ Q, the variables in Equation (15) are q.SR and q.θ_k. To minimize Equation (15), we employ an incremental estimation algorithm to compute the optimal q.θ_k and the corresponding q.SR.
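A back-of-the-envelope version of the period estimate can be sketched as follows; the drift-based formula is a simplification under assumed insertion/expiration probabilities, not the paper's Equations (16)-(17).

```python
def reevaluation_period(theta_k, k, p_insert, p_expire, freq_update):
    """Rough sketch of estimating theta_t: model |q(O)_sup| as a random walk
    with per-update drift E[X] = p_insert - p_expire. If the drift is
    negative, the expected number of updates to fall from theta_k to k - 1
    is roughly the gap divided by the drift magnitude, and theta_t is that
    count divided by the object-update frequency."""
    drift = p_insert - p_expire
    if drift >= 0:
        return float('inf')          # no expected shrinkage: period unbounded
    steps = (theta_k - (k - 1)) / -drift   # expected Z(q) under the drift model
    return steps / freq_update             # convert updates to time
```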
To accommodate the extended search range in the object-processing algorithm and the index construction and maintenance algorithms, we replace q(O)_k with q(O) and q.SR_k with q.SR.

Experiments
In this section, we conduct a set of comprehensive experiments to evaluate the efficiency and scalability of the key techniques. Section 4.1 introduces the experimental environment. Section 4.2 evaluates the effect of three tuning parameters and the re-evaluation technique. Section 4.3 evaluates the efficiency and scalability of our index techniques.
Datasets. Three datasets are collected for experimental evaluation; their statistics are shown in Table 2. TWEETS contains tweets collected from Twitter [8] and is the default dataset. GN is obtained from the US Board on Geographic Names, in which each record contains a geo-location and some terms (http://geonames.usgs.gov/). GOWALLA is a synthetic dataset, in which each record contains a geo-location collected from Gowalla (https://snap.stanford.edu/data/loc-gowalla.html) and fewer than 50 terms randomly assigned from 20 Newsgroups (http://people.csail.mit.edu/jrennie/20Newsgroups). Based on these datasets, we generate queries and objects. Query workload. For each sampled record, we take its geo-location as the geo-location of the query and randomly select j of its terms as the query keywords, where 1 ≤ j ≤ 5. The number of returned kNN results k is set to a default value. At any timestamp, the expired queries are randomly selected.
Object workload. For each sampled record, we take all its terms as the object keywords, and use geo-locations that deviate from the original geo-location by 0.01% to 1% of the maximum distance in the region. At any timestamp, the expired objects are randomly selected.
Set of Queries and Objects. For each dataset, unless otherwise specified, we select 5 M objects and queries to construct the query index and object index initially, and generate three test sets, each of which contains 2 M objects and 2 M queries. The evaluation criteria take the average performance of three test sets.
Baseline. We compare our index techniques with IQ-tree [1], Ap-tree [3], and FAST [6]. By default, for Ap-tree, the fanout, partition threshold, and KL-divergence threshold are set to 200, 40, and 0.001, respectively. We use the number of verifications in place of the number of I/Os in the cost model of IQ-tree. In the following sections, we use AOIQ-tree to denote the index that integrates the quadtree with the ordered, inverted index and the three key techniques. We also compare the cost-based k-skyband technique with Kmax [16] when they are integrated into the AOIQ-tree.
Evaluation criteria. We report four criteria: (1) the index construction time (i.e., ICT), i.e., the time of inserting queries into index after finding their search range; (2) the average incoming object processing time (i.e., AOPT), i.e., the time of finding the affected queries and modifying their corresponding parameters when an object arrives; (3) the average index updating time caused by objects (i.e., AIUT), i.e., the time of updating query index after processing objects; (4) index size, i.e., the memory used for constructing the query index. By default, the number of keywords for constructing ordered, inverted index m, the number of kNN results returned for CkQST k, the height of the quadtree θ h , the ratio of the update operation to the verification operation θ U , and the number of object updates within unit time interval f req o U are set to 2, 20, 10, 0.001, and 20,000.

Experimental Tuning
In this section, a series of experiments are conducted to evaluate the effect of parameters in techniques on the AOIQ-tree.
Effect of m. Figure 4 shows the evaluation criteria of the AOIQ-tree when m takes 1, 2, 3, 4, and 5. According to Lemma 1, if m is small, the number of queries in posting lists is large, so it takes a long time to verify the queries in the posting lists; conversely, if m is large, it takes a long time to find the posting lists to be verified. Therefore, the optimal m is neither too small nor too large. As shown in Figure 4, when m takes 2, the performance of the index is the best. The larger m is, the larger the index size. When m > 3, the verification cost and update cost in a single posting list decrease, so the cost model maps queries to nodes with large regions; hence ICT, AIUT, and index size decrease, while AOPT increases.
Effect of θ_U. Figure 5 shows the evaluation criteria of the AOIQ-tree when θ_U takes 0.0001, 0.001, 0.01, and 0.1. When θ_U takes 0.0001, the verification cost plays the major role in finding the associated nodes; therefore, queries are associated with many small nodes, so ICT is long, AOPT and AIUT are short, and the index size is large. When θ_U increases, the index update cost becomes more important, so queries are associated with fewer, larger nodes; thus ICT and index size decrease, while AOPT and AIUT increase.
Effect of k. Figure 6 shows the evaluation criteria of the AOIQ-tree when k takes 10, 20, 30, 40, and 50. When k is small, few objects are returned for each query, the search range of queries is small, and queries are associated with many small nodes, so the ICT is long, AOPT and AIUT are short, and the index size is large. On the contrary, when k is large, the search range of the queries becomes larger and queries are associated with fewer, larger nodes, so the ICT and index size decrease and the AOPT and AIUT increase.
Effect of re-evaluation techniques. We evaluate the re-evaluation performance in three cases, denoted as AOIQ-tree, AOIQ-tree_Kmax, and AOIQ-tree_Skyband. AOIQ-tree keeps only k objects in the result list, AOIQ-tree_Kmax keeps k_max = 2k objects, and the number of objects kept by AOIQ-tree_Skyband is calculated according to the cost-based k-skyband technique. Figure 7 shows the ICT, AOPT, and EOPT in the three cases with varied k, where EOPT is the average processing time for expired objects, i.e., the average time to modify the parameters of the affected queries, or to re-evaluate the queries whose result lists contain fewer than k objects, when an object expires. The number of objects maintained in AOIQ-tree_Kmax is larger than in AOIQ-tree_Skyband, which is in turn larger than in AOIQ-tree. The ICT of AOIQ-tree_Kmax is the shortest, and that of AOIQ-tree is the longest. The AOPT of AOIQ-tree and AOIQ-tree_Skyband are shorter than that of AOIQ-tree_Kmax, while the EOPT of AOIQ-tree_Kmax and AOIQ-tree_Skyband are shorter than that of AOIQ-tree. This behavior is related to the number of objects maintained in the result lists. Compared with AOIQ-tree, the EOPT of the other two techniques is much smaller, and when k takes 10 or 20, the average update time is close to 0.

Performance Evaluation
Evaluation on different datasets. We evaluate the efficiency of the key techniques on the three datasets in Figure 8; the number of queries is 5 M. As shown in Figure 8, except for the index size, AOIQ-tree is always the best. IQ-tree has good spatial filtering performance, but its textual filtering ability is weak; Ap-tree comprehensively considers the spatial and textual distribution of queries, but its index construction and update costs are expensive; FAST has good textual filtering performance, but its spatial filtering ability is weak. The memory-based cost model in AOIQ-tree minimizes the verification cost and update cost, which keeps the number of queries in spatial nodes neither too large nor too small; the ordered, inverted index in AOIQ-tree is constructed from two keywords, which makes the verification cost close to that of an ordered keyword trie and the update cost close to that of a ranked-key inverted index. AOIQ-tree takes up the most memory, which is determined by the structure of its posting lists. The evaluation criteria on GN are the smallest and those on GOWALLA the largest, because the data in GOWALLA contain far more keywords than the other two datasets. For GOWALLA, the AOPT of Ap-tree is shorter than that of FAST: a large number of queries in GOWALLA contain only frequent keywords, and compared with FAST, Ap-tree comprehensively considers the spatial and textual distribution of queries, i.e., its index filtering is more powerful, so its AOPT is shorter.
Effect of number of queries. To evaluate the scalability of the key techniques with the number of queries and objects, we increase the number of queries and objects from 1 M to 20 M to construct the object index and query index. As Figure 9 shows, the ICT, AOPT, and AIUT of all indexes increase as the number of queries increases, and AOIQ-tree is much more scalable. For instance, it takes only 0.23 ms on average to process an incoming object when the number of queries reaches 20 M, which is 54% faster than Ap-tree and 36% faster than FAST. This shows that our techniques have good scalability. The AIUT of AOIQ-tree is the shortest, and that of Ap-tree is the longest. That is because AOIQ-tree associates queries with optimal nodes to adapt to the objects on the data streams, whereas some Ap-tree nodes are reconstructed if many queries in them are updated. Compared with Ap-tree, only some queries are updated in FAST nodes, yet AOIQ-tree's AIUT is still shorter.
Effect of number of query keywords |q.ψ|. To evaluate the scalability of the key techniques with |q.ψ|, we increase |q.ψ| from 1 to 5. As shown in Figure 10, the evaluation criteria of IQ-tree are insensitive to the number of keywords since it focuses on the spatial distribution of the queries. Ap-tree, FAST, and our index consider the keyword distribution of the queries, so their evaluation criteria vary with |q.ψ|. As |q.ψ| increases, the ICT and index size increase and the AOPT decreases. That is because Ap-tree continuously calculates how to partition queries into nodes according to query keywords, increasing the number of textual nodes and the height of the tree; for FAST, as |q.ψ| increases, more keywords attached to posting lists become frequent, and queries are more likely to be inserted into multiple higher-level nodes; for AOIQ-tree, the time for sorting the queries increases when the ordered, inverted index is constructed.

Conclusions and Future Research Perspectives
The challenge in evaluating CkQST is how to strike a balance between the filtering ability and the update cost of the spatial-textual index. To address this challenge, we use a quadtree and an inverted index to organize millions of CkQST with three techniques. A memory-based cost model maps the search range of CkQST to quadtree nodes to balance the spatial filtering ability of the index against the cost of updating it. The balance can be further tuned by the cost-based k-skyband technique, which judiciously determines the search range for CkQST according to the workload of objects. An adaptive block-based ordered, inverted index enhances the textual filtering ability. The experimental results on real-world and synthetic datasets show that the proposed techniques are effective and scalable, and can significantly improve the evaluation efficiency of CkQST. Future work on evaluating continuous queries over spatial-textual data streams includes addressing the challenges of continuous queries in mobile and other relevant scenarios, exploring efficient evaluation techniques based on hardware such as Graphics Processing Units and distributed clusters, and developing trade-off strategies between precision and evaluation efficiency.
Author Contributions: Rong Yang proposed the methods, implemented the algorithms for the experiments, and wrote the manuscript; Baoning Niu provided suggestions for the methods and experiments, reviewed and modified the manuscript. All authors have read and agreed to the published version of the manuscript.