Fair Max–Min Diversity Maximization in Streaming and Sliding-Window Models

Diversity maximization is a fundamental problem with broad applications in data summarization, web search, and recommender systems. Given a set X of n elements, the problem asks for a subset S of k≪n elements with maximum diversity, as quantified by the dissimilarities among the elements in S. In this paper, we study diversity maximization with fairness constraints in streaming and sliding-window models. Specifically, we focus on the max–min diversity maximization problem, which selects a subset S that maximizes the minimum distance (dissimilarity) between any pair of distinct elements within it. Assuming that the set X is partitioned into m disjoint groups by a specific sensitive attribute, e.g., sex or race, ensuring fairness requires that the selected subset S contains k_i elements from each group i ∈ [m]. Although diversity maximization has been extensively studied, existing algorithms for fair max–min diversity maximization are inefficient for data streams. To address the problem, we first design efficient approximation algorithms for this problem in the (insert-only) streaming model, where data arrive one element at a time, and a solution should be computed based on the elements observed in one pass. Furthermore, we propose approximation algorithms for this problem in the sliding-window model, where only the latest w elements in the stream are considered for computation to capture the recency of the data. Experimental results on real-world and synthetic datasets show that our algorithms provide solutions of comparable quality to the state-of-the-art offline algorithms while running several orders of magnitude faster in the streaming and sliding-window settings.


Introduction
Data summarization is a common approach to tackling the challenges of a large volume of data in data-intensive applications. That is because, rather than performing high-complexity analyses on the whole dataset, it is often beneficial to perform them on a representative and significantly smaller summary of the dataset, thus reducing the processing costs in terms of both running time and space usage. Typical techniques for data summarization [1] include sampling, sketching, coresets, and diverse data selection.
In this paper, we focus on diversity-aware data summarization, which finds application in a wide range of real-world problems. For example, in database query processing [2,3], web search [4,5], and recommender systems [6], the output might be too large to be presented to the user in its entirety, even after filtering the results by relevance. One feasible solution, then, is to present the user with a small but diverse subset that is easy to process and representative of the complete results. As another example, when training machine learning models on massive data, feature and subset selection is a standard method to improve efficiency. As indicated by [7,8], selecting diverse features or subsets can lead to a better balance between efficiency and accuracy. A key technical problem in such cases is diversity maximization [9][10][11][12][13][14][15][16][17][18][19][20].
In more detail, for a given set X of elements in some metric space and a size constraint k, diversity maximization asks for a subset of k elements with maximum diversity. Formally, diversity is quantified by a function that captures how well a subset spans the range of elements in X, and is typically defined in terms of distances or dissimilarities among elements in the subset. Prior studies [3,4,6,12] have suggested many different objectives of this kind. Two of the most popular ones are max-sum dispersion, which aims to maximize the sum of the distances between all pairs of elements in the selected subset S, and max-min dispersion, which aims to maximize the minimum distance between any pair of distinct elements in S. Figure 1 illustrates a selection of the 10 most diverse points from a two-dimensional point set with each of the two objectives for diversity maximization. As shown in Figure 1, max-sum dispersion tends to select "outliers" and may include highly similar elements in the solution, making it unsuitable for applications requiring more uniform coverage of the span of data. Therefore, we focus on diversity maximization with the objective of max-min dispersion, referred to as max-min diversity maximization, in this paper.

In addition to diversity, fairness in data summarization is also attracting increasing attention [8,[21][22][23][24][25][26][27]. Several studies reveal that the biases with respect to (w.r.t.) sensitive attributes, such as sex, race, or age, in underlying datasets can be retained in the summaries and could lead to unfairness in data-driven social computational systems such as education, recruitment, and banking [8,23,26].
One of the most common notions for fairness in data summarization is group fairness [8,[21][22][23]27], which partitions the dataset into m disjoint groups based on a specific sensitive attribute and introduces a fairness constraint that limits the number of elements from group i in the data summary to k_i for every group i ∈ [m] (see Figure 2 for an illustrative example). However, most existing methods for diversity maximization cannot easily be adapted to satisfy such fairness constraints. Moreover, the few methods that can deal with fairness constraints are specific to max-sum diversity maximization [9,11,13]. To the best of our knowledge, the methods in [17,20] are the only means of max-min diversity maximization with fairness constraints.
Furthermore, since many applications of diversity maximization are in the realm of massive data analysis, it is essential to design efficient algorithms for processing large-scale datasets. The (insert-only) streaming and sliding-window models are well-recognized frameworks for big data processing. In the streaming model, an algorithm is only permitted to process each element in the dataset sequentially in one pass, is allowed to take time and space that are sublinear in or even independent of the dataset size, and is required to provide solutions of comparable quality to those returned by the offline algorithms. In the sliding-window model, the computation is further restricted to the latest w elements in the stream, and an algorithm is required to find good solutions in sublinear time and space w.r.t. the window size. However, the only known algorithms [17,20] for fair max-min diversity maximization are designed for the offline setting and are very inefficient in the streaming and sliding-window models.

Figure 2. Comparison of (a) unconstrained max-min diversity maximization and (b) fair max-min diversity maximization. We have a set of individuals, each described by two attributes, partitioned into two disjoint groups of red and blue, respectively. Fair diversity maximization returns a subset of size 10 that maximizes diversity in terms of attributes and contains an equal number (i.e., k_i = 5) of elements from both groups.
Our Contributions: In this paper, we propose novel streaming and sliding-window algorithms for the max-min diversity maximization problem with fairness constraints. Our main contributions are summarized as follows:
• We formally define the problem of fair max-min diversity maximization (FDM) in metric spaces. Then, we describe the existing streaming and sliding-window algorithms for (unconstrained) max-min diversity maximization [14]. In particular, we improve the approximation ratio of the existing streaming algorithm from (1−ε)/5 to (1−ε)/2 for any parameter ε ∈ (0, 1) by refining the analysis of [14].
• We propose two novel streaming algorithms for FDM. Our first algorithm, called SFDM1, is ((1−ε)/4)-approximate for FDM when there are two groups in the dataset. It takes O(k log(∆)/ε) time per element in the stream processing, where ∆ is the ratio of the maximum and minimum distances between any pair of elements, spends O(k² log(∆)/ε) time for post-processing, and stores O(k log(∆)/ε) elements in memory. Our second algorithm, called SFDM2, is ((1−ε)/(3m+2))-approximate for FDM with an arbitrary number m of groups. SFDM2 also takes O(k log(∆)/ε) time per element in the stream processing but requires a longer O((k²m log(∆)/ε) · (m + log² k)) time for post-processing and stores O(km log(∆)/ε) elements in memory.
• We further extend our two streaming algorithms to the sliding-window model. The extended SWFDM1 and SWFDM2 algorithms achieve approximation factors of Θ(1) and Θ(m⁻¹) for FDM with m = 2 and an arbitrary m, respectively, when any Θ(1)-approximation algorithm for unconstrained max-min diversity maximization is used for post-processing. Additionally, their time and space complexities increase by a factor of O(log(∆)/ε) compared with SFDM1 and SFDM2, respectively.
• Finally, we evaluate the performance of our proposed algorithms against the state-of-the-art algorithms on several real-world and synthetic datasets.
The results demonstrate that our algorithms provide solutions of comparable quality for FDM to those returned by the state-of-the-art algorithms while running several orders of magnitude faster in the streaming and sliding-window settings.
A preliminary version of this paper was published in [28]. In this extended version, we make the following novel contributions with respect to [28]: (1) We propose two novel algorithms for FDM in the sliding-window model along with the implementation of an existing algorithm for unconstrained max-min diversity maximization in the sliding-window model [14]. Moreover, we analyze the approximation factors and complexities of the two algorithms for fair sliding-window diversity maximization; (2) We conduct more comprehensive examinations of our streaming algorithms by implementing and comparing them with a new offline baseline called FairGreedyFlow [20], which achieves a better approximation factor than previous offline algorithms. The additional results further confirm the superior performance of our streaming algorithms; (3) We conduct new experiments for FDM in the sliding-window setting to evaluate the performance of our sliding-window algorithms compared with the existing offline algorithms. The new experimental results validate their efficiency, effectiveness, and scalability.
Paper Organization: The rest of this paper is organized as follows. The related work is reviewed in Section 2. In Section 3, we introduce the basic concepts and formally define the FDM problem. In Section 4, we first propose our streaming algorithms for FDM. In Section 5, we further design our sliding-window algorithms for FDM. Our experimental setup and results are described in Section 6. Finally, we conclude the paper in Section 7.
Related Work
An early study [33] proved that both max-sum and max-min diversity maximization are NP-hard even in metric spaces. The classic approaches to both problems are the greedy algorithms [34,35], which achieve the best possible approximation ratio of 1/2 unless P = NP. Indyk et al. [12] proposed composable coreset-based approximation algorithms for diversity maximization. Aghamolaei et al. [31] improved the approximation ratios in [12]. Ceccarello et al. [16] proposed coreset-based approximation algorithms for diversity maximization in MapReduce and streaming settings where the metric space has a bounded doubling dimension. Borassi et al. [14] proposed sliding-window algorithms for diversity maximization. Epasto et al. [36] further proposed improved sliding-window algorithms for diversity maximization specific to Euclidean spaces. Drosou and Pitoura [18] studied max-min diversity maximization on dynamic data. They proposed a ((b−1)/(2b²))-approximation algorithm using a cover tree of base b. Bauckhage et al. [15] proposed an adiabatic quantum computing solution for max-sum diversification. Zhang and Gionis [19] extended diversity maximization to clustered data. Nevertheless, all the above methods only consider diversity maximization problems without fairness constraints.
There have been several studies on diversity maximization under matroid constraints, of which the fairness constraints are special cases. Abbassi et al. [11] proposed a (1/2 − ε)-approximation local search algorithm for max-sum diversification under matroid constraints. Borodin et al. [9] proposed a (1/2 − ε)-approximation algorithm for maximizing the sum of a submodular function and a max-sum dispersion function. Cevallos et al. [30] extended the local search algorithm to distances of negative type. They also proposed a PTAS for this problem via convex programming [29]. Bhaskara et al. [37] proposed a 1/8-approximation algorithm for sum-min diversity maximization under matroid constraints using linear relaxations. Ceccarello et al. [13] proposed a coreset-based approach to matroid-constrained max-sum diversification in metric spaces of bounded doubling dimension. Nevertheless, the above methods are still not applicable to the max-min dispersion problem. The only known algorithms for fair max-min diversity maximization [17,20,38] are offline algorithms that are inefficient for data streams. We will compare our proposed algorithms with them, both theoretically and empirically. To the best of our knowledge, there has not been any previous streaming or sliding-window algorithm for fair max-min diversity maximization.
In addition to diversity maximization, fairness has also been considered in many other data summarization problems, such as k-center [21][22][23], determinantal point processes [8], coresets for k-means clustering [24,25], and submodular maximization [26,27]. However, since their optimization objectives differ from diversity maximization, the proposed algorithms for their fair variants cannot be directly used for our problem.

Preliminaries
In this section, we introduce the basic concepts and formally define the fair max-min diversity maximization problem.
Let X be a set of n elements from a metric space with distance function d(·, ·) capturing the dissimilarities among elements. Recall that d(·, ·) is nonnegative, symmetric, and satisfies the triangle inequality, i.e., d(x, y) + d(y, z) ≥ d(x, z) for any x, y, z ∈ X. Note that all the algorithms and analyses in this paper are general for any distance metric. We further generalize the notion of distance to an element x and a set S as the distance between x and its nearest neighbor in S, i.e., d(x, S) = min_{y∈S} d(x, y).
Our focus in this paper is to find a small subset of the most diverse elements from X. Given a subset S ⊆ X, its diversity div(S) is defined as the minimum of the pairwise distances between any two distinct elements in S, i.e., div(S) = min_{x,y∈S, x≠y} d(x, y). The unconstrained version of diversity maximization (DM) asks for a subset S ⊆ X of k elements maximizing div(S), i.e., S* = argmax_{S⊆X : |S|=k} div(S). We use OPT = div(S*) to denote the diversity of the optimal solution S* for DM. This problem has been proven to be NP-complete [33], and no polynomial-time algorithm can achieve an approximation factor better than 1/2 unless P = NP. One approach to DM in the offline setting is the 1/2-approximation greedy algorithm [34,39], known as GMM.
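As a concrete illustration, GMM can be sketched in a few lines (illustrative code, not the paper's implementation): start from an arbitrary element and then repeatedly add the element that is farthest from the current solution.

```python
def gmm(X, k, d):
    # Greedy 1/2-approximation for max-min diversity maximization (GMM):
    # repeatedly pick the element maximizing its distance to the current
    # solution, i.e., d(x, S) = min_{y in S} d(x, y).
    S = [X[0]]  # an arbitrary starting element
    while len(S) < k:
        S.append(max((x for x in X if x not in S),
                     key=lambda x: min(d(x, y) for y in S)))
    return S

d = lambda p, q: abs(p - q)
print(gmm([0, 1, 6, 10], 3, d))  # picks 0, then 10, then 6
```

Each round scans all remaining elements, so the sketch runs in O(nk) distance computations, which is exactly why GMM needs random access to X and does not carry over to one-pass streams.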
We introduce fairness to diversity maximization when X is composed of several demographic groups defined by a certain sensitive attribute, e.g., sex or race. Formally, suppose that X is divided into m disjoint groups {1, . . . , m} ([m] for short) and a function c : X → [m] maps each element x ∈ X to its group. Let X_i = {x ∈ X : c(x) = i} be the subset of elements from group i in X. Obviously, we have ∪_{i=1}^{m} X_i = X and X_i ∩ X_j = ∅ for any i ≠ j. The fairness constraint assigns a positive integer k_i to each of the m groups and restricts the number of elements from group i in the solution to k_i. We assume that ∑_{i=1}^{m} k_i = k. The fair max-min diversity maximization (FDM) problem is defined as follows: given a set X of n elements with X = ∪_{i=1}^{m} X_i and m size constraints k_1, . . . , k_m ∈ Z⁺, find a subset S that contains k_i elements from X_i and maximizes div(S), i.e., S*_f = argmax_{S⊆X : |S∩X_i|=k_i, ∀i∈[m]} div(S).
We use OPT_f = div(S*_f) to denote the diversity of the optimal solution S*_f for FDM. Since DM is a special case of FDM when m = 1, FDM is NP-hard, and no polynomial-time algorithm can achieve an approximation factor better than 1/2. In addition, our FDM problem is closely related to the concept of a matroid [40] in combinatorics. Given a ground set V, a matroid is a pair M = (V, I), where I is a family of subsets of V (called independent sets) with the following properties: (i) ∅ ∈ I; (ii) for each A ⊆ B ⊆ V, if B ∈ I, then A ∈ I (hereditary property); and (iii) if A ∈ I, B ∈ I, and |A| > |B|, then there exists x ∈ A \ B such that B ∪ {x} ∈ I (augmentation property). An independent set is maximal if it is not a proper subset of any other independent set. A basic property of M is that all its maximal independent sets have the same size, called the rank of the matroid. As is easy to verify, our fairness constraint is a case of a rank-k partition matroid, where the ground set is partitioned into disjoint groups and the independent sets are exactly the sets in which, for each group, the number of elements from this group is at most the group capacity. Our algorithms for general m in Sections 4 and 5 will be built on matroids.
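For illustration, an independence oracle for this partition matroid is a one-liner (all names here, such as `group_of` and `capacity`, are hypothetical):

```python
from collections import Counter

def is_independent(S, group_of, capacity):
    # S is independent iff it contains at most capacity[i] elements
    # of each group i (the rank-k partition matroid of the fairness constraint)
    counts = Counter(group_of[x] for x in S)
    return all(counts[i] <= capacity[i] for i in counts)

group_of = {'u': 1, 'v': 1, 'w': 2}
capacity = {1: 1, 2: 1}
print(is_independent({'u', 'w'}, group_of, capacity))  # True
print(is_independent({'u', 'v'}, group_of, capacity))  # False: two elements of group 1
```

A set is a feasible (fair) FDM solution exactly when it is a maximal independent set here, i.e., it meets every capacity with equality.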
In this paper, we first consider FDM in the streaming setting, where the elements in X arrive one at a time. Here, we use t(x) to denote the time when an element x is observed and X(T) = {x ∈ X : t(x) ≤ T} to denote the subset of elements observed from X until time T. A streaming algorithm should process each element sequentially in one pass using limited space (typically independent of n) and return a valid approximate solution S (if one exists) for FDM on X(T) at any time T. We further study FDM in the sliding-window setting, where the window W(T) always contains the last w elements observed from X until time T, i.e., W(T) = {x ∈ X : T − w + 1 ≤ t(x) ≤ T}. A sliding-window algorithm should provide a valid approximate solution S (if one exists) for FDM on W(T) at any time T.
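As a point of reference, maintaining the window contents W(T) themselves is straightforward with a bounded deque (a minimal sketch; the actual challenge addressed later is computing diverse solutions without storing all w elements):

```python
from collections import deque

def windows(stream, w):
    # Yield W(T), the last w elements, after each arrival time T
    window = deque(maxlen=w)  # elements older than w steps are evicted automatically
    for x in stream:
        window.append(x)
        yield list(window)

for W in windows([1, 2, 3, 4, 5], 3):
    print(W)  # final window printed: [3, 4, 5]
```

This naive approach uses O(w) space per window; the sliding-window algorithms in Section 5 aim for space sublinear in w.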

Streaming Algorithms
As has been shown in Section 3, FDM is NP-hard. Thus, we focus on efficient approximation algorithms for FDM. In this section, we first describe the existing algorithm for unconstrained diversity maximization in the streaming model, on which our streaming algorithms are built. We then propose a ((1−ε)/4)-approximation streaming algorithm for FDM in the special case that there are only two groups in the dataset. Finally, we propose a ((1−ε)/(3m+2))-approximation streaming algorithm for FDM on a dataset with an arbitrary number m of groups.

(Unconstrained) Streaming Algorithm
We first present the streaming algorithm of [14] for (unconstrained) diversity maximization in Algorithm 1. Let d_min = min_{x,y∈X, x≠y} d(x, y), d_max = max_{x,y∈X, x≠y} d(x, y), and ∆ = d_max/d_min. Obviously, it always holds that OPT ∈ [d_min, d_max]. First, it maintains a sequence U of values for guessing OPT within a relative error of 1 − ε and initializes an empty solution S_µ for each µ ∈ U before processing the stream (Lines 1 and 2). Then, for each x ∈ X and each µ ∈ U, if S_µ contains fewer than k elements and the distance between x and S_µ is at least µ, it adds x to S_µ (Lines 3-6). After processing all elements in X, the candidate that contains k elements and maximizes the diversity is returned as the solution S for DM (Line 7). Algorithm 1 was proven to be a ((1−ε)/5)-approximation algorithm for max-min diversity maximization [14]. In Theorem 1, its approximation ratio is improved to (1−ε)/2 by refining the analysis of [14].
Proof. For each µ ∈ U, there are two cases for S_µ after processing all elements in X: (1) if |S_µ| = k, the condition of Line 5 guarantees that div(S_µ) ≥ µ; (2) if |S_µ| < k, it holds that d(x, S_µ) < µ for every x ∈ X \ S_µ, since the only reason why x was not added to S_µ despite |S_µ| < k is that d(x, S_µ) < µ. Let us consider a candidate solution S_µ with |S_µ| < k. Suppose that S* = {s*_1, . . . , s*_k} is the optimal solution for DM on X. We define a function f : S* → S_µ that maps each element in S* to its nearest neighbor in S_µ. As shown above, d(s*, f(s*)) < µ for each s* ∈ S*. Because |S_µ| < k and |S*| = k, two distinct elements s*_i, s*_j ∈ S* must be mapped to the same element y ∈ S_µ by the pigeonhole principle, and thus d(s*_i, s*_j) ≤ d(s*_i, y) + d(s*_j, y) < 2µ according to the triangle inequality. Thus, div(S*) < 2µ. Let µ̃ be the smallest µ ∈ U with |S_µ| < k. We obtain div(S*) < 2µ̃ from the above results. Meanwhile, every µ ∈ U with µ < µ̃ satisfies |S_µ| = k and div(S_µ) ≥ µ; in particular, for µ′ = (1 − ε)µ̃ ∈ U, the returned solution S satisfies div(S) ≥ div(S_µ′) ≥ (1 − ε)µ̃ > ((1 − ε)/2) · div(S*).
In terms of complexity, Algorithm 1 stores O(k log(∆)/ε) elements and takes O(k log(∆)/ε) time per element, since it makes O(log(∆)/ε) guesses for OPT, keeps at most k elements in each candidate, and requires at most k distance computations to decide whether to add an element to a candidate.
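The guess-and-filter structure of Algorithm 1 can be sketched as follows (a minimal illustration with assumed parameter names, not the authors' code): one candidate per guess µ on a geometric grid over [d_min, d_max].

```python
from itertools import combinations

def stream_dm(stream, k, d, d_min, d_max, eps):
    # Geometric grid of guesses for OPT: d_min, d_min/(1-eps), d_min/(1-eps)^2, ...
    U, mu = [], d_min
    while mu <= d_max:
        U.append(mu)
        mu /= (1.0 - eps)
    cand = {mu: [] for mu in U}
    for x in stream:  # one pass; each guess keeps at most k pairwise-far elements
        for mu, S in cand.items():
            if len(S) < k and all(d(x, y) >= mu for y in S):
                S.append(x)
    # return the size-k candidate with maximum diversity (None if none is full)
    full = [S for S in cand.values() if len(S) == k]
    div = lambda S: min(d(x, y) for x, y in combinations(S, 2))
    return max(full, key=div, default=None)

S = stream_dm(range(10), 3, lambda p, q: abs(p - q), 1, 9, 0.5)
print(S)  # [0, 4, 8]: the guesses here are 1, 2, 4, 8
```

With eps = 0.5 the grid has ratio 2, matching the O(log(∆)/ε) bound on the number of guesses; each arriving element is tested against at most k stored elements per guess.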

Fair Streaming Algorithm for m = 2
The procedure of our streaming algorithm in the case of m = 2, called SFDM1, is described in Algorithm 2 and illustrated in Figure 3. In general, the algorithm runs in two phases: stream processing and post-processing. In the stream processing (Lines 1-6), for each guess µ ∈ U of OPT_f, it utilizes Algorithm 1 to keep a group-blind candidate S_µ with size constraint k and two group-specific candidates S_µ,1 and S_µ,2 with size constraints k_1 and k_2 for X_1 and X_2, respectively. The only difference from Algorithm 1 is that the elements are filtered by group to maintain S_µ,1 and S_µ,2. After processing all elements of X in one pass, it post-processes the group-blind candidates to make them satisfy the fairness constraint (Lines 7-15). The post-processing is only performed on a subset U′ of U, where S_µ contains k elements and S_µ,i contains k_i elements for each group i ∈ {1, 2}. For each µ ∈ U′, S_µ either already satisfies the fairness constraint or has one over-filled group i_o and another under-filled group i_u. If S_µ is not yet a fair solution, it is balanced for fairness by first adding elements from S_µ,i_u and then removing the same number of elements from S_µ ∩ X_{i_o}. The elements to be added and removed are selected greedily, as in GMM [39], to minimize the loss in diversity: the element in S_µ,i_u that is the farthest from S_µ ∩ X_{i_u} is picked for each insertion, and the element in S_µ ∩ X_{i_o} that is the closest to S_µ ∩ X_{i_u} is picked for each deletion. Finally, the fair candidate with the maximum diversity after post-processing is returned as the final solution for FDM (Line 16). Next, we theoretically analyze the approximation ratio and complexity of SFDM1.
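The swap-based post-processing on a single candidate can be sketched as follows (hypothetical helper names; SFDM1 applies this to every selected guess): insert the needed elements of the under-filled group farthest-first from the group-specific candidate, then delete the same number of over-filled elements closest to the under-filled part.

```python
def balance(S, S_under, group_of, k_under, under, over, d):
    # S: group-blind candidate of size k; S_under: group-specific candidate
    S = list(S)
    need = k_under - sum(1 for x in S if group_of[x] == under)
    for _ in range(need):
        # insertion: element of S_under farthest from the under-filled part of S
        pool = [x for x in S_under if x not in S]
        part = [y for y in S if group_of[y] == under]
        S.append(max(pool, key=lambda x: min(d(x, y) for y in part)))
    for _ in range(need):
        # deletion: over-filled element closest to the under-filled part of S
        part = [y for y in S if group_of[y] == under]
        victims = [x for x in S if group_of[x] == over]
        S.remove(min(victims, key=lambda x: min(d(x, y) for y in part)))
    return S

d = lambda p, q: abs(p - q)
group_of = {0: 'r', 4: 'b', 10: 'b', 9: 'r'}
S = balance([0, 4, 10], [0, 9], group_of, 2, 'r', 'b', d)
print(sorted(S))  # [0, 4, 9]: fair with two 'r' and one 'b'
```

In the toy run, the red element 9 is inserted (it is far from the only red element 0), and the blue element 10 is deleted because it is the blue element closest to the red part.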

Figure 3. Illustration of the SFDM1 algorithm. During stream processing, one group-blind and two group-specific candidates are maintained for each guess µ of OPT_f. Then, a subset of group-blind candidates is selected for post-processing by adding elements from the under-filled group before deleting elements from the over-filled one.
Proof. First of all, we have OPT_f ≤ OPT, where OPT is the optimal diversity of unconstrained DM with k = k_1 + k_2 on X, since any valid solution for FDM must also be a valid solution for DM. Moreover, it holds that OPT_f ≤ OPT_{k_i}, where OPT_{k_i} is the optimal diversity of unconstrained DM with size constraint k_i on X_i for both i ∈ {1, 2}, because the optimal solution must contain k_i elements from X_i and div(·) is a monotonically non-increasing function, i.e., div(S ∪ {x}) ≤ div(S) for any S ⊆ X and x ∈ X \ S. Therefore, we have OPT_f ≤ min(OPT, OPT_{k_1}, OPT_{k_2}). Then, according to the results of Theorem 1, we have OPT < 2µ if |S_µ| < k and OPT_{k_i} < 2µ if |S_µ,i| < k_i for each i ∈ {1, 2}. Note that µ̃ is the largest µ ∈ U such that |S_µ| = k, |S_µ,1| = k_1, and |S_µ,2| = k_2 after stream processing. For µ′ = µ̃/(1−ε) ∈ U, we have either |S_µ′| < k or |S_µ′,i| < k_i for some i ∈ {1, 2}. Therefore, it holds that OPT_f < 2µ′ = (2/(1−ε)) · µ̃, and we conclude the proof.
Proof. The candidate S_µ before post-processing has exactly k = k_1 + k_2 elements but may not contain k_1 elements from X_1 and k_2 elements from X_2. If S_µ has exactly k_1 elements from X_1 and k_2 elements from X_2, and thus the post-processing is skipped, we have div(S_µ) ≥ µ according to Theorem 1. Otherwise, assuming that |S_µ ∩ X_1| = k′_1 < k_1, we will add k_1 − k′_1 elements from S_µ,1 to S_µ and remove k_1 − k′_1 elements from S_µ ∩ X_2 to ensure the fairness constraint. In Line 11, all the k_1 elements in S_µ,1 can be selected for insertion. Since the minimum distance between any pair of elements in S_µ,1 is at least µ, we can find at most one element x ∈ S_µ,1 such that d(x, y) < µ/2 for each y ∈ S_µ ∩ X_1. This means that there are at least k_1 − k′_1 elements from S_µ,1 whose distances to all the existing elements in S_µ ∩ X_1 are at least µ/2. Accordingly, after adding k_1 − k′_1 elements from S_µ,1 to S_µ greedily, it still holds that d(x, y) ≥ µ/2 for any x, y ∈ S_µ ∩ X_1. In Line 14, for each element x ∈ S_µ ∩ X_2, there is at most one (newly added) element y ∈ S_µ ∩ X_1 such that d(x, y) < µ/2. Meanwhile, it is guaranteed that y is the nearest neighbor of x in S_µ in this case. Therefore, in Line 14, every x ∈ S_µ ∩ X_2 with d(x, S_µ ∩ X_1) < µ/2 is removed, since there are at most k_1 − k′_1 such elements and the one with the smallest d(x, S_µ ∩ X_1) is removed at each step. Therefore, S_µ contains k_1 elements from X_1 and k_2 elements from X_2 and div(S_µ) ≥ µ/2 after post-processing.

Proof. According to the results of Lemmas 1 and 2, for the largest µ̃ ∈ U selected in post-processing, we have div(S) ≥ µ̃/2 ≥ ((1−ε)/4) · OPT_f.

Proof. SFDM1 keeps three candidates for each µ ∈ U and O(k) elements in each candidate. Hence, the total number of stored elements is O(k log(∆)/ε). The stream processing performs at most O(k log(∆)/ε) distance computations per element.
Finally, for each µ ∈ U′ in the post-processing, at most k_i(k_i − k′_i) distance computations are performed to select the elements in S_µ,i that are to be added to S_µ, where k′_i is the number of elements from the under-filled group i in S_µ. To find the elements that are to be removed, at most k(k_i − k′_i) distance computations are needed. Thus, the time complexity for post-processing is O(k² log(∆)/ε).

Comparison with Prior Art: The idea of finding a solution and then balancing it for fairness in SFDM1 has also been used in FairSwap [17]. However, FairSwap only works in the offline setting, which keeps the dataset in memory and requires random access for computation, whereas SFDM1 works in the streaming setting, which scans the dataset in one pass and uses only the elements in the candidates for post-processing. Compared with FairSwap, SFDM1 reduces the number of stored elements from O(n) to O(k log(∆)/ε) at the expense of lowering the approximation ratio by a factor of 1 − ε.

Fair Streaming Algorithm for General m
The detailed procedure of our streaming algorithm for an arbitrary number m ≥ 2 of groups, called SFDM2, is presented in Algorithm 3. Similar to SFDM1, it has two phases: stream processing and post-processing. In the stream processing (Lines 1-7), it utilizes Algorithm 1 to keep a group-blind candidate S_µ and m group-specific candidates S_µ,1, . . . , S_µ,m for the m groups. The difference from SFDM1 is that the size constraint of the group-specific candidate for each group i is k instead of k_i. Then, after processing all elements in X, a post-processing scheme is required to ensure the fairness of the candidates. Nevertheless, the post-processing procedure is entirely different from that of SFDM1, since the swap-based balancing strategy cannot guarantee the validity of the solution with any theoretical bound. Like SFDM1, the post-processing is performed on a subset U′ of U, where S_µ has k elements and S_µ,i has at least k_i elements for each group i (Line 8). For each µ ∈ U′, it initializes a subset S′_µ of S_µ (Line 10): for an over-filled group i, i.e., |S_µ ∩ X_i| > k_i, S′_µ contains k_i arbitrary elements from S_µ ∩ X_i; for an under-filled or exactly filled group i, S′_µ contains all the elements of S_µ ∩ X_i. Next, new elements from under-filled groups should be added to S′_µ so that S′_µ becomes a fair solution. The method used to find the elements to be added is to divide the set S_all of elements in all candidates into a set C of clusters, which guarantees that d(x, y) ≥ µ/(m+1) for any x ∈ C_a and y ∈ C_b (Lines 12-15), where C_a and C_b are two different clusters in C. Then, S′_µ is limited to contain at most one element from each cluster after new elements are added, so that div(S′_µ) ≥ µ/(m+1). Meanwhile, S′_µ should still satisfy the fairness constraint. To meet both requirements, the problem of adding new elements to S′_µ is formulated as an instance of matroid intersection [41][42][43], as will be discussed subsequently (Line 17).
Finally, it returns the post-processed candidate S′_µ that contains k elements with maximum diversity as the final solution for FDM (Line 18). An illustration of the post-processing procedure of SFDM2 is given in Figure 4.
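The clustering step (Lines 12-15) can be sketched as follows (illustrative code): start from singleton clusters and keep merging any two clusters whose minimum inter-cluster distance is below µ/(m+1).

```python
def cluster(S_all, d, mu, m):
    thresh = mu / (m + 1)
    clusters = [[x] for x in S_all]  # one singleton cluster per element
    merged = True
    while merged:  # stop once all inter-cluster distances are >= thresh
        merged = False
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                if min(d(x, y) for x in clusters[a] for y in clusters[b]) < thresh:
                    clusters[a] += clusters.pop(b)
                    merged = True
                    break
            if merged:
                break
    return clusters

d = lambda p, q: abs(p - q)
C = cluster([0.0, 0.1, 5.0, 5.1, 10.0], d, 6.0, 2)  # threshold = 6/3 = 2
print(len(C))  # 3 clusters: {0, 0.1}, {5, 5.1}, {10}
```

By construction, any two elements in different output clusters are at distance at least µ/(m+1), which is exactly Property (i) of Lemma 3 below.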

Algorithm 3 SFDM2
Stream processing:
1: Initialize the guess sequence U and candidates S_µ ← ∅ and S_µ,i ← ∅ for all µ ∈ U and i ∈ [m]
2: for each arriving element x ∈ X do
3:  for all µ ∈ U do
4:   Run Lines 3-6 of Algorithm 1 to update S_µ w.r.t. x
5:  for all µ ∈ U and i ∈ [m] do
6:   if c(x) = i then
7:    Run Lines 3-6 of Algorithm 1 to update S_µ,i w.r.t. x
Post-processing:
8: U′ ← {µ ∈ U : |S_µ| = k and |S_µ,i| ≥ k_i for all i ∈ [m]}
9: for each µ ∈ U′ do
10:  Initialize S′_µ ⊆ S_µ with at most k_i elements from each group i ∈ [m]
11:  Let S_all be the set of all elements in S_µ, S_µ,1, . . . , S_µ,m
12:  Create l clusters C = {C_1, . . . , C_l}, each of which contains one element in S_all
13:  while there exist distinct clusters C_a, C_b ∈ C with d(C_a, C_b) < µ/(m+1) do
14:   Merge C_a and C_b into a single cluster
15:  end while
16:  Define the matroids M_1 and M_2 on S_all based on the fairness constraint and C
17:  Run Algorithm 4 to augment S′_µ such that S′_µ is a maximum cardinality set in I_1 ∩ I_2
18: return the candidate S′_µ with |S′_µ| = k that maximizes div(S′_µ)

Figure 4. Illustration of post-processing in SFDM2. For each µ ∈ U′, an initial S′_µ is first extracted from S_µ by removing elements from over-filled groups. Then, the elements in all candidates are divided into clusters. The final S′_µ is augmented from the initial solution by adding new elements from under-filled groups based on matroid intersection.
Matroid Intersection: Next, we describe how to use matroid intersection for solution augmentation in SFDM2. We define the first rank-k matroid M_1 = (V, I_1) based on the fairness constraint, where the ground set V is S_all and S ∈ I_1 iff |S ∩ X_i| ≤ k_i for every i ∈ [m]. Intuitively, a set S is fair if it is a maximal independent set in I_1. Moreover, we define the second rank-l (l = |C|) matroid M_2 = (V, I_2) on the set C of clusters, where the ground set V is also S_all and S ∈ I_2 iff |S ∩ C| ≤ 1, ∀C ∈ C. Accordingly, the problem of adding new elements to S′_µ to ensure fairness is an instance of the matroid intersection problem, which aims to find a maximum cardinality set S ∈ I_1 ∩ I_2 for M_1 = (S_all, I_1) and M_2 = (S_all, I_2). Here, we adopt Cunningham's algorithm [41], a well-known solution to the matroid intersection problem based on the augmentation graph in Definition 2.
Specifically, Cunningham's algorithm [41] is initialized with S = ∅ (or any S ∈ I_1 ∩ I_2). At each step, it builds an augmentation graph G for M_1, M_2, and S. If there is no directed path from a to b in G, then S is already a maximum cardinality set. Otherwise, it finds the shortest path P* from a to b in G and augments S according to P*: each x ∈ P* \ (S ∪ {a, b}) is added to S, and each x ∈ P* ∩ S is removed from S. We adapt Cunningham's algorithm for our problem, as shown in Algorithm 4. Our algorithm is initialized with S′_µ instead of ∅. In addition, to reduce the cost of building G and to maximize the diversity, it first adds the elements in V_1 ∩ V_2 greedily to S′_µ until V_1 ∩ V_2 = ∅, where V_1 and V_2 denote the sets of elements x ∉ S with S ∪ {x} ∈ I_1 and S ∪ {x} ∈ I_2, respectively. This is because a shortest path P* = ⟨a, x, b⟩ exists in G for any x ∈ V_1 ∩ V_2, which is easy to verify from Definition 2. Finally, if |S| < k after the above procedures, the standard Cunningham's algorithm is used to augment S to ensure the maximality of S.
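A generic oracle-based sketch of this augmenting-path procedure is shown below (illustrative; a BFS shortest path stands in for the explicit augmentation graph with terminals a and b, and the toy oracles at the end are assumptions, not the paper's instance):

```python
from collections import deque, Counter

def max_common_independent(ground, in_I1, in_I2, start=None):
    # Find a maximum-cardinality set in I1 ∩ I2 by repeatedly flipping a
    # shortest augmenting path in the exchange graph (Cunningham-style).
    S = set(start or [])
    while True:
        rest = [x for x in ground if x not in S]
        sources = [x for x in rest if in_I1(S | {x})]   # arcs a -> x
        sinks = {x for x in rest if in_I2(S | {x})}     # arcs x -> b
        parent, queue, end = {x: None for x in sources}, deque(sources), None
        while queue:                                    # BFS finds a shortest path
            u = queue.popleft()
            if u not in S and u in sinks:
                end = u
                break
            if u in S:    # arcs y -> x with S - y + x in I1
                nxt = [x for x in rest if x not in parent and in_I1((S - {u}) | {x})]
            else:         # arcs x -> y with S - y + x in I2
                nxt = [y for y in S if y not in parent and in_I2((S - {y}) | {u})]
            for v in nxt:
                parent[v] = u
                queue.append(v)
        if end is None:
            return S      # no augmenting path: S has maximum cardinality
        while end is not None:  # flip the path: add non-members, drop members
            S ^= {end}
            end = parent[end]

# toy instance: fairness matroid (<= 1 per group) ∩ cluster matroid (<= 1 per cluster)
group_of = {'a1': 'A', 'a2': 'A', 'b1': 'B', 'b2': 'B'}
cluster_of = {'a1': 1, 'b1': 1, 'a2': 2, 'b2': 2}
in_I1 = lambda S: all(v <= 1 for v in Counter(group_of[x] for x in S).values())
in_I2 = lambda S: all(v <= 1 for v in Counter(cluster_of[x] for x in S).values())
print(sorted(max_common_independent(['a1', 'a2', 'b1', 'b2'], in_I1, in_I2)))
```

In the toy run, once 'a1' is picked, the only way to gain a 'B' element in a free cluster is the augmenting step, which adds 'b2' directly and leaves a fair, cluster-disjoint set of size 2.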

Algorithm 4 Matroid Intersection
1: Initialize S ← S′_µ; compute V_1 ← {x ∈ S_all \ S : S ∪ {x} ∈ I_1} and V_2 ← {x ∈ S_all \ S : S ∪ {x} ∈ I_2}
2: while V_1 ∩ V_2 ≠ ∅ do
3:  Add the element of V_1 ∩ V_2 that is the farthest from S to S; update V_1 and V_2
4: Build an augmentation graph G for S
5: while there is a directed path from a to b in G do
6:  Let P* be a shortest path from a to b in G
7:  for all x ∈ P* \ {a, b} do
8:   if x ∉ S then add x to S else remove x from S
9:  Rebuild G for the updated S
10: return S

Theoretical Analysis: We prove that SFDM2 achieves an approximation ratio of (1−ε)/(3m+2) for FDM. The high-level idea of the proof is to connect the clustering procedure in post-processing with the notion of a matroid and then to utilize the geometric properties of the clusters and the theoretical results of matroid intersection for approximation. Next, we first show that the set C of clusters has several important properties (Lemma 3). Then, we prove that Algorithm 4 can return a fair solution for a specific µ based on the properties of C (Lemma 4). Finally, we analyze the time and space complexities of SFDM2 in Theorem 5.

Lemma 3.
The set C of clusters has the following properties: (i) for any x ∈ C_a and y ∈ C_b with a ≠ b, d(x, y) ≥ µ/(m+1); (ii) each cluster C contains at most one element from S_µ and from each S_µ,i for i ∈ [m]; (iii) for any x, y ∈ C, d(x, y) < (m/(m+1)) · µ.
Proof. First of all, Property (i) holds by Lines 12-15 of Algorithm 3, since all pairs of clusters that violate it have been merged. Next, we prove Property (ii) by contradiction. Construct an undirected graph G = (V, E) for a cluster C ∈ C, where V is the set of elements in C and there is an edge (x, y) ∈ E iff d(x, y) < µ/(m+1). By Algorithm 3, for any x ∈ C, there must exist some y ∈ C (x ≠ y) such that d(x, y) < µ/(m+1); therefore, G is connected. Suppose that C contains more than one element from S_µ or from S_µ,i for some i ∈ [m]. Let P_{x,y} = (x, …, y) be a shortest path in G between two such elements x and y, both from S_µ or both from S_µ,i. We show that the length of P_{x,y} is at most m + 1: if it were longer than m + 1, there would be a sub-path P_{x′,y′} of P_{x,y} whose endpoints x′ and y′ are also both from S_µ or both from S_µ,i, which violates the fact that P_{x,y} is shortest. Since the length of P_{x,y} is at most m + 1, the triangle inequality gives d(x, y) < (m + 1) · µ/(m+1) = µ, which contradicts the fact that d(x, y) ≥ µ, as x and y are both from S_µ or both from S_µ,i. Finally, Property (iii) is a natural extension of Property (ii): since each cluster C contains at most one element from S_µ and from each S_µ,i, C has at most m + 1 elements. Therefore, for any two elements x, y ∈ C, the path between them in G has length at most m, and d(x, y) < m · µ/(m+1) = (m/(m+1)) · µ.
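The triangle-inequality step in the proof of Lemma 3 can be checked on a toy instance: along a path x_0, …, x_t in the cluster graph, where edges join elements closer than µ/(m+1), we have d(x_0, x_t) < t · µ/(m+1). The sketch below, with made-up 1-D points, m = 2, and µ = 3 (so the edge threshold is µ/(m+1) = 1), illustrates this bound:

```python
from collections import deque

def bfs_hops(adj, s, t):
    """Unweighted shortest-path length (number of edges) from s to t."""
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        if u == t:
            return dist[u]
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return None

# 1-D toy cluster: consecutive points are within the threshold 1.0,
# so the cluster graph is connected even though d(pts[0], pts[3]) = 2.7.
pts = [0.0, 0.9, 1.8, 2.7]
thr = 1.0  # mu / (m + 1) with mu = 3, m = 2
adj = {i: [j for j in range(len(pts))
           if j != i and abs(pts[i] - pts[j]) < thr]
       for i in range(len(pts))}
# the endpoints are 3 hops apart, so d(pts[0], pts[3]) < 3 * thr
hops = bfs_hops(adj, 0, 3)
```

Here the path 0 → 1 → 2 → 3 has 3 edges, and 2.7 < 3 · 1.0, matching the bound in the proof.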
Lemma 4. When OPT_f ≥ ((3m+2)/(m+1)) · µ, Algorithm 4 returns a fair solution S of size k with S ∈ I_1 ∩ I_2.
Proof. First of all, the initial solution S′_µ is a subset of S_µ. According to Property (ii) of Lemma 3, all elements of S_µ lie in different clusters of C, and thus S′_µ ∈ I_1 ∩ I_2. The theoretical results in [41] guarantee that Algorithm 4 finds a size-k set in I_1 ∩ I_2 as long as one exists. Next, we show that such a set exists when OPT_f ≥ ((3m+2)/(m+1)) · µ. To verify this, we need to identify k_i clusters of C that contain at least one element from X_i for each i ∈ [m] and show that all k = ∑_{i=1}^m k_i clusters are distinct. Here, we consider two cases for each group i ∈ [m].
• Case 1: For all i ∈ [m] such that |S_µ,i| < k, by identifying all the clusters that contain f(x*) for all x* ∈ S*_f, we find k_i clusters for each such group i with k_i ≤ |S_µ,i| < k, and all the clusters found in this case are guaranteed to be distinct.
• Case 2: For all i ∈ [m] such that |S_µ,i| = k, we can find k clusters that each contain one element from S_µ,i based on Property (ii) of Lemma 3. For such a group i, even if k − k_i clusters have already been identified for all other groups, there are still at least k_i clusters available for selection. Therefore, we can always find k_i clusters that are distinct from all the clusters identified for any other group.
Considering both cases, we have proven the existence of a size-k set in I_1 ∩ I_2. Finally, for any set S ∈ I_2, we have div(S) ≥ µ/(m+1) according to Property (i) of Lemma 3.

Comparison with Prior Art:
Existing methods find fair solutions based on matroid intersection for fair k-center [21,22,44] and fair max-min diversity maximization [17]. SFDM2 adopts a method similar to that of FairFlow [17] to construct the clusters and matroids. However, FairFlow solves matroid intersection as a max-flow problem on a directed graph, and its solutions are of poor quality in practice, particularly when m is large. Therefore, SFDM2 uses a different method for matroid intersection based on Cunningham's algorithm, which is initialized with a partial solution instead of an empty set for higher efficiency and adds elements greedily, like GMM [39], for higher diversity. Hence, SFDM2 achieves significantly higher solution quality than FairFlow in practice, though it has a slightly lower approximation ratio.

Sliding-Window Algorithms
In this section, we extend our streaming algorithms, i.e., SFDM1 and SFDM2, to the sliding-window model. In Section 5.1, we first present the existing sliding-window algorithm for (unconstrained) diversity maximization [14]. In Section 5.2, we propose our extended sliding-window algorithms for FDM based on the algorithms in Sections 4 and 5.1.

(Unconstrained) Sliding-Window Algorithm
The unconstrained sliding-window algorithm is shown in Algorithm 5 and illustrated in Figure 5. First of all, it keeps two sequences Λ and U, both ranging from d_min to d_max, to guess the optimum OPT[W] of DM on the window W (Line 1). For each combination of λ ∈ Λ and µ ∈ U, it initializes two candidate solutions A_λ,µ and B_λ,µ, each of which is maintained by Algorithm 1 on one of two consecutive sub-sequences of X. Two backup sets Ā_λ,µ and B̄_λ,µ, which store replacements for the elements of A_λ,µ and B_λ,µ in case they fall out of the sliding window, are also initialized as empty sets (Lines 2 and 3). Then, for each element x ∈ X, it adds x to each B_λ,µ using the same method as Algorithm 1. If x is added to B_λ,µ, it is set as its own replacement in B̄_λ,µ (Lines 7 and 8). Otherwise, it checks whether the distance between x and any existing element of B_λ,µ is at most µ and, if so, assigns x as the replacement of such an element in B̄_λ,µ (Line 10). Similarly, it also checks whether x can replace any element of A_λ,µ and performs the assignment if so (Line 12). After that, if the diversity of any candidate B_λ,µ with |B_λ,µ| = k exceeds λ, the algorithm removes x from B_λ,µ and B̄_λ,µ, sets them as A_λ,µ and Ā_λ,µ, and then re-initializes a new B_λ,µ and B̄_λ,µ with x (Lines 13-16). The post-processing procedure for the window W containing the last w elements of X, which can easily be extended to any window W(T) at time T, is described in Lines 17-23. It considers two cases for different values of λ and µ: (i) when A_λ,µ ⊆ W, it runs any algorithm ALG for (centralized) max-min diversity maximization on A_λ,µ ∪ B_λ,µ to find a size-k candidate solution S_λ,µ (Line 20); (ii) when only B_λ,µ ⊆ W, ALG is instead run on (W ∩ Ā_λ,µ) ∪ B̄_λ,µ, i.e., the non-expired replacements of the elements in A_λ,µ and B_λ,µ (Line 22). Finally, the best solution found after post-processing all candidates is returned as the solution S for the window W (Line 23).
Figure 5. Illustration of the framework of the sliding-window algorithms. During stream processing, two candidate solutions A_λ,µ and B_λ,µ, along with their backups Ā_λ,µ and B̄_λ,µ, are maintained for each guess λ, µ of OPT[W]. Then, during post-processing, the elements of B_λ,µ and A_λ,µ (or the non-expired elements of Ā_λ,µ if A_λ,µ has expired) are passed to an existing algorithm to compute the final solution.
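A minimal sketch of the per-guess bookkeeping during stream processing may help. The class below maintains one pair of candidates with their replacement maps; the names, the insertion rule, and the promotion test are simplified assumptions for illustration, not the paper's code:

```python
def euclid(x, y):
    return abs(x - y)

class CandidatePair:
    """One (lambda, mu) guess from Algorithm 5 (simplified sketch)."""
    def __init__(self, k, lam, mu, dist=euclid):
        self.k, self.lam, self.mu, self.dist = k, lam, mu, dist
        self.A, self.A_rep = [], {}   # previous candidate and its replacements
        self.B, self.B_rep = [], {}   # current candidate and its replacements

    def div_B(self):
        # minimum pairwise distance within B
        return min((self.dist(u, v) for u in self.B for v in self.B if u != v),
                   default=float("inf"))

    def process(self, x):
        if len(self.B) < self.k and all(self.dist(x, y) >= self.mu for y in self.B):
            self.B.append(x)
            self.B_rep[x] = x                 # x is its own replacement
        else:
            for y in self.B:
                if self.dist(x, y) <= self.mu:
                    self.B_rep[y] = x         # x can stand in for y if y expires
            for y in self.A:
                if self.dist(x, y) <= self.mu:
                    self.A_rep[y] = x
        if len(self.B) == self.k and self.div_B() > self.lam:
            # B is full and diverse enough: promote it (without x) to A
            # and restart B from x
            if x in self.B:
                self.B.remove(x)
                self.B_rep.pop(x, None)
            self.A, self.A_rep = self.B, self.B_rep
            self.B, self.B_rep = [x], {x: x}
```

In the full algorithm, one such pair exists for every combination of λ ∈ Λ and µ ∈ U, and post-processing chooses among the surviving candidates.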

Fair Sliding-Window Algorithms
Generally, to extend SFDM1 and SFDM2 to the sliding-window model, we need to modify them in two respects: (i) the stream processing should follow the procedure of Algorithm 5 instead of Algorithm 1 so that the candidate solutions remain valid when old elements are deleted from the window W; (ii) the post-processing should be adapted to the candidate solutions kept by Algorithm 5 during stream processing, while retaining theoretical guarantees.
Specifically, the procedures of our extended algorithms, i.e., SWFDM1 and SWFDM2, are presented in Algorithm 6. Here, we describe both algorithms together because they share many common subroutines and inherit others from Algorithms 2-5. Following the procedure of Algorithm 5, they initialize the candidate solutions for the different guesses λ, µ of OPT[W] in the sequences Λ and U. During stream processing (Lines 1-11), SWFDM1 and SWFDM2 adopt the same method as Algorithm 5 to maintain the unconstrained candidate solutions, as well as the monochromatic candidate solutions for each group i ∈ [m]. The only difference is the size of each monochromatic candidate, which is k_i for i ∈ {1, 2} in SWFDM1 but k for each i ∈ [m] in SWFDM2.
The following theorem states the approximation factor of Algorithm 5.
Theorem 6. Algorithm 5 is a ((ξ − ε)/5)-approximation algorithm for max-min diversity maximization when a ξ-approximation algorithm ALG for (centralized) max-min diversity maximization is used for post-processing.
We refer readers to Lemma 4.7 in [14] for the proof of Theorem 6. If GMM [39], which is 1/2-approximate for max-min diversity maximization, is used as ALG, the approximation factor of Algorithm 5 becomes (1−ε)/10. In terms of complexity, Algorithm 5 stores O(k log²∆ / ε²) elements, takes O(k log²∆ / ε²) time per element for stream processing, and spends O( ) time for post-processing.

Algorithm 6 SWFDM
Input: Stream X = ∪_{i=1}^m X_i, distance metric d(·, ·), parameter ε ∈ (0, 1), window size w ∈ Z⁺, size constraints k_1, …, k_m
Stream processing
 1: Run Lines 5-16 of Algorithm 5 to update A_λ,µ, Ā_λ,µ, B_λ,µ, and B̄_λ,µ w.r.t. x
 8: if m = 2 ∧ c(x) = i and 'SWFDM1' is used then
 9:     Run Lines 5-16 of Algorithm 5 to update A^(i)_λ,µ, Ā^(i)_λ,µ, B^(i)_λ,µ, and B̄^(i)_λ,µ w.r.t. x under size constraint k_i
10: else if c(x) = i and 'SWFDM2' is used then
11:     Run Lines 5-16 of Algorithm 5 to update A^(i)_λ,µ, Ā^(i)_λ,µ, B^(i)_λ,µ, and B̄^(i)_λ,µ w.r.t. x under size constraint k
Post-processing
12: W ← {x ∈ X : max{1, |X| − w + 1} ≤ t(x) ≤ |X|}
13: for all λ ∈ Λ and µ ∈ U do
14:     if A_λ,µ ⊆ W then
15:         Run Lines 10-15 of Algorithm 2 using S_λ,µ and S^(i)_λ,µ as input to find a fair solution S_λ,µ
25:     else if 'SWFDM2' is used then
26:         Run Lines 10-17 (with d(x, y) < ξµ/(m+1) in Line 13) of Algorithm 3 using S_λ,µ and S_all = ∪_{i=1}^m S^(i)_λ,µ ∪ S_λ,µ as input to find a fair solution S_λ,µ
32: return S ← arg max_{λ∈Λ, µ∈U : |S_λ,µ|=k} div(S_λ,µ)

The post-processing steps of both algorithms for the window W containing the last w elements of X are shown in Lines 12-31. Note that these steps can be trivially applied to any window W(T) based on the intermediate candidate solutions at time T. For each λ ∈ Λ and µ ∈ U, the algorithm first computes an unconstrained solution S_λ,µ from the (unconstrained) candidates kept during stream processing, based on Algorithm 5. For SWFDM1, it then checks whether S_λ,µ contains k elements and whether an under-filled group exists in S_λ,µ. If |S_λ,µ| < k, the post-processing procedure is skipped because S_λ,µ cannot produce any valid solution. Moreover, if |S_λ,µ| = k and S_λ,µ already satisfies the fairness constraint, no further post-processing is required. Otherwise (for SWFDM2), it computes group-specific solutions that, together with S_λ,µ, constitute S_all for post-processing. Then, using the same method as Algorithm 3, it picks a subset S′_λ,µ of S_λ,µ, divides S_all into clusters, and augments S′_λ,µ via matroid intersection to obtain the new solution S_λ,µ.
Both algorithms return the fair solution with maximum diversity after post-processing as the final solution for FDM on the window W.
Theoretical Analysis: Subsequently, we analyze the theoretical soundness and the complexities of the extended SWFDM1 and SWFDM2 algorithms for FDM in the sliding-window model by generalizing the analyses for SFDM1 and SFDM2 in Section 4.
Proof. First, based on the analyses in [14], when µ ≤ OPT[W] …, we can find values λ* ∈ Λ and µ* ∈ U with div(S_{λ*,µ*}) ≥ ξµ*. Then, Lemma 2 guarantees that div(S_{λ*,µ*}) ≥ ξµ*/2 after the post-processing procedure. Combining the above results, we have div(S) ≥ …, where S is the solution for FDM on W returned by SWFDM1. Finally, since the number of candidates increases …, SWFDM1 spends … · (m + log² k) time for post-processing.
Proof. Similar to the proof of Theorem 7, we find values λ* ∈ Λ and µ* ∈ U such that …, where … is the optimal diversity value for FDM on W. Then, Lemmas 3 and 4 guarantee that div(S_{λ*,µ*}) ≥ ξµ*/(3m+2) after the post-processing procedure. Combining the above results, we have div(S) ≥ div(S_{λ*,µ*}) ≥ …. Since the number of candidates increases to O(log²∆ / ε²) and the complexities of the remaining steps are unchanged, the time and space complexities of SWFDM2 grow by a factor of log∆/ε compared with SFDM2. Finally, since the approximation factor ξ of the algorithm ALG we use is Θ(1), e.g., ξ = 1/2 for GMM [39], the approximation factors of SWFDM1 and SWFDM2 are written as Θ(1) and Θ(m⁻¹), respectively, for simplicity.

Experiments
In this section, we evaluate the performance of our proposed algorithms on several real-world and synthetic datasets. We first introduce our experimental setup in Section 6.1. Then, experimental results in the streaming setting are presented in Section 6.2. Finally, experimental results in the sliding-window setting are presented in Section 6.3.

Experimental Setup
Datasets: Our experiments are conducted on four publicly available real-world datasets, as follows:
• Adult (https://archive.ics.uci.edu/dataset/2/adult, accessed on 12 July 2023) is a collection of 48,842 records from the 1994 US Census database. We select six numeric attributes as features and normalize each of them to zero mean and unit standard deviation. The Euclidean distance is used as the distance metric. The groups are generated from two demographic attributes, sex and race; using them individually and in combination yields two (sex), five (race), and ten (sex + race) groups, respectively.
• CelebA (https://mmlab.ie.cuhk.edu.hk/projects/CelebA.html, accessed on 12 July 2023) is a set of 202,599 images of human faces. We use 41 pre-trained class labels as features and the Manhattan distance as the distance metric. We generate two groups from sex {'female', 'male'}, two groups from age {'young', 'not young'}, and four groups from their combination.
• Census (https://archive.ics.uci.edu/dataset/116/us+census+data+1990, accessed on 12 July 2023) is a set of 2,426,116 records from the 1990 US Census data. We take 25 (normalized) numeric attributes as features and use the Manhattan distance as the distance metric. We generate 2, 7, and 14 groups from sex, age, and both combined, respectively.
• Lyrics (http://millionsongdataset.com/musixmatch, accessed on 12 July 2023) is a set of 122,448 documents, each of which contains the lyrics of a song. We train a topic model with 50 topics using the LDA [45] implementation in Gensim (https://radimrehurek.com/gensim, accessed on 12 July 2023). Each document is represented as a 50-dimensional vector, and the angular distance is used as the distance metric. We generate 15 groups based on the primary genres of the songs.
We also generate synthetic datasets with varying n and m for scalability tests. In each synthetic dataset, we generate ten two-dimensional Gaussian isotropic blobs with random centers in [−10, 10]² and identity covariance matrices. Points are assigned to groups uniformly at random. The Euclidean distance is used as the distance metric. The number n of points varies from 10³ to 10⁷ with fixed m = 2 or 10. The number m of groups varies from 2 to 20 with fixed n = 10⁵. The statistics of all datasets are summarized in Table 1.

Algorithms: We compare our streaming algorithms, i.e., SFDM1 and SFDM2, and our sliding-window algorithms, i.e., SWFDM1 and SWFDM2, with four existing offline FDM algorithms: the 1/(3m−1)-approximation FairFlow algorithm for arbitrary m, the 1/5-approximation FairGMM algorithm for small k and m, the 1/4-approximation FairSwap algorithm specific to m = 2 in [17], and the (1−ε)/(m+1)-approximation FairGreedyFlow algorithm for arbitrary m in [20]. Since no implementation of the algorithms in [17,20] is available, we implemented them ourselves, following the descriptions in the original papers. All the algorithms are implemented in Python 3. All experiments were run on a desktop with an Intel® Core™ i5-9500 3.0 GHz processor and 32 GB RAM running Ubuntu 20.04.3 LTS. Each algorithm was run on a single thread. For a given solution size k, the group-specific size constraint k_i for each group i ∈ [m] is set based on equal representation, which has been widely used in the literature [21-23,27]: if k is divisible by m, then k_i = k/m for each i ∈ [m]; otherwise, k_i = ⌈k/m⌉ for some groups and ⌊k/m⌋ for the others, while ensuring ∑_{i=1}^m k_i = k. We also compare the performance of different algorithms under proportional representation [8,26,27], another popular notion of fairness, which requires that the proportion of elements from each group in the solution generally preserve that in the dataset.
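The group-size constraints described above can be sketched as follows; the exact rounding and repair rules for proportional representation are not specified in the text, so this apportionment is an assumption:

```python
def equal_representation(k, m):
    """Split k as evenly as possible: ceil(k/m) for `extra` groups, floor(k/m) for the rest."""
    base, extra = divmod(k, m)
    return [base + 1 if i < extra else base for i in range(m)]

def proportional_representation(k, group_sizes):
    """Assumed apportionment: round each group's share, then repair drift so the k_i sum to k."""
    n = sum(group_sizes)
    ks = [max(1, round(k * s / n)) for s in group_sizes]
    while sum(ks) > k:
        ks[ks.index(max(ks))] -= 1
    while sum(ks) < k:
        ks[ks.index(min(ks))] += 1
    return ks
```

For example, with k = 20 and the skewed Adult sex groups (roughly 67%/33%), equal representation yields [10, 10] while proportional representation yields [13, 7].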
Performance Metrics: The performance of each algorithm is evaluated in terms of efficiency, solution quality, and space usage. Efficiency is measured as the average update time, i.e., the average wall-clock time used to compute a solution for each arriving element in the stream. Solution quality is measured by the value of the diversity function of the solution returned by an algorithm. Since computing the optimal diversity OPT_f of FDM is infeasible, we run GMM [39] for unconstrained diversity maximization to estimate an upper bound on OPT_f for comparison. Space usage is measured by the number of distinct elements stored by each algorithm. Only the numbers of elements stored by our proposed algorithms are reported, because the offline algorithms must keep all elements in memory for random access, so their space usage always equals the dataset (or window) size. We run each experiment 10 times with different permutations of the same dataset and report the average of each measure over the 10 runs.
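The upper-bound estimation via GMM can be sketched as follows: since GMM (Gonzalez's farthest-point heuristic) is 1/2-approximate for unconstrained max-min diversity and OPT ≥ OPT_f, twice the diversity of its solution upper-bounds OPT_f. The points and metric below are illustrative:

```python
def gmm(points, k, dist):
    """Farthest-point greedy: repeatedly add the point farthest from the current set."""
    S = [points[0]]                      # arbitrary first element
    while len(S) < k:
        x = max(points, key=lambda p: min(dist(p, s) for s in S))
        S.append(x)
    return S

def div(S, dist):
    """Max-min diversity: minimum pairwise distance within S."""
    return min(dist(u, v) for u in S for v in S if u != v)

pts = [0.0, 1.0, 4.0, 9.0, 10.0]
d = lambda a, b: abs(a - b)
S = gmm(pts, 3, d)
upper_bound_on_opt_f = 2 * div(S, d)     # 2 * div(S) >= OPT >= OPT_f
```

On this toy instance GMM picks {0.0, 10.0, 4.0}, whose diversity is 4.0, so 8.0 upper-bounds OPT_f, which is why the experiments report solution quality relative to 2 · div(GMM).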

Results in Streaming Setting
Effect of Parameter ε: Figure 6 illustrates the performance of SFDM1 and SFDM2 with different values of ε when k = 20. We vary the value of ε from 0.05 to 0.25 on Adult, CelebA, and Census and from 0.02 to 0.1 on Lyrics. Since the angular distance between any two vectors is at most π/2, larger values of ε (e.g., >0.1) lead to greater estimation errors for OPT_f and thus significantly lower solution quality on Lyrics. Generally, SFDM1 has higher efficiency and smaller space usage than SFDM2 for all values of ε, but SFDM2 exhibits better solution quality. Furthermore, the running time and the numbers of stored elements of both algorithms decrease significantly as the value of ε increases. This is consistent with our analyses in Section 4, because the number of guesses for OPT_f, and thus the number of candidates maintained by both algorithms, is O(log∆/ε). A slightly surprising result is that the diversity values of the solutions do not degrade noticeably, even when ε = 0.25. This can be explained by the fact that both algorithms return the best solution after post-processing among all candidates, which means that they provide good solutions as long as some µ ∈ U is close to OPT_f; we infer that such a µ still exists when ε = 0.25. Nevertheless, the chance of finding an appropriate value of µ shrinks as ε grows, which results in less stable solution quality. Therefore, in the experiments for the streaming setting, we always use ε = 0.1 for both algorithms on all datasets except Lyrics, where ε is set to 0.05. The impact of ε on the performance of SWFDM1 and SWFDM2 is generally similar to that on SFDM1 and SFDM2. However, since the number of candidate solutions grows quadratically in 1/ε, we use a larger ε = 0.25 for SWFDM1 and SWFDM2 on all datasets except Lyrics, where ε is set to 0.1.
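The geometric guessing of OPT_f that underlies the O(log∆/ε) candidate count can be sketched as follows (the function name is an assumption):

```python
def guesses(d_min, d_max, eps):
    """Geometric sequence {d_min * (1 + eps)^j} covering [d_min, d_max]."""
    U = []
    mu = d_min
    while mu <= d_max * (1 + eps):
        U.append(mu)
        mu *= (1 + eps)
    return U

U = guesses(0.1, 100.0, 0.1)
```

With O(log(d_max/d_min)/ε) guesses, at least one µ ∈ U satisfies µ ≤ OPT_f ≤ (1 + ε)µ, which is why a larger ε shrinks the candidate set (and the running time and space) but coarsens the estimate of OPT_f.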
Overview: Table 2 presents the performance of different algorithms for FDM in the streaming setting on four real-world datasets with different group partitions when the solution size k is fixed to 20. FairGMM is not included because it needs to enumerate up to (km choose k) = O((em)^k) candidates for solution computation and cannot scale to k > 10 and m > 5. First, compared with the unconstrained solutions returned by GMM, all the fair solutions are less diverse because of the additional fairness constraints. Since GMM is a 1/2-approximation algorithm and OPT ≥ OPT_f, 2 · div(S_GMM) is an upper bound on OPT_f, from which we observe that all five fair algorithms return solutions with much better approximation ratios than their worst-case lower bounds. In the case of m = 2, SFDM1 runs the fastest among all five algorithms, achieving speed-ups of two to four orders of magnitude over FairSwap, FairFlow, and FairGreedyFlow. At the same time, its solution quality is close or equal to that of FairSwap in most cases. SFDM2 shows lower efficiency than SFDM1 due to the higher cost of post-processing. However, it is still much more efficient than the offline algorithms by taking advantage of stream processing. In addition, the solution quality of SFDM2 benefits from the greedy selection procedure in Algorithm 4; it is not only consistently better than that of SFDM1 but also better than that of FairSwap on the Adult and Census datasets. In the case of m > 2, SFDM1 and FairSwap are no longer applicable. In addition, FairGreedyFlow cannot finish within one day on the Census dataset, so the corresponding results are omitted. SFDM2 shows significant advantages over FairFlow and FairGreedyFlow in terms of both solution quality and efficiency: it provides up to 3.4 times more diverse solutions than FairFlow and FairGreedyFlow while running several orders of magnitude faster.
In terms of space usage, both SFDM1 and SFDM2 store very small portions of the elements (<0.1% on Census) on all datasets. SFDM2 stores slightly more elements than SFDM1 because the capacity of each group-specific candidate for group i is set to k instead of k_i. For SFDM2, the number of stored elements increases nearly linearly with m, since the number of candidates is linear in m.

Effect of Solution Size k:
The impact of the solution size k on the performance of different algorithms in the streaming setting is shown in Figures 7 and 8. Here, we vary k in [5, 50] when m ≤ 5, in [10, 50] when 5 < m ≤ 10, and in [15, 50] when m > 10, since we require that an algorithm pick at least one element from each group. For each algorithm, the diversity value drops with k, as the diversity function is monotonically non-increasing.

Equal vs. Proportional Representation: Figure 10 compares the solution quality and running time of different algorithms under two popular notions of fairness, i.e., equal representation (ER) and proportional representation (PR), when k = 20 on Adult, whose groups are highly skewed: 67% of the records are for males and 87% of the records are for Whites. The diversity value of each algorithm's solution is slightly higher under PR than under ER, as the PR solution is closer to the unconstrained one. The running time of SFDM1 and SFDM2 is slightly shorter under PR than under ER, since fewer swapping and augmentation steps are performed on each candidate during post-processing. The results for SWFDM1 and SWFDM2 are similar and are omitted.

Results in Sliding-Window Setting
Overview: Table 3 shows the performance of different algorithms for sliding-window FDM on four real-world datasets with different group settings when the solution size k is fixed to 20 and the window size w is set to 25k on Adult (whose size is smaller than 100k) or 100k on the other datasets. FairGMM is also omitted in Table 3 due to its high complexity. Compared with the streaming setting, the "price of fairness" becomes higher in the sliding-window setting, for two possible reasons. First, the approximation factors of our proposed algorithms are lower. Second, when the value of m is large, some minor groups contain too few elements in the window (only marginally more than k_i), so the selection of elements from such groups is heavily restricted to ensure fairness. Nevertheless, we still find that all fair algorithms provide solutions with much better approximations than their worst-case lower bounds. We observe that SWFDM2 runs the fastest of all five algorithms, achieving 5-150× speed-ups over FairSwap, FairFlow, and FairGreedyFlow. Moreover, SWFDM1 and SWFDM2 have slightly lower solution quality than FairSwap when m = 2. Nevertheless, SWFDM2 shows significant advantages over FairFlow and FairGreedyFlow in terms of both solution quality and efficiency when m > 2. Unlike the streaming setting, SWFDM2 shows higher efficiency than SWFDM1. This is because SWFDM2 maintains group-specific solutions with size constraint k, instead of k_i as in SWFDM1, during stream processing. Consequently, its group-specific solutions often expire (i.e., A^(i)_λ,µ ⊄ W) and thus are not eligible for post-processing. However, this efficiency improvement comes at the expense of less diverse solutions. In terms of space usage, both SWFDM1 and SWFDM2 store very small portions of the elements (at most 3.2% of w) across all datasets. SWFDM2 keeps slightly more elements than SWFDM1, also because the capacity of each group-specific solution is k instead of k_i.

Effect of Solution Size k:
The impact of the solution size k on the performance of different algorithms in the sliding-window setting is illustrated in Figures 11 and 12. We use the same values of k as in the streaming setting. The window size w is set to 25k for Adult and 100k for the other datasets. For each algorithm, the diversity value drops with k, as the diversity function is monotonically non-increasing. At the same time, the update time grows with k, as the time complexities are linear or quadratic w.r.t. k. The gaps in diversity value between unconstrained and fair solutions are much larger than those in the streaming setting, for the reasons explained in the previous paragraph. The solution quality of SWFDM1 and SWFDM2 is slightly lower than that of FairSwap when m = 2, but still better than that of FairFlow and FairGreedyFlow, and their efficiency is always much higher than that of the offline algorithms. Finally, when m > 2, SWFDM2 outperforms FairFlow and FairGreedyFlow in terms of both efficiency and effectiveness across all values of k.

Scalability:
We evaluate the scalability of each algorithm in the sliding-window setting on synthetic datasets by varying the number of groups m from 2 to 20 and the window size w from 10³ to 10⁶. The results regarding solution quality and update time for different values of w and m when k = 20 are presented in Figure 13. First of all, SWFDM2 shows much better scalability than FairFlow and FairGreedyFlow w.r.t. m in terms of solution quality: the diversity value of SWFDM2's solution decreases only slightly with m, whereas the diversity values of FairFlow and FairGreedyFlow drop drastically with m. However, SWFDM2's update time increases more rapidly with m, since its time complexity is quadratic w.r.t. m. Furthermore, the diversity values of the different algorithms with varying w behave similarly to those with varying k. As expected, the running time of the offline algorithms is nearly linear in w. However, unlike the streaming setting, the update time of SWFDM1 and SWFDM2 increases with w, because more candidates are non-expired and are thus considered in post-processing for larger values of w.

Conclusions
In this paper, we studied the diversity maximization problem with fairness constraints in the streaming and sliding-window settings. We first proposed a (1−ε)/4-approximation streaming algorithm for this problem when there are two groups in the dataset and a (1−ε)/(3m+2)-approximation streaming algorithm that can deal with an arbitrary number m of groups. Moreover, we extended the two streaming algorithms to the sliding-window model while maintaining approximation factors of Θ(1) and Θ(m⁻¹), respectively. Extensive experiments on real-world and synthetic datasets confirmed the efficiency, effectiveness, and scalability of our proposed algorithms.
In future work, we would like to improve the approximation ratios of the proposed algorithms. It would also be interesting to consider diversity maximization problems with other objective functions and fairness constraints defined on multiple sensitive attributes.

Institutional Review Board Statement: Not applicable.
Data Availability Statement: Real-world datasets that we use in our experiments are publicly available. The code for generating synthetic data and for our experiments is available at https: //github.com/yhwang1990/code-FDM (accessed on 12 July 2023).

Conflicts of Interest:
The authors declare no conflict of interest.