Understanding collections of related datasets using dependent MMD coresets

Understanding how two datasets differ can help us determine whether one dataset under-represents certain sub-populations, and provides insights into how well models will generalize across datasets. Representative points selected by a maximum mean discrepancy (MMD) coreset can provide interpretable summaries of a single dataset, but are not easily compared across datasets. In this paper we introduce dependent MMD coresets, a data summarization method for collections of datasets that facilitates comparison of distributions. We show that dependent MMD coresets are useful for understanding multiple related datasets and for understanding model generalization between such datasets.


Introduction
When working with large datasets, it is important to understand your data. If a dataset is not representative of your population of interest, and no appropriate correction is made, then models trained on this data may perform poorly in the wild. Sub-populations that are underrepresented in the training data are likely to be poorly served by the resulting algorithm, leading to unanticipated or unfair outcomes, something that has been observed in numerous scenarios including medical diagnoses [23,7] and image classification [6,35].
In low-dimensional settings, it is common to summarize data using summary statistics such as marginal moments or label frequencies, or to visualize univariate or bivariate marginal distributions using histograms or scatter plots. As the dimensionality of our data increases, such summaries and visualizations become unwieldy, and ignore higher-order correlation structure. In structured data such as images, such summary statistics can be hard to interpret, and can exclude important information about the distribution [3,24]: the per-pixel mean and standard deviation of a collection of images tell us little about the overall distribution. Further, if our data is unlabeled, or only partially labeled, we cannot make use of label frequencies to assess class balance.
In such settings, we can instead choose to present a set of exemplars that capture the diversity of the data. This is particularly helpful for structured, high-dimensional data such as images or text, which can easily be qualitatively assessed by a person. Random sampling provides a simple way of obtaining such a collection, but there is no guarantee that a given data point will be close to one of the resulting representative points, particularly in high dimensions or for multi-modal distributions. A number of algorithms have been proposed to provide more representative exemplars [19,5,26,27,21,36,15,8]. Many of these algorithms can be seen as constructing a coreset for the dataset: a (potentially weighted) set of exemplars that behave similarly to the full dataset under a certain class of functions. In particular, coresets that minimize the maximum mean discrepancy [MMD, 14] between coreset and data have recently been used for understanding data distributions [21,15]. Further, evaluating models on such MMD coresets has been shown to aid in understanding model performance [21].
In addition to summarizing a single dataset, we may also wish to compare and contrast multiple related datasets. For example, a company may be interested in characterizing differences and similarities between different markets. A machine learning practitioner may wish to know whether their dataset is similar to that used to train a given model. A researcher may be interested in understanding trends in posts or images on social media. Here, summary statistics offer interpretable comparisons: we can plot the mean and standard deviation of a given marginal quantity over time, and easily see how it changes [33,17]. By contrast, coresets are harder to compare, since the exemplars selected for two datasets X_1 and X_2 will not in general overlap.

T: set that indexes datasets and associated measures
X_t = (x_{t,1}, ..., x_{t,n_t}) ∈ X^{n_t}: a dataset indexed by t ∈ T
P_t: true distribution at t ∈ T, X_t ∼ P_t
U = (u_1, ..., u_{n_u}): set of candidate locations
δ_u: Dirac measure (i.e., point mass) at u
Q_t: a probability measure used to approximate P_t, that takes the form ∑_{i∈S} w_{t,i} δ_{u_i}, where S ⊂ [n_u]
Table 1: Notation used in this paper

arXiv:2006.14621v2 [stat.ME] 4 Aug 2021
In this paper, we introduce dependent MMD coresets {Q_t}_{t∈T} for a collection {X_t}_{t∈T} of related datasets. Each marginal coreset Q_t is an MMD coreset for the corresponding dataset X_t; i.e., it provides a set of weighted exemplars that can be used to understand the dataset X_t and to evaluate model performance in the context of that dataset. However, the marginal coresets have shared support: a single shared set of exemplars is used to describe all datasets, but the weight assigned to each exemplar varies between datasets.
This shared support makes it easy to understand differences between datasets. Consider comparing two datasets of faces. Independent MMD coresets would provide two disjoint sets of weighted exemplars; visually assessing the similarity between these coresets involves considering both the similarities of the images and the similarities in the weights. Conversely, with a dependent MMD coreset, the exemplars would be shared between the two datasets. Similarity can be assessed by considering the relative weights assigned in the two marginal coresets. This in turn leads to easy summarization of the difference between the two datasets, by identifying exemplars that are highly representative of one dataset, but less representative of the other.
We begin by considering existing coreset methods for data and model understanding in Section 2, before discussing their limitations and proposing our dependent MMD coreset in Section 3. A greedy algorithm to select dependent MMD coresets is provided in Section 4. In Section 5 we evaluate the efficacy of this algorithm, and show how the resulting dependent coresets can be used for understanding collections of image datasets and probing the generalization behavior of machine learning algorithms.

Coresets and measure-coresets
A coreset is a "small" summary of a dataset X, which can act as a proxy for the dataset under a certain class of functions F. Concretely, a weighted set of points {(w_i, u_i)}_{i∈S} is a strong ε-coreset for a size-n dataset X if it approximates the value of every function in F on the full dataset to within a multiplicative error of ε. A measure coreset [9] generalizes this idea to assume that X are independently and identically distributed samples from some distribution P. A measure Q is an ε-measure coreset for P with respect to some class F of functions if

sup_{f∈F} | E_{x∼P}[f(x)] − E_{x∼Q}[f(x)] | ≤ ε.   (1)

The left hand side of Equation 1 describes an integral probability metric [29], a class of distances between probability measures parametrized by some class F of functions. Different choices of F yield different metrics (Table 2).

Table 2: Some examples of integral probability metrics

MMD-coresets
In this paper, we consider the case where F is the class of all functions that can be represented in the unit ball of some reproducing kernel Hilbert space (RKHS) H, a very rich class of continuous functions on X. This corresponds to a metric known as the maximum mean discrepancy [MMD, 14],

MMD(P, Q) = sup_{||f||_H ≤ 1} ( E_{x∼P}[f(x)] − E_{x∼Q}[f(x)] ).

An RKHS can be defined in terms of a mapping Φ : X → H, which in turn specifies a kernel function k(x, x') = ⟨Φ(x), Φ(x')⟩_H. A distribution P can be represented in this space in terms of its mean embedding,

μ_P = E_{x∼P}[Φ(x)].

The MMD between two distributions can equivalently be expressed in terms of their mean embeddings,

MMD(P, Q)² = ||μ_P − μ_Q||²_H.

An ε-MMD coreset for a distribution P is a finite, atomic distribution Q = ∑_{i∈S} w_i δ_{u_i} such that MMD(P, Q)² ≤ ε². We will refer to the set {u_i}_{i∈S} as the support of Q, and refer to individual locations in the support of Q as exemplars.
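Expanding the mean embeddings shows that, for a dataset X = (x_1, ..., x_n) and an atomic measure Q = ∑_{i∈S} w_i δ_{u_i}, MMD²(X, Q) depends only on kernel evaluations. The following minimal sketch computes this quantity; the function names and the squared exponential kernel with a fixed bandwidth are our own illustrative choices, not the paper's released implementation.

```python
import numpy as np

def rbf_kernel(A, B, bandwidth=1.0):
    # Squared exponential kernel matrix between the rows of A and B.
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * bandwidth ** 2))

def mmd_sq(X, U, w, bandwidth=1.0):
    # MMD^2 between the empirical measure of X and Q = sum_i w_i * delta_{u_i}:
    # mean_{j,j'} k(x_j, x_j') - 2 sum_i w_i mean_j k(u_i, x_j) + sum_{i,i'} w_i w_i' k(u_i, u_i').
    Kxx = rbf_kernel(X, X, bandwidth)
    Kxu = rbf_kernel(X, U, bandwidth)
    Kuu = rbf_kernel(U, U, bandwidth)
    return Kxx.mean() - 2 * w @ Kxu.mean(axis=0) + w @ Kuu @ w

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
U = X[:5]                      # candidate exemplars drawn from the data
w = np.full(5, 0.2)            # uniform weights summing to one
print(mmd_sq(X, U, w))
```

Note that MMD²(X, Q) is a squared RKHS norm, so it is always non-negative, and it is exactly zero when Q equals the empirical distribution of X.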
In practice, we are unlikely to have access to P directly, but instead have samples X. We therefore define an ε-MMD coreset for a dataset X as a finite, atomic distribution Q such that MMD²(X, Q) ≤ ε², or equivalently, whose mean embedding μ_Q in H is close to the empirical mean embedding μ̂_X, so that ||μ_Q − μ̂_X||²_H ≤ ε². A number of algorithms have been proposed that correspond to finding ε-MMD coresets, under certain restrictions on Q. The kernel herding algorithm [8,4,22] can be seen as finding an MMD coreset Q for a known distribution P, with no restriction on the support of Q. The greedy prototype-finding algorithm used by [21] can be seen as a version of kernel herding, where P is only observed via a set of samples X, and where the support of Q is restricted to be some subset of a collection of candidates U (often chosen to be the dataset X itself). Versions of this algorithm that assign weights to the atoms in Q are proposed in [15].
As in [21] and [15], in this paper we require the support of our coreset to be a subset of some finite set of candidates U, indexed by 1, . . . , n U . In other words, our measure coresets will take the form Q = ∑ i∈S w i δ u i , where S ⊂ [n U ].

Coresets for understanding datasets and models
The primary application of coresets is to create a compact representation of a large dataset, to allow for fast inference on downstream tasks (see [12] for a recent survey). However, such compact representations have also proved beneficial in interpretation of both models and datasets.
While humans are good at interpreting visual data [32], visualizing large quantities of data can become overwhelming due to the sheer quantity of information. Coresets can be used to filter such large datasets, while still retaining much of the relevant information.
The MMD-critic algorithm [21] uses a fixed-size, uniformly weighted MMD coreset, whose elements the authors refer to as "prototypes", to summarize collections of images. [15] extend this to use a weighted MMD coreset, showing that weighted prototypes allow us to better model the data distribution, leading to more interpretable summaries. [37] show how coresets designed to approximate a point cloud under kernel density estimation can be used to represent spatial point processes, such as the spatial locations of crimes.
Techniques such as coresets that produce representative points for a dataset can also be used to provide interpretations and explanations of the behavior of models on that dataset. Case-based reasoning approaches use representative points to describe "typical" behavior of a model [20,1,30,21]. Considering the model's output on such representative points can allow us to understand the model's behavior.
Viewing the model's behavior on a collection of "typical" points in our dataset also allows us to get an idea of the overall model performance on our data. Evaluating a model on a coreset can give an idea of how we expect it to perform on the entire dataset, and can help identify failure modes or subsets of the data where the model performs poorly.

Criticising MMD coresets
While MMD coresets are good at summarizing a distribution, since the coreset is much smaller than the original dataset, there are likely to be outliers in the data distribution that are not well explained by the coreset. The MMD-critic algorithm supplements the "prototypes" associated with the MMD coreset with a set of "criticisms": points that are poorly modeled by the coreset [21].
Recall that the MMD between two distributions P and Q corresponds to the maximum difference in their expected values of a function that can be represented in the unit ball of a Hilbert space H. The function f that achieves this maximum is known as the witness function, and is given by

f(x) ∝ μ_P(x) − μ_Q(x) = E_{x'∼P}[k(x, x')] − E_{x'∼Q}[k(x, x')].

When we only have access to P via a size-n sample X, and where Q = ∑_{i∈S} w_i δ_{u_i}, we can approximate this as

f̂(x) = (1/n) ∑_{j=1}^n k(x, x_j) − ∑_{i∈S} w_i k(x, u_i).

Criticisms of an MMD coreset for a data set X are selected as the points in X with the largest values of the witness function. Kim et al. [21] show that the combination of prototypes and criticisms allows us to visually understand large collections of images: the prototypes summarize the main structure of the dataset, while the criticisms represent the extrema of a distribution. Criticisms can also augment an MMD coreset in a case-based reasoning approach to model understanding, by allowing us to consider model behavior on both "typical" and "atypical" exemplars.
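To make this concrete, the empirical witness function and the resulting criticisms can be computed directly from kernel evaluations. The sketch below is our own illustration, not the MMD-critic implementation: it builds a toy dataset with a small outlier cluster that the coreset ignores, and the criticisms recover points from that cluster.

```python
import numpy as np

def rbf_kernel(A, B, bandwidth=1.0):
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * bandwidth ** 2))

def witness(points, X, U, w, bandwidth=1.0):
    # f_hat(x): mean kernel similarity to the data minus weighted similarity to Q.
    return (rbf_kernel(points, X, bandwidth).mean(axis=1)
            - rbf_kernel(points, U, bandwidth) @ w)

def criticisms(X, U, w, num, bandwidth=1.0):
    # Indices of the points in X that the coreset under-represents the most.
    return np.argsort(witness(X, X, U, w, bandwidth))[::-1][:num]

rng = np.random.default_rng(0)
main = rng.normal(0.0, 1.0, (50, 2))        # bulk of the data
outliers = rng.normal(5.0, 0.05, (10, 2))   # small cluster far from the bulk
X = np.concatenate([main, outliers])
U, w = main[:10], np.full(10, 0.1)          # coreset covers only the bulk
idx = criticisms(X, U, w, num=3)            # selected from the outlier cluster
```

Because the coreset places no mass near the outlier cluster, the witness function is largest there, so the top criticisms are drawn from indices 50-59.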

Dependent and correlated random measures
Dependent random measures [25,34] are distributions over collections of countable measures P_t = ∑_{i=1}^∞ w_{t,i} δ_{u_{t,i}}, indexed by some set T, such that the marginal distribution at each t ∈ T is described by a specific distribution. In most cases, this marginal distribution is a Dirichlet process, meaning that the P_t are probability distributions. Most dependent random measures keep either the weights w_{t,i} or the atom locations u_{t,i} constant across t, to assist identifiability and interpretability.
In a Bayesian framework, dependent Dirichlet processes are often used as a prior for time-dependent mixture models. In settings where the atom locations (i.e. mixture components) are fixed but the weights vary, the posterior mixture components can be used to visualize and understand data drift [10,11]. The dependent coresets presented in this paper can be seen as deterministic, finite-dimensional analogues of these posterior dependent random measures.

Understanding multiple datasets using coresets
As we have seen, coresets provide a way of summarizing a single distribution. In this section, we discuss interpretational limitations that arise when we attempt to use coresets to summarize multiple related datasets (Section 3.1), before proposing dependent MMD coresets in Section 3.2 and discussing their uses in Section 3.3.

Understanding multiple datasets using MMD-coresets
If we have a collection {X t } t∈T of datasets, we might wish to find -MMD coresets for each of the X t , in the hope of not just summarizing the individual datasets, but also of easily comparing them. However, if we want to understand the relationships between the datasets, in addition to their marginal distributions, comparing such coresets in an interpretable manner is challenging.
An MMD coreset selects a set {u_i}_{i∈S} of points from some set of candidates U. Even if two datasets X and Y are sampled from the same underlying distribution (i.e., X, Y ∼ iid P), and the set U of available candidates is shared, the optimal MMD coresets for the two datasets will in general differ. Sampling error between the two datasets means that MMD²(X, Q) ≠ MMD²(Y, Q) for any candidate coreset Q unless X ≡ Y, and so the optimal coreset will typically differ between the two datasets. Figure 1 shows that, even if two datasets X and Y are sampled from the same underlying distribution, and their coreset locations are selected from the same collection U, the two coresets will not be identical.
Here, we see two datasets (Figures 1b and 1c) generated from the same mixture of three equally weighted Gaussians (Figure 1a). Below (Figures 1d and 1e), we have selected a coreset for each dataset (using the algorithm that will be introduced in Section 4), with locations selected from a shared set U. While the associated coresets are visually similar, they are not the same.
This is magnified if we look at a high-dimensional dataset. Here, the relative sparsity of data points (and candidate points) in the space means that individual locations in Q_X might not have close neighbors in Q_Y, even if X and Y are sampled from the same distribution. Further, in high-dimensional spaces, it is harder to visually assess the distance between two exemplars. These observations make it hard to compare two coresets, and to gain insights about similarities and differences between the associated datasets. To demonstrate this, we constructed two datasets, each containing 250 randomly selected, female-identified US high school yearbook photos from the 1990s. Figure 2 shows MMD coresets obtained for the two datasets. While both datasets were selected from the same distribution, there is no overlap in the support of the two coresets. Visually, it is hard to tell that these two coresets represent samples from the same distribution.

Dependent MMD coresets
The coresets in Figure 2 are hard to compare due to their disjoint supports (i.e., the fact that there are no shared exemplars). Comparing the two coresets involves comparing multiple individual photos and assessing their similarities, in addition to incorporating the information encoded in the associated weights. To avoid the lack of interpretability resulting from dissimilar supports, we introduce the notion of a dependent MMD coreset.
Given a collection of datasets {X_t}_{t∈T}, the collection of finite, atomic measures {Q_t}_{t∈T} is an ε-dependent MMD coreset if MMD²(X_t, Q_t) ≤ ε² for all t ∈ T, and if the Q_t have common support, i.e.,

Q_t = ∑_{i∈S} w_{t,i} δ_{u_i} for all t ∈ T,

where {u_i}_{i∈S} is a subset of some candidate set U.
In this definition, the exemplars u_i are shared between all t ∈ T, but the weights w_{t,i} associated with these exemplars can vary with t. Taking the view from Hilbert space, we are restricting the mean embeddings μ_{Q_t} of the marginal coresets to all lie within the convex hull of the embeddings {Φ(u_i)}_{i∈S} of the exemplars.
By restricting the support of our coresets in this manner, we obtain data summaries that are easily comparable. Within a single dataset, we can look at the weighted exemplars that make up the coreset and use these to understand the spread of the data, as is the case with an independent MMD coreset. Indeed, since Q t still meets the definition of an MMD coreset for X t (see Equation 3), we can use it analogously to an independently generated coreset. However, since the exemplars are shared across datasets, we can directly compare the exemplars for two datasets. We no longer need to intuit similarities between disjoint sets of exemplars and their corresponding weights; instead we can directly compare the weights for each exemplar to determine their relative relevance to each dataset. We will show in Section 5.2.1 that this facilitates qualitative comparison between the marginal coresets, when compared to independently generated coresets.
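The structure of a dependent MMD coreset is simple to state in code: one shared list of exemplars, plus one weight vector per dataset, with every marginal checked against the same MMD² criterion. The sketch below is our own minimal illustration (uniform weights and a fixed-bandwidth kernel are placeholder choices, not the selection algorithm of Section 4):

```python
import numpy as np

def rbf(A, B, h=1.0):
    return np.exp(-((A[:, None, :] - B[None, :, :]) ** 2).sum(-1) / (2 * h * h))

def mmd_sq(X, U, w, h=1.0):
    # MMD^2 between the empirical measure of X and Q = sum_i w_i * delta_{u_i}.
    return rbf(X, X, h).mean() - 2 * w @ rbf(U, X, h).mean(1) + w @ rbf(U, U, h) @ w

rng = np.random.default_rng(1)
datasets = {"t1": rng.normal(0.0, 1.0, (200, 2)),
            "t2": rng.normal(0.5, 1.0, (200, 2))}

# Shared support: the same exemplars are used for every dataset ...
U = np.concatenate([datasets["t1"][:10], datasets["t2"][:10]])
# ... but each dataset carries its own weights w_{t,i} (uniform here, for illustration).
W = {t: np.full(len(U), 1 / len(U)) for t in datasets}

# Each marginal Q_t must satisfy MMD^2(X_t, Q_t) <= eps^2.
marginal_mmd = {t: mmd_sq(X, U, W[t]) for t, X in datasets.items()}
```

Comparing two datasets then reduces to comparing the rows of W, since the exemplars are identical.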
We note that the dependent MMD coresets introduced in this paper are directly extensible to other integral probability metrics; we could, for example, construct a dependent version of the Wasserstein coresets introduced by [9].

Model understanding and extrapolation
As we discussed in Section 2.3, MMD coresets can be used as tools to understand the performance of an algorithm on "typical" data points. Considering how an algorithm performs on such exemplars allows the practitioner to understand failure modes of the algorithm, when applied to the data. In classification tasks where labeling is expensive, or on qualitative tasks such as image modification, looking at an appropriate coreset can provide an estimate of how the algorithm will perform across the dataset.
In a similar manner, dependent coresets can be used to understand the generalization behavior of an algorithm. Assume a machine learning algorithm has been trained on a given dataset X_a, but we wish to apply it (without modification) to a dataset X_b. This is frequently done in practice, since many machine learning algorithms require large training sets and incur a high computational cost; however, if the training distribution differs from the deployment distribution, the algorithm may not perform as intended. In general, we would expect the algorithm to perform well on data points in X_b that have many close neighbors in X_a, but perform poorly on data points in X_b that are not well represented in X_a.
A dependent MMD coreset for the pair (X_a, X_b) allows us to identify exemplars that are highly representative of X_a or X_b (i.e., that have high weight in the corresponding marginal coreset). Further, by comparing the weights in the two coreset measures, e.g., by calculating f_i = w_{b,i}/w_{a,i}, we can identify exemplars that are much more representative of one dataset than the other.
Rather than look at all points in the coreset, if we are satisfied with the performance of our model on X a , we can choose to only look at points with high values of f i -points that are representative of the new dataset X b , but not the original dataset X a . Further, if we wish to consider generalization to multiple new datasets, a shared set of exemplars reduces the amount of labeling or evaluation required.
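With the marginal weights of a dependent coreset in hand, this comparison reduces to elementwise arithmetic. A small sketch with hypothetical weight values:

```python
import numpy as np

# Marginal coreset weights over a shared set of five exemplars (hypothetical values).
w_a = np.array([0.40, 0.30, 0.20, 0.08, 0.02])   # training dataset X_a
w_b = np.array([0.05, 0.10, 0.25, 0.30, 0.30])   # deployment dataset X_b

# f_i = w_{b,i} / w_{a,i}: how much more representative exemplar i is of X_b than of X_a.
f = w_b / w_a

# Inspect model behavior on exemplars that matter for X_b but were rare in X_a.
to_inspect = np.argsort(f)[::-1][:2]
```

Here the last two exemplars carry most of the weight in X_b but very little in X_a, so they are the ones to label or evaluate first.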
An MMD coreset, dependent or otherwise, will only contain exemplars that are representative of the dataset(s). There are likely to be outliers that are less well represented by the coreset. Such outliers are likely to be underserved by a given algorithm-for example, yielding low accuracy or poor reconstructions.
As we saw in Section 2.4, MMD coresets can be augmented by criticisms: points in X that are poorly approximated by Q. We can equivalently construct criticisms for each dataset represented by a dependent MMD coreset. In the example above, we would select criticisms C_b for the dataset X_b by selecting the points in X_b that maximize the witness function

f̂_b(x) = (1/n_b) ∑_{j=1}^{n_b} k(x, x_{b,j}) − ∑_{i∈S} w_{b,i} k(x, u_i).

In addition to evaluating our algorithm on the marginal dependent coreset for dataset X_b, or on the subset of the coreset with high values of f_i, we can evaluate it on the criticisms C_b. In conjunction, the dependent MMD coreset and its criticisms allow us to better understand how the algorithm is likely to perform on both typical, and atypical, exemplars of X_b.

A greedy algorithm for finding dependent coresets
Given a collection {X_t}_{t∈T} of datasets, where we assume X_t := {x_{t,1}, ..., x_{t,n_t}} ∼ P_t, and a set of n_U candidates U, our goal is to find a collection {Q_t}_{t∈T} with shared support {u_i : i ∈ S ⊂ [n_U]} such that MMD²(X_t, Q_t) ≤ ε² for all t ∈ T. We begin by constructing an algorithm for a related task: to minimize

∑_{t∈T} MMD²(X_t, Q_t),

where Q_t = ∑_{i∈S} w_{t,i} δ_{u_i}. If we ignore the terms in this objective that do not depend on the Q_t, we obtain the following loss:

L = ∑_{t∈T} [ ∑_{i,i'∈S} w_{t,i} w_{t,i'} k(u_i, u_{i'}) − (2/n_t) ∑_{i∈S} ∑_{j=1}^{n_t} w_{t,i} k(u_i, x_{t,j}) ].

We can use a greedy algorithm to minimize this loss. Let Q_t^{(m)} = ∑_{i∈S^{(m)}} w_{t,i} δ_{u_i}, where S^{(m)} indexes the first m exemplars to be added. We wish to select the exemplar u*, and a set of weights {w_{t,i}}_{i∈S^{(m+1)}} for each dataset X_t, that minimize the loss. However, searching over all possible combinations of exemplars and weights is prohibitively expensive, as it involves a non-linear optimization to learn the weights associated with each candidate. Instead, we assume that, for each t ∈ T, there is some α_t ∈ [0, 1] such that Q_t^{(m+1)} = (1 − α_t) Q_t^{(m)} + α_t δ_{u*}. In other words, we assume that the relative weights in each Q_t of the previously added exemplars do not change as we add more exemplars.
Fortunately, the value α*_t of α_t that minimizes MMD²(X_t, (1 − α_t) Q_t^{(m)} + α_t δ_{u*}) can be found analytically for each candidate u*, by differentiating the loss with respect to α_t and setting the derivative to zero, yielding

α*_t = ⟨μ̂_{X_t} − μ_{Q_t^{(m)}}, Φ(u*) − μ_{Q_t^{(m)}}⟩_H / ||Φ(u*) − μ_{Q_t^{(m)}}||²_H,

clipped to lie in [0, 1]. We can therefore set w_{t,*} = α*_t and w_{t,i} ← (1 − α*_t) w_{t,i} for all t ∈ T and i ∈ S^{(m)}. As written, the procedure will greedily minimize the sum of the per-dataset losses. However, the definition of an MMD coreset involves satisficing, not minimizing: we want MMD²(X_t, Q_t) ≤ ε² for all t ∈ T. To achieve this, we modify the sum over datasets so that it only includes terms for which MMD²(X_t, Q_t^{(m)}) > ε². The resulting procedure is summarized in Algorithm 1.
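The greedy procedure can be sketched as follows. This is our own simplified illustration of the idea, not the released DMMD code: it omits the ε² satisficing check (every dataset contributes at every step), uses a fixed-bandwidth squared exponential kernel, and makes no attempt at efficiency.

```python
import numpy as np

def rbf(A, B, h=1.0):
    return np.exp(-((A[:, None, :] - B[None, :, :]) ** 2).sum(-1) / (2 * h * h))

def greedy_dmmd(datasets, U, m, h=1.0):
    # Greedily add m shared exemplars; each dataset t folds the new exemplar in
    # with its own analytic step size alpha_t, so previously added exemplars
    # keep their relative weights (they are rescaled by 1 - alpha_t).
    K = rbf(U, U, h)
    # a[t][i] = mean_j k(u_i, x_{t,j}) = <Phi(u_i), empirical mean embedding of X_t>.
    a = {t: rbf(U, X, h).mean(axis=1) for t, X in datasets.items()}
    S, W = [], {t: np.zeros(0) for t in datasets}

    def alpha_and_loss(t, c):
        w = W[t]
        A = w @ a[t][S]                   # <mu_hat_t, mu_Q>
        B = w @ K[np.ix_(S, S)] @ w       # ||mu_Q||^2
        C = w @ K[S, c]                   # <mu_Q, Phi(u_c)>
        alpha = 1.0 if not S else float(
            np.clip((a[t][c] - A - C + B) / (K[c, c] - 2 * C + B), 0, 1))
        # Q-dependent part of MMD^2 after the update (data-only term dropped).
        loss = (-2 * ((1 - alpha) * A + alpha * a[t][c])
                + (1 - alpha) ** 2 * B + 2 * alpha * (1 - alpha) * C
                + alpha ** 2 * K[c, c])
        return alpha, loss

    for _ in range(m):
        best = min((c for c in range(len(U)) if c not in S),
                   key=lambda c: sum(alpha_and_loss(t, c)[1] for t in datasets))
        for t in datasets:
            alpha, _ = alpha_and_loss(t, best)
            W[t] = np.append((1 - alpha) * W[t], alpha)  # rescale old, append new
        S.append(best)
    return S, W

rng = np.random.default_rng(0)
datasets = {"a": rng.normal(0.0, 1.0, (60, 2)), "b": rng.normal(1.0, 1.0, (60, 2))}
U = np.concatenate([datasets["a"], datasets["b"]])   # candidates: all data points
S, W = greedy_dmmd(datasets, U, m=5)
```

Because each update forms a convex combination of the current coreset and the new exemplar, every weight vector W[t] stays non-negative and sums to one.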

Experimental evaluation
We begin in Section 5.1 by evaluating performance of Algorithm 1. Next, we show, using image datasets as an example, how dependent MMD coresets can be used to understand differences between related datasets (Section 5.2) and gain insights about model generalization (Section 5.3).

Evaluation of dependent MMD coreset algorithm
We begin by evaluating the performance of our algorithm (which we denote DMMD) for finding dependent MMD coresets, compared with two alternative methods. PROTODASH. The PROTODASH algorithm [15] for weighted MMD coresets greedily selects exemplars that minimize the gradient of the loss (for a single dataset). Having selected an exemplar to add to the coreset, PROTODASH then uses an optimization procedure to find the weights that minimize MMD²(X, Q^{(m+1)}). We modify this algorithm for the dependent MMD setting by summing the gradients across all datasets for which the ε² threshold is not yet satisfied, leading to the dependent PROTODASH algorithm shown in Algorithm 2.
DMMD-OPT. Unlike the dependent version of PROTODASH in Algorithm 2, our algorithm assigns weights before selection, which should encourage adding points that help some of the marginal coresets but not others. However, there is no post-exemplar-addition optimization of the weights. Inspired by the post-addition optimization in PROTODASH, we also compare our algorithm with a variant of Algorithm 1 that

Algorithm 1 DMMD: Selecting dependent MMD coresets
Require: Datasets {X_t}_{t∈T}; candidate set U; kernel k(·, ·); threshold ε² > 0

optimizes the weights after each step, allowing the relative weights of the exemplars to change between each iteration. We will refer to this variant of DMMD with post-exemplar-addition optimization as DMMD-OPT.
We evaluate all three methods using a dataset of photographs of 15367 female-identified students, taken from yearbooks between 1905 and 2013 [13]. We show a random subset of these images in Figure 3. We generated 512-dimensional embeddings of the photos using the torchvision pre-trained implementation of ResNet [28,16]. We then partitioned the collection into 12 datasets, each containing photos from a single decade.

Algorithm 2 A dependent protodash algorithm
Require: Datasets {X_t}_{t∈T}, candidate set U, kernel k(·, ·), threshold ε² > 0

In order to capture lengthscales appropriate for the variation in each decade, we use an additive kernel, setting

k(x, x') = K_all(x, x') + ∑_{t∈T} K_t(x, x'),

where K_all is a squared exponential kernel with bandwidth given by the overall median pairwise distance; T is the set of decades that index the datasets; and K_t is a squared exponential kernel with bandwidth given by the median pairwise distance between images in dataset t.
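A sketch of this kernel construction, using median bandwidths computed from pairwise Euclidean distances; the helper names are ours, and we use small synthetic arrays in place of the ResNet embeddings:

```python
import numpy as np

def sq_dists(A, B):
    return ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)

def median_dist(A):
    # Median pairwise Euclidean distance, excluding self-distances.
    D = np.sqrt(sq_dists(A, A))
    return np.median(D[np.triu_indices(len(A), k=1)])

def additive_kernel(datasets):
    # One squared exponential component at the global lengthscale, plus one
    # component per dataset at that dataset's own lengthscale.
    h_all = median_dist(np.concatenate(list(datasets.values())))
    h_ts = [median_dist(X) for X in datasets.values()]

    def k(A, B):
        D = sq_dists(A, B)
        out = np.exp(-D / (2 * h_all ** 2))
        for h in h_ts:
            out = out + np.exp(-D / (2 * h ** 2))
        return out
    return k

rng = np.random.default_rng(0)
decades = {"1990s": rng.normal(0.0, 1.0, (40, 3)),
           "2000s": rng.normal(0.5, 2.0, (40, 3))}
k = additive_kernel(decades)
K = k(decades["1990s"], decades["1990s"])
```

Since each squared exponential component equals 1 at distance zero, the diagonal of K equals the number of components (here 1 global + 2 per-decade = 3).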
We begin by considering how good a dependent MMD coreset each algorithm is able to construct, for a given number of exemplars m = |S|. To do so, we ran all algorithms without specifying a threshold ε², and recorded MMD²(X_t, Q_t^{(m)}) for each value of m. All algorithms were run for one hour on a 2019 MacBook Pro (2.6 GHz 6-Core Intel Core i7, 32 GB 2667 MHz DDR4), excluding the time taken to generate and store the kernel entries, which occurs only once. As much code as possible was re-used between the three algorithms. Where required, optimization of weights was carried out using a BFGS optimizer. Code is available at https://github.com/sinead/dmmd. Figure 4a shows the per-dataset estimates MMD²(X_t, Q_t^{(m)}), and Figure 4b shows the average performance across all 12 datasets. We see that the three algorithms perform comparably in terms of coreset quality. DMMD-OPT seems to perform slightly better than DMMD, as might be expected due to the additional optimization step. PROTODASH, by comparison, seems to perform slightly worse, which we hypothesise is because it has no mechanism by which weights can be incorporated at selection time. However, in both cases, the difference is slight. DMMD is, however, much faster at generating coresets, since it does not optimize the full set of weights at each iteration. This can be seen in Figure 5, which shows the time taken to generate coresets of a given size. The cost of the optimization-based algorithms grows rapidly with coreset size m; the rate of growth for DMMD is much smaller.
In practice, rather than endlessly minimizing ∑_{t∈T} MMD²(X_t, Q_t), we will aim to find Q_t such that MMD²(X_t, Q_t) < ε² for all t ∈ T. In Figure 6, we show the coreset sizes required to obtain an ε-MMD dependent coreset on the twelve decade-specific yearbook datasets, for each algorithm. Again, a maximum runtime of one hour was specified. When all three algorithms were able to finish, the coreset sizes are comparable (with DMMD-OPT finding slightly smaller coresets than DMMD, and PROTODASH finding slightly larger coresets). However, the optimization-based methods are hampered by their slow runtime. Based on these analyses, it appears there is some advantage to additional optimization of the weights. However, in most cases, we do not feel the improvement in performance merits the additional computational cost.

Interpretable data summarizations
In high-dimensional, highly structured datasets such as collections of images, traditional summary statistics such as the mean of a dataset are particularly uninterpretable, as they convey little of the shape of the underlying distribution. A better approach is to show the viewer a collection of images that are representative of the dataset. MMD coresets allow us to obtain such a representative set, making them a better choice than displaying a random subset.
As we discussed in Section 3.1, if we wish to summarize a collection of related datasets, independently generated MMD coresets can help us understand each dataset individually, but it may prove challenging to compare datasets. This challenge becomes greater in high-dimensional settings such as image data, where we cannot easily intuit a distance between exemplars. To showcase this phenomenon, and demonstrate how dependent MMD coresets can help, we return to the 15367 yearbook photos introduced in Section 5.1. For all experiments in this section, we use the additive kernel described in Section 5.1.

A shared support allows for easier comparison
In Section 3.2, we argued that the shared support provided by dependent MMD coresets facilitates comparison of datasets, since we only need to consider differences in weights. To demonstrate this, we constructed four datasets, each a subset of the entire yearbook dataset containing 250 photos. The first two datasets contained only faces from the 1990s; the second two, only faces from the 2000s. The datasets were generated by sampling without replacement from the associated decades, to ensure no photo appeared more than once across the four datasets.
We begin by generating independent MMD coresets for the four datasets, running Algorithm 1 separately on each dataset, with a threshold of ε² = 0.01. The set of candidate images, U, was the entire dataset of 15367 images. The resulting coresets are shown in Figure 7; the areas of the bubbles correspond to the weights associated with each exemplar. We can see that, considered individually, each coreset appears to do a good job of capturing the variation in students for each dataset. However, if we compare the four coresets, it is not easy to tell that Figures 7a and 7b represent the same underlying distribution, and Figures 7c and 7d represent a second underlying distribution, or to interpret the difference between the two distributions. We see that the highest-weighted exemplar for the two 2000s datasets is the same (top left of Figures 7c and 7d), but only one other image is shared between the two coresets. Meanwhile, one of the 1990s coresets shares this highest-weighted image with the two 2000s coresets, but the image does not appear in the other 1990s coreset, and the two 1990s coresets have no overlap. Overall, it is hard to compare between the marginal coresets.
By contrast, the shared support offered by dependent coresets means we can directly compare the distributions using their coresets. In Figure 8, we show a dependent MMD coreset (ε² = 0.01) for the same collection of datasets. The shared support allows us to see that, while the two decades are fairly similar, there is clearly a stronger similarity between pairs of datasets from the same decade (whose exemplars have similarly sized bubbles) than between pairs from different decades. We can also identify images that exemplify the difference between the two decades, by looking at the difference in weights. We see that many of the faces towards the top of the bubble plot have high weights in the 2000s, but low weights in the 1990s. Examining these exemplars suggests that straight hair became more prevalent in the 2000s. Conversely, many of the faces towards the bottom of the bubble plot have high weights in the 1990s, but low weights in the 2000s. These photos tend to have wavy or fluffy hair and bangs. In conjunction, these plots suggest a tendency in the 2000s away from bangs and towards straight hair, something the authors remember from their formative years. However, there is still significant overlap between the two decades: many of the exemplars have similar weights in the 1990s and the 2000s.
We can also see this in Figure 9, a bar chart showing the average weights associated with each exemplar in each decade (i.e., the blue bar above a given image is the average weight for that exemplar across the two 1990s datasets, and the red bar is the average weight across the two 2000s datasets). We see that most of the exemplars have similar weights in both scenarios, but that a number of straight-haired exemplars disproportionately represent the 2000s, and a number of exemplars with bangs and/or wavy hair disproportionately represent the 1990s.
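The decade-level comparison behind Figure 9 amounts to averaging each exemplar's weight over the datasets within a decade and inspecting the differences. A small sketch with made-up weights (the matrix `W` and the five exemplars are purely illustrative):

```python
import numpy as np

# Hypothetical dependent-coreset weights: rows are datasets (two 1990s splits,
# then two 2000s splits); columns are five shared exemplars.
W = np.array([
    [0.30, 0.25, 0.20, 0.15, 0.10],  # 1990s, first split
    [0.28, 0.27, 0.18, 0.17, 0.10],  # 1990s, second split
    [0.10, 0.15, 0.20, 0.25, 0.30],  # 2000s, first split
    [0.12, 0.13, 0.22, 0.23, 0.30],  # 2000s, second split
])

mean_90s = W[:2].mean(axis=0)   # blue bars in Figure 9
mean_00s = W[2:].mean(axis=0)   # red bars in Figure 9
diff = mean_00s - mean_90s      # positive: exemplar over-represents the 2000s
order = np.argsort(diff)        # exemplars most characteristic of the 1990s first
```

Ranking by `diff` surfaces the exemplars that most distinguish the two decades, which is exactly the comparison made in the text.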

Dependent coresets allow us to visualize data drift in collections of images
Next, we show how dependent MMD coresets can be used to understand and visualize variation between collections of multiple datasets. As in Section 5.1, we partition the 15367 yearbook images based on their decade, with the goal of understanding how the distribution over yearbook photos changes over time. Table 3 shows the number of photos in each resulting dataset.

Decade  1900s  1910s  1920s  1930s  1940s  1950s  1960s  1970s  1980s  1990s  2000s  2010s
Photos     35     98    308   1682   2650   2093   2319   2806   2826   2621   2208    602
Table 3: Number of yearbook photos for each decade.

The resulting dependent coreset is shown in Figure 10; in each case, a red vertical line indicates the year of the yearbook from which the exemplar was taken. We are able to see how styles change over time, moving away from the formal styles of the early 20th century, through the waved hairstyles popular mid-century, towards the longer, straighter hairstyles of later decades. In general, the relevance of an exemplar peaks around the time it was taken (although this information is not used to select exemplars). However, some styles remain relevant over longer time periods (see many of the exemplars in the first column). Most of the early exemplars are highly peaked on the 1900s or 1910s; this is not surprising, since these pre-WW1 photos tend to have very distinctive photographic characteristics and hairstyles.

Figure 10 appears to show that the marginal coresets place high weights on exemplars from the corresponding decade. To look at this in more detail, we consider the distributions over the dates of the exemplars associated with each decade. Figure 11 shows the weighted mean and standard deviation of the years associated with the exemplars, with weights given by the coreset weights. We see that the mean weighted year of the exemplars increases with the decade. However, we notice that it is pulled towards the 1940s and 1950s in each case: this is because we must represent all datasets using a weighted combination of points taken from the convex hull of all datapoints.
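The statistics plotted in Figure 11 are ordinary weighted moments, with the coreset weights playing the role of probabilities. A minimal sketch (the function name is assumed):

```python
import numpy as np

def weighted_year_stats(years, weights):
    """Weighted mean and standard deviation of exemplar years, as in Figure 11."""
    years = np.asarray(years, dtype=float)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                        # normalize coreset weights to sum to 1
    mean = float(np.sum(w * years))
    std = float(np.sqrt(np.sum(w * (years - mean) ** 2)))
    return mean, std
```

Applying this per decade, with that decade's coreset weights, gives the mean ± standard deviation bands shown in the plot.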

Dependent coresets allow us to understand model generalization
To see how dependent coresets can be used to understand how a model trained on one dataset will generalize to others, we create two datasets of handwritten digit images. We started with the USPS handwritten digits dataset [18], which comprises a train set of 7291 handwritten digits and a test set of 2007 handwritten digits. We split the train set into two datasets, X_a and X_b, where X_a is skewed towards the earlier digits and X_b towards the later digits. Figure 12 shows the resulting label counts for each dataset.
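One simple way to produce such skewed splits is to assign each image to X_a with a probability that decreases with its digit label. The assignment probabilities below are illustrative, and `skewed_split` is an assumed name; the paper does not specify its exact splitting scheme.

```python
import numpy as np

def skewed_split(X, y, seed=0):
    """Split (X, y) into X_a (skewed toward low digits) and X_b (toward high digits).
    The assignment probabilities are illustrative, not the paper's exact scheme."""
    rng = np.random.default_rng(seed)
    p_a = np.linspace(0.9, 0.1, 10)[y]   # P(assign to X_a) falls with the digit label
    to_a = rng.random(len(y)) < p_a
    return (X[to_a], y[to_a]), (X[~to_a], y[~to_a])
```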
Our goal is to use dependent MMD coresets to qualitatively assess how well algorithms trained on X_a will generalize to X_b. To ensure the exemplars in our coreset have not been seen in training, we let our set of candidate points U be the union of X_b and the USPS test set. As with the yearbook data, we use an additive squared exponential kernel, with the bandwidths of the component kernels given by the median within-class pairwise distances and the overall median pairwise distance. Distances were calculated using the raw pixel values.
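An additive squared exponential kernel with median-heuristic bandwidths can be sketched as follows, assuming the per-class and overall bandwidths have already been collected into a list; `median_bandwidth` and `additive_se_kernel` are illustrative names, not the paper's implementation.

```python
import numpy as np

def median_bandwidth(Z):
    """Median pairwise Euclidean distance (the usual median heuristic)."""
    d2 = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    iu = np.triu_indices(len(Z), k=1)    # distinct pairs only
    return float(np.median(np.sqrt(d2[iu])))

def additive_se_kernel(A, B, bandwidths):
    """Sum of squared exponential kernels, one per bandwidth in the list."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return sum(np.exp(-d2 / (2 * bw ** 2)) for bw in bandwidths)
```

For the digits experiment, `bandwidths` would hold the ten within-class medians plus the overall median, all computed on raw pixel vectors.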
We begin by generating a dependent MMD coreset for the two datasets, with ε² = 0.005. The coreset was selected from the union of X_b and the USPS test set; candidates from X_a were not included, as these will later be used to train the algorithms to be assessed. Figure 13 shows the resulting dependent MMD coreset, with the bars showing the weights w_{a,i} and w_{b,i} associated with the two datasets, and the images below the x-axis showing the corresponding images u_i. In Figure 14, the u_i and w_i have been grouped by digit, so that if y(u) is the label of image u, the jth bar for Q_a has weight ∑_{i∈S : y(u_i)=j} w_{a,i}.
We can see that the coreset has selected points that cover the spread of the overall dataset. However, looking at Figure 14, we see that the weights assigned to these exemplars in Q_a and Q_b mirror the relative frequencies of each digit in the corresponding datasets X_a and X_b (Figure 12).
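Grouping the coreset weights by digit, as in Figure 14, is a one-line aggregation over the exemplar labels (the function name is assumed):

```python
import numpy as np

def group_weights_by_label(weights, labels, n_classes=10):
    """Sum coreset weights over exemplars sharing a label: the bar heights in Figure 14."""
    totals = np.zeros(n_classes)
    np.add.at(totals, labels, weights)  # accumulates weights per label, handling repeats
    return totals
```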
We will now make use of this dependent coreset to explore the generalization performance of three classifiers. In this case, since we have labeled data, we can quantitatively assess generalization performance as a benchmark. We trained three classifiers on X_a and the corresponding labels: a decision tree with a maximum depth of 8, a random forest with 100 trees, and a multilayer perceptron (MLP) with a single hidden layer of 100 units. In each case, we used the implementation in scikit-learn [31], with parameters chosen to give similar train-set accuracy on X_a. Table 4 shows the associated classification accuracies on the datasets X_a and X_b.
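A sketch of training these three classifiers with scikit-learn, using the hyperparameters stated above (a depth-8 decision tree, a 100-tree random forest, and an MLP with one 100-unit hidden layer); all other settings are left at scikit-learn defaults, which may differ from the paper's, and `train_three_classifiers` is an assumed name.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier

def train_three_classifiers(X_train, y_train, random_state=0):
    """Fit the three models compared in Table 4 on the training set."""
    models = {
        "decision_tree": DecisionTreeClassifier(max_depth=8, random_state=random_state),
        "random_forest": RandomForestClassifier(n_estimators=100,
                                                random_state=random_state),
        "mlp": MLPClassifier(hidden_layer_sizes=(100,), max_iter=500,
                             random_state=random_state),
    }
    for model in models.values():
        model.fit(X_train, y_train)
    return models
```

Accuracies on X_a and X_b (as in Table 4) can then be read off via each model's `score` method.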

Model           Accuracy on X_a   Accuracy on X_b
MLP             0.9998            0.8531
Random Forest   1.0000            0.7129
Decision Tree   0.9585            0.5880
Table 4: Accuracies of three classification algorithms on datasets X_a and X_b.

We then considered all points u_i in our dependent coreset (Q_a, Q_b) where f_i = w_{b,i}/w_{a,i} > 2, i.e., points that are much more representative of X_b than of X_a. We then looked at the class probabilities assigned by the three algorithms to each of these points, as shown in Figure 15. We see that the decision tree misclassifies nine of the 21 exemplars, and is frequently highly confident in its misclassification. The random forest misclassifies three exemplars, and the MLP misclassifies two. This agrees with the ordering provided by empirically evaluating generalization in Table 4: the MLP generalizes best, and the decision tree worst. Across all algorithms, we see that performance on the digits 8 and 9 generalizes worst; this is to be expected, since these digits are the most underrepresented in X_a.
For comparison, in Figure 16 we show the points where f_i < 0.5, i.e., points that are much more representative of X_a than of X_b. Note that, since our candidate set did not include any members of X_a, none of these points were in the training set. Despite this, accuracy is high and fairly consistent across the three classifiers (the decision tree misclassifies two exemplars; the other two algorithms make no errors).
The dependent coreset only provides information about performance on "representative" members of X_b. Since classifiers tend to underperform on outliers, looking only at the dependent MMD coreset does not give a full picture of the expected performance. We can augment our dependent MMD coreset with criticisms: points that are poorly described by the dependent coreset. Figure 17 shows the performance of the three algorithms on a set of 20 criticisms for X_b. Note that, overall, accuracy is lower than on the coreset, which is unsurprising since these are outliers. However, as before, we see that the decision tree performs worst on these criticisms (nine misclassifications), with the other two algorithms performing slightly better (six misclassifications for the random forest, and seven for the MLP).
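One way to find points of X_b that are poorly described by the coreset is to score each point by the absolute MMD witness function, the gap between the dataset's mean kernel embedding and the coreset's weighted embedding, and keep the top scorers. The paper does not spell out its criticism-selection procedure, so this is an assumed sketch in that spirit:

```python
import numpy as np

def select_criticisms(k_X_mean, K_XQ, w, n_crit):
    """Rank points of X_b by the absolute witness function and keep the top n_crit.
    k_X_mean[i] = mean_j k(x_i, x_j); K_XQ[i, s] = k(x_i, u_s); w = coreset weights."""
    witness = np.abs(k_X_mean - K_XQ @ w)  # large gap = poorly described by the coreset
    return np.argsort(-witness)[:n_crit]
```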
Note that, since accuracy does not correspond to a function in an RKHS, we cannot expect to use the coreset to bound the expected accuracy of an algorithm on the full dataset. Indeed, while the coresets and critics correctly suggest that the decision tree generalizes poorly, they do not give conclusive evidence on the relative generalization abilities of the other two algorithms. However, they do highlight what sort of data points are likely to be poorly modeled under each algorithm. By providing a qualitative assessment of performance modalities and failure modes on either typical points for a dataset, or points that are disproportionately representative of a dataset (versus the original training set), dependent MMD coresets allow users to identify potential generalization concerns for further exploration.

Figure 11: Distribution over the year associated with the marginal coresets for each decade. Plot shows weighted mean ± weighted standard deviation.
Figure 12: Frequency with which each digit occurs in the two datasets of handwritten digits. (a) Label frequencies for dataset X_a; (b) label frequencies for dataset X_b.

Discussion
MMD coresets have already proven to be a useful tool for summarizing datasets and understanding models. However, as we have shown in Section 5.2, their interpretability wanes when they are used to compare related datasets. Dependent MMD coresets provide a tool to jointly model multiple datasets using a shared set of exemplars, facilitating comparison of the coresets. By considering the relative weights of exemplars across two datasets, we can identify areas of domain mismatch. By exploring the performance of algorithms on such points, we can glean insights about the ability of a model to generalize to new datasets.
Dependent MMD coresets are just one example of a dependent coreset that could be constructed using this framework. Future directions include exploring dependent analogues of other measure coresets [9].

Figure 14: The exemplars from Figure 13, combined based on their label. Weights for X_a are shown in blue and weights for X_b are shown in red.
Figure 15: Exemplars over-represented in Q_b, with class probabilities under three algorithms trained on X_a. The true class is shown in blue; where the highest-probability class differs from the true class, the highest-probability class is shown in red.
Figure 16: Exemplars over-represented in Q_a, with class probabilities under three algorithms trained on X_a. The true class is shown in blue; where the highest-probability class differs from the true class, the highest-probability class is shown in red.
Figure 17: Criticisms of Q_b from the dataset X_b, with class probabilities under three algorithms trained on X_a. The true class is shown in blue; where the highest-probability class differs from the true class, the highest-probability class is shown in red.