Article

Moment Estimation in Paired Comparison Models with a Growing Number of Subjects

1 School of Mathematics and Statistics, Zhaoqing University, Zhaoqing 526000, China
2 School of Mathematics and Statistics, Shangqiu Normal University, Shangqiu 476000, China
3 Department of Statistics, Central China Normal University, Wuhan 430079, China
* Author to whom correspondence should be addressed.
Entropy 2026, 28(3), 314; https://doi.org/10.3390/e28030314
Submission received: 19 February 2026 / Revised: 5 March 2026 / Accepted: 9 March 2026 / Published: 11 March 2026
(This article belongs to the Section Information Theory, Probability and Statistics)

Abstract

When the number of subjects, n, is large, paired comparisons are often sparse. Here, we study statistical inference in a class of paired comparison models parameterized by a set of merit parameters, under an Erdös–Rényi comparison graph, where the sparsity is measured by a probability $p_n$ tending to zero. We use moment estimation based on the scores of subjects to infer the merit parameters. We establish a unified theoretical framework in which the uniform consistency and asymptotic normality of the moment estimator hold as the number of subjects goes to infinity. A key idea in the proof of consistency is to obtain the convergence rate of the Newton iterative sequence for solving the estimating equations. We use the Thurstone model to illustrate the unified theoretical results. Further extensions to a fixed sparse comparison graph are also provided. Numerical studies and a real data analysis illustrate our theoretical findings.

1. Introduction

Subjects are repeatedly compared in pairs in a wide spectrum of situations, including sports games [1], ranking of scientific journals [2,3], the quality of product brands [4] and crowdsourcing [5]. For instance, one team plays against another team in basketball; papers in one journal cite papers in another journal; one consumer chooses one product over another; workers in a crowdsourcing setup are asked to compare pairs of items.
One of the fundamental problems in paired comparison analysis is to derive a fair and reliable ranking of all subjects based on observed comparison data. In round-robin tournaments—where every pair of subjects competes sufficiently many times—a natural ranking can be directly obtained from the number of wins, as the full pairwise comparisons eliminate biases from incomplete matchups. However, in most practical scenarios (e.g., sports leagues, crowdsourcing evaluations, or journal rankings), comparisons are often sparse (not all pairs interact) and stochastic (outcomes contain random noise), leading to unreliable direct rankings based solely on raw win counts. To address this issue, paired comparison models have been developed to statistically infer the underlying merit parameters of subjects and generate objective rankings; see the classic monograph by [6] for a comprehensive overview of such models and their theoretical foundations. Statistical models not only provide a method of ranking all subjects but are also tools for making inferences on the merits of subjects (e.g., testing whether two subjects have the same merit).
Here, we are concerned with a class of paired comparison models that assign one merit parameter to each subject and assume that the win–loss probability of any pair depends only on the difference between their merit parameters. Specifically, the probability that subject i beats subject j is
$$P(i \text{ wins } j) = F(\beta_i - \beta_j), \quad i, j = 0, 1, \dots, n;\ i \neq j, \qquad (1)$$
where F is a known cumulative distribution function satisfying $F(-x) = 1 - F(x)$, $\beta_i$ is the merit parameter of subject i, and $n + 1$ is the total number of subjects. The well-known Bradley–Terry model [7], which dates back to at least 1929 [8], and the Thurstone model [9] are two special cases of Model (1). The former takes F to be the logistic distribution function, while the latter takes F to be the standard normal distribution function.
In the standard setting that n is fixed and the number of comparisons in each pair goes to infinity, the theoretical properties of Model (1) have been widely investigated in Chapter 4 of [6]. In the opposite scenario that n goes to infinity and each pair has a fixed number of comparisons, ref. [10] proved the uniform consistency and asymptotic normality of the maximum likelihood estimator (MLE) in the Bradley–Terry model.
When the number of subjects is large, paired comparisons are often sparse. Taking the NCAA Division I FBS (Football Bowl Subdivision) regular season as an example, a team plays at most 14 other teams among a total of 120 teams. The observed comparisons can be represented by a comparison graph with $n + 1$ nodes denoting subjects and a weighted edge between two nodes denoting the number of comparisons. The Erdös–Rényi comparison graph has been widely considered in the literature, e.g., [1,11,12,13], where the number of comparisons between any two subjects follows a binomial distribution $\mathrm{Bin}(T, p_n)$ and $p_n$ measures the sparsity. Under a very weak sparsity condition on $p_n$, ref. [13] established the uniform consistency and asymptotic normality of the MLE in the Bradley–Terry model by extending the proof strategies in [10].
Moreover, ref. [14] considered a fixed sparse comparison graph by requiring that any two subjects be connected by a path of length 2 or 3, in which case the consistency and asymptotic normality of the MLE also hold. Inference in the high-dimensional setting under the Bradley–Terry model and some generalized versions has also attracted great interest in the machine learning literature; upper bounds on various errors have been established under different conditions [e.g., the $\ell_1$ error $\|\hat\beta - \beta^*\|_1$ in [15,16], the mean squared error in [17], and the bias $\|\mathbb{E}\hat\beta - \beta^*\|$ in [18]]. Under the assumption that the log-likelihood function is strictly convex, ref. [1] established the uniform consistency of the MLE in general paired comparison models. However, the asymptotic theory of moment estimation under sparse paired comparison models remains largely underdeveloped. Existing theoretical developments focus almost exclusively on maximum likelihood estimation (MLE), which relies on strict distributional assumptions and is computationally demanding in high-dimensional settings. In contrast, this paper develops the method of moments (MOM), which avoids full distributional assumptions and maintains computational simplicity while achieving comparable asymptotic properties. The primary novelty of this work is to extend the asymptotic theory of high-dimensional sparse paired comparison models from MLE to the method of moments, establishing a parallel and complementary theoretical framework.
We further elaborate on the advantages of the method of moments (MOM) for high-dimensional sparse paired comparison models. Beyond computational efficiency, MOM has two key strengths over maximum likelihood estimation (MLE) for practical inference: (1) Robustness—the moment estimator is much less sensitive to outliers (e.g., upsets in our NFL data) in sparse settings, where MLE can be biased by extreme observations; (2) Quasi-likelihood compatibility—MOM relies only on moment conditions and avoids strict distributional assumptions, making it robust to model misspecification in high-dimensional sparse scenarios. These merits justify the use of MOM in this study, and our subsequent analysis establishes its asymptotic properties as the core theoretical contribution.
The main contributions of this paper are as follows. First, we develop moment estimation, instead of maximum likelihood estimation (MLE) or Bayesian estimation, based on the scores of subjects (i.e., their numbers of wins) to estimate the merit parameters in Model (1). The reason we prioritize moment estimation over MLE is that it is natural to rank subjects according to their scores, and the computation based on the moment equations is simpler, especially in high-dimensional sparse settings where MLE may suffer from numerical instability due to nonlinear optimization. When $F(\cdot)$ belongs to an exponential family, the two estimators are identical. Second, under an Erdös–Rényi comparison graph, we establish a unified theoretical framework in which the uniform consistency and asymptotic normality of the moment estimator hold as n goes to infinity and $p_n$ tends to zero. A key idea in the proof of consistency is to obtain the convergence rate of the Newton iterative sequence for solving the estimating equations. The asymptotic normality is proved by applying Taylor expansions to a series of functions constructed from the estimating equations and showing that the remainder terms in the expansions are asymptotically negligible. Although each pair of subjects is assumed to have a comparison with the same probability $p_n$, our proof strategy can be easily extended to the case of different comparison probabilities of the same order as $p_n$. Third, we use the Thurstone model to illustrate the unified theoretical results. Further extensions to the fixed sparse comparison graph of [14] are also derived. Numerical studies and a real data analysis illustrate our theoretical findings.
The rest of this paper is organized as follows. In Section 2, we present the moment estimation. In Section 3, we present the consistency and asymptotic normality of the moment estimator. We illustrate our unified results with one application in Section 4. We extend the asymptotic results to a fixed comparison graph in Section 5. In Section 6, we carry out simulations and give a real data analysis. We give a summary and further discussion in Section 7. The proofs of the main results are relegated to Appendix A, and the proofs of supporting lemmas to Appendix B.

2. Moment Estimation

Assume that $n + 1$ subjects, labeled $0, \dots, n$, are compared in pairs repeatedly. Let $t_{ij}$ be the number of times that subject i is compared with subject j, and $a_{ij}$ the number of times that subject i beats subject j out of the $t_{ij}$ comparisons. As a result, $a_{ij} + a_{ji} = t_{ij}$. By convention, define $t_{ii} = 0$ and $a_{ii} = 0$. The comparison matrix $(t_{ij})_{(n+1) \times (n+1)}$ is generated from an Erdös–Rényi comparison graph, where $t_{ij}$ follows a binomial distribution $\mathrm{Bin}(T, p_n)$ with $p_n$ measuring the sparsity of comparisons. More generally, one could allow $t_{ij} \sim \mathrm{Bin}(T_{ij}, p_n)$; we set all $T_{ij}$ equal to T for ease of exposition. Recall that $\beta_0, \dots, \beta_n$ are the merit parameters of subjects $0, \dots, n$. Model (1) implies that the winning probability only depends on the difference in merit between two subjects. For identifiability, we normalize the parameters by setting $\beta_0 = 0$ as in [10]. We assume that all paired comparisons are independent and that, conditional on $t_{ij}$, $a_{ij}$ follows a binomial distribution $\mathrm{Bin}(t_{ij}, p_{ij})$ with $p_{ij} = F(\beta_i - \beta_j)$.
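To make the sampling scheme concrete, the design above can be sketched in code. This is our own illustrative simulation (the function name and sizes are hypothetical, not from the paper), using the Thurstone choice $F = \Phi$ described later in Section 4.

```python
import numpy as np
from scipy.stats import norm

def simulate_comparisons(beta, T, p_n, rng):
    """Simulate an Erdos-Renyi comparison design: t_ij ~ Bin(T, p_n)
    comparisons per pair and, conditional on t_ij, a_ij ~ Bin(t_ij, p_ij)
    wins for subject i, with p_ij = Phi(beta_i - beta_j) (Thurstone)."""
    m = len(beta)                        # m = n + 1 subjects, beta[0] = 0
    t = np.zeros((m, m), dtype=int)
    a = np.zeros((m, m), dtype=int)
    for i in range(m):
        for j in range(i + 1, m):
            t[i, j] = t[j, i] = rng.binomial(T, p_n)
            a[i, j] = rng.binomial(t[i, j], norm.cdf(beta[i] - beta[j]))
            a[j, i] = t[i, j] - a[i, j]  # losses of i are wins of j
    return t, a

rng = np.random.default_rng(0)
beta = np.linspace(0.0, 1.0, 20)         # hypothetical merit parameters
t, a = simulate_comparisons(beta, T=1, p_n=0.5, rng=rng)
assert np.all(a + a.T == t)              # wins and losses add up per pair
```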
Let $a_i = \sum_{j=0}^n a_{ij}$ be the total number of wins of subject i and $a = (a_1, \dots, a_n)^\top$. To motivate the estimating equations, we compare the maximum likelihood equations and the moment equations under the Thurstone model described in Section 4. The maximum likelihood equations are
$$\sum_{j \neq i} \left\{ \frac{a_{ij}\,\phi(\beta_i - \beta_j)}{\Phi(\beta_i - \beta_j)} - \frac{(t_{ij} - a_{ij})\,\phi(\beta_i - \beta_j)}{1 - \Phi(\beta_i - \beta_j)} \right\} = 0, \quad i = 1, \dots, n,$$
where $\phi(\cdot)$ is the standard normal density function and $\Phi(\cdot)$ is its distribution function. The corresponding moment equations are
$$a_i = \sum_{j \neq i} t_{ij}\,\Phi(\beta_i - \beta_j), \quad i = 1, \dots, n.$$
We can see that the latter is simpler and easier to compute. On the other hand, it is natural to rank subjects according to their scores. Thus, we use moment estimation here. When $F(\cdot)$ in Model (1) belongs to an exponential family, the two estimators coincide.
Write $\mu(\cdot)$ as the expectation of $F(\cdot)$ and $\mu_{ij}(\beta) = \mu(\beta_i - \beta_j)$. Then, the estimating equations are
$$a_i = \sum_{j \neq i} t_{ij}\,\mu_{ij}(\beta), \quad i = 1, \dots, n. \qquad (2)$$
The solution to the above equations is the moment estimator, denoted by $\hat\beta = (\hat\beta_1, \dots, \hat\beta_n)^\top$ with $\hat\beta_0 = 0$. Let
$$\varphi(\beta) = \Big( \sum_{j \neq 1} t_{1j}\,\mu_{1j}(\beta), \dots, \sum_{j \neq n} t_{nj}\,\mu_{nj}(\beta) \Big)^\top.$$
If $\varphi(\beta): \mathbb{R}^n \to \mathbb{R}^n$ is a one-to-one mapping, then $\hat\beta$ exists and is unique, i.e., $\hat\beta = \varphi^{-1}(a)$. When $\varphi^{-1}$ does not exist (i.e., $\varphi$ is not one-to-one), any solution $\hat\beta$ of Equation (2) is a moment estimator of β. The Newton–Raphson algorithm can be used to solve Equation (2). Moreover, the R package “BradleyTerry2” can be used to compute the estimator in the Bradley–Terry model.
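As a sketch of how Equation (2) can be solved numerically, the following Newton–Raphson iteration implements the Thurstone moment equations of Section 4 with $\beta_0 = 0$ held fixed. The function name and tolerances are our own and not part of the paper.

```python
import numpy as np
from scipy.stats import norm

def moment_estimate(t, a, tol=1e-10, max_iter=100):
    """Newton-Raphson for the Thurstone moment equations
    a_i = sum_j t_ij * Phi(beta_i - beta_j), with beta_0 = 0 fixed.

    t : (n+1, n+1) symmetric comparison counts; a : (n+1, n+1) win counts.
    """
    m = t.shape[0]
    beta = np.zeros(m)
    wins = a.sum(axis=1)
    for _ in range(max_iter):
        diff = beta[:, None] - beta[None, :]
        H = (t * norm.cdf(diff)).sum(axis=1) - wins   # estimating equations
        dmu = t * norm.pdf(diff)                      # t_ij * mu'(pi_ij)
        J = np.diag(dmu.sum(axis=1)) - dmu            # Jacobian of H
        step = np.linalg.solve(J[1:, 1:], H[1:])      # beta_0 = 0 stays fixed
        beta[1:] -= step
        if np.max(np.abs(step)) < tol:
            break
    return beta
```

On a dense design the iteration typically converges in a handful of steps; for sparse designs, existence of the solution is the issue studied in the next paragraphs.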
We discuss the existence of $\hat\beta$ from the viewpoint of graph connectivity. If the comparison graph with the matrix $(t_{ij})_{i,j=0,\dots,n}$ as its adjacency matrix is not connected, then there are two nonempty sets of subjects with no comparisons between them. In this case, there is no basis for ranking subjects in the first set against those in the second set. Further, a necessary condition for the existence of $\hat\beta$ is that the directed graph $G_n$ with the win–loss matrix $A = (a_{ij})$ as its adjacency matrix is strongly connected. In other words, for every partition of the subjects into two nonempty sets, some subject in the second set beats some subject in the first set at least once. To see this, assume that there are two nonempty sets $B_1$ and $B_2$ such that subjects in $B_1$ win all comparisons with subjects in $B_2$. Without loss of generality, we set $B_1 = \{0, \dots, m\}$ and $B_2 = \{m+1, \dots, n\}$ with $0 \le m < n$, where $a_{ij} = t_{ij}$ for $i \in B_1$ and $j \in B_2$. By summing $a_i$ over $i = 0, \dots, m$, we have
$$\sum_{i=0}^m a_i = \sum_{i=0}^m \sum_{j=0}^m t_{ij}\,\mu(\beta_i - \beta_j) + \sum_{i=0}^m \sum_{j=m+1}^n t_{ij}\,\mu(\beta_i - \beta_j).$$
Because $a_i$ is a sum of $a_{ij}$, $j = 0, \dots, n$, and $\mu(\beta_i - \beta_j) + \mu(\beta_j - \beta_i) = 1$, we have
$$\sum_{i=0}^m \sum_{j=m+1}^n a_{ij} = \sum_{i=0}^m \sum_{j=m+1}^n t_{ij}\,\mu(\beta_i - \beta_j).$$
Because $a_{ij} = t_{ij}$ for $i = 0, \dots, m$ and $j = m+1, \dots, n$, and at least one such $t_{ij} > 0$, we must have $\mu(\beta_i - \beta_j) = 1$ whenever $t_{ij} > 0$ in order for both sides of the above equation to be equal. In this case, at least one difference $\beta_i - \beta_j$ must go to infinity, so the moment estimate does not exist. Strong connectivity of $G_n$ is also sufficient to guarantee the existence of the MLE in the Bradley–Terry model [19], in which the moment estimator equals the MLE. It is interesting to ask whether strong connectivity of $G_n$ is sufficient to guarantee the existence of $\hat\beta$ in a general model. In the next section, we will show that $\hat\beta$ exists with probability approaching one under some mild conditions.
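The strong-connectivity condition on $G_n$ can be checked directly from the win–loss matrix. Below is a minimal sketch using SciPy's strongly connected components; the helper function is our own, not from the paper.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def strongly_connected(a):
    """Check whether the directed win-loss graph with an edge i -> j
    whenever a_ij > 0 is strongly connected (a necessary condition
    for the moment estimator to exist)."""
    adj = csr_matrix((a > 0).astype(int))
    n_comp, _ = connected_components(adj, directed=True, connection='strong')
    return n_comp == 1

# A 3-cycle of wins is strongly connected ...
cycle = np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]])
assert strongly_connected(cycle)
# ... but if subject 0 wins every comparison it plays, it is not.
dominant = np.array([[0, 2, 1], [0, 0, 1], [0, 1, 0]])
assert not strongly_connected(dominant)
```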

3. Asymptotic Properties

In this section, we present the consistency and asymptotic normality of the moment estimator. We first introduce some notation. For a subset $C \subset \mathbb{R}^n$, let $C^0$ and $\bar{C}$ denote the interior and closure of C, respectively. For a vector $x = (x_1, \dots, x_n)^\top \in \mathbb{R}^n$, let $\|x\|_\infty = \max_{1 \le i \le n} |x_i|$ denote the $\ell_\infty$-norm and $\|x\|_1 = \sum_i |x_i|$ the $\ell_1$-norm. Let $B(x, \epsilon) = \{ y : \|x - y\|_\infty \le \epsilon \}$ be an ϵ-neighborhood of x. For an $n \times n$ matrix $J = (J_{ij})$, let $\|J\|_\infty$ denote the matrix norm induced by the $\ell_\infty$-norm on vectors in $\mathbb{R}^n$, i.e.,
$$\|J\|_\infty = \max_{x \neq 0} \frac{\|Jx\|_\infty}{\|x\|_\infty} = \max_{1 \le i \le n} \sum_{j=1}^n |J_{ij}|,$$
and let $\|J\|$ be a general matrix norm. Define the matrix maximum norm $\|J\|_{\max} = \max_{i,j} |J_{ij}|$. We use the superscript “*” to denote the true parameter under which the data are generated. When there is no ambiguity, we omit the superscript “*”.
Recall that $\mu(\cdot)$ is the expectation of $F(\cdot)$. We assume that $\mu(\cdot)$ is continuous with a third derivative. Write $\mu'$ and $\mu''$ for the first and second derivatives of $\mu(\pi)$ with respect to π. Let $\epsilon_n$ be a small positive number. When $\beta \in B(\beta^*, \epsilon_n)$, we assume that there are three positive numbers $b_{n0}$, $b_{n1}$, $b_{n2}$ such that
$$\big[ \min_{i,j} \mu'(\pi_{ij}) \big] \cdot \big[ \max_{i,j} \mu'(\pi_{ij}) \big] > 0, \qquad (3a)$$
$$b_{n0} \le \min_{i,j} |\mu'(\pi_{ij})| \le \max_{i,j} |\mu'(\pi_{ij})| \le b_{n1}, \qquad (3b)$$
$$\max_{i,j} |\mu''(\pi_{ij})| \le b_{n2}, \qquad (3c)$$
where $\pi_{ij} := \beta_i - \beta_j$.
We use the Bradley–Terry model to illustrate the above inequalities, where $\mu(x) = e^x/(1 + e^x)$. A direct calculation gives
$$\mu'(x) = \frac{e^x}{(1 + e^x)^2}, \qquad \mu''(x) = \frac{e^x (1 - e^x)}{(1 + e^x)^3}.$$
It is easy to show that
$$b_{n0} = \frac{e^{2\|\beta^*\|_\infty + 2\epsilon_n}}{\big(1 + e^{2\|\beta^*\|_\infty + 2\epsilon_n}\big)^2} \le |\mu'(x)| \le b_{n1} = \frac{1}{4}, \qquad |\mu''(x)| \le b_{n2} = \frac{1}{4}.$$
If $\epsilon_n = o(1)$, then $1/b_{n0} = O(e^{2\|\beta^*\|_\infty})$.
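As a quick numerical sanity check of the Bradley–Terry bounds above (our own check, not from the paper), one can verify on a grid that $\mu'$ stays within $[b_{n0}, 1/4]$ and $|\mu''| \le 1/4$ when $|x|$ is bounded by a constant M playing the role of $2\|\beta^*\|_\infty + 2\epsilon_n$.

```python
import numpy as np

# Grid check of the Bradley-Terry derivative bounds on |x| <= M:
# mu'(x) = e^x / (1 + e^x)^2 lies in [b_n0, 1/4], |mu''(x)| <= 1/4.
M = 2.0                                   # hypothetical bound on |pi_ij|
x = np.linspace(-M, M, 100001)
mu1 = np.exp(x) / (1 + np.exp(x)) ** 2
mu2 = np.exp(x) * (1 - np.exp(x)) / (1 + np.exp(x)) ** 3
b_n0 = np.exp(M) / (1 + np.exp(M)) ** 2   # minimum, attained at |x| = M
assert np.all(mu1 >= b_n0 - 1e-12) and np.all(mu1 <= 0.25)
assert np.all(np.abs(mu2) <= 0.25)
```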

3.1. Consistency

To establish the consistency of $\hat\beta$, let us first define a system of functions:
$$H_i(\beta) = \sum_{j=0}^n t_{ij}\,\mu_{ij}(\beta) - a_i, \quad i = 0, \dots, n,$$
and $H(\beta) = (H_1(\beta), \dots, H_n(\beta))^\top$. It is clear that $H(\hat\beta) = 0$. Let $H'(\beta)$ be the Jacobian matrix of $H(\beta)$ with respect to the parameter β. The asymptotic behavior of $\hat\beta$ depends crucially on the inverse of $H'(\beta)$. For convenience, denote $H'(\beta)$ by $V = (v_{ij})_{i,j=1,\dots,n}$, where
$$v_{ij} = -t_{ij}\,\mu'(\pi_{ij}), \quad i \neq j; \qquad v_{ii} = \sum_{j=0, j \neq i}^n t_{ij}\,\mu'(\pi_{ij}).$$
Define
$$v_{i0} = v_{0i} := v_{ii} + \sum_{j=1, j \neq i}^n v_{ij} = t_{0i}\,\mu'(\pi_{0i}), \quad i = 1, \dots, n; \qquad v_{00} = \sum_{j=1}^n t_{0j}\,\mu'(\pi_{0j}).$$
When $\beta \in B(\beta^*, \epsilon_n)$ and $\min_{i,j} \mu'(\pi_{ij}) > 0$, in view of inequality (3b), the entries of V satisfy the following inequalities:
$$\text{if } t_{i0} > 0: \quad t_{i0} b_{n0} \le v_{ii} + \sum_{j=1, j \neq i}^n v_{ij} \le t_{i0} b_{n1}, \quad i = 1, \dots, n;$$
$$\text{if } t_{ij} > 0: \quad t_{ij} b_{n0} \le -v_{ij} \le t_{ij} b_{n1}, \quad i, j = 1, \dots, n;\ i \neq j.$$
Without loss of generality, we assume hereafter that $\min_{i \neq j} \mu'(\pi_{ij}) > 0$ when $\beta \in B(\beta^*, \epsilon_n)$ (otherwise, we redefine $H_i(\beta) = a_i - \sum_{j \neq i} t_{ij}\,\mu_{ij}(\beta)$ and repeat a similar argument). Our strategy for the proof of consistency crucially depends on the existence of the inverse of V, which requires that V have full rank. It is easy to show that V is positive semi-definite. Thus, if V has full rank, then V is positive definite. The following lemma assures the existence of the inverse of V.
Lemma 1.
Assume that $\min_{i,j} \mu'(\pi_{ij}) > 0$. With probability at least $1 - (1 - p_n)^{nT}$, $H'(\beta)$ is positive definite.
Because $\log(1 - x) \le -x$ when $x \in (0, 1)$, we have
$$(1 - p_n)^{nT} = e^{nT \log(1 - p_n)} \le e^{-p_n T n}.$$
Thus, the probability that $V^{-1}$ fails to exist is less than $e^{-p_n T n}$, which goes to zero exponentially fast. In general, the inverse of V does not have a closed form. Ref. [20] proposed to approximate $V^{-1}$ by the matrix $S = (s_{ij})_{n \times n}$, where
$$s_{ij} = \frac{\delta_{ij}}{v_{ii}} + \frac{1}{v_{00}}. \qquad (7)$$
In the above equation, $\delta_{ij} = 1$ if $i = j$ and $\delta_{ij} = 0$ otherwise. By extending the proof of [20] to the sparse case, an upper bound on the approximation error $\|V^{-1} - S\|_{\max}$ is given in Lemma A2.
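The quality of the approximation $S \approx V^{-1}$ can be inspected numerically. The following sketch (our own, with hypothetical sizes) builds V for a simulated Thurstone design and compares its inverse with S entrywise.

```python
import numpy as np
from scipy.stats import norm

# Sketch: compare V^{-1} with S, where s_ij = delta_ij / v_ii + 1 / v_00,
# on a simulated fairly dense Thurstone design (sizes are illustrative).
rng = np.random.default_rng(1)
m = 60                                    # m = n + 1 subjects
beta = np.linspace(0.0, 1.0, m)
t = rng.binomial(5, 0.8, size=(m, m))
t = np.triu(t, 1); t = t + t.T            # symmetric counts, zero diagonal
dmu = t * norm.pdf(beta[:, None] - beta[None, :])   # t_ij * mu'(pi_ij)
V_full = np.diag(dmu.sum(axis=1)) - dmu
V = V_full[1:, 1:]                        # Jacobian with beta_0 = 0 removed
v_ii = np.diag(V)                         # includes the j = 0 term
v_00 = dmu[0, 1:].sum()
S = np.diag(1.0 / v_ii) + 1.0 / v_00
err = np.max(np.abs(np.linalg.inv(V) - S))
print(f"max-norm approximation error: {err:.2e}")
```

In dense, bounded-parameter settings the error is of smaller order than the entries of S themselves, which is what Lemma A2 quantifies in the sparse case.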
Recall that the main idea of the proof of consistency in the Bradley–Terry model [10,14] contains two parts. Let $\hat{u}_i = e^{\hat\beta_i}$, $u_i = e^{\beta_i^*}$, $i_0 = \arg\max_i \hat{u}_i/u_i$ and $i_1 = \arg\min_i \hat{u}_i/u_i$. Since $\hat{u}_0/u_0 = 1$, it suffices to show that the ratio for subject $i_0$, $\hat{u}_{i_0}/u_{i_0}$, and the ratio for subject $i_1$, $\hat{u}_{i_1}/u_{i_1}$, are very close. Using the nice mathematical properties of the logistic function $\mu(x) = e^x/(1 + e^x)$, the first part shows that there are a number of subjects satisfying inequalities of the form
$$b \le \sum_{j: t_{i_0 j} > 0} \Big( \frac{\hat{u}_{i_0}}{u_{i_0}} - \frac{\hat{u}_j}{u_j} \Big) \le c, \qquad b \le \sum_{j: t_{i_1 j} > 0} \Big( \frac{\hat{u}_j}{u_j} - \frac{\hat{u}_{i_1}}{u_{i_1}} \Big) \le c,$$
where b and c are certain numbers. The second part eliminates the common terms $\hat{u}_j/u_j$ based on the condition that the number of common neighbors between any two subjects, $\min_{i,j} \#\{k : t_{ik} > 0, t_{jk} > 0\}$, is at least τn, where τ = 1 in [10] and $\tau \in (0, 1)$ in [14]. In the Erdös–Rényi comparison graph, ref. [13] further showed that there is at least one subject whose ratio is close to both $\hat{u}_{i_0}/u_{i_0}$ and $\hat{u}_{i_1}/u_{i_1}$.
The aforementioned strategies for the proof of consistency are built on the premise of the existence of the MLE, which is guaranteed by the necessary and sufficient condition that the directed graph with the win–loss matrix as its adjacency matrix is strongly connected [19]. As discussed before, it may be difficult to find a minimal sufficient condition guaranteeing the existence of $\hat\beta$ in general paired comparison models. To overcome this difficulty, we instead obtain the convergence rate of the Newton iterative sequence for solving Equation (2). Under the well-known Newton–Kantorovich conditions, the Newton iterative sequence converges, and its limiting point is the solution. To this end, we apply an adjusted version of the Newton–Kantorovich theorem in [21], which not only guarantees the existence of the solution but also gives an optimal error bound for the Newton iterative sequence.
Now, we formally state the consistency result.
Theorem 1.
Assume that Conditions (3a), (3b) and (3c) hold. If $b_{n1}^4 b_{n2} / (b_{n0}^6 p_n^4) = o((n/\log n)^{1/2})$, then $\hat\beta$ exists with probability approaching one and is uniformly consistent in the sense that
$$\|\hat\beta - \beta^*\|_\infty = O_p\left( \frac{b_{n1}^2}{b_{n0}^3 p_n^2} \sqrt{\frac{\log n}{n}} \right) = o_p(1).$$
To see how small $p_n$ can be, consider the special case in which $\beta^*$ is a constant vector, so that $b_{n0}$, $b_{n1}$ and $b_{n2}$ are also constants. According to the above theorem, if $p_n \gg (\log n/n)^{1/8}$, then $\|\hat\beta - \beta^*\|_\infty = O_p( p_n^{-2} (\log n/n)^{1/2} )$.

3.2. Asymptotic Normality of β ^

We establish the asymptotic distribution of $\hat\beta$ by characterizing its asymptotic representation. In detail, we apply a second-order Taylor expansion to $H(\hat\beta)$ and find that $\hat\beta - \beta^*$ can be represented as the sum of a main term $V^{-1}(a - \mathbb{E}a)$ and an asymptotically negligible remainder term, where $\mathbb{E}$ denotes the expectation conditional on $\{t_{ij} : i, j = 0, \dots, n\}$. Because $V^{-1}$ does not have a closed form, we use the matrix S defined in (7) to approximate it. We formally state the asymptotic normality of $\hat\beta$ as follows.
Theorem 2.
Let $V = \partial H(\beta^*)/\partial \beta^\top$ and $U = (u_{ij}) := \mathrm{Var}(a \mid t_{ij}, 0 \le i, j \le n)$. If $b_{n2} b_{n1}^6 / (b_{n0}^9 p_n^6) = o(n^{1/2}/\log n)$, then for fixed k, the vector $(\hat\beta_1 - \beta_1^*, \dots, \hat\beta_k - \beta_k^*)^\top$ is asymptotically k-dimensional multivariate normal with mean zero and covariance matrix $\Sigma = (\sigma_{ij})_{k \times k}$, where
$$\sigma_{ij} = \frac{\delta_{ij} u_{ii}}{v_{ii}^2} + \frac{u_{00}}{v_{00}^2}. \qquad (10)$$
Remark 1.
If U = V, then $\sigma_{ij}$ reduces to $\delta_{ij}/v_{ii} + 1/v_{00}$. When $F(\cdot)$ belongs to an exponential family (e.g., in the Bradley–Terry model), U is identical to V. If $U \neq V$, then the asymptotic variance of $\hat\beta_i$ involves the additional factor $u_{ii}$. The asymptotic standard deviation of $\hat\beta_i$ is of order $(n p_n)^{-1/2}$ if $\|\beta^*\|_\infty$ is bounded above by a constant.

4. Application to the Thurstone Model

In this section, we illustrate the unified theoretical results by applying them to the Thurstone model.
The original Thurstone model has a variance $\sigma^2$ in the normal distribution, i.e., the probability that subject i is preferred over j is $F((\beta_i - \beta_j)/\sigma)$. Since the merit parameters are scale invariant, we simply set σ = 1 hereafter. Recall that $\phi(x) = (2\pi)^{-1/2} e^{-x^2/2}$ is the standard normal density function and $\Phi(x) = \int_{-\infty}^x \phi(t)\,dt$ is the standard normal distribution function. In the Thurstone model, $\mu(x) = \Phi(x)$. Then,
$$\mu'(x) = \phi(x), \qquad \mu''(x) = -\frac{x}{\sqrt{2\pi}} e^{-x^2/2}.$$
Since $\phi(x) = (2\pi)^{-1/2} e^{-x^2/2}$ is decreasing in |x|, for $|x| \le Q_n$ we have
$$\frac{1}{\sqrt{2\pi}} e^{-Q_n^2/2} \le \phi(x) \le \frac{1}{\sqrt{2\pi}}.$$
Let $h(x) = x e^{-x^2/2}$. Then $h'(x) = (1 - x^2) e^{-x^2/2}$. Therefore, h(x) is increasing on (0, 1) and decreasing on (1, ∞), so h attains its maximum over x > 0 at x = 1. Since |h| is symmetric about zero, $|h(x)| \le e^{-1/2} \approx 0.6$. Therefore, we can take
$$b_{n0} = \frac{1}{\sqrt{2\pi}} e^{-(\|\beta^*\|_\infty + \epsilon_n)^2/2}, \qquad b_{n1} = \frac{1}{\sqrt{2\pi}}, \qquad b_{n2} = (2\pi e)^{-1/2}.$$
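A small numerical check (ours, not from the paper) of the bound $|h(x)| \le e^{-1/2}$ that yields $b_{n2}$:

```python
import numpy as np

# h(x) = x * exp(-x^2/2) attains its maximum absolute value e^{-1/2} ~ 0.607
# at x = +/- 1, so |mu''(x)| = |h(x)| / sqrt(2*pi) <= (2*pi*e)^{-1/2}.
x = np.linspace(-10, 10, 200001)
h = x * np.exp(-x ** 2 / 2)
assert np.abs(h).max() <= np.exp(-0.5) + 1e-12
assert np.isclose(np.abs(h).max(), np.exp(-0.5), atol=1e-6)
```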
In view of Theorems 1 and 2, we have the following corollary.
Corollary 1.
If $b_{n1}^4 b_{n2} / (b_{n0}^6 p_n^2) = o((n/\log n)^{1/2})$ and $p_n > (24 \log n / n)^{1/2}$, then $\hat\beta$ exists with probability approaching one and is uniformly consistent in the sense that
$$\|\hat\beta - \beta^*\|_\infty = O_p\left( \frac{b_{n1}^2}{b_{n0}^3 p_n} \sqrt{\frac{\log n}{n}} \right) = o_p(1).$$
Let $V = \partial H(\beta^*)/\partial \beta^\top$. If $b_{n2} b_{n1}^6 / b_{n0}^9 = o(n^{1/2}/\log n)$, then for fixed k, the vector $(\hat\beta_1 - \beta_1^*, \dots, \hat\beta_k - \beta_k^*)^\top$ is asymptotically k-dimensional multivariate normal with mean zero and covariance matrix $\Sigma = (\sigma_{ij})_{k \times k}$ defined in (10).

5. Extension to a Fixed Sparse Design

In some applications, such as sports, the comparison graph may be fixed rather than random. For example, in the regular season of the National Football League (NFL), games are scheduled in advance. More specifically, the 32 teams in the 2 conferences of the NFL are divided into 8 divisions, each consisting of 4 teams. In the regular season, each team plays 16 games, 6 within its division and 10 across divisions. Motivated by this design, ref. [14] proposed a sparsity condition that controls the number of paths of length 2 or 3 from one subject to another:
$$\tau_n := \min_{0 \le i < j \le n} \frac{\#\{k : t_{ik} > 0,\ t_{jk} > 0\}}{n}.$$
That is, $\tau_n$ is the minimum, over all pairs i and j, of the proportion of subjects that are common neighbors of both i and j, which controls the number of paths of length 2 or 3 between them. Under the Erdös–Rényi comparison graph, a similar sparsity property holds. Specifically, the set of common neighbors of any two subjects i and j satisfies
$$\#\{k : t_{ik} > 0,\ t_{jk} > 0\} \ge \tfrac{1}{2}(n - 1) p_n^2$$
with probability at least $1 - O(1/n)$ if $n p_n^2 \ge 24 \log n$; see (A12) in the proof of Lemma A2.
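The sparsity measure $\tau_n$ can be computed directly from the comparison matrix. Below is a short sketch; the helper is our own, and the exact normalization used in [14] may differ from the one assumed here.

```python
import numpy as np

def tau_n(t):
    """Minimum number of common neighbors over all pairs of subjects,
    scaled by n (a sketch of the fixed-design sparsity measure)."""
    adj = (t > 0).astype(int)
    common = adj @ adj           # entry (i, j) counts k with t_ik, t_jk > 0
    n = t.shape[0] - 1
    iu = np.triu_indices_from(common, k=1)
    return common[iu].min() / n

# On a complete comparison graph with n + 1 = 5 subjects, every pair shares
# the remaining 3 subjects as common neighbors, so tau_n = 3/4.
t = np.ones((5, 5), dtype=int) - np.eye(5, dtype=int)
assert tau_n(t) == 3 / 4
```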
In accordance with the aforementioned setting, and for ease of exposition, we assume that if two subjects are compared at all, they are compared T times. Similar to Lemma A2, the error of approximating $V^{-1}$ by S satisfies
$$\|V^{-1} - S\|_{\max} \le \frac{2 T^2 b_{n1}^2 \rho_{\max}}{b_{n0}^3 \tau_n^3 n^2},$$
where $\rho_{\max} = t_{\max}/n$ and $t_{\max} = \max_i t_i$ with $t_i = \sum_j t_{ij}$. With arguments similar to those in the proofs of Theorems 1 and 2, we have the following theorem, whose proof is omitted.
Theorem 3.
Assume that Conditions (3a), (3b) and (3c) hold. If $b_{n1}^4 b_{n2} / (b_{n0}^6 \tau_n^2) = o((n/\log n)^{1/2})$, then $\hat\beta$ exists with probability approaching one and is uniformly consistent in the sense that
$$\|\hat\beta - \beta^*\|_\infty = O_p\left( \frac{b_{n1}^2}{b_{n0}^3 \tau_n} \sqrt{\frac{\log n}{n}} \right) = o_p(1).$$
Let $V = \partial H(\beta^*)/\partial \beta^\top$ and $U = (u_{ij}) := \mathrm{Var}(a \mid t_{ij}, 0 \le i, j \le n)$. If $b_{n2} b_{n1}^6 / (b_{n0}^9 \tau_n^3) = o(n^{1/2}/\log n)$, then for fixed k, the vector $(\hat\beta_1 - \beta_1^*, \dots, \hat\beta_k - \beta_k^*)^\top$ is asymptotically k-dimensional multivariate normal with mean zero and covariance matrix $\Sigma = (\sigma_{ij})_{k \times k}$, where $\sigma_{ij}$ is given in (10).

6. Numerical Studies

In this section, we evaluate the asymptotic results for the moment estimator in the Thurstone model through simulation studies and a real data example.

6.1. Simulation Studies

We carried out simulations to evaluate the finite-sample performance of the moment estimator in the Thurstone model. We set T = 1, which means that any pair has one comparison with probability $p_n$ and no comparison with probability $1 - p_n$. Let c be a constant. We set the merit parameters to have a linear form, $\beta_i^* = i c \log n / n$ for $i = 1, \dots, n$, with $\beta_0^* = 0$. We considered three different values for c: c = 0.3, 0.5, 0.8. By allowing $\beta^*$ to grow with n, we intended to assess the asymptotic properties under different asymptotic regimes.
To see how small $p_n$ can be, we first evaluated the frequency with which the “win–loss” graph $G_n$ failed to be strongly connected, strong connectivity being the necessary condition for the moment estimator to exist. We fixed c = 0.4. The results over 1000 repetitions are shown in Table 1. We can see that the necessary condition failed in every simulation when $p_n = \log n / n$, while it held with almost 100% frequency when $p_n = (\log n / n)^{1/2}$. This shows that it is necessary to control the rate at which $p_n$ tends to zero.
Based on Theorem 2, $\hat\xi_{ij} = [\hat\beta_i - \hat\beta_j - (\beta_i^* - \beta_j^*)] / (\hat{u}_{ii}/\hat{v}_{ii}^2 + \hat{u}_{jj}/\hat{v}_{jj}^2)^{1/2}$ converges in distribution to the standard normal distribution, where $\hat{u}_{ii}$ and $\hat{v}_{ii}$ are the estimates of $u_{ii}$ and $v_{ii}$ obtained by replacing $\beta^*$ with $\hat\beta$. Therefore, we assessed the asymptotic normality of $\hat\xi_{ij}$ via the coverage probability of the 95% confidence interval and the length of the confidence interval. The number of times that $\hat\beta$ failed to exist was also recorded. Two values, n = 100 and n = 200, were considered for the number of subjects. Each simulation was repeated 10,000 times.
The simulation results are shown in Table 2. When $p_n = (\log n/n)^{1/4}$, all simulated coverage probabilities are very close to the target level of 95%. When $p_n = (\log n/n)^{1/2}$, they are slightly below the nominal level for n = 100 and very close to 95% for n = 200. The length of the confidence interval decreases as n increases, which qualitatively agrees with the theory. As expected, the length of the confidence interval increases as $p_n$ decreases for fixed n. Another phenomenon is that, for fixed n and $p_n$, the length of the confidence interval differs little across the three values of c.
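One replicate of the coverage experiment described above can be sketched as follows. This is our own illustrative code (sizes, seeds, and the dense design are arbitrary choices for speed); it studentizes $\hat\beta_i - \beta_i^*$ with the plug-in variance $\hat{u}_{ii}/\hat{v}_{ii}^2 + \hat{u}_{00}/\hat{v}_{00}^2$ from Theorem 2.

```python
import numpy as np
from scipy.stats import norm

def fit_and_zscore(beta_true, T, p_n, rng, i=1):
    """One replicate: simulate Thurstone data, fit the moment estimator
    by Newton-Raphson, and studentize beta_hat_i - beta_i^* with the
    plug-in variance u_hat_ii / v_hat_ii^2 + u_hat_00 / v_hat_00^2."""
    m = len(beta_true)
    d = beta_true[:, None] - beta_true[None, :]
    t = rng.binomial(T, p_n, size=(m, m))
    t = np.triu(t, 1); t = t + t.T                 # symmetric, zero diagonal
    a_up = rng.binomial(np.triu(t, 1), norm.cdf(np.triu(d, 1)))
    a = a_up + np.triu(t, 1).T - a_up.T            # a_ji = t_ij - a_ij
    wins = a.sum(axis=1)
    beta = np.zeros(m)
    for _ in range(50):                            # Newton iterations
        dd = beta[:, None] - beta[None, :]
        H = (t * norm.cdf(dd)).sum(axis=1) - wins
        dmu = t * norm.pdf(dd)
        J = np.diag(dmu.sum(axis=1)) - dmu
        beta[1:] -= np.linalg.solve(J[1:, 1:], H[1:])
    dd = beta[:, None] - beta[None, :]
    p = norm.cdf(dd)
    u = (t * p * (1 - p)).sum(axis=1)              # u_hat_ii (binomial variances)
    v = (t * norm.pdf(dd)).sum(axis=1)             # v_hat_ii
    se = np.sqrt(u[i] / v[i] ** 2 + u[0] / v[0] ** 2)
    return (beta[i] - beta_true[i]) / se
```

Over many replicates, the resulting z-scores should look approximately standard normal, which is what the coverage probabilities in Table 2 summarize.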

6.2. A Real Data Example

We use the 2018 NFL regular season data as an illustrative example; the data are available from https://www.espn.com/nfl/schedule/_/year/2018 (accessed on 20 August 2025). The NFL consists of thirty-two teams that are divided evenly into two conferences, and each conference has four divisions of four teams each. In the regular season, each team plays its three intra-division rivals twice each and ten inter-division teams once each. As discussed in [14], the design of the NFL regular season satisfies the sparsity condition of the fixed comparison graph, with $\tau_n = 1/16$. We removed two ties before our analysis. The fitted merits obtained from fitting the Thurstone model to the remaining data are given in Table 3, where we used the “Arizona Cardinals”, the team with the smallest number of wins, as the baseline (with $\hat\beta_0 = 0$).
It is interesting to compare the ordering of the six playoff seeds of the two conferences under the NFL rules with the ordering by their fitted merits in Table 3. The NFL rules are based on the regular-season won–lost percentage record (PCT) and can be briefly summarized as follows: the teams in each division with the best PCT are seeded one through four; another two teams from each conference are seeded five and six based on their PCT. The six playoff seeds in the American Football Conference from No. 1 to No. 6 based on the PCT are the Kansas City Chiefs, New England Patriots, Pittsburgh Steelers, Houston Texans, Los Angeles Chargers, and Indianapolis Colts, while the teams selected based on the fitted merits are the Kansas City Chiefs, New England Patriots, Houston Texans, Baltimore Ravens, Los Angeles Chargers, and Pittsburgh Steelers. The corresponding six playoff seeds in the National Football Conference based on the PCT are the Los Angeles Rams, New Orleans Saints, Chicago Bears, Washington Redskins, Seattle Seahawks, and Carolina Panthers, while the teams selected based on the fitted merits are the Los Angeles Rams, New Orleans Saints, Chicago Bears, Dallas Cowboys, Seattle Seahawks, and Philadelphia Eagles. As we can see, the top three teams in each conference are the same under both orderings, while the teams seeded No. 4, No. 5, and No. 6 are not all the same.

7. Summary and Discussion

We have presented moment estimation based on the scores of subjects in paired comparison models under sparse comparison graphs. We have established the uniform consistency and asymptotic normality of the moment estimator. The consistency is shown by obtaining the convergence rate of the Newton iterative sequence. This leads to a condition on the sparsity parameter $p_n$ requiring that $p_n \ge O((\log n/n)^{1/8})$ when $\beta^*$ is a constant vector. We note that this condition looks much stronger than that for the Bradley–Terry model in [13]; since we consider a general model, a more stringent condition seems natural. On the other hand, the condition imposed on $b_{n0}$ may not be the best possible. In particular, the conditions guaranteeing the asymptotic normality seem stronger than those needed for the consistency. Note that the asymptotic behavior of the moment estimator depends not only on $b_{n0}$ but also on the configuration of all parameters. It would be of interest to investigate whether these conditions could be relaxed.
In this paper, we assume that, given the comparison graph, all paired comparisons are independent. Note that the moment equation holds regardless of whether the comparisons are independent.
When comparisons are not independent, the moment estimation still works. The consistency result in Theorem 1 still holds as long as the upper bound of a_i − E(a_i) in Lemma A4 is of the same order. In fact, the independence assumption is not used directly in our proofs. It is only used in Lemma A4 to derive the upper bound of a_i − E(a_i) via the Hoeffding inequality. Analogously, the independence assumption is used to derive the central limit theorem for a_i − E(a_i). In the dependent case, there are many Hoeffding-type exponential tail inequalities (e.g., [22,23,24]) and central limit theorems for sums of dependent random variables (e.g., [25,26]) that could be applied.
Building on the theoretical and empirical findings of this study, we identify two promising avenues for further exploration, which not only address the current limitations but also extend the moment estimation framework to more complex and practical scenarios.
1. Moment estimation for paired comparison models with dependent outcomes. This paper assumes that all paired comparison outcomes are independent, which is a standard but restrictive assumption in many real-world settings (e.g., crowdsourcing evaluations where raters may have consistent biases, or sports leagues where team performance is serially correlated). A natural extension is to develop moment estimation methods for models with dependent outcomes, such as those incorporating Markovian dependence or exchangeable correlation structures. Key challenges include deriving unbiased moment equations under dependence and establishing asymptotic properties (consistency, asymptotic normality) using tools from the theory of dependent random variables (e.g., Hoeffding-type inequalities for associated variables [22,23,24]).
2. High-dimensional sparse paired comparison models with structured merit parameters. This study focuses on unstructured merit parameters (i.e., no relations are imposed among the β_i). However, in many applications, merit parameters exhibit inherent structure, such as group-level homogeneity (e.g., teams in the same sports division share similar strengths) or sparsity (e.g., only a few subjects have distinct merits in large-scale crowdsourcing). Extending the moment estimation framework to incorporate such structures (e.g., group-lasso-penalized moment estimation, sparse merit parameter inference) would improve estimation efficiency. Key research questions include designing computationally feasible penalized moment equations and establishing oracle properties for the structured estimators.
These directions align with the core theme of sparse paired comparison inference and address practical limitations of the current work. We believe pursuing these avenues will not only extend the theoretical scope of moment estimation but also broaden its applicability to more complex real-world problems.
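As a concrete illustration of the procedure studied in this paper, the following sketch solves the moment equations H_i(β) = Σ_{j≠i} t_ij μ(β_i − β_j) − a_i = 0 by a Newton iteration of the kind used in our consistency proof. It is a minimal example under stated assumptions, not the code used in our numerical studies: it takes the Bradley–Terry link μ(x) = e^x/(1 + e^x) as one concrete choice of μ, fixes β_0 = 0 as the reference, and the function name moment_estimate is ours.

```python
import numpy as np

def mu(x):
    # Bradley-Terry link mu(x) = e^x / (1 + e^x); one concrete choice of mu
    return 1.0 / (1.0 + np.exp(-x))

def moment_estimate(t, a, tol=1e-10, max_iter=100):
    """Solve H_i(beta) = sum_{j != i} t_ij * mu(beta_i - beta_j) - a_i = 0
    by Newton iteration, with beta_0 = 0 fixed as the reference."""
    m = t.shape[0]                      # m = n + 1 subjects, indexed 0..n
    beta = np.zeros(m)
    for _ in range(max_iter):
        pi = beta[:, None] - beta[None, :]
        prob = mu(pi)
        H = (t * prob).sum(axis=1) - a  # moment equations at the current beta
        d = t * prob * (1.0 - prob)     # t_ij * mu'(pi_ij)
        J = -d                          # off-diagonal entries of the Jacobian
        np.fill_diagonal(J, d.sum(axis=1))
        step = np.linalg.solve(J[1:, 1:], H[1:])  # beta_0 stays fixed at 0
        beta[1:] -= step
        if np.max(np.abs(step)) < tol:
            break
    return beta
```

Feeding in the conditional expected scores recovers the true merits exactly, which is a convenient correctness check for the iteration.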

Author Contributions

Conceptualization, Q.W.; Data curation, Q.W.; Formal analysis, Q.W.; Funding acquisition, Q.W.; Investigation, Q.W. and L.P.; Methodology, Q.W. and T.Y.; Software, Q.W. and L.P.; Supervision, T.Y.; Validation, Q.W. and T.Y.; Visualization, Q.W. and L.P.; Writing—original draft, Q.W.; Writing—review and editing, Q.W., L.P. and T.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China, grant number 12301386.

Informed Consent Statement

Not Applicable.

Data Availability Statement

We use the 2018 NFL regular season data as an example, which is available from https://www.espn.com/nfl/schedule/_/year/2018 (accessed on 20 August 2025).

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

In this section, we present the proofs of the theorems.

Appendix A.1. Preliminaries

We first state two preliminary results, which will be used in the proofs. The first is the optimal error bound for the Newton method in [21] under the Kantorovich conditions [27].
Lemma A1
([21]). Let X and Y be Banach spaces, D ⊆ X an open convex set, and F: D → Y Fréchet differentiable. Assume that, at some x_0 ∈ D, F′(x_0) is invertible and that
‖F′(x_0)⁻¹(F′(x) − F′(y))‖ ≤ K‖x − y‖, x, y ∈ D,   (A1)
‖F′(x_0)⁻¹F(x_0)‖ ≤ η, h = Kη ≤ 1/2, S̄(x_0, t*) ⊆ D, t* = 2η/(1 + √(1 − 2h)).   (A2)
  • Then: (1) The Newton iterates x_{n+1} = x_n − F′(x_n)⁻¹F(x_n), n ≥ 0, are well defined, lie in S̄(x_0, t*) and converge to a solution x* of F(x) = 0.
  • (2) The solution x* is unique in S(x_0, t**) ∩ D, t** = (1 + √(1 − 2h))/K, if 2h < 1, and in S̄(x_0, t*) if 2h = 1.
  • (3) ‖x* − x_n‖ ≤ t* if n = 0 and ‖x* − x_n‖ ≤ 2^{1−n}(2h)^{2^n − 1} η if n ≥ 1.
The second result is the approximation error incurred by using S to approximate V⁻¹; its proof is given in Appendix B.
Lemma A2.
If p_n ≥ (24 log n/n)^{1/2}, then for sufficiently large n, with probability at least 1 − O(n⁻¹), we have
‖V⁻¹ − S‖_max ≤ 12 T b_{n1}² / (b_{n0}³ n(n − 1) p_n³).
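The quality of the approximation in Lemma A2 can be checked numerically. The sketch below is ours and uses synthetic weights: it builds a matrix with the structure of V (diagonal equal to the row sums of non-negative symmetric weights over subjects 0, …, n, negated weights off the diagonal — one plausible reading of the convention in the main text), forms S = diag(1/v_ii) + (1/v_00)·11⊤ as in (7), and compares S entrywise with the exact inverse.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
# Synthetic symmetric non-negative weights w_ij playing the role of
# v_ij = t_ij * mu'_ij over subjects 0..n (subject 0 is the reference).
w = rng.uniform(0.5, 1.5, size=(n + 1, n + 1))
w = (w + w.T) / 2.0
np.fill_diagonal(w, 0.0)

# V: n x n matrix with v_ii = sum_{j != i} w_ij and off-diagonal -w_ij
# (an assumed convention for this illustration); v_00 defined likewise.
V = -w[1:, 1:].copy()
np.fill_diagonal(V, w[1:, :].sum(axis=1))
v00 = w[0, :].sum()

# S = diag(1 / v_ii) + (1 / v_00) * ones: the approximate inverse from (7)
S = np.diag(1.0 / np.diag(V)) + 1.0 / v00
err = np.abs(np.linalg.inv(V) - S).max()   # entrywise approximation error
```

In this dense setting the entries of V⁻¹ are of order 1/n while the entrywise error is of order 1/n², in line with the n(n − 1) factor in the lemma.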

Appendix A.2. Proof of Theorem 1

We aim to show Theorem 1 by obtaining the convergence rate of the Newton iterative sequence in view of Lemma A1, which requires us to verify the Kantorovich conditions (A1) and (A2). Condition (A1) depends on the Lipschitz continuity of H_i′(β). Recall that t_max = max_{i=0,…,n} t_i and t_min = min_{i=0,…,n} t_i.
Lemma A3.
Let D = B(β, ε_n) (⊆ ℝⁿ) be an open convex set containing the true point β. For any given set {t_ij, 0 ≤ i, j ≤ n}, if Inequality (3c) holds, then
max_{i=0,…,n} ‖H_i′(x) − H_i′(y)‖₁ ≤ 4 b_{n2} t_max ‖x − y‖_∞.
Moreover, Condition (A2) also depends on the magnitudes of |a_i − E(a_i | t_ij, j = 0, …, n)|, i = 0, …, n, which are bounded below.
Lemma A4.
With probability at least 1 − O(1/n), we have
max_{i=0,…,n} |a_i − E(a_i | t_ij, j = 0, …, n)| ≤ √(2 t_max log n).
The following results give a lower bound for t_min and upper bounds for t_max and for Σ_i t_i.
Lemma A5.
(1) With probability at least 1 − (n + 1) exp(−(1/8) n T p_n),
t_min = min_{i=0,…,n} t_i ≥ (1/2) n T p_n.
(2) With probability at least 1 − (n + 1) exp(−(1/10) n T p_n),
t_max = max_{i=0,…,n} t_i ≤ (3/2) n T p_n.
(3) With probability at least 1 − exp(−(1/10) n(n + 1) T p_n),
Σ_{i=0}^n t_i ≤ 3 n(n + 1) T p_n.
We are now ready to prove Theorem 1.
Proof of Theorem 1.
Note that β̂ is the solution to the equation H(β) = 0. We prove the consistency by obtaining the convergence rate of the Newton iterative sequence β^{(k+1)} = β^{(k)} − [H′(β^{(k)})]⁻¹ H(β^{(k)}), where we set β^{(0)} := β. To apply Lemma A1, we choose the convex set D = B(β, ε_n). The following calculations are based on the event E_n:
{ t_ij, 0 ≤ i, j ≤ n : max_i |a_i − E(a_i | t_ij, j = 0, …, n)| ≤ √(2 t_max log n), H′(β) is positive definite, t_min ≥ (T/2) n p_n, t_max ≤ (3/2) n T p_n }.
Note that b_{n0} ≤ |μ′_ij(·)| ≤ b_{n1} on B(β, ε_n). Let V = (v_ij)_{n×n} = H′(β). We use S defined in (7) to approximate V⁻¹ and let W = V⁻¹ − S. We verify the Kantorovich conditions in Lemma A1 as follows. Since Σ_{i=0}^n H_i(β) = 0, we have
Σ_{i=1}^n H_i(β) = −H_0(β).
Based on Lemma A3, we have
‖[H′(β)]⁻¹[H′(x) − H′(y)]‖_∞ ≤ ‖S[H′(x) − H′(y)]‖_∞ + ‖W[H′(x) − H′(y)]‖_∞ ≤ max_{i=1,…,n} (1/v_ii)‖H_i′(x) − H_i′(y)‖₁ + (1/v_00)‖H_0′(x) − H_0′(y)‖₁ + n‖W‖_max max_i ‖H_i′(x) − H_i′(y)‖₁ ≤ [2/(b_{n0} t_min) + n · 12 T b_{n1}²/(n(n − 1) b_{n0}³ p_n³)] × 4 b_{n2} t_max × ‖x − y‖_∞ = O(b_{n1}² b_{n2}/(b_{n0}³ p_n²)) ‖x − y‖_∞.
Thus, we can take K = O(b_{n1}² b_{n2}/(b_{n0}³ p_n²)) in (A1). Again, based on the event E_n, we have
η = ‖[H′(β)]⁻¹ H(β)‖_∞ ≤ n‖V⁻¹ − S‖_max ‖H(β)‖_∞ + max_{i=1,…,n} |H_i(β)|/v_ii + |H_0(β)|/v_00 ≤ [O(b_{n1}²/(n b_{n0}³ p_n³)) + O(1/(b_{n0} t_min))] × O((t_max log n)^{1/2}) = O((b_{n1}²/(b_{n0}³ p_n²)) √(log n/n)).
If
Kη = O((b_{n1}⁴ b_{n2}/(b_{n0}⁶ p_n⁴)) √(log n/n)) = o(1),   (A3)
then Condition (A2) is verified. Based on Lemma A1, lim_{k→∞} β^{(k)} exists; denote it by β̂. It satisfies
‖β̂ − β‖_∞ = O((b_{n1}²/(b_{n0}³ p_n²)) √(log n/n)).
Based on Lemmas 1, A2, A4 and A5, the event E_n holds with probability at least 1 − O(n⁻¹) if p_n ≥ (24 log n/n)^{1/2}. Note that (A3) implies p_n ≥ (24 log n/n)^{1/2} for sufficiently large n. This completes the proof. □

Appendix A.3. Proof of Theorem 2

Write Var*(·) and E*(·) for the conditional variance and conditional expectation given t_ij, 0 ≤ i, j ≤ n. Let U = (u_ij) := Var*(a). In the Bradley–Terry model, U = H′(β). Note that a_i is a sum of t_i independent Bernoulli random variables. Based on Lemma A5, min_i t_i is of order n p_n with probability tending to one. Let σ_min = min_{i≠j} p_ij(1 − p_ij). If n p_n σ_min → ∞, then min_i u_ii → ∞. Based on the central limit theorem in the bounded case, as in [28] (p. 289), if n p_n σ_min → ∞, then u_ii^{−1/2}{a_i − E*(a_i)} converges in distribution to the standard normal distribution. When considering the asymptotic behavior of the vector (a_1, …, a_r) with a fixed r, one can replace the scores a_1, …, a_r by the independent random variables ã_i = a_{i,r+1} + ⋯ + a_{in}, i = 1, …, r. Therefore, we have the following proposition.
Proposition A1.
If n p_n σ_min → ∞, then as n → ∞, for any fixed r ≥ 1, the components of (a_1 − E*(a_1), …, a_r − E*(a_r)) are asymptotically independent and normally distributed with variances u_11, …, u_rr, respectively. Moreover, the first r components of S(a − E*(a)) are asymptotically normal with covariance matrix Σ = (σ_ij), where
σ_ij = δ_ij u_ii/v_ii² + u_00/v_00².
Lemma A6.
Let V = H′(β), W = V⁻¹ − S and Cov*(·) = Cov(· | t_ij, 0 ≤ i, j ≤ n). Then,
‖Cov*(W H(β))‖_max = O_p(b_{n1}⁵/(n² b_{n0}⁶ p_n⁵)).
Further, if U = H′(β), then
‖Cov*(W H(β))‖_max = O_p(b_{n1}²/(n² b_{n0}³ p_n⁵)).
Now, we are ready to prove Theorem 2.
Proof of Theorem 2.
Let π̂_ij = β̂_i − β̂_j and π_ij = β_i − β_j. Based on Theorem 1, β̂ ∈ B(β, ε_n). To simplify notation, write μ_ij = μ(π_ij) and μ′_ij = μ′(π_ij). Based on a second-order Taylor expansion, we have
t_ij μ(π̂_ij) − t_ij μ(π_ij) = t_ij μ′_ij (β̂_i − β_i) − t_ij μ′_ij (β̂_j − β_j) + g_ij, i ≠ j,   (A4)
where g_ij is the second-order remainder term:
g_ij = (1/2) t_ij μ″(π̃_ij)[(β̂_i − β_i)² + (β̂_j − β_j)² − 2(β̂_i − β_i)(β̂_j − β_j)].
In the above equation, π̃_ij lies between π_ij and π̂_ij. If b_{n1}⁴ b_{n2}/(b_{n0}⁶ p_n⁴) = o((n/log n)^{1/2}), then based on Theorem 1, we have
‖β̂ − β‖_∞ = O_p((b_{n1}²/(b_{n0}³ p_n²)) √(log n/n)).
Therefore, in view of (3c), |μ″(π̃_ij)| ≤ b_{n2}, so that
|g_ij| ≤ 4 b_{n2} t_ij ‖β̂ − β‖_∞².
Let g_i = Σ_{j≠i} g_ij, i = 0, …, n, and g = (g_1, …, g_n)⊤. Then, based on Lemma A5 (2), we have
max_{i=0,…,n} |g_i| ≤ 4 b_{n2} t_max · O_p(b_{n1}⁴ log n/(n b_{n0}⁶ p_n⁴)) = O_p(b_{n1}⁴ b_{n2} log n/(b_{n0}⁶ p_n³)).   (A6)
Writing the equations in (A4) in matrix form, we have
a − E*(a) = V(β̂ − β) + g.   (A7)
Equivalently,
β̂ − β = V⁻¹(a − E*(a)) − V⁻¹ g.
Similarly, we have
a_0 − E*(a_0) = (∂H_0(β)/∂β⊤)(β̂ − β) + (1/2)(β̂ − β)⊤[∂²H_0(β̃)/∂β∂β⊤](β̂ − β) = −Σ_{i=1}^n v_i0(β̂_i − β_i) + (1/2) Σ_{i=1}^n t_0i μ″(π̃_0i)(β̂_i − β_i)²,
where β̃ lies between β and β̂. Therefore, based on Lemma A5 (2), we have
|a_0 − E*(a_0) + Σ_{i=1}^n v_i0(β̂_i − β_i)| = O_p(b_{n1}⁴ t_max log n/(n b_{n0}⁶ p_n⁴)) = O_p(b_{n1}⁴ log n/(b_{n0}⁶ p_n³)).
Note that Σ_{i=0}^n (a_i − E*(a_i)) = 0. Multiplying both sides of (A7) by a row vector with all elements equal to one yields
Σ_{i=1}^n g_i = −{a_0 − E*(a_0) + Σ_{i=1}^n v_i0(β̂_i − β_i)}.
Therefore, we have
|Σ_{i=1}^n g_i| = O_p(b_{n1}⁴ log n/(b_{n0}⁶ p_n³)).
Based on (A6) and Lemma A2, we have
‖V⁻¹ g‖_∞ ≤ ‖S g‖_∞ + ‖(V⁻¹ − S) g‖_∞ ≤ max_{i=1,…,n} |g_i|/v_ii + |Σ_{i=1}^n g_i|/v_00 + n‖V⁻¹ − S‖_max ‖g‖_∞ ≤ [O_p(1/(t_min b_{n0})) + O_p(b_{n1}²/(n b_{n0}³ p_n³))] × O_p(b_{n1}⁴ b_{n2} log n/(b_{n0}⁶ p_n³)) = O_p(b_{n2} b_{n1}⁶ log n/(n b_{n0}⁹ p_n⁶)).
If b_{n2} b_{n1}⁶/(b_{n0}⁹ p_n⁶) = o(n^{1/2}/log n), then we have
β̂ − β = V⁻¹(a − E*(a)) + o_p(n^{−1/2}).
Consequently, in view of Lemma A6, we have
β̂_i − β_i = [S(a − E*(a))]_i + o_p(n^{−1/2}).
Therefore, Theorem 2 follows immediately from Proposition A1. □

Appendix B

In this section, we present the proofs of the supporting lemmas.

Appendix B.1. Proof of Lemma 1

Proof of Lemma 1.
For an arbitrary nonzero vector x = (x_1, …, x_n)⊤ ∈ ℝⁿ, recall that the (i, j) entry of V is −v_ij for i ≠ j, where v_ij = t_ij μ′_ij(β) ≥ 0, and that v_ii = Σ_{j=0, j≠i}^n v_ij. Direct calculation gives
x⊤Vx = Σ_{i=1}^n x_i² v_ii − Σ_{i=1}^n Σ_{j=1, j≠i}^n x_i v_ij x_j = Σ_{i=1}^n Σ_{j=1, j≠i}^n x_i² v_ij + Σ_{i=1}^n x_i² v_i0 − Σ_{i=1}^n Σ_{j=1, j≠i}^n x_i v_ij x_j = (1/2) Σ_{i=1}^n Σ_{j=1, j≠i}^n (x_i − x_j)² v_ij + Σ_{i=1}^n x_i² v_i0,
where the second equality uses v_ii = Σ_{j=0, j≠i}^n v_ij. Therefore, x⊤Vx = 0 if and only if
x_i v_i0 = 0, i = 1, …, n; v_ij (x_i − x_j) = 0, 1 ≤ i ≠ j ≤ n.
Because μ′_ij(β) > 0 and v_ij = t_ij μ′_ij(β) for i ≠ j, the above equations are equivalent to
x_i t_i0 = 0, i = 1, …, n; t_ij (x_i − x_j) = 0, 1 ≤ i ≠ j ≤ n.
Let E be the event
{ {t_ij}_{0 ≤ i ≠ j ≤ n} : x_i t_i0 = 0, i = 1, …, n; t_ij(x_i − x_j) = 0, 1 ≤ i ≠ j ≤ n }.
To show Lemma 1, it is sufficient to obtain an upper bound on the probability of the event E. We evaluate this probability in two cases: x contains some zero elements, and x contains no zero elements.
Case I: x contains some zero elements. Let {0, x_{i_1}, …, x_{i_k}} be the k + 1 distinct values in {x_1, …, x_n}, Ω_0 = {i : x_i = 0} and Ω_j = {q : x_q = x_{i_j}}, j = 1, …, k. Since x ≠ 0, k ≥ 1. It is clear that
|Ω_0| > 0, |Ω_j| > 0, j = 1, …, k, Σ_{i=0}^k |Ω_i| = n + 1,   (A11)
where |Ω_i| denotes the cardinality of Ω_i. Therefore, we have
P(E) = (1 − p_n)^{T Σ_{j=1}^k |Ω_j|} × Π_{0≤i<j≤k} (1 − p_n)^{T |Ω_i||Ω_j|} = (1 − p_n)^{T(Σ_{j=1}^k |Ω_j| + Σ_{0≤i<j≤k} |Ω_i||Ω_j|)}.
To obtain an upper bound on P(E), it is sufficient to minimize Σ_{j=1}^k |Ω_j| + Σ_{0≤i<j≤k} |Ω_i||Ω_j| under the restriction (A11). Let y_i = |Ω_i|. Then,
Σ_{j=1}^k y_j + Σ_{0≤i<j≤k} y_i y_j = Σ_{j=1}^k y_j + (1/2) Σ_{i=0}^k Σ_{j=0, j≠i}^k y_i y_j = (1/2)(Σ_{i=0}^k y_i)² − (1/2) Σ_{i=0}^k y_i² + Σ_{j=1}^k y_j = (1/2)(Σ_{i=0}^k y_i)² − (1/2) Σ_{i=1}^k (y_i − 1)² − (1/2) y_0² + k/2 = (1/2)((n + 1)² + k) − (1/2) Σ_{i=0}^k z_i²,
where z_0 = y_0 and z_i = y_i − 1, i = 1, …, k. Under the restriction Σ_i z_i = n + 1 − k > 0 and z_i ≥ 0, the function Σ_{i=0}^k z_i² attains its maximum at points of the form z = (0, …, 0, n + 1 − k, 0, …, 0). Therefore, we have
Σ_{j=1}^k y_j + Σ_{0≤i<j≤k} y_i y_j ≥ (1/2)((n + 1)² + k − (n + 1 − k)²) = (1/2)(2(n + 1)k + k − k²) = (1/2)[−(k − (n + 1) − 1/2)² + ((n + 1) + 1/2)²].
Because 1 ≤ k ≤ n, the right-hand side attains its minimum at k = 1. That is,
−(k − (n + 1) − 1/2)² + ((n + 1) + 1/2)² ≥ −((n + 1) − 1/2)² + ((n + 1) + 1/2)² = 2(n + 1).
This shows
P(E) ≤ (1 − p_n)^{T(n + 1)}.
Case II: x contains no zero elements. With the same notation Ω_j as in Case I, we have |Ω_0| = 0 and
P(E) = (1 − p_n)^{T Σ_{j=1}^k |Ω_j|} × Π_{1≤i<j≤k} (1 − p_n)^{T |Ω_i||Ω_j|} = (1 − p_n)^{T(Σ_{j=1}^k |Ω_j| + Σ_{1≤i<j≤k} |Ω_i||Ω_j|)}.
It is sufficient to minimize Σ_{j=1}^k |Ω_j| + Σ_{1≤i<j≤k} |Ω_i||Ω_j| under the restriction Σ_i |Ω_i| = n + 1 and k ≥ 1. Let y_i = |Ω_i|. Then,
Σ_{j=1}^k y_j + Σ_{1≤i<j≤k} y_i y_j = Σ_{j=1}^k y_j + (1/2) Σ_{i=1}^k Σ_{j=1, j≠i}^k y_i y_j = (1/2)(Σ_{i=1}^k y_i)² − (1/2) Σ_{i=1}^k y_i² + (n + 1) = (1/2)(n + 1)² + (n + 1) − (1/2) Σ_{i=1}^k y_i².
Under the restriction Σ_i y_i = n + 1 > 0 and y_i ≥ 0, the function Σ_{i=1}^k y_i² attains its maximum at points of the form y = (0, …, 0, n + 1, 0, …, 0). Therefore, we have
Σ_{j=1}^k y_j + Σ_{1≤i<j≤k} y_i y_j ≥ (1/2)(n + 1)² + (n + 1) − (1/2)(n + 1)² = n + 1.
This shows
P(E) ≤ (1 − p_n)^{(n + 1)T}.
Combining the upper bounds on P(E) under Cases I and II, we have
P(E^c) ≥ 1 − (1 − p_n)^{(n + 1)T},
where E^c denotes the event that V is positive definite. □

Appendix B.2. Proof of Lemma A2

Proof of Lemma A2.
Based on Lemma 1, V is a positive definite matrix with probability at least 1 − e^{−p_n T(n + 1)}. In what follows, we assume that V is positive definite, so that its inverse exists. The proof proceeds in two parts. The first part evaluates the cardinality of the set of common neighbors of any two subjects i and j; that is, we establish a lower bound on
min_{i,j} #{k : t_ik > 0, t_kj > 0}.
The second part establishes an inequality of the type in (A16) below, which bounds the quantity #{k : t_αk > 0, t_kβ > 0}(z_iα − z_iβ) from above and hence controls the entries z_ij. We use the method of proof in [20], with minor modifications that simplify their arguments, to show the second part.
Part I. Let 1{·} be the indicator function: it equals one when the expression in {·} is true and zero otherwise. For any given i ≠ j, define
ξ_ij = Σ_{k=0, k≠i,j}^n 1{t_ik > 0, t_jk > 0}.
Note that ξ_ij is the sum of n − 1 independent Bernoulli random variables, and, for three distinct indices i, j, k,
P(t_ik > 0, t_jk > 0) = P(t_ik > 0) P(t_jk > 0) = (1 − (1 − p_n)^T)² := η_n, so that E ξ_ij = (n − 1)η_n.
Based on the Chernoff bound in [29], we have
P(ξ_ij ≤ (1/2)(n − 1)η_n) ≤ exp(−(1/8)(n − 1)η_n).
It follows that
P(min_{i,j} ξ_ij ≤ (1/2)(n − 1)η_n) ≤ Σ_{i,j} P(ξ_ij ≤ (1/2)(n − 1)η_n) ≤ (1/2)(n + 1)n exp(−(1/8)(n − 1)η_n).
Since T ≥ 1,
η_n = (1 − (1 − p_n)^T)² ≥ (1 − (1 − p_n))² = p_n².
That is, with probability at least 1 − (1/2)(n + 1)n exp(−(1/8)(n − 1)η_n), we have
min_{i,j} ξ_ij ≥ (1/2)(n − 1)η_n ≥ (1/2)(n − 1)p_n².   (A12)
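The common-neighbor bound of Part I is easy to confirm by simulation. The sketch below is ours: it draws a random graph in which each pair is linked with probability 1 − (1 − p_n)^T (i.e., t_ij > 0), counts common neighbors for every pair via one matrix product, and checks that the minimum count clears half its expectation; the specific values of n, T and p are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(3)
n, T, p = 300, 2, 0.4
edge_prob = 1.0 - (1.0 - p) ** T        # P(t_ij > 0) for one pair
eta = edge_prob ** 2                    # per-triple common-neighbor probability

# Symmetric adjacency over subjects 0..n: i ~ j iff t_ij > 0
upper = np.triu(rng.random((n + 1, n + 1)) < edge_prob, 1)
A = (upper | upper.T).astype(int)

# C[i, j] counts common neighbors; the diagonal of A is zero,
# so the terms k = i and k = j drop out automatically.
C = A @ A
mask = ~np.eye(n + 1, dtype=bool)
xi_min = C[mask].min()                  # min over i != j of xi_ij
```

With these parameters the expected count (n − 1)η_n is far above the Chernoff threshold, so the event in (A12) holds comfortably.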
Part II. For convenience, we introduce a non-negative array {q_ij}_{i,j=1}^n, where
q_ij := −v_ij, i ≠ j; q_ii := Σ_{k=1}^n v_ik = v_i0, i = 1, …, n.
Let
m := min_{(i,j): q_ij > 0} q_ij, M := max_{i,j} q_ij, t_max := max_i t_i, t_min := min_i t_i.
It is clear that M ≥ m > 0 and
q_ij ≥ 0, q_ij = q_ji, v_ij = −q_ij for i ≠ j, M t_max ≥ v_ii = Σ_{k=1}^n q_ik ≥ m t_min.
Notice that
V⁻¹ − S = (V⁻¹ − S)(I − VS) + S(I − VS),
where I is the n × n identity matrix. Let X = I − VS, Y = SX and Z = V⁻¹ − S; then we have the recursion
Z = ZX + Y.
The goal is to derive an upper bound for all of the |z_ij|.
According to the definitions of S, V and X, we have
x_ij = δ_ij − Σ_{k=1}^n v_ik s_kj = δ_ij − Σ_{k=1}^n v_ik (δ_kj/v_jj + 1/v_00) = δ_ij − v_ij/v_jj − q_ii/v_00 = (1 − δ_ij) q_ij/v_jj − q_ii/v_00,
and, using Σ_{k=1}^n q_kk = v_00,
y_ij = Σ_{k=1}^n s_ik x_kj = Σ_{k=1}^n (δ_ik/v_ii + 1/v_00)((1 − δ_kj) q_kj/v_jj − q_kk/v_00) = (1 − δ_ij) q_ij/(v_ii v_jj) − q_ii/(v_ii v_00) − q_jj/(v_jj v_00).
Since
0 ≤ q_ij/(v_ii v_jj) ≤ M/(m² t_min²), 0 ≤ q_ii/(v_ii v_00) ≤ M/(m² t_min²),
for any distinct i, j, k, we have
|y_ij| ≤ a := 2M/(m² t_min²), |y_ij − y_ik| ≤ a.   (A15)
In view of the expressions for x_ij and y_ij, we have
z_ij = Σ_{k=1}^n z_ik (1 − δ_kj) q_kj/v_jj − Σ_{k=1}^n z_ik q_kk/v_00 + y_ij, i, j = 1, …, n.   (A14)
Now, we fix an arbitrary i and consider an upper bound for max_j |z_ij|. Let α and β be such that
z_iα = max_{k=1,…,n} z_ik, z_iβ = min_{k=1,…,n} z_ik.
Without loss of generality, we assume z_iα ≥ |z_iβ| (otherwise, we can invert the signs of the z_ik and repeat the same argument). Below, we show that z_iβ ≤ 0; note that this conclusion is not established in [20]. Multiplying both sides of (A14) by v_jj, we have
v_jj z_ij = Σ_{k=1}^n z_ik (1 − δ_kj) q_kj − Σ_{k=1}^n z_ik q_kk v_jj/v_00 + v_jj y_ij.
Summing these equations over j = 1, …, n, we have
Σ_{j=1}^n v_jj z_ij = Σ_{k=1}^n Σ_{j=1}^n z_ik (1 − δ_kj) q_kj − Σ_{k=1}^n Σ_{j=1}^n z_ik q_kk v_jj/v_00 + Σ_{j=1}^n v_jj ((1 − δ_ij) q_ij/(v_ii v_jj) − q_ii/(v_ii v_00) − q_jj/(v_jj v_00)).
Since Σ_{j=1}^n (1 − δ_kj) q_kj = v_kk − q_kk, the first term on the right-hand side equals Σ_{k=1}^n z_ik v_kk − Σ_{k=1}^n z_ik q_kk, and Σ_{k=1}^n z_ik v_kk cancels with the left-hand side. Thus,
Σ_{k=1}^n Σ_{j=1}^n z_ik q_kk v_jj/v_00 + Σ_{k=1}^n z_ik q_kk = Σ_{j=1}^n v_jj ((1 − δ_ij) q_ij/(v_ii v_jj) − q_ii/(v_ii v_00) − q_jj/(v_jj v_00)) = (v_ii − q_ii)/v_ii − q_ii Σ_{j=1}^n v_jj/(v_ii v_00) − Σ_{j=1}^n q_jj/v_00 = −(q_ii/(v_ii v_00))(v_00 + Σ_{j=1}^n v_jj),
where we again used Σ_{j=1}^n q_jj = v_00. Thus,
−(q_ii/(v_ii v_00))(v_00 + Σ_{j=1}^n v_jj) = Σ_{k=1}^n z_ik q_kk Σ_{j=1}^n v_jj/v_00 + Σ_{k=1}^n z_ik q_kk ≥ z_iβ (v_00 + Σ_{j=1}^n v_jj).
This shows
z_iβ ≤ −q_ii/(v_ii v_00) ≤ 0.
Since Σ_{k=1}^n q_kα/v_αα = 1, taking j = α in (A14) gives
z_iα Σ_{k=1}^n q_kα/v_αα = Σ_{k=1}^n z_ik (1 − δ_kα) q_kα/v_αα − Σ_{k=1}^n z_ik q_kk/v_00 + y_iα.
In other words,
Σ_{k=1}^n [z_iα − z_ik(1 − δ_kα)] q_kα/v_αα = −Σ_{k=1}^n z_ik q_kk/v_00 + y_iα.
Analogously, we have
Σ_{k=1}^n [z_ik(1 − δ_kβ) − z_iβ] q_kβ/v_ββ = Σ_{k=1}^n z_ik q_kk/v_00 − y_iβ.
Adding the two identities and noting that every summand on the right-hand side below is non-negative, we obtain
y_iα − y_iβ = Σ_{k=1}^n {[z_iα − z_ik(1 − δ_kα)] q_kα/v_αα + [z_ik(1 − δ_kβ) − z_iβ] q_kβ/v_ββ} ≥ [#{k : t_αk > 0, t_kβ > 0}(z_iα − z_iβ)] × m/(M t_max).   (A16)
The following calculations are based on the event E_n:
{ min_{i≠j} ξ_ij ≥ (1/2)(n − 1)p_n², t_min ≥ (1/2) n T p_n, t_max ≤ (3/2) n T p_n }.
In view of (A15) and (A16), we have
z_iα ≤ z_iα − z_iβ ≤ (M t_max/m) × [min_{i≠j} ξ_ij]⁻¹ × 2M/(m² t_min²) ≤ ((3/2) n T p_n M/m) × [(1/2)(n − 1)p_n²]⁻¹ × 2M/(m² ((1/2) n T p_n)²) = 24 M²/(n(n − 1) m³ T p_n³).   (A17)
Note that M ≤ T b_{n1} and m ≥ b_{n0}. Based on Lemma 6 in the main text, we have
P(t_min ≥ (1/2) n T p_n, t_max ≤ (3/2) n T p_n) ≥ 1 − 2(n + 1) exp(−(1/10) n T p_n).
In view of inequality (A12), we have
P(E_n) ≥ 1 − (1/2)(n + 1)n e^{−(1/8)(n−1)p_n²} − 2(n + 1) e^{−(1/10) n T p_n}.
If p_n ≥ (24 log n/n)^{1/2}, then
(1/2)(n + 1)n e^{−(1/8)(n−1)p_n²} = O(1/n), (n + 1) e^{−(1/10) n T p_n} = o(1/n^{1.4}),
so that
P(E_n) ≥ 1 − O(1/n).
Let F_n be the event that V⁻¹ exists. Based on Lemma 1, if p_n ≥ (24 log n/n)^{1/2}, then
P(F_n) ≥ 1 − exp(−p_n T(n + 1)) ≥ 1 − 1/n^{24}.
Therefore,
P(E_n ∩ F_n) ≥ 1 − O(1/n).
Substituting M ≤ T b_{n1} and m ≥ b_{n0} into (A17) proves Lemma A2. □

Appendix B.3. Proof of Lemma A3

Proof of Lemma A3.
Recall that π_ij = β_i − β_j and
H_i(β) = Σ_{j≠i} t_ij μ(π_ij) − a_i, i = 1, …, n.
The Jacobian matrix H′(β) of H(β) can be calculated as follows. Taking partial derivatives of H_i, for j ≠ i we have
∂H_i(β)/∂β_j = −t_ij μ′(π_ij), ∂H_i(β)/∂β_i = Σ_{j≠i} t_ij μ′(π_ij),
∂²H_i(β)/∂β_i∂β_j = −t_ij μ″(π_ij), ∂²H_i(β)/∂β_i² = Σ_{j≠i} t_ij μ″(π_ij).
On B(β, ε_n), based on Condition (3c), we have
|∂²H_i(β)/∂β_i∂β_j| ≤ b_{n2} t_ij, i ≠ j.
Let
g_ij(β) = (∂²H_i(β)/∂β_1∂β_j, …, ∂²H_i(β)/∂β_n∂β_j)⊤.
Therefore,
|∂²H_i(β)/∂β_i²| ≤ t_i b_{n2}, |∂²H_i(β)/∂β_j∂β_i| ≤ b_{n2} t_ij.
This demonstrates that ‖g_ii(β)‖₁ ≤ 2 t_i b_{n2}. Note that when i ≠ j and k ≠ i, j,
∂²H_i(β)/∂β_k∂β_j = 0.
Therefore, we have ‖g_ij(β)‖₁ ≤ 2 t_ij b_{n2} for j ≠ i. Consequently, for vectors x, y ∈ D, we have
max_{i=0,…,n} ‖H_i′(x) − H_i′(y)‖₁ ≤ max_{i=0,…,n} Σ_{j=1}^n |∂H_i(x)/∂x_j − ∂H_i(y)/∂y_j| = max_{i=0,…,n} Σ_{j=1}^n |∫₀¹ g_ij(tx + (1 − t)y)⊤(x − y) dt| ≤ max_{i=0,…,n} 4 b_{n2} t_i ‖x − y‖_∞ = 4 b_{n2} t_max ‖x − y‖_∞.
This completes the proof. □

Appendix B.4. Proof of Lemma A4

Proof of Lemma A4.
Recall that t_i = Σ_{j≠i} t_ij and that a_i is the number of wins of subject i out of t_i comparisons. Since all comparisons are mutually independent, given t_ij = m_ij for j = 0, …, n, a_i is the sum of m_i independent Bernoulli random variables, where m_i = Σ_{j≠i} m_ij. Based on Hoeffding's inequality [30], we have
P(|a_i − E(a_i | t_ij, j = 0, …, n)| ≥ √(2 m_i log n) | t_ij = m_ij, j = 0, …, n) ≤ 2 exp(−2 m_i log n/m_i) = 2 n⁻²,
where E(a_i | t_ij, j = 0, …, n) is the conditional expectation given t_ij for 0 ≤ j ≤ n. Note that this upper bound does not depend on the m_ij. By the law of total probability, for fixed i,
P(|a_i − E(a_i | t_ij, j = 0, …, n)| ≥ √(2 t_i log n)) = Σ_{m_{i0}=0}^T ⋯ Σ_{m_{in}=0}^T P(t_ij = m_ij, j = 0, …, n) × P(|a_i − E(a_i | t_ij, j = 0, …, n)| ≥ √(2 m_i log n) | t_ij = m_ij, j = 0, …, n) ≤ 2 n⁻² Σ_{m_{i0}=0}^T ⋯ Σ_{m_{in}=0}^T P(t_ij = m_ij, j = 0, …, n) = 2 n⁻².
Therefore,
P(max_{i=0,…,n} |a_i − E(a_i | t_ij, j = 0, …, n)| ≥ √(2 t_max log n)) ≤ Σ_{i=0}^n P(|a_i − E(a_i | t_ij, j = 0, …, n)| ≥ √(2 t_i log n)) ≤ (n + 1) × 2 n⁻² = O(1/n).
This completes the proof. □
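The Hoeffding bound used above has plenty of slack in practice, which a quick simulation makes visible. The sketch below is our own illustration with synthetic win probabilities: each score is a sum of t Bernoulli trials, and the maximum deviation over n subjects stays well below the √(2 t log n) threshold from Lemma A4.

```python
import math
import numpy as np

rng = np.random.default_rng(4)
n, t = 500, 200
p = rng.uniform(0.2, 0.8, size=n)       # per-subject win probabilities (synthetic)
a = rng.binomial(t, p)                  # each score is a sum of t Bernoulli trials
dev_max = np.abs(a - t * p).max()       # max_i |a_i - E(a_i)|
bound = math.sqrt(2.0 * t * math.log(n))  # the threshold from Lemma A4
```

Typical maximum deviations here are a few standard deviations, roughly half the Hoeffding threshold.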

Appendix B.5. Proof of Lemma A5

Proof of Lemma A5.
We first derive the uniform lower bound for the t_i, i = 0, …, n. Note that t_i is the sum of n independent and identically distributed (i.i.d.) binomial random variables Bin(T, p_n); equivalently, it is the sum of nT i.i.d. Bernoulli(p_n) random variables. Using the Chernoff bound [29], we have
P(min_{i=0,…,n} t_i < (1/2) n T p_n) ≤ Σ_{i=0}^n P(t_i < (1/2) n T p_n) ≤ (n + 1) exp(−(1/8) n T p_n).
Thus, with probability at least 1 − (n + 1) exp(−(1/8) n T p_n),
min_{i=0,…,n} t_i ≥ (1/2) n T p_n.
Analogously, with the use of the Chernoff bound [29], we have
P(max_{i=0,…,n} t_i > (3/2) n T p_n) ≤ Σ_{i=0}^n P(t_i > (3/2) n T p_n) ≤ (n + 1) exp(−(1/10) n T p_n),
and, since (1/2) Σ_i t_i counts each comparison once,
P((1/2) Σ_i t_i > (3/2)(n + 1) n T p_n) ≤ exp(−(1/10)(n + 1) n T p_n).
This completes the proof. □
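The concentration of the t_i around their mean can likewise be checked directly. The sketch below is ours and uses illustrative values of n, T and p_n: it draws each t_i as a Bin(nT, p_n) variable (ignoring, for this illustration, the mild dependence across subjects) and verifies that all t_i fall between the Chernoff thresholds (1/2)nT p_n and (3/2)nT p_n of Lemma A5.

```python
import numpy as np

rng = np.random.default_rng(5)
n, T, p = 1000, 3, 0.05
# Marginally, each t_i ~ Bin(n*T, p); dependence across subjects is ignored here
t = rng.binomial(n * T, p, size=n + 1)
lo = 0.5 * n * T * p                    # Chernoff lower threshold from Lemma A5 (1)
hi = 1.5 * n * T * p                    # Chernoff upper threshold from Lemma A5 (2)
```

With nT p_n = 150, the standard deviation of each t_i is about 12, so deviations of 75 from the mean are far outside the typical range, matching the exponentially small failure probabilities.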

Appendix B.6. Proof of Lemma A6

Proof of Lemma A6.
Write
H = H(β), V = H′(β), E*(·) = E(· | t_ij, 0 ≤ i, j ≤ n), Cov*(·) = Cov(· | t_ij, 0 ≤ i, j ≤ n).
Then,
H = E*(a) − a.
Let W = V⁻¹ − S. Note that U = Cov*(H). Via direct calculation, we have
Cov*(W H) = (V⁻¹ − S) U (V⁻¹ − S) := I₁ + I₂,
where I₁ = (V⁻¹ − S) V (V⁻¹ − S) = (V⁻¹ − S) + (SVS − S) and I₂ = (V⁻¹ − S)(U − V)(V⁻¹ − S). It is easy to verify that
(SVS − S)_ij = v_i0/(v_ii v_00) + v_0j/(v_jj v_00) + (1 − δ_ij) v_ij/(v_ii v_jj).
Therefore,
max_{i,j} |(SVS − S)_ij| ≤ 3 b_{n1}/(t_min² b_{n0}²).
Based on Lemmas 3 and 5 in the main text, we have
‖I₁‖_max = O_p(b_{n1}²/(n² b_{n0}³ p_n⁵)).
Now, we evaluate I₂. Direct calculation gives
[(V⁻¹ − S)(U − V)(V⁻¹ − S)]_ij = Σ_{k,s} (V⁻¹ − S)_ik (U − V)_ks (V⁻¹ − S)_sj = O((b_{n1}²/(n² b_{n0}³ p_n³))²) Σ_{k,s} |(U − V)_ks| = O((b_{n1}²/(n² b_{n0}³ p_n³))²) × 2(b_{n1} + 1/4) Σ_i t_i = O(b_{n1}⁵/(n² b_{n0}⁶ p_n⁵)),
where the second equality is due to Lemma 3 and the third is due to p_ij(1 − p_ij) ≤ 1/4 and μ′_ij(β) ≤ b_{n1}. Therefore, we have
‖I₁‖_max + ‖I₂‖_max = O_p(b_{n1}⁵/(n² b_{n0}⁶ p_n⁵)).
This completes the proof. □

References

  1. Han, R.; Xu, Y.; Chen, K. A general pairwise comparison model for extremely sparse networks. J. Am. Stat. Assoc. 2023, 118, 2422–2432. [Google Scholar] [CrossRef]
  2. Stigler, S.M. Citation patterns in the journals of statistics and probability. Stat. Sci. 1994, 9, 94–108. [Google Scholar] [CrossRef]
  3. Varin, C.; Cattelan, M.; Firth, D. Statistical modelling of citation exchange between statistics journals. J. R. Stat. Soc. Ser. A-Stat. Soc. 2016, 179, 1–63. [Google Scholar] [CrossRef]
  4. Radlinski, F.; Joachims, T. Active exploration for learning rankings from clickthrough data. In Proceedings of the 13th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining; ACM: San Jose, CA, USA, 2007; pp. 570–579. [Google Scholar]
  5. Chen, B.; Escalera, S.; Guyon, I.; Ponce-López, V.; Shah, N.B.; Simon, M.O. Overcoming calibration problems in pattern labeling with pairwise ratings: Application to personality traits. In European Conference on Computer Vision (ECCV 2016) Workshops; Springer: Cham, Switzerland, 2016; Volume 9915, pp. 419–432. [Google Scholar]
  6. David, H.A. The Method of Paired Comparisons, 2nd ed.; Oxford University Press: Oxford, UK, 1988. [Google Scholar]
  7. Bradley, R.A.; Terry, M.E. Rank analysis of incomplete block designs: I. The method of paired comparisons. Biometrika 1952, 39, 324–345. [Google Scholar] [CrossRef]
  8. Zermelo, E. Die Berechnung der Turnier-Ergebnisse als ein Maximumproblem der Wahrscheinlichkeitsrechnung. Math. Z. 1929, 29, 436–460. [Google Scholar] [CrossRef]
  9. Thurstone, L.L. A law of comparative judgment. Psychol. Rev. 1927, 34, 273–286. [Google Scholar] [CrossRef]
  10. Simons, G.; Yao, Y.C. Asymptotics when the number of parameters tends to infinity in the Bradley–Terry model for paired comparisons. Ann. Stat. 1999, 27, 1041–1060. [Google Scholar] [CrossRef]
  11. Chen, Y.; Suh, C. Spectral MLE: Top-K rank aggregation from pairwise comparisons. In Proceedings of the 32nd International Conference on Machine Learning (ICML 2015); Bach, F., Blei, D., Eds.; International Machine Learning Society (IMLS): Stroudsburg, PA, USA, 2015; pp. 371–380. [Google Scholar]
  12. Shah, N.B.; Wainwright, M.J. Simple, robust and optimal ranking from pairwise comparisons. J. Mach. Learn. Res. 2018, 18, 1–38. [Google Scholar]
  13. Han, R.; Ye, R.; Tan, C.; Chen, K. Asymptotic theory of sparse Bradley–Terry model. Ann. Appl. Probab. 2020, 30, 2491–2515. [Google Scholar] [CrossRef]
  14. Yan, T.; Yang, Y.; Xu, J. Sparse paired comparisons in the Bradley–Terry model. Stat. Sin. 2012, 22, 1035–1318. [Google Scholar] [CrossRef]
  15. Agarwal, A.; Patil, P.; Agarwal, S. Accelerated spectral ranking. In Proceedings of the Machine Learning Research, Stockholmsmässan, Stockholm, Sweden, 10–15 July 2018; PMLR: Cambridge, MA, USA, 2018; Volume 80, pp. 70–79. [Google Scholar]
  16. Hendrickx, J.; Olshevsky, A.; Saligrama, V. Graph resistance and learning from pairwise comparisons. In Proceedings of the Conference on Neural Information Processing Systems; NIPS: San Diego, CA, USA, 1999; pp. 2702–2711. [Google Scholar]
  17. Vojnovic, M.; Yun, S. Parameter estimation for Thurstone choice models. arXiv 2017, arXiv:1705.00136. [Google Scholar] [CrossRef]
  18. Wang, J.; Shah, N.; Ravi, R. Stretching the effectiveness of mle from accuracy to bias for pairwise comparisons. In Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics (AISTATS 2020), Online, 26–28 August 2020; Chiappa, S., Calandra, R., Eds.; Proceedings of Machine Learning Research; PMLR: Cambridge, MA, USA, 2020; Volume 108, pp. 66–76. [Google Scholar]
  19. Ford, L.R. Solution of a ranking problem from binary comparisons. Am. Math. Mon. 1957, 64, 28–33. [Google Scholar] [CrossRef]
  20. Simons, G.; Yao, Y.C. Approximating the inverse of a symmetric positive definite matrix. Linear Algebra Appl. 1998, 281, 97–103. [Google Scholar] [CrossRef][Green Version]
  21. Yamamoto, T. Error bounds for Newton's iterates derived from the Kantorovich theorem. Numer. Math. 1986, 48, 91–98. [Google Scholar] [CrossRef]
  22. Delyon, B. Exponential inequalities for sums of weakly dependent variables. Electron. J. Probab. 2009, 752–779. [Google Scholar] [CrossRef]
  23. Roussas, G.G. Exponential probability inequalities with some applications. Lect. Notes-Monogr. Ser. 1996, 30, 303–319. [Google Scholar]
  24. Ioannides, D.A.; Roussas, G.G. Exponential inequality for associated random variables. Stat. Probab. Lett. 1999, 42, 423–431. [Google Scholar] [CrossRef]
  25. Cocke, W.J. Central limit theorems for sums of dependent vector variables. Ann. Math. Statist. 1972, 43, 968–976. [Google Scholar] [CrossRef]
  26. Cox, J.T.; Grimmett, G. Central limit theorems for associated random variables and the percolation model. Ann. Probab. 1984, 12, 514–528. [Google Scholar] [CrossRef]
  27. Kantorovich, L.V. Functional analysis and applied mathematics. Uspekhi Mat. Nauk 1948, 3, 89–185. [Google Scholar]
  28. Loève, M. Probability Theory I, 4th ed.; Springer: New York, NY, USA, 1977. [Google Scholar]
  29. Chernoff, H. A measure of asymptotic efficiency for tests of a hypothesis based on the sum of observations. Ann. Math. Statist. 1952, 23, 493–507. [Google Scholar] [CrossRef]
  30. Hoeffding, W. Probability inequalities for sums of bounded random variables. J. Am. Stat. Assoc. 1963, 58, 13–30. [Google Scholar] [CrossRef]
Table 1. The fail frequency ( × 100 % ).
p_n | n = 100 | n = 500 | n = 1000
(log n/n)^{1/2} | 0.5 | 0 | 0
(log n/n)^{2/3} | 25.8 | 0.2 | 0
log n/n | 100 | 100 | 100
Table 2. The reported values are the coverage frequency ( × 100 % ) for β i β j for a pair ( i , j ) /length of the confidence interval/fail probabilities ( × 100 % ).
p n = ( log n / n ) 1 / 4
n | (i, j) | c = 0.2 | c = 0.5 | c = 0.8
100 ( 1 , 2 ) 94.5 / 1.04 / 0 94.62 / 1.06 / 0 94.62 / 1.06 / 0
( 49 , 50 ) 94.97 / 1.04 / 0 94.34 / 1.04 / 0 94.34 / 1.04 / 0
( 99 , 100 ) 95.12 / 1.04 / 0 94.99 / 1.06 / 0 94.99 / 1.06 / 0
200 ( 1 , 2 ) 95.18 / 0.78 / 0 94.71 / 0.79 / 0 94.71 / 0.79 / 0
( 99 , 100 ) 95.14 / 0.78 / 0 94.49 / 0.78 / 0 94.49 / 0.78 / 0
( 199 , 200 ) 94.82 / 0.78 / 0 94.64 / 0.79 / 0 94.64 / 0.79 / 0
p n = ( log n / n ) 1 / 2
n i c = 0.2 c = 0.4 c = 0.6
100 ( 1 , 2 ) 93.28 / 1.58 / 0.19 93.48 / 1.60 / 0.49 93.48 / 1.60 / 1.15
( 49 , 50 ) 93.85 / 1.58 / 0.19 93.94 / 1.58 / 0.49 93.94 / 1.58 / 1.15
( 99 , 100 ) 93.80 / 1.58 / 0.19 93.96 / 1.61 / 0.49 93.96 / 1.61 / 1.15
200 ( 1 , 2 ) 94.04 / 1.26 / 0 93.89 / 1.28 / 0 93.89 / 1.28 / 0
( 99 , 100 ) 94.14 / 1.26 / 0 94.22 / 1.26 / 0 94.22 / 1.26 / 0
( 199 , 200 ) 94.38 / 1.26 / 0 93.98 / 1.28 / 0 93.98 / 1.28 / 0
Table 3. The fitted merit β̂_i, the number of wins a_i, and the standard error σ̂_i.

American Football Conference

Division   Team                    β̂_i     a_i   σ̂_i
East       New England Patriots    1.452   11    0.519
           New York Jets           0.338    4    0.530
           Miami Dolphins          0.718    7    0.509
           Buffalo Bills           0.673    6    0.514
North      Baltimore Ravens        1.382   10    0.514
           Cincinnati Bengals      0.769    6    0.514
           Pittsburgh Steelers     1.337    9    0.520
           Cleveland Browns        0.968    7    0.519
South      Indianapolis Colts      1.244   10    0.512
           Houston Texans          1.430   11    0.516
           Tennessee Titans        1.205    9    0.510
           Jacksonville Jaguars    0.591    5    0.517
West       Kansas City Chiefs      1.762   12    0.537
           Denver Broncos          0.713    6    0.523
           Oakland Raiders         0.365    4    0.537
           Los Angeles Chargers    1.748   12    0.536

National Football Conference

Division   Team                    β̂_i     a_i   σ̂_i
East       Dallas Cowboys          1.284   10    0.512
           Philadelphia Eagles     1.193    9    0.511
           New York Giants         0.423    5    0.514
           Washington Redskins     0.756    7    0.511
North      Chicago Bears           1.494   12    0.532
           Green Bay Packers       0.595    6    0.522
           Minnesota Vikings       1.059    8    0.523
           Detroit Lions           0.579    6    0.516
South      New Orleans Saints      1.908   13    0.544
           Atlanta Falcons         0.767    7    0.512
           Carolina Panthers       0.840    7    0.511
           Tampa Bay Buccaneers    0.506    5    0.520
West       Los Angeles Rams        1.963   13    0.559
           San Francisco 49ers     0.166    4    0.540
           Seattle Seahawks        1.305   10    0.525
           Arizona Cardinals       0        3    0.555
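As a hypothetical illustration of how the asymptotic normality result can be combined with the estimates in Table 3, the sketch below forms a Wald-type 95% confidence interval for a merit difference β_i − β_j, under the simplifying assumption that the variance of the difference is approximated by σ̂_i² + σ̂_j². The function name `wald_ci` and this variance approximation are our own illustration, not a procedure taken verbatim from the paper.

```python
import math

def wald_ci(beta_i, se_i, beta_j, se_j, z=1.96):
    """Wald-type 95% CI for beta_i - beta_j, assuming the variance of the
    difference is approximately se_i**2 + se_j**2."""
    diff = beta_i - beta_j
    half = z * math.sqrt(se_i**2 + se_j**2)
    return diff - half, diff + half

# Values taken from Table 3: New England Patriots vs. New York Jets.
lo, hi = wald_ci(1.452, 0.519, 0.338, 0.530)
print(f"95% CI for the merit difference: ({lo:.3f}, {hi:.3f})")
# -> roughly (-0.340, 2.568)
```

Since this interval contains zero, the estimated merit gap between the two teams is not statistically significant at the 5% level under the assumed variance approximation.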
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.